iblutil
Tools to generate and identify spacers.
t (float) – Duration of time over which to calculate the firing rate and false positive rate.
Parameters (Keyword)
--------------------
min_amp (float) – The minimum mean amplitude (in V) of the spikes in the unit. Default value is 50e-6.
min_fr (float) – The minimum firing rate (in Hz) of the unit. Default value is 0.5.
max_fpr (float) – The maximum false positive rate of the unit (using the fp formula in Hill et al. (2011)).
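As a sketch of how the three thresholds combine (the array names and the max_fpr default below are illustrative assumptions, not part of the brainbox API):

```python
import numpy as np

# Hypothetical per-unit summaries: mean amplitude (V), firing rate (Hz),
# and false-positive rate. Names and the max_fpr default are assumptions.
mean_amps = np.array([80e-6, 30e-6, 120e-6])
firing_rates = np.array([1.2, 0.1, 4.0])
fp_rates = np.array([0.02, 0.05, 0.9])

def passes_quality(amp, fr, fpr, min_amp=50e-6, min_fr=0.5, max_fpr=0.2):
    # A unit passes only if all three criteria hold.
    return (amp >= min_amp) & (fr >= min_fr) & (fpr <= max_fpr)

good = passes_quality(mean_amps, firing_rates, fp_rates)
print(good.tolist())  # [True, False, False]
```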
times –
align_times –
pre_time –
post_time –
bin_size –
weights –
spike_times –
spike_clusters –
cluster_ids –
align_times –
pre_time –
post_time –
bin_size –
weights –
Prepares passive receptive field mapping into format for analysis
rf_map (output from brainbox.io.one.load_passive_rfmap) –
rf_map_times –
rf_map_pos –
rf_stim_frames –
spike_times (array of spike times) –
spike_depths (array of spike depths along probe) –
t_bin (bin size along time dimension) –
d_bin (bin size along depth dimension) –
pre_stim (time period before rf map stim onset to epoch around) –
post_stim (time period after rf map onset to epoch around) –
y_lim (values to limit to in depth direction) –
x_lim (values to limit in time direction) –
Perform SVD on the spatiotemporal rf_map and return the first spatial components
rf_map –
stim_events (dict of different stim events. Each key contains time of stimulus onset) –
spike_times (array of spike times) –
spike_depths (array of spike depths along probe) –
z_score_flag (whether to return values as z_score of firing rate) –
T_BIN (bin size along time dimension) –
D_BIN (bin size along depth dimension) –
pre_stim (time period before rf map stim onset to epoch around) –
post_stim (time period after rf map onset to epoch around) –
base_stim (time period before rf map stim to use as baseline for z_score correction) –
y_lim (values to limit to in depth direction) –
x_lim (values to limit in time direction) –
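The SVD step can be sketched with plain numpy (toy shapes and layout; the actual brainbox implementation may organise the map differently):

```python
import numpy as np

# Toy spatiotemporal receptive-field map: (n_depths, n_x, n_y, n_time).
rng = np.random.default_rng(0)
rf_map = rng.random((8, 4, 4, 20))

# For each depth, flatten space, run SVD on the (space, time) matrix and
# keep the first (dominant) spatial component, scaled by its singular value.
n_depths = rf_map.shape[0]
spatial = np.empty((n_depths, 4 * 4))
for d in range(n_depths):
    m = rf_map[d].reshape(-1, rf_map.shape[-1])   # (space, time)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    spatial[d] = u[:, 0] * s[0]

print(spatial.shape)  # (8, 16)
```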
raster –
trial_id –
firing_rate –
rec_len_secs –
Compute the boundaries between regions on slice
values –
mapping – mapping to use. Options can be found using ba.regions.mappings.keys()
region_values –
ax –
kwargs –
brain_atlas –
3 element array x,y,z
intersection of the trajectory and the brain surface (brain_atlas.surface)
brain_atlas –
3 element array x,y,z
filename –
filename –
remap –
filename –
folder_cache –
Reads in the Allen gene expression experiments tables
folder_cache –
Mouse Brain in Stereotaxic Coordinates (MBSC).
Paxinos G, and Franklin KBJ (2012). The Mouse Brain in Stereotaxic Coordinates, 4th edition (Elsevier Academic Press).
A str array of verbose brain region names.
numpy.array
A str array of brain region acronyms.
numpy.array
An (n, 3) uint8 array of brain region RGB colour values.
numpy.array
An unsigned integer array indicating the number of degrees removed from root.
numpy.array
An integer array of parent brain region IDs.
numpy.array
The position within the flattened graph.
numpy.array
>>> trj = Trajectory.fit(xyz)
Bases: Insertion
Defines an ephys probe insertion in 3D coordinates. IBL conventions.
To instantiate, use the static methods: Insertion.from_track and Insertion.from_dict.
ks2_path –
:param save
ks_path –
bin_path – path of raw data
out_path –
ses_path –
bool: True on a successful sync
ses_path –
type – linear, exact or smooth
probe_names – by default will rglob all probes in the directory. If specified, this will filter the probes on which to perform the synchronisation, defaults to None, optional
Same as BiasedTrials except...
ProbaContrasts
Bpod pre-generated values for probabilityLeft, contrastLR, phase, quiescence
Bpod pre-generated values for probabilityLeft, contrastLR, phase, quiescence.
TrialsTableBiased
Extracts the following into a table from Bpod raw data:
session_path –
save –
bpod_trials –
settings –
extra_classes – additional BaseBpodTrialsExtractor subclasses for custom extractions
Channel times and polarities.
Bunch
vdaq – dictionary of daq traces.
fs – sampling frequency
df_photometry –
chmap –
v_threshold –
daq_file –
photometry_file –
tolerance – number of acceptable missing frames between the daq and the photometry file
chmap –
v_threshold –
session_path –
bpod_trials (optional) – bpod trials read from jsonable file
display (bool, optional) –
spike_times – times of variable
events – trial times to align to
trial_idx – trial idx to sort by
dividers –
colors –
labels –
weights –
fr –
norm –
axs –
t0 –
t1 –
diff_threshold –
drift_threshold_ppm –
TODO Document this function
session_path –
dict
Examples
File is in /data/subject/2020-01-01/002/raw_behavior_data. Patch the file then move to new location.
>>> patch_settings('/data/subject/2020-01-01/002', number='001')
>>> shutil.move('/data/subject/2020-01-01/002/raw_behavior_data/', '/data/subject/2020-01-01/001/raw_behavior_data/')
File is moved into new collection within the same session, then patched.
>>> shutil.move('./subject/2020-01-01/002/raw_task_data_00/', './subject/2020-01-01/002/raw_task_data_01/')
>>> patch_settings('/data/subject/2020-01-01/002', collection='raw_task_data_01', new_collection='raw_task_data_01')
Update subject, date and number.
>>> new_session_path = Path('/data/foobar/2024-02-24/002')
>>> old_session_path = Path('/data/baz/2024-02-23/001')
>>> patch_settings(old_session_path, collection='raw_task_data_00',
...                subject=new_session_path.parts[-3], date=new_session_path.parts[-2], number=new_session_path.parts[-1])
>>> shutil.move(old_session_path, new_session_path)
Finds the datasets required for task based on input signatures.
Function to optionally overload to clean up files after running task.
Bases: DataHandler
Function to upload and register data of completed task
uploadData(outputs, version, clobber=False, **kwargs)
Upload and/or register output data.
This is typically called by ibllib.pipes.tasks.Task.register_datasets().
outputs (list of pathlib.Path) – A set of ALF paths to register to Alyx.
version (str, list of str) – The version of ibllib used to generate these output files.
clobber (bool) – If True, re-upload outputs that have already been passed to this method.
kwargs – Optional keyword arguments for one.registration.RegistrationClient.register_files.
A list of newly created Alyx dataset records, or the registration data if dry.
list of dicts, dict
Empties and returns the processed dataset map.
file_list –
kwargs –
Task pipeline creation from an acquisition description.
The principal function here is make_pipeline which reads an _ibl_experiment.description.yaml file and determines the set of tasks required to preprocess the session.
In the experiment description file there is a 'tasks' key that defines each task protocol and the location of the raw data (i.e. task collection). The protocol subkey may contain an 'extractors' field that should contain a list of dynamic pipeline task class names for extracting the task data. These must be subclasses of the ibllib.pipes.base_tasks.DynamicTask class. If the extractors key is absent or empty, the tasks are chosen based on the sync label and protocol name.
NB: The standard behaviour extraction task classes (e.g. ibllib.pipes.behaviour_tasks.ChoiceWorldTrialsBpod and ibllib.pipes.behaviour_tasks.ChoiceWorldTrialsNidq) handle the clock synchronization, behaviour plots and QC. This is typically independent of the Bpod trials extraction (i.e. extraction of trials data from the Bpod raw data, in Bpod time). The Bpod trials extractor class is determined by the ibllib.io.extractors.base.protocol2extractor() map. IBL protocols may be added to the ibllib.io.extractors.task_extractor_map.json file, while non-IBL ones should be in the projects.base.task_extractor_map.json file located in the personal projects repo. The Bpod trials extractor class must be a subclass of the ibllib.io.extractors.base.BaseBpodTrialsExtractor class, and located in either the personal projects repo or in the ibllib.io.extractors.bpod_trials module.
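To make the lookup order concrete, here is a sketch of a 'tasks' entry, written as a Python dict for illustration (the protocol name, collection and extractor class names below are hypothetical):

```python
# Sketch of the 'tasks' section of an _ibl_experiment.description.yaml file,
# represented as a Python dict. All names below are illustrative assumptions.
description = {
    'tasks': [
        {'_iblrig_tasks_exampleChoiceWorld': {
            'collection': 'raw_task_data_00',
            'sync_label': 'bpod',
            # Optional list of DynamicTask subclass names; when absent or
            # empty, tasks are chosen from the sync label and protocol name.
            'extractors': ['ExampleTrialsBpod'],
        }},
    ]
}

for task in description['tasks']:
    (protocol, info), = task.items()
    extractors = info.get('extractors')
    if extractors:
        chosen = f'explicit extractors: {extractors}'
    else:
        chosen = f"chosen from sync label {info['sync_label']!r} and protocol name"
    print(protocol, '->', chosen)
```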
Functions
Get meta information about task.
Compute the training status for compute date based on training from that session and two previous days.
Find the earliest date that we need to compute the training status from.
Return the location of the raw behavioral data and extracted trials data for a given session.
Extracts the latest training status.
Extract the training information needed for plots for each session.
If AWS credentials exist on the local server, download the latest training table from the AWS S3 private bucket.
Load and concatenate trials for multiple sessions.
Load training dataframe from disk; if the dataframe doesn't exist, returns None.
Load trials data for session.
Save training dataframe to disk.
The post_dlc task takes dlc traces as input and computes useful quantities, as well as qc.
Task to compress raw video data from .avi to .mp4 format.
Task that converts compressed avi to mp4 format and renames video and camlog files.
Task to register raw video data.
Task to sync camera timestamps to main DAQ timestamps. N.B. Signatures only reflect the new DAQ naming convention; not compatible with ephys when not running on a server.
Task to sync camera timestamps to main DAQ timestamps when camlog files are used.
Task to sync camera timestamps to main DAQ timestamps. N.B. Signatures only reflect the new DAQ naming convention; not compatible with ephys when not running on a server.
Plots raw electrophysiology AP band. task = BadChannelsAp(pid, one=one)
Behavioural plots.
Plots coronal and sagittal slice showing electrode locations.
Plots LFP spectrum and LFP RMS plots.
Map for comparing QC outcomes
Functions
Create sign off dictionary.
A base class for data quality control.
Map for comparing QC outcomes
dict
Bases: object
A base class for data quality control.
Run the QC tests and return the outcome.
One of “CRITICAL”, “FAIL”, “WARNING” or “PASS”
Load the data required to compute the QC.
Subclasses may implement this for loading raw data.
The overall session outcome.
Given an iterable of QC outcomes, returns the overall (i.e. worst) outcome.
Example
QC.overall_outcome([‘PASS’, ‘NOT_SET’, None, ‘FAIL’]) # Returns ‘FAIL’
outcomes (iterable of one.alf.spec.QC, str or int) – An iterable of QC outcomes.
agg (function) – Outcome code aggregate function, default is max (i.e. worst).
The overall outcome string
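The worst-outcome aggregation can be sketched as below (the numeric ordering is an assumption for illustration; the real library uses the one.alf.spec.QC enum):

```python
# Assumed severity ordering for QC outcome strings (illustrative values).
ORDER = {'NOT_SET': 0, 'PASS': 10, 'WARNING': 30, 'FAIL': 40, 'CRITICAL': 50}

def overall_outcome(outcomes, agg=max):
    """Return the aggregate (by default worst) of the given QC outcomes,
    ignoring None entries."""
    codes = [ORDER[o] for o in outcomes if o is not None]
    worst = agg(codes)
    return next(k for k, v in ORDER.items() if v == worst)

print(overall_outcome(['PASS', 'NOT_SET', None, 'FAIL']))  # FAIL
```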
The overall outcome.
The overall outcome string
Update the qc field in Alyx.
Updates the ‘qc’ field in Alyx if the new QC outcome is worse than the current value.
outcome (str, int, one.alf.spec.QC) – A QC outcome; one of “CRITICAL”, “FAIL”, “WARNING”, “PASS” or “NOT_SET”.
namespace (str) – The extended QC key specifying the type of QC associated with the outcome.
override (bool) – If True the QC field is updated even if new value is better than previous.
The current QC outcome on Alyx.
Example
>>> qc = QC('path/to/session')
>>> qc.update('PASS')  # Update current QC field to 'PASS' if not set
Update the extended_qc field in Alyx.
Subclasses should chain a call to this.
data – a dict of qc tests and their outcomes, typically a value between 0. and 1.
Create sign off dictionary.
Creates a dict containing ‘sign off’ keys for each device and task protocol in the provided experiment description.
Given a dictionary of results, computes the overall session QC for each key and aggregates in a single value.
results (dict) – A dictionary of QC keys containing (usually scalar) values.
criteria (dict) – A dictionary of qc keys containing map of PASS, WARNING, FAIL thresholds.
one.alf.spec.QC – Overall session QC outcome.
dict – A map of QC tests and their outcomes.
Update QC values for individual datasets.
qc (ibllib.qc.task_metrics.TaskQC) – A TaskQC object that has been run.
registered_datasets (list of dict) – A list of Alyx dataset records.
one (one.api.OneAlyx) – An online instance of ONE.
override (bool) – If True the QC field is updated even if new value is better than previous.
The list of registered datasets but with the ‘qc’ fields updated.
list of dict
Bases: QC
A class for computing task QC metrics
Computes the outcome of a single key by applying thresholding.
qc_value (float) – Proportion of passing qcs, between 0 and 1.
thresholds (dict) – Dictionary with keys ‘PASS’, ‘WARNING’, ‘FAIL’ (or enum integers, c.f. one.alf.spec.QC).
The outcome.
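A sketch of the thresholding logic described above (the exact threshold semantics are an assumption; the real TaskQC implementation may differ):

```python
# Given the proportion of passing checks (qc_value) and a thresholds map,
# return the best level whose threshold the value reaches, else FAIL.
# Threshold semantics here are an illustrative assumption.
def compute_outcome(qc_value, thresholds):
    for outcome in ('PASS', 'WARNING', 'FAIL'):
        if outcome in thresholds and qc_value >= thresholds[outcome]:
            return outcome
    return 'FAIL'

print(compute_outcome(0.99, {'PASS': 0.99, 'WARNING': 0.90}))  # PASS
print(compute_outcome(0.95, {'PASS': 0.99, 'WARNING': 0.90}))  # WARNING
print(compute_outcome(0.50, {'PASS': 0.99, 'WARNING': 0.90}))  # FAIL
```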
Given a dictionary of results, computes the overall session QC for each key and aggregates in a single value.
results (dict) – A dictionary of QC keys containing (usually scalar) values.
criteria (dict) – A dictionary of qc keys containing map of PASS, WARNING, FAIL thresholds.
str – Overall session QC outcome as a string.
dict – A map of QC tests and their outcomes.
staticmethod(function) -> method
Convert a function to be a static method.
A static method does not receive an implicit first argument. To declare a static method, use this idiom:
class C:
    @staticmethod
    def f(arg1, arg2, ...):
        ...
It can be called either on the class (e.g. C.f()) or on an instance (e.g. C().f()). Both the class and the instance are ignored, and neither is passed implicitly as the first argument to the method.
Static methods in Python are similar to those found in Java or C++. For a more advanced concept, see the classmethod builtin.
Return map of dataset specific QC values.
outcomes (dict) – Map of checks and their individual outcomes.
Map of dataset names and their outcome.
dict
Check that the period of grey screen between stim off and the start of the next trial is 1s +/- 10%.
Metric: M = stimOff (n) - trialStart (n+1) - 1
Criterion: |M| < 0.1
Units: seconds [s]
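As a sketch with toy numbers chosen to exercise the metric exactly as written above (array names are illustrative, not the ibllib implementation):

```python
import numpy as np

# Toy times (s) chosen so the formula as stated produces one pass, one fail.
stim_off = np.array([10.0, 21.5])          # stimOff of trials n = 0, 1
trial_start = np.array([0.0, 9.05, 20.0])  # start of trials 0, 1, 2

m = stim_off - trial_start[1:] - 1.0       # M = stimOff(n) - trialStart(n+1) - 1
passed = np.abs(m) < 0.1                   # Criterion: |M| < 0.1
print(passed.tolist())  # [True, False]
```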
Return the correct trials task for extracting only the Bpod trials.
Run TaskQC viewer with wheel data.
Displays the task QC for a given session.
Run TaskQC viewer with wheel data.
For information on the QC checks see the QC Flags & failures document: https://docs.google.com/document/d/1X-ypFEIxqwX6lU9pig4V_zrcR5lITpd8UJQWzW9I9zI/edit#
Examples
>>> ipython task_qc.py c9fec76e-7a20-4da4-93ad-04510a89473b
>>> ipython task_qc.py ./KS022/2019-12-10/001 --local
Test for FpgaTrials._time_fields static method.
Test for FpgaTrials._is_trials_object_attribute method.
session_path – The raw ephys data path to place files
model – Probe model file structure (‘3A’ or ‘3B’)
legacy – If true, emulate older SpikeGLX version where all files are saved
Test for QC.outcome property setter.
Test for QC.code_to_outcome method.
Test HabituationQC class. NB: For complete coverage this should be run alongside the integration tests.
Bases: TestCase
Remove TaskQC.compute_session_status_from_dict after 2024-06-01. Cherry pick commit 3cbbd1769e1ba82a51b09a992b2d5f4929f396b2 for removal of this test and applicable code.
Test TaskQC.compute_dateset_qc_status method.
Bases: TestCase
Test task_metrics.update_dataset_qc function.
Test for ibllib.io.session_params.merge_params functions.
Tests for ibllib.oneibl.data_handlers.DataHandler classes.
Tests for the ibllib.oneibl.patcher.GlobusPatcher class.
Test helper functions in ibllib.oneibl.registration module.
Bases: TestCase
Tests for ibllib.oneibl.data_handlers.DataHandler classes.
A test for ServerDataHandler.uploadData method.
Hook method for deconstructing the test fixture after testing it.
© Copyright 2020, International Brain Laboratory.
File hashing functions.
Uses hashlib to perform either md5 or sha1 hashing in a memory controlled manner, with a progress bar for larger files.
Functions
Computes blake2b hash in a memory-controlled way: blake2b_hash = hashfile.blake2b(file_path)
Computes md5 hash in a memory-controlled way: md5_hash = hashfile.md5(file_path)
Computes sha1 hash in a memory-controlled way: sha1_hash = hashfile.sha1(file_path)
Computes blake2b hash in a memory-controlled way: blake2b_hash = hashfile.blake2b(file_path)
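The memory-controlled idea can be sketched with the standard library alone: hash the file in fixed-size chunks so memory use stays bounded regardless of file size (a sketch of the approach; the function name and chunk size are assumptions, not the iblutil.io.hashfile API):

```python
import hashlib
import io

def blake2b_chunked(fileobj, chunk_size=2 ** 20):
    """Hash a file-like object in fixed-size chunks so memory use stays
    bounded regardless of file size (illustrative sketch)."""
    h = hashlib.blake2b()
    for chunk in iter(lambda: fileobj.read(chunk_size), b''):
        h.update(chunk)
    return h.hexdigest()

# Chunked hashing matches hashing the whole buffer at once.
data = b'spike' * 100_000
assert blake2b_chunked(io.BytesIO(data)) == hashlib.blake2b(data).hexdigest()
print('ok')
```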
Network communication between acquisition devices.
Examples
# Connect to remote server rig, send initialization message and wait for response
>>> server = await EchoProtocol.server('udp://192.168.0.4', name='main')
>>> await server.init('2022-01-01_1_subject')  # Send init message and await confirmation of receipt
>>> response = await server.on_event('INIT')  # Await response
# Send initialization message and wait max 10 seconds for response
>>> try:
...     response = await asyncio.wait_for(server.on_event('INIT'), 10.)
... except asyncio.TimeoutError:
...     server.close()
Functions
An example of an entry point for creating an individual communicator.
Classes
An echo server implementing TCP/IP and UDP.
Handler for multiple remote rig services.
Bases: Communicator
An echo server implementing TCP/IP and UDP.
This should be instantiated using either EchoProtocol.server or EchoProtocol.client. In the client role, the remote address is specified; in the server role, the local address is specified.
A network server instance if using TCP/IP.
asyncio.Server
The communicator role. A server may communicate with multiple clients. The server URI specifies its local address. A client only communicates with a single host, specified by the server URI.
{‘client’, ‘server’}
The default maximum time in seconds to await a message echo.
float
A map of addresses holding the last sent bytes str and the future being waited on. In client mode there should only be one entry - the server URI.
dict[(str, int), (bytes, asyncio.Future)]
The remote computer’s role.
{‘client’, ‘server’}
True if the transport layer is set and open.
bool
bool: True if awaiting confirmation of receipt from remote.
Cleanup an experiment.
Send a cleanup message to the remote host.
data (any) – Optional extra data to send to the remote host.
addr ((str, int)) – The remote host address and port. Only required in server role.
TimeoutError – Remote host failed to echo the message within the timeout period.
Start an experiment.
Send a start message to the remote host.
exp_ref (str) – An experiment reference string in the form yyyy-mm-dd_n_subject.
data (any) – Optional extra data to send to the remote host.
addr ((str, int)) – The remote host address and port. Only required in server role.
TimeoutError – Remote host failed to echo the message within the timeout period.
End an experiment.
Send a stop message to the remote host.
data (any) – Optional extra data to send to the remote host.
immediately (bool) – If True, an EXPINTERRUPT signal is used.
addr ((str, int)) – The remote host address and port. Only required in server role.
TimeoutError – Remote host failed to echo the message within the timeout period.
Initialize an experiment.
Send an initialization message to the remote host.
data (any) – Optional extra data to send to the remote host.
addr ((str, int)) – The remote host address and port. Only required in server role.
TimeoutError – Remote host failed to echo the message within the timeout period.
Send/request Alyx token to/from remote host.
alyx (one.webclient.AlyxClient) – An instance of Alyx to extract and send token from.
addr ((str, int)) – The remote host address and port. Only required in server role.
(str, dict) – (If alyx arg was None) the received Alyx token in the form (base_url, {user: token}).
(str, int) – The hostname and port of the remote host.
Send data to clients.
Serialize data and pass to transport layer.
Send a message to the client and await echo.
NB: Methods such as start, stop, init, cleanup and alyx should be used instead of calling this directly.
+data (any) – The data to serialize and send to remote host.
addr ((str, int)) – The remote host address and port. Only required in server role.
timeout (float, optional) – The time in seconds to wait for an echo before raising an exception.
TimeoutError – Remote host failed to echo the message within the timeout period.
RuntimeError – The response from the client did not match the original message.
ValueError – Timeout must be a non-zero number. Unexpected remote address: in client mode the address must match server_uri.
TypeError – In server mode a remote address must be provided.
Close the connection, de-register callbacks and cancel outstanding futures.
The EchoProtocol.on_connection_lost future is resolved at this time; all others are cancelled. NB: Closing the socket should be handled by the transport base class later on.
Create a remote server instance.
server_uri (str, ipaddress.IPv4Address, ipaddress.IPv6Address) – The address of the remote computer; may be an IP or hostname with or without a port. To use TCP/IP instead of the default UDP, add a ‘ws://’ scheme to the URI.
name (str) – An optional, arbitrary label.
**kwargs – Optional parameters to pass to create_datagram_endpoint for UDP or create_server for TCP/IP.
A Communicator instance.
Create a remote client instance.
server_uri (str) – The address of the remote computer; may be an IP or hostname with or without a port. To use TCP/IP instead of the default UDP, add a ‘ws://’ scheme to the URI.
name (str) – An optional, arbitrary label.
**kwargs – Optional parameters to pass to create_datagram_endpoint for UDP or create_server for TCP/IP.
A Communicator instance.
Bases: Service, UserDict
Handler for multiple remote rig services.
Assign a callback to all services for a given event.
event (str, int, iblutil.io.net.base.ExpMessage) – An event to listen for.
callback (function, asyncio.Future) – A callable or future to notify when the event occurs.
return_service (bool) – When True an instance of the Communicator is additionally passed to the callback.
Clear all callbacks for a given event.
event (str, int, iblutil.io.net.base.ExpMessage) – The event to clear listeners from.
callback (function, asyncio.Future) – A specific callback or future to remove.
Wait for all services to report a given event.
event (str, int, iblutil.io.net.base.ExpMessage) – The event to wait on.
A map of rig name and the data that was received.
dict
Initialize an experiment.
Send an initialization signal to the remote services and await the responses.
data (any) – Optional extra data to send to the remote host.
concurrent (bool) – If false, wait for response from each service before communicating with the next.
A dictionary of service names and the response data received.
dict of str
TimeoutError – Remote host failed to echo the message within the timeout period, or failed to respond within the response period.
Cleanup an experiment.
Send a cleanup signal to the remote services and await responses.
data (any) – Optional extra data to send to the remote host.
concurrent (bool) – If false, wait for response from each service before communicating with the next.
A dictionary of service names and the response data received.
dict of str
TimeoutError – Remote host failed to echo the message within the timeout period, or failed to respond within the response period.
Start an experiment.
Send a start signal to the remote services and await responses.
exp_ref (str) – An experiment reference string in the form yyyy-mm-dd_n_subject.
data (any) – Optional extra data to send to the remote host.
concurrent (bool) – If false, wait for response from each service before communicating with the next.
A dictionary of service names and the response data received.
dict of str
TimeoutError – Remote host failed to echo the message within the timeout period, or failed to respond within the response period.
End an experiment.
Send a stop signal to the remote services and await responses.
data (any) – Optional extra data to send to the remote host.
immediately (bool) – If true, send an EXPINTERRUPT signal.
concurrent (bool) – If false, wait for response from each service before communicating with the next.
A dictionary of service names and the response data received.
dict of str
TimeoutError – Remote host failed to echo the message within the timeout period, or failed to respond within the response period.
Send Alyx token to remote services.
alyx (one.webclient.AlyxClient) – An instance of Alyx to extract and send token from.
Functions
Fetch WAN IP address.
Resolve hostname to IP address.
Test whether IP address is valid.
Ensure URI is complete and correct.
Classes
A base class for communicating between experimental rigs.
A set of standard experiment messages for communicating between rigs.
An abstract base class for auxiliary experiment services.
Fetch WAN IP address.
NB: Requires internet.
The computer’s default WAN IP address.
ipaddress.IPv4Address, ipaddress.IPv6Address
+Test whether IP address is valid.
ip_address (str) – An IP address to validate.
True if the IP address is valid.
bool
Resolve hostname to IP address.
hostname (str, optional) – The hostname to resolve. If None, resolves this computer’s hostname.
The resolved IP address.
ipaddress.IPv4Address, ipaddress.IPv6Address
ValueError – Failed to resolve IP for hostname.
+Ensure URI is complete and correct.
+uri (str, ipaddress.IPv4Address, ipaddress.IPv6Address) – A full URI, hostname or hostname and port.
resolve_host (bool) – If the URI is not an IP address, attempt to resolve hostname to IP.
default_port (int, str) – If the port is absent from the URI, append this one.
default_proc (str) – If the URI scheme is missing, prepend this one.
The complete URI.
str
TypeError – URI type not supported.
ValueError – Failed to resolve hostname to IP address; URI host contains invalid characters (expects only alphanumeric + hyphen); or port number not within range (must be > 1, <= 65535).
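The host and port checks described above can be sketched as follows. This is an illustrative sketch only (the function name `validate_uri_parts` is hypothetical); the real function also handles schemes, full URIs and hostname resolution.

```python
import re

def validate_uri_parts(host, port):
    """Apply the validation rules above: host restricted to alphanumeric
    characters plus hyphen, port within (1, 65535]."""
    if not re.fullmatch(r'[a-zA-Z0-9-]+', host):
        raise ValueError('URI host contains invalid characters '
                         '(expects only alphanumeric + hyphen)')
    port = int(port)
    if not 1 < port <= 65535:
        raise ValueError('Port number not within range (must be > 1, <= 65535)')
    return f'{host}:{port}'
```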
Bases: IntEnum
A set of standard experiment messages for communicating between rigs.
Experiment has begun.
Experiment has stopped.
Experiment cleanup begun.
Experiment interrupted.
Experiment status.
Experiment info, including task protocol start and end.
Alyx token.
Validate an event message, returning the corresponding enumeration if valid and raising an exception if not.
event (str, int, ExpMessage) – An event message to validate.
The corresponding event enumeration.
TypeError – event is neither a string, integer nor enumeration.
ValueError – event does not correspond to any ExpMessage enumeration, neither in its integer form nor its string name.
Examples
>>> ExpMessage.validate('expstart')
ExpMessage.EXPSTART

>>> ExpMessage.validate(10)
ExpMessage.EXPINIT

>>> ExpMessage.validate(ExpMessage.EXPEND)
ExpMessage.EXPEND
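The validation logic implied by the raises and examples above can be sketched as a small IntEnum. The integer values here are assumptions (except EXPINIT = 10, which the doctest implies), and only a subset of the messages is shown.

```python
from enum import IntEnum

class ExpMessage(IntEnum):
    """Sketch of the message enumeration; values are assumed."""
    EXPINIT = 10
    EXPSTART = 20
    EXPEND = 30

    @staticmethod
    def validate(event):
        """Return the enumeration member for a str, int or member input."""
        if isinstance(event, ExpMessage):
            return event
        if isinstance(event, str):
            try:
                return ExpMessage[event.strip().upper()]
            except KeyError:
                raise ValueError(f'{event!r} is not a valid ExpMessage') from None
        if isinstance(event, int):
            return ExpMessage(event)  # raises ValueError for unknown ints
        raise TypeError(f'Unknown event type {type(event)}')
```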
Bases: ABC
An abstract base class for auxiliary experiment services.
Initialize an experiment.
This is intended to specify the expected message signature. The subclassed method should serialize the returned values and pass them to the transport layer.
data (any) – Optional extra data to send to the remote server.
ExpMessage.EXPINIT – The EXPINIT event.
any, None – Optional extra data.
Start an experiment.
This is intended to specify the expected message signature. The subclassed method should serialize the returned values and pass them to the transport layer.
exp_ref (str) – An experiment reference string in the form yyyy-mm-dd_n_subject.
data (any) – Optional extra data to send to the remote server.
ExpMessage.EXPSTART – The EXPSTART event.
str – The experiment reference string.
any, None – Optional extra data.
Stop an experiment.
This is intended to specify the expected message signature. The subclassed method should serialize the returned values and pass them to the transport layer.
data (any) – Optional extra data to send to the remote server.
immediately (bool) – If True, an EXPINTERRUPT message is returned.
ExpMessage.EXPINTERRUPT, ExpMessage.EXPEND – The EXPEND event, or EXPINTERRUPT if immediately is True.
any, None – Optional extra data.
Clean up an experiment.
This is intended to specify the expected message signature. The subclassed method should serialize the returned values and pass them to the transport layer.
data (any) – Optional extra data to send to the remote server.
ExpMessage.EXPCLEANUP – The EXPCLEANUP event.
any, None – Optional extra data.
Request/send Alyx token.
This is intended to specify the expected message signature. The subclassed method should serialize the returned values and pass them to the transport layer.
alyx (one.webclient.AlyxClient) – Optional instance of Alyx to send.
ExpMessage.ALYX – The ALYX event.
str – The Alyx database URL.
dict – The Alyx token in the form {user: token}.
Bases: Service
A base class for communicating between experimental rigs.
An arbitrary label for the remote host.
str
The full URI of the remote device, e.g. udp://192.168.0.1:1001.
str
Assign a callback to be called when an event occurs.
NB: Unlike with futures, an assigned callback may be triggered multiple times, whereas coroutines may only be set once, after which they are cleared.
event (str, int, iblutil.io.net.base.ExpMessage) – The event for which the callback is registered.
callback (function, asyncio.Future) – A function or Future to trigger when an event occurs.
See also
EchoProtocol.receive
The method that processes the callbacks upon receiving a message.
For a given event, remove the provided callback, or all callbacks if none were provided.
event (str, int, iblutil.io.net.base.ExpMessage) – The event for which the callback was registered.
callback (function, asyncio.Future) – The callback or future to remove.
The number of callbacks removed.
int
Await an event from the remote host.
event (str, int, iblutil.io.net.base.ExpMessage) – The event to wait on.
The response data returned by the remote host.
any
Examples
>>> data = await com.on_event('EXPSTART')

>>> event = await asyncio.create_task(com.on_event('EXPSTART'))
>>> ...
>>> data = await event
The remote port.
int
The remote hostname or IP address.
str
The protocol scheme, e.g. udp, ws.
str
True if the remote device is connected.
bool
Serialize data for transmission.
Non-string or -bytes objects are encoded as JSON before converting to bytes.
data (any) – The data to serialize.
The encoded data.
bytes
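The serialization rule above (bytes pass through, strings are encoded, anything else is JSON-encoded first) can be sketched as follows. A minimal sketch; the real method belongs to the Communicator class and may differ in detail.

```python
import json

def encode(data):
    """Serialize data for transmission, per the rule above."""
    if isinstance(data, bytes):
        return data
    if not isinstance(data, str):
        data = json.dumps(data)  # non-string/bytes objects become JSON
    return data.encode()
```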
Network communication between acquisition devices.
Functions

- Returns the full path of the param file per system convention.
- Reads in and parses a JSON parameter file into a dictionary.
- Set a given file or folder path to be hidden.
- Write a parameter file in JSON format.
linux/mac: ~/.str_params, Windows: APPDATA folder
str_params – A string that identifies the param file.
The full path as a string.
Set a given file or folder path to be hidden. On macOS and Windows a specific flag is set, while on other systems the file or folder is simply renamed to start with a dot. On macOS the folder may only be hidden in Finder.
path (str, pathlib.Path) – The path of the file or folder to (un)hide.
hide (bool) – If True the path is set to hidden, otherwise it is unhidden.
The path of the file or folder, which may have been renamed.
pathlib.Path
Reads in and parses a JSON parameter file into a dictionary. If the parameter file doesn’t exist and no defaults are provided, a FileNotFoundError is raised; otherwise any extra default parameters will be written into the file.
+Examples
# Load parameters, raise error if file not found
par = read('globus/admin')

# Load with defaults
par = read('globus/admin', {'local_endpoint': None, 'remote_endpoint': None})

# Return empty dict if file not found (i.e. touch new param file)
par = read('new_pars', {})
str_params – Path to the text JSON file.
default – Default values for missing parameters.
A named tuple containing the parameters.
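The read-with-defaults behaviour described above can be sketched as below. A simplified sketch only: it returns a plain dict rather than a named tuple, takes an explicit file path rather than resolving it per system convention, and `read_params` is a hypothetical name.

```python
import json
from pathlib import Path

def read_params(path, defaults=None):
    """Read a JSON parameter file, merging in any missing defaults."""
    p = Path(path)
    if not p.exists():
        if defaults is None:
            raise FileNotFoundError(p)
        pars = {}
    else:
        pars = json.loads(p.read_text())
    if defaults is not None:
        merged = {**defaults, **pars}
        if merged != pars:  # write back any extra default parameters
            p.write_text(json.dumps(merged))
        pars = merged
    return pars
```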
Functions

- The purpose of this is to correctly identify ids even as object arrays.
- Loads a parquet file into a pandas dataframe.
- Save a pandas dataframe to parquet.
- Converts a UUID string or list of UUID strings to an int64 numpy array with 2 columns; returns [0, 0] for None list entries.
Loads a parquet file into a pandas dataframe.
filename
Save a pandas dataframe to parquet.
filename
table
metadata
Functions

- Given a vector of sorted values, returns a boolean vector that is True where the value lies between bounds. If multiple bounds are given, returns the OR of the individual bounds. Especially useful for spike times: indices = between_sorted(spike_times, [tstart, tstop]).
- Computes a 2D histogram by aggregating values in a 2D array.
- Performs intersection on the columns of arrays a0 and a1.
- Equivalent of np.isin but returns indices as in the MATLAB ismember function: returns an array containing logical 1 (true) where the data in a is in b, as well as the location of members in b such that a[lia] == b[locb].
- Equivalent of np.isin but returns indices as in the MATLAB ismember function: returns an array containing logical 1 (true) where the data in a is in b, as well as the location of members in b such that a[lia, :] == b[locb, :].
- Computes pairwise Pearson correlation coefficients for matrices.
- Detects which points of the input vector lie within one of the specified ranges.
Given a vector of sorted values, returns a boolean vector that is True where the value lies between bounds. If multiple bounds are given, returns the OR of the individual bounds. Especially useful for spike times:

indices = between_sorted(spike_times, [tstart, tstop])

sorted_v – Vector containing sorted values (sortedness is not checked).
bounds – Minimum and maximum included values; can be a list [tstart, tstop] or an array of dimension (n, 2).
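Because the input is sorted, each bounds pair can be located with two binary searches rather than a full comparison. A minimal sketch of this approach (not the library's implementation):

```python
import numpy as np

def between_sorted(sorted_v, bounds):
    """Boolean mask: True where a sorted value lies within any of the
    (inclusive) bounds. bounds is a [tstart, tstop] pair or (n, 2) array."""
    bounds = np.atleast_2d(bounds)
    sel = np.zeros(np.asarray(sorted_v).size, dtype=bool)
    for tstart, tstop in bounds:
        # binary-search the inclusive window [tstart, tstop]
        i0 = np.searchsorted(sorted_v, tstart, side='left')
        i1 = np.searchsorted(sorted_v, tstop, side='right')
        sel[i0:i1] = True
    return sel
```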
Equivalent of np.isin but returns indices as in the MATLAB ismember function: returns an array containing logical 1 (true) where the data in a is in b, as well as the location of members in b such that a[lia] == b[locb].
a – 1d array
b – 1d array
isin, locb
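The `a[lia] == b[locb]` contract above can be sketched in a few lines of numpy. A sketch assuming the values of b are unique; the real function may handle duplicates differently.

```python
import numpy as np

def ismember(a, b):
    """MATLAB-style ismember sketch: lia marks members of a found in b;
    locb gives their locations in b such that a[lia] == b[locb]."""
    lia = np.isin(a, b)
    sort_idx = np.argsort(b)
    # map each matched value of a to its position in b
    locb = sort_idx[np.searchsorted(b, np.asarray(a)[lia], sorter=sort_idx)]
    return lia, locb
```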
Equivalent of np.isin but returns indices as in the MATLAB ismember function: returns an array containing logical 1 (true) where the data in a is in b, as well as the location of members in b such that a[lia, :] == b[locb, :].
a – 2d array
b – 2d array
isin, locb
Performs intersection on the columns of arrays a0 and a1.
a0
a1
assume_unique – If True, the input arrays are both assumed to be unique, which can speed up the calculation.
:return: intersection
:return: index of a0 such that intersection = a0[ia, :]
:return: index of a1 such that intersection = a1[ib, :]
Computes a 2D histogram by aggregating values in a 2D array.
+x – values to bin along the 2nd dimension (c-contiguous)
y – values to bin along the 1st dimension
xbin – scalar: bin size along 2nd dimension +0: aggregate according to unique values +array: aggregate according to exact values (count reduce operation)
ybin – scalar: bin size along 1st dimension +0: aggregate according to unique values +array: aggregate according to exact values (count reduce operation)
xlim – (optional) 2 values (array or list) that restrict range along 2nd dimension
ylim – (optional) 2 values (array or list) that restrict range along 1st dimension
weights – (optional) defaults to None, weights to apply to each value for aggregation
Three numpy arrays: MAP [ny, nx] image, xscale [nx], yscale [ny].
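The scalar-bin case of this aggregation can be sketched with np.bincount on flattened bin indices. A sketch under the assumption of scalar xbin/ybin only; the real function also handles unique-value and exact-value aggregation, limits, and more (the name `bincount2d` here is illustrative).

```python
import numpy as np

def bincount2d(x, y, xbin, ybin, weights=None):
    """Aggregate (x, y) samples into a [ny, nx] map with scalar bin sizes."""
    xscale = np.arange(x.min(), x.max() + xbin, xbin)
    yscale = np.arange(y.min(), y.max() + ybin, ybin)
    ix = np.floor((x - xscale[0]) / xbin).astype(int)
    iy = np.floor((y - yscale[0]) / ybin).astype(int)
    # count (or weight-sum) samples per flattened 2D bin index
    flat = np.bincount(iy * xscale.size + ix, weights=weights,
                       minlength=xscale.size * yscale.size)
    return flat.reshape(yscale.size, xscale.size), xscale, yscale
```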
Computes pairwise Pearson correlation coefficients for matrices.
That is, for two matrices of the same size, computes the row-to-row coefficients and outputs a vector whose length equals the number of rows of the first matrix. If the second array is a vector, computes the correlation coefficient for all rows.
+x – np array [nc, ns]
y – np array [nc, ns] or [ns]
r [nc]
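The row-wise Pearson coefficient described above can be sketched with broadcasting, which also covers the vector case for y. A sketch assuming float inputs with non-zero variance per row; not the library's implementation.

```python
import numpy as np

def rcoeff(x, y):
    """Row-wise Pearson correlation between x [nc, ns] and y [nc, ns] or [ns];
    returns r [nc]."""
    xm = x - x.mean(axis=-1, keepdims=True)
    ym = y - y.mean(axis=-1, keepdims=True)
    cov = (xm * ym).sum(axis=-1)
    return cov / np.sqrt((xm ** 2).sum(axis=-1) * (ym ** 2).sum(axis=-1))
```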
Detects which points of the input vector lie within one of the specified ranges. Returns an array the size of x with a 1 if the corresponding point is within a range.
The function uses a stable sort algorithm (timsort) to find the edges within the input array. Edge behaviour is inclusive.
Ranges are [(start0, stop0), (start1, stop1), etc.] or an n-by-2 numpy array. The ranges may optionally be assigned a row in ‘matrix’ mode or a numerical label in ‘vector’ mode. Labels must have a length of n. Overlapping ranges have a value that is the sum of the relevant range labels (ones in ‘matrix’ mode).
If mode is ‘vector’ (default) it will give a vector specifying the range of each point. If mode is ‘matrix’ it will give a matrix output where each range is assigned a particular row index with 1 if the point belongs to that range label. Multiple ranges can be assigned to a particular row, e.g. [0, 0, 1] would give a 2-by-N matrix with the first two ranges in the first row. Points within more than one range are given a value > 1.
x (array_like) – An array whose points are tested against the ranges. Multi-dimensional arrays are flattened to 1D.
ranges (array_like) – A list of tuples or N-by-2 array of ranges to test, where N is the number of ranges, +i.e. [[start0, stop0], +[start1, stop1]]
labels (vector, list) – If mode is ‘vector’; a list of integer labels to demarcate which points lie within each +range. In ‘matrix’ mode; a list of column indices (ranges can share indices). +The number of labels should match the number of ranges. If None, ones are used for all +ranges.
mode ({'matrix', 'vector'}) – The type of output to return. If ‘matrix’ (default), an N-by-M matrix is returned where N +is the size of x and M corresponds to the max index in labels, e.g. with labels=[0,1,2], +the output matrix would have 3 columns. If ‘vector’ a vector the size of x is returned.
dtype (str, numeric or boolean type) – The data type of the returned array. If type is bool, the labels in vector mode will be +ignored. Default is int8.
A vector of size like x where zeros indicate that the points do not lie within ranges (‘vector’ mode), or a matrix where out.shape[0] == x.size and out.shape[1] == max(labels) + 1.
Examples
# Assert that points in ranges are mutually exclusive
np.all(within_ranges(x, ranges) <= 1)
>>> import numpy as np
>>> within_ranges(np.arange(11), [(1, 2), (5, 8)])
array([0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0], dtype=int8)

>>> ranges = np.array([[1, 2], [5, 8]])
>>> within_ranges(np.arange(10) + 1, ranges, labels=np.array([0, 1]), mode='matrix')
array([[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]], dtype=int8)

>>> within_ranges(np.arange(11), [(1, 2), (5, 8), (4, 6)], labels=[0, 1, 1], mode='matrix')
array([[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 2, 2, 1, 1, 0, 0]], dtype=int8)

>>> within_ranges(np.arange(10) + 1, ranges, np.array([3, 1]), mode='vector')
array([3, 3, 0, 0, 1, 1, 1, 1, 0, 0], dtype=int8)

>>> within_ranges(np.arange(11), [(1, 2), (5, 8), (4, 6)], dtype=bool)
array([False,  True,  True, False,  True,  True,  True,  True,  True,
       False, False])
Tools to generate and identify spacers.
Spacers are sequences of up and down pulses with a specific, identifiable pattern. They are generated with chirp coding to reduce cross-correlation sidelobes. They are used to mark the beginning of a behaviour sequence within a session.
Example
>>> spacer = Spacer()
>>> spacer.add_spacer_states(sma, t, next_state='first_state')
>>> for i in range(ntrials):
...     sma.add_state(
...         state_name='first_state',
...         state_timer=tup,
...         state_change_conditions={'Tup': f'spacer_low_{i:02d}'},
...         output_actions=[('BNC1', 255)],  # To FPGA
...     )
Classes

- Spacer
Bases: object
Computes spacer up times using a chirp up and down pattern.
Each time corresponds to an up time of the BNC1 signal.
Numpy array of spacer times.
numpy.array
Generates a spacer voltage template to cross-correlate with a voltage trace from a DAQ in order to detect the spacer signal.
fs (int) – DAQ sampling frequency.
The template spacer signal.
numpy.array
Add spacer states to a state machine.
sma (pybpodapi.state_machine.StateMachine) – A Bpod state machine instance.
next_state (str) – The name of the state to follow the spacer state.
Given the timestamps and polarities of a digital signal, returns the timestamps of each spacer signal. This method first finds the locations where there are n consecutive pulses of the correct width, then convolves this part of the signal with the template signal.
This method may be relaxed in order to make it robust to noise in the signal.
fronts (dict[str, numpy.array]) – Dictionary with keys (‘times’, ‘polarities’) containing the timestamps and polarities of the signal fronts, respectively.
fs (int) – The sampling frequency of the DAQ signal.
The times of the protocol spacer signals.
numpy.array
Find spacers in a voltage time series. Assumes that the signal is a digital signal between 0 and 1.
signal (numpy.ndarray) – The signal in which to find the spacer.
threshold (float) – The cross-correlation detection threshold.
fs (int) – The sampling frequency of the DAQ signal.
An array containing the times of each spacer signal relative to the first sample.
numpy.ndarray
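The cross-correlation detection described above can be sketched with np.correlate: correlate the mean-subtracted traces, normalize, then take the rising edges of the thresholded correlation as onsets. This is an illustrative sketch of the general technique, not the Spacer class's implementation, and `find_template_onsets` is a hypothetical name.

```python
import numpy as np

def find_template_onsets(signal, template, fs, threshold=0.9):
    """Locate occurrences of a template in a 0/1 signal by normalized
    cross-correlation; returns onset times in seconds."""
    xcor = np.correlate(signal - signal.mean(),
                        template - template.mean(), mode='valid')
    xcor /= np.abs(xcor).max()
    above = xcor > threshold
    # rising edges of the thresholded correlation mark candidate onsets
    onsets = np.where(np.diff(above.astype(np.int8)) == 1)[0] + 1
    return onsets / fs
```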
Functions

- Flatten a nested Iterable excluding strings and dicts.
- Save log information to a given filename in the '.ibl_logs' folder (in the home directory).
- Given a list of integers, returns a terse string expressing the unique values.
- Recursively remove a folder and its parents up to a defined level, if they are empty.
- Set up a log for IBL packages.

Classes

- A subclass of dictionary with an additional dot syntax.
Bases: dict
A subclass of dictionary with an additional dot syntax.
Return a new Bunch instance which is a copy of the current Bunch instance.
deep (bool) – If True perform a deep copy (see notes). By default a shallow copy is returned.
A new copy of the Bunch.
Notes

- A shallow copy constructs a new Bunch object and then (to the extent possible) inserts references into it to the objects found in the original.
- A deep copy constructs a new Bunch and then, recursively, inserts copies into it of the objects found in the original.
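The dot syntax and shallow/deep copy semantics described above can be sketched as a small dict subclass. A minimal sketch, not the iblutil implementation:

```python
import copy as copy_module

class Bunch(dict):
    """Sketch of a dict subclass with dot syntax."""

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as ex:
            raise AttributeError(key) from ex

    def __setattr__(self, key, value):
        self[key] = value

    def copy(self, deep=False):
        """Shallow copy by default; deep=True recursively copies values."""
        return Bunch(copy_module.deepcopy(self) if deep else self)
```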
Flatten a nested Iterable excluding strings and dicts.
Converts a nested Iterable into a flat list. Will not iterate through strings or dicts.
Flattened list or generator object.
list or generator
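The flattening rule above (recurse into nested iterables but yield strings and dicts whole) can be sketched as a generator. A sketch only; the real function may differ in signature and options.

```python
from collections.abc import Iterable

def flatten(iterable):
    """Yield items from nested iterables, leaving strings, bytes and
    dicts intact."""
    for item in iterable:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes, dict)):
            yield from flatten(item)
        else:
            yield item
```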
+Given a list of integers, returns a terse string expressing the unique values.
Example

>>> indices = [0, 1, 2, 3, 4, 7, 8, 11, 15, 20]
>>> range_str(indices)
'0-4, 7-8, 11, 15 & 20'

values – An iterable of ints.
A string of unique value ranges.
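The terse range string illustrated by the example above can be sketched by collapsing consecutive runs. A sketch, not the library's implementation:

```python
def range_str(values):
    """Terse string of unique int values, e.g. '0-4, 7-8, 11, 15 & 20'."""
    vals = sorted(set(values))
    runs = []
    start = prev = vals[0]
    for v in vals[1:]:
        if v != prev + 1:  # a gap ends the current run
            runs.append((start, prev))
            start = v
        prev = v
    runs.append((start, prev))
    parts = [str(a) if a == b else f'{a}-{b}' for a, b in runs]
    if len(parts) == 1:
        return parts[0]
    return ', '.join(parts[:-1]) + ' & ' + parts[-1]
```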
Set up a log for IBL packages.
Uses date time, calling function and distinct colours for levels. Sets the name if not already set and adds a stream handler. If the stream handler already exists, it is not duplicated. The naming/level scheme avoids interfering with third-party libraries when setting the level.
name (str) – Log name; should be set to the root package name for consistent logging throughout the app.
level (str, int) – The logging level (defaults to NOTSET, which inherits the parent log level).
file (bool, str, pathlib.Path) – If True, a file handler is added with the default file location; otherwise a log file path may be passed.
no_color (bool) – If true the colour log is deactivated. May be useful when directing stdout to a file.
The configured log.
logging.Logger, logging.RootLogger
Save log information to a given filename in the ‘.ibl_logs’ folder (in the home directory).
log (str, logging.Logger) – The log (name or object) to add a file handler to.
filename (str, pathlib.Path) – The name of the log file to save to.
The log with the file handler attached.
logging.Logger
+Recursively remove a folder and its parents up to a defined level - if they are empty.
folder (pathlib.Path) – The path to a folder at which to start the recursion.
levels (int) – Recursion level, i.e. the number of parents to delete, relative to folder. Defaults to 0, which has the same effect as pathlib.Path.rmdir except that it won’t raise an OSError if the directory is not empty.
A list of folders that were recursively removed.
list of pathlib.Path
FileNotFoundError – If folder does not exist.
PermissionError – Insufficient privileges or folder in use by another process.
NotADirectoryError – The folder provided is most likely a file.