converted docs and made ruff complient
evalott100 committed Mar 12, 2024
1 parent 7316225 commit 188aa10
Showing 23 changed files with 432 additions and 389 deletions.
17 changes: 0 additions & 17 deletions docs/CHANGELOG.rst

This file was deleted.

1 change: 1 addition & 0 deletions docs/conf.py
Original file line number Diff line number Diff line change
@@ -73,6 +73,7 @@
("py:class", "'id'"),
("py:class", "typing_extensions.Literal"),
("py:func", "int"),
("py:class", "pandablocks.commands.T"),
]

# Both the class’ and the __init__ method’s docstring are concatenated and
@@ -1,24 +1,19 @@
3. Sans-IO pandABlocks-client
=============================
# 3. Sans-IO pandABlocks-client

Date: 2021-08-02 (ADR created retroactively)

Status
------
## Status

Accepted

Context
-------
## Context

Ensure PandABlocks-client works sans-io.

Decision
--------
## Decision

We will ensure pandablocks works [sans-io](sans-io).

Consequences
------------
## Consequences

We have the option to use an asyncio client or a blocking client.
We have the option to use an asyncio client or a blocking client.
@@ -1,14 +1,13 @@
.. _performance:
(performance)=

How fast can we write HDF files?
================================
# How fast can we write HDF files?

There are many factors that affect the speed we can write HDF files. This article
discusses how this library addresses them and what the maximum data rate of a PandA is.

Factors to consider
-------------------
## Factors to consider

```{eval-rst}
.. list-table::
:widths: 10 50
@@ -32,50 +31,44 @@ Factors to consider
or panda-webcontrol will reduce throughput
* - Flush rate
- Flushing data to disk too often will slow write speed
```

Strategies to help
------------------
## Strategies to help

There are a number of strategies that help increase performance. These can be
combined to give the greatest benefit.

Average the data
~~~~~~~~~~~~~~~~
### Average the data

Selecting the ``Mean`` capture mode will activate on-FPGA averaging of the
captured value. ``Min`` and ``Max`` can also be captured at the same time.
Capturing these rather than ``Value`` may allow you to lower the trigger
Selecting the `Mean` capture mode will activate on-FPGA averaging of the
captured value. `Min` and `Max` can also be captured at the same time.
Capturing these rather than `Value` may allow you to lower the trigger
frequency while still providing enough information for data analysis.

Scale the data on the client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### Scale the data on the client

`AsyncioClient.data` and `BlockingClient.data` accept a ``scaled`` argument.
`AsyncioClient.data` and `BlockingClient.data` accept a `scaled` argument.
Setting this to False will transfer the raw unscaled data, allowing for up to
50% more data to be sent depending on the datatype of the field. You can
use the `StartData.fields` information to scale the data on the client.
The `write_hdf_files` function uses this approach.
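As a sketch of what client-side scaling looks like, assuming a field's `scale` and `offset` taken from `StartData.fields` (the helper below is illustrative, not part of the library):

```python
import numpy as np

def scale_column(raw: np.ndarray, scale: float, offset: float) -> np.ndarray:
    # Raw PandA counts map linearly onto engineering units
    return raw * scale + offset

raw = np.array([0, 10, 20], dtype=np.int32)
print(scale_column(raw, scale=0.5, offset=1.0))  # values 1.0, 6.0, 11.0
```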

Remove the panda-webcontrol package
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### Remove the panda-webcontrol package

The measures above should get you to about 50MBytes/s, but if more clients
connect to the web GUI then this will drop. To increase the data rate to
60MBytes/s and improve stability you may want to remove the panda-webcontrol
zpkg.

Flush about 1Hz
~~~~~~~~~~~~~~~
### Flush about 1Hz

`AsyncioClient.data` accepts a ``flush_period`` argument. If given, it will
`AsyncioClient.data` accepts a `flush_period` argument. If given, it will
squash intermediate data frames together until this period expires, and only
then produce them. This means the numpy data blocks are larger and can be more
efficiently written to disk then flushed. The `write_hdf_files` function uses
this approach.


Performance Achieved
--------------------
## Performance Achieved

Tests were run with the following conditions:

@@ -100,15 +93,14 @@ When panda-webcontrol was not installed, the following results were achieved:

Increasing above these throughputs failed most scans with `DATA_OVERRUN`.

Data overruns
-------------
## Data overruns

If there is a `DATA_OVERRUN`, the server will stop sending data. The most recently
received `FrameData` from either `AsyncioClient.data` or `BlockingClient.data` may
be corrupt. This is the case if the ``scaled`` argument is set to False. The mechanism
be corrupt. This is the case if the `scaled` argument is set to False. The mechanism
the server uses to send raw unscaled data is only able to detect the corrupt frame after
it has already been sent. Conversely, the mechanism used to send scaled data aborts prior
to sending a corrupt frame.

The `write_hdf_files` function uses ``scaled=False``, so your HDF file may include some
The `write_hdf_files` function uses `scaled=False`, so your HDF file may include some
corrupt data in the event of an overrun.
73 changes: 32 additions & 41 deletions docs/explanations/sans-io.rst → docs/explanations/sans-io.md
@@ -1,9 +1,8 @@
.. _sans-io:
(sans-io)=

Why write a Sans-IO library?
============================
# Why write a Sans-IO library?

As the reference_ says: *Reusability*. The protocol can be coded in a separate
As the [reference] says: *Reusability*. The protocol can be coded in a separate
class to the I/O allowing integration into a number of different concurrency
frameworks.

@@ -12,69 +11,61 @@ coded the protocol in either of them it would not be usable in the other. Much
better to put it in a separate class and feed it bytes off the wire. We call
this protocol encapsulation a Connection.

Connections
-----------
## Connections

The PandA TCP server exposes a Control port and a Data port, so there are
corresponding `ControlConnection` and `DataConnection` objects:
corresponding [](ControlConnection) and [](DataConnection) objects:

.. currentmodule:: pandablocks.connections
The [](ControlConnection) class has the following methods:

.. autoclass:: ControlConnection
:noindex:

The :meth:`~ControlConnection.send` method takes a `Command` subclass and
- The [`send()`](ControlConnection.send) method takes a `Command` subclass and
returns the bytes that should be sent to the PandA. Whenever bytes are
received from the socket they can be passed to
:meth:`~ControlConnection.receive_bytes` which will return any subsequent
bytes that should be sent back. The :meth:`~ControlConnection.responses`
method returns an iterator of ``(command, response)`` tuples that have now
received from the socket they can be passed to [`receive_bytes()`](ControlConnection.receive_bytes), which will return any subsequent
bytes that should be sent back.
- The [`responses()`](ControlConnection.responses) method returns an iterator of ``(command, response)`` tuples that have now
completed. The response type will depend on the command. For instance `Get`
returns `bytes` or a `list` of `bytes` of the field value, and `GetFieldInfo`
returns a `dict` mapping `str` field name to `FieldInfo`.

.. autoclass:: DataConnection
:noindex:
The [](DataConnection) class has the following methods:

The :meth:`~DataConnection.connect` method takes any connection arguments
- The [`connect()`](DataConnection.connect) method takes any connection arguments
and returns the bytes that should be sent to the PandA to make the initial
connection. Whenever bytes are received from the socket they can be passed
to :meth:`~DataConnection.receive_bytes` which will return an iterator of
`Data` objects. Intermediate `FrameData` can be squashed together by passing
to [`receive_bytes()`](DataConnection.receive_bytes), which will return an iterator of
`Data` objects.
- Intermediate `FrameData` can be squashed together by passing
``flush_every_frame=False``, then explicitly calling
:meth:`~DataConnection.flush` when they are required.
[`flush()`](DataConnection.flush) when they are required.

Wrappers
--------
## Wrappers

Of course, these Connections are useless without connecting some I/O. To aid with
this, wrappers are included for use in `asyncio <asyncio>` and blocking programs. They expose
slightly different APIs to make best use of the features of their respective concurrency frameworks.

For example, to send multiple commands in fields with the `blocking` wrapper::
For example, to send multiple commands in fields with the `blocking` wrapper:

with BlockingClient("hostname") as client:
resp1, resp2 = client.send([cmd1, cmd2])
```python
with BlockingClient("hostname") as client:
resp1, resp2 = client.send([cmd1, cmd2])
```

while with the `asyncio` wrapper we would::
while with the `asyncio` wrapper we would:

async with AsyncioClient("hostname") as client:
resp1, resp2 = await asyncio.gather(
client.send(cmd1),
client.send(cmd2)
)
```python
async with AsyncioClient("hostname") as client:
resp1, resp2 = await asyncio.gather(
client.send(cmd1),
client.send(cmd2)
)
```

The first has the advantage of simplicity, but blocks while waiting for data.
The second allows multiple co-routines to use the client at the same time at the
expense of a more verbose API.

The wrappers do not guarantee feature parity, for instance the ``flush_period``
The wrappers do not guarantee feature parity, for instance the `flush_period`
option is only available in the asyncio wrapper.







.. _reference: https://sans-io.readthedocs.io/
[reference]: https://sans-io.readthedocs.io/
@@ -1,22 +1,21 @@
How to introspect a PandA
===========================
# How to introspect a PandA

Using a combination of `commands <pandablocks.commands>` it is straightforward to query the PandA
to list all blocks, and all fields inside each block, that exist.
to list all blocks, and all fields inside each block, that exist.

Call the following script, with the address of the PandA as the first and only command line argument:

```{literalinclude} ../../examples/introspect_panda.py
```

.. literalinclude:: ../../../examples/introspect_panda.py

This script can be found in ``examples/introspect_panda.py``.
This script can be found in `examples/introspect_panda.py`.

By examining the `BlockInfo` structure returned from `GetBlockInfo` for each Block the number
and description may be acquired for every block.

By examining the `FieldInfo` structure (which is fully printed in this example) the ``type``,
``sub-type``, ``description`` and ``label`` may all be found for every field.
By examining the `FieldInfo` structure (which is fully printed in this example) the `type`,
`sub-type`, `description` and `label` may all be found for every field.

Lastly the complete list of every ``BITS`` field in the ``PCAP`` block are gathered and
printed. See the documentation in the `Field Types <https://pandablocks-server.readthedocs.io/en/latest/fields.html?#field-types>`_
Lastly, the complete list of every `BITS` field in the `PCAP` block is gathered and
printed. See the documentation in the [Field Types](https://pandablocks-server.readthedocs.io/en/latest/fields.html?#field-types)
section of the PandA Server documentation.
36 changes: 18 additions & 18 deletions docs/how-to/library-hdf.rst → docs/how-to/library-hdf.md
@@ -1,42 +1,43 @@
.. _library-hdf:
(library-hdf)=

How to use the library to capture HDF files
===========================================
# How to use the library to capture HDF files

The `commandline-hdf` introduced how to use the commandline to capture HDF files.
The `write_hdf_files` function that is called to do this can also be integrated
into custom Python applications. This guide shows how to do this.

Approach 1: Call the function directly
--------------------------------------
## Approach 1: Call the function directly

If you need a one-shot configure and run application, you can use the
function directly:

.. literalinclude:: ../../../examples/arm_and_hdf.py
```{literalinclude} ../../examples/arm_and_hdf.py
```

With the `AsyncioClient` as a `Context Manager <typecontextmanager>`, this code
sets up some fields of a PandA before taking a single acquisition. The code in
`write_hdf_files` is responsible for arming the PandA.

.. note::
:::{note}
There are no log messages emitted like in `commandline-hdf`. This is because
we have not configured the logging framework in this example. You can get
these messages by adding a call to `logging.basicConfig` like this:

There are no log messages emitted like in `commandline-hdf`. This is because
we have not configured the logging framework in this example. You can get
these messages by adding a call to `logging.basicConfig` like this::
```python
logging.basicConfig(level=logging.INFO)
```
:::

logging.basicConfig(level=logging.INFO)

Approach 2: Create the pipeline yourself
----------------------------------------
## Approach 2: Create the pipeline yourself

If you need more control over the pipeline, for instance to display progress,
you can create the pipeline yourself, and feed it data from the PandA. This
means you can make decisions about when to start and stop acquisitions based on
the `Data` objects that go past. For example, if we want to make a progress bar
we could:

.. literalinclude:: ../../../examples/hdf_queue_reporting.py
```{literalinclude} ../../examples/hdf_queue_reporting.py
```

This time, after setting up the PandA, we create the `AsyncioClient.data`
iterator ourselves. Each `Data` object we get is queued on the first `Pipeline`
@@ -46,9 +47,8 @@ update a progress bar, or return as acquisition is complete.
In a `finally <finally>` block we stop the pipeline, which will wait for all data
to flow through the pipeline and close the HDF file.

Performance
-----------
## Performance

The commandline client and both these approaches use the same core code, so will
give the same performance. The steps to consider in optimising performance are
outlined in `performance`
outlined in `performance`.
3 changes: 3 additions & 0 deletions docs/how-to/poll-changes.md
@@ -0,0 +1,3 @@
# How to efficiently poll for changes

Write something about using `*CHANGES` like Malcolm does.
4 changes: 0 additions & 4 deletions docs/how-to/poll-changes.rst

This file was deleted.
