diff --git a/docs/CHANGELOG.rst b/docs/CHANGELOG.rst
deleted file mode 100644
index 8bfb306de..000000000
--- a/docs/CHANGELOG.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-Change Log
-==========
-All notable changes to this project will be documented in this file.
-This project adheres to `Semantic Versioning `_.
-
-Unreleased_
------------
-
-- Nothing yet
-
-
-0.1 - 2020-07-09
-----------------
-
-- Initial release
-
-.. _Unreleased: https://github.com/PandABlocks/PandABlocks-client/compare/0.1...HEAD
diff --git a/docs/conf.py b/docs/conf.py
index a3b457b5c..5f555b79e 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -73,6 +73,7 @@
("py:class", "'id'"),
("py:class", "typing_extensions.Literal"),
("py:func", "int"),
+ ("py:class", "pandablocks.commands.T"),
]
# Both the class’ and the __init__ method’s docstring are concatenated and
diff --git a/docs/developer/explanations/decisions/0003-make-library-sans-io.rst b/docs/explanations/decisions/0003-make-library-sans-io.md
similarity index 62%
rename from docs/developer/explanations/decisions/0003-make-library-sans-io.rst
rename to docs/explanations/decisions/0003-make-library-sans-io.md
index a1a634ff3..ad9f2e5cb 100644
--- a/docs/developer/explanations/decisions/0003-make-library-sans-io.rst
+++ b/docs/explanations/decisions/0003-make-library-sans-io.md
@@ -1,24 +1,19 @@
-3. Sans-IO pandABlocks-client
-=============================
+# 3. Sans-IO PandABlocks-client
Date: 2021-08-02 (ADR created retroactively)
-Status
-------
+## Status
Accepted
-Context
--------
+## Context
Ensure PandABlocks-client works sans-io.
-Decision
---------
+## Decision
+We will ensure pandablocks works [sans-io](https://sans-io.readthedocs.io/).
-Consequences
-------------
+## Consequences
-We have the option to use an asyncio client or a blocking client.
\ No newline at end of file
+We have the option to use an asyncio client or a blocking client.
diff --git a/docs/explanations/performance.rst b/docs/explanations/performance.md
similarity index 77%
rename from docs/explanations/performance.rst
rename to docs/explanations/performance.md
index f2952d96c..4e7a22f28 100644
--- a/docs/explanations/performance.rst
+++ b/docs/explanations/performance.md
@@ -1,14 +1,13 @@
-.. _performance:
+(performance)=
-How fast can we write HDF files?
-================================
+# How fast can we write HDF files?
There are many factors that affect the speed we can write HDF files. This article
discusses how this library addresses them and what the maximum data rate of a PandA is.
-Factors to consider
--------------------
+## Factors to consider
+```{eval-rst}
.. list-table::
:widths: 10 50
@@ -32,50 +31,44 @@ Factors to consider
or panda-webcontrol will reduce throughput
* - Flush rate
- Flushing data to disk to often will slow write speed
+```
-Strategies to help
-------------------
+## Strategies to help
There are a number of strategies that help increase performance. These can be
combined to give the greatest benefit
-Average the data
-~~~~~~~~~~~~~~~~
+### Average the data
-Selecting the ``Mean`` capture mode will activate on-FPGA averaging of the
-captured value. ``Min`` and ``Max`` can also be captured at the same time.
-Capturing these rather than ``Value`` may allow you to lower the trigger
+Selecting the `Mean` capture mode will activate on-FPGA averaging of the
+captured value. `Min` and `Max` can also be captured at the same time.
+Capturing these rather than `Value` may allow you to lower the trigger
frequency while still providing enough information for data analysis
-Scale the data on the client
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### Scale the data on the client
-`AsyncioClient.data` and `BlockingClient.data` accept a ``scaled`` argument.
+`AsyncioClient.data` and `BlockingClient.data` accept a `scaled` argument.
Setting this to False will transfer the raw unscaled data, allowing for up to
50% more data to be sent depending on the datatype of the field. You can
use the `StartData.fields` information to scale the data on the client.
The `write_hdf_files` function uses this approach.
-Remove the panda-webcontrol package
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### Remove the panda-webcontrol package
The measures above should get you to about 50MBytes/s, but if more clients
connect to the web GUI then this will drop. To increase the data rate to
60MBytes/s and improve stability you may want to remove the panda-webcontrol
zpkg.
-Flush about 1Hz
-~~~~~~~~~~~~~~~
+### Flush about 1Hz
-`AsyncioClient.data` accepts a ``flush_period`` argument. If given, it will
+`AsyncioClient.data` accepts a `flush_period` argument. If given, it will
squash intermediate data frames together until this period expires, and only
then produce them. This means the numpy data blocks are larger and can be more
efficiently written to disk then flushed. The `write_hdf_files` function uses
this approach.
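+
+The `scaled` and `flush_period` arguments described above can be combined when
+consuming `AsyncioClient.data` yourself. A minimal sketch, assuming `StartData`,
+`FrameData` and `EndData` can all be imported from `pandablocks.responses`:
+
+```
+import asyncio
+
+from pandablocks.asyncio import AsyncioClient
+from pandablocks.responses import EndData, FrameData, StartData
+
+
+async def stream(host: str):
+    async with AsyncioClient(host) as client:
+        # Raw unscaled data, squashed into roughly 1 second chunks
+        async for data in client.data(scaled=False, flush_period=1.0):
+            if isinstance(data, StartData):
+                fields = data.fields  # per-field scaling info to apply client-side
+            elif isinstance(data, FrameData):
+                ...  # process (and scale) the frame here
+            elif isinstance(data, EndData):
+                break
+
+
+asyncio.run(stream("hostname"))
+```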
-
-Performance Achieved
---------------------
+## Performance Achieved
Tests were run with the following conditions:
@@ -100,15 +93,14 @@ When panda-webcontrol was not installed, the following results were achieved:
Increasing above these throughputs failed most scans with `DATA_OVERRUN`.
-Data overruns
--------------
+## Data overruns
If there is a `DATA_OVERRUN`, the server will stop sending data. The most recently
received `FrameData` from either `AsyncioClient.data` or `BlockingClient.data` may
+be corrupt. This is only the case if the `scaled` argument is set to False. The mechanism
+be corrupt. This is the case if the `scaled` argument is set to False. The mechanism
the server uses to send raw unscaled data is only able to detect the corrupt frame after
it has already been sent. Conversely, the mechanism used to send scaled data aborts prior
to sending a corrupt frame.
-The `write_hdf_files` function uses ``scaled=False``, so your HDF file may include some
+The `write_hdf_files` function uses `scaled=False`, so your HDF file may include some
corrupt data in the event of an overrun.
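+
+If you consume `FrameData` yourself with `scaled=False`, one way to guard against
+this is to hold back the most recent frame until you know a newer frame followed it,
+or the acquisition ended cleanly. A sketch only: it assumes `EndData.reason` and an
+`EndReason.DATA_OVERRUN` member exist in `pandablocks.responses`:
+
+```
+from pandablocks.responses import EndData, EndReason, FrameData
+
+
+async def frames_without_overrun(client):
+    """Yield only FrameData known not to be corrupted by an overrun."""
+    last_frame = None
+    async for data in client.data(scaled=False):
+        if isinstance(data, FrameData):
+            if last_frame is not None:
+                yield last_frame  # safe: a newer frame arrived after it
+            last_frame = data
+        elif isinstance(data, EndData):
+            if data.reason != EndReason.DATA_OVERRUN and last_frame is not None:
+                yield last_frame  # keep the final frame only on a clean end
+            break
+```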
diff --git a/docs/explanations/sans-io.rst b/docs/explanations/sans-io.md
similarity index 52%
rename from docs/explanations/sans-io.rst
rename to docs/explanations/sans-io.md
index e8f759686..07c2357f6 100644
--- a/docs/explanations/sans-io.rst
+++ b/docs/explanations/sans-io.md
@@ -1,9 +1,8 @@
-.. _sans-io:
+(sans-io)=
-Why write a Sans-IO library?
-============================
+# Why write a Sans-IO library?
-As the reference_ says: *Reusability*. The protocol can be coded in a separate
+As the [reference] says: *Reusability*. The protocol can be coded in a separate
class to the I/O allowing integration into a number of different concurrency
frameworks.
@@ -12,69 +11,61 @@ coded the protocol in either of them it would not be usable in the other. Much
better to put it in a separate class and feed it bytes off the wire. We call
this protocol encapsulation a Connection.
-Connections
------------
+## Connections
The PandA TCP server exposes a Control port and a Data port, so there are
-corresponding `ControlConnection` and `DataConnection` objects:
+corresponding [](ControlConnection) and [](DataConnection) objects:
-.. currentmodule:: pandablocks.connections
+The [](ControlConnection) class has the following methods:
-.. autoclass:: ControlConnection
- :noindex:
-
- The :meth:`~ControlConnection.send` method takes a `Command` subclass and
+- The [`send()`](ControlConnection.send) method takes a `Command` subclass and
returns the bytes that should be sent to the PandA. Whenever bytes are
- received from the socket they can be passed to
- :meth:`~ControlConnection.receive_bytes` which will return any subsequent
- bytes that should be send back. The :meth:`~ControlConnection.responses`
- method returns an iterator of ``(command, response)`` tuples that have now
+ received from the socket they can be passed to this method which will return any subsequent
+  bytes that should be sent back.
+- The [`responses()`](ControlConnection.responses) method returns an iterator of `(command, response)` tuples that have now
completed. The response type will depend on the command. For instance `Get`
returns `bytes` or a `list` of `bytes` of the field value, and `GetFieldInfo`
returns a `dict` mapping `str` field name to `FieldInfo`.
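+
+A minimal sketch of driving a [](ControlConnection) with a plain socket (this is
+roughly what the wrappers below do for you; port 8888 is assumed to be the PandA
+control port):
+
+```
+import socket
+
+from pandablocks.commands import Get
+from pandablocks.connections import ControlConnection
+
+conn = ControlConnection()
+with socket.create_connection(("hostname", 8888)) as sock:
+    sock.sendall(conn.send(Get("PCAP.ACTIVE")))
+    responses = []
+    while not responses:
+        received = sock.recv(4096)
+        if not received:
+            break  # connection closed by the server
+        to_send = conn.receive_bytes(received)
+        if to_send:
+            sock.sendall(to_send)
+        responses = list(conn.responses())
+    for command, response in responses:
+        print(command, "->", response)
+```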
-.. autoclass:: DataConnection
- :noindex:
+The [](DataConnection) class has the following methods:
- The :meth:`~DataConnection.connect` method takes any connection arguments
+- The [`connect()`](DataConnection.connect) method takes any connection arguments
and returns the bytes that should be sent to the PandA to make the initial
connection. Whenever bytes are received from the socket they can be passed
- to :meth:`~DataConnection.receive_bytes` which will return an iterator of
- `Data` objects. Intermediate `FrameData` can be squashed together by passing
+ to this method which will return an iterator of
+ `Data` objects.
+- Intermediate `FrameData` can be squashed together by passing
``flush_every_frame=False``, then explicitly calling
- :meth:`~DataConnection.flush` when they are required.
+ [`flush()`](DataConnection.flush) when they are required.
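+
+A similar sketch for [](DataConnection) (port 8889 is assumed to be the PandA data
+port, and `scaled` is assumed to be one of the connection arguments):
+
+```
+import socket
+
+from pandablocks.connections import DataConnection
+
+conn = DataConnection()
+with socket.create_connection(("hostname", 8889)) as sock:
+    sock.sendall(conn.connect(scaled=True))
+    while True:
+        received = sock.recv(4096)
+        if not received:
+            break
+        for data in conn.receive_bytes(received):
+            print(data)
+```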
-Wrappers
---------
+## Wrappers
Of course, these Connections are useless without connecting some I/O. To aid with
this, wrappers are included for use in `asyncio ` and blocking programs. They expose
slightly different APIs to make best use of the features of their respective concurrency frameworks.
-For example, to send multiple commands in fields with the `blocking` wrapper::
+For example, to send multiple commands with the `blocking` wrapper:
- with BlockingClient("hostname") as client:
- resp1, resp2 = client.send([cmd1, cmd2])
+```
+with BlockingClient("hostname") as client:
+ resp1, resp2 = client.send([cmd1, cmd2])
+```
-while with the `asyncio` wrapper we would::
+while with the `asyncio` wrapper we would:
- async with AsyncioClient("hostname") as client:
- resp1, resp2 = await asyncio.gather(
- client.send(cmd1),
- client.send(cmd2)
- )
+```
+async with AsyncioClient("hostname") as client:
+ resp1, resp2 = await asyncio.gather(
+ client.send(cmd1),
+ client.send(cmd2)
+ )
+```
The first has the advantage of simplicity, but blocks while waiting for data.
The second allows multiple co-routines to use the client at the same time at the
expense of a more verbose API.
-The wrappers do not guarantee feature parity, for instance the ``flush_period``
+The wrappers do not guarantee feature parity, for instance the `flush_period`
option is only available in the asyncio wrapper.
-
-
-
-
-
-
-.. _reference: https://sans-io.readthedocs.io/
\ No newline at end of file
+[reference]: https://sans-io.readthedocs.io/
diff --git a/docs/how-to/introspect-panda.rst b/docs/how-to/introspect-panda.md
similarity index 56%
rename from docs/how-to/introspect-panda.rst
rename to docs/how-to/introspect-panda.md
index 2bd129147..64ca6cd58 100644
--- a/docs/how-to/introspect-panda.rst
+++ b/docs/how-to/introspect-panda.md
@@ -1,22 +1,21 @@
-How to introspect a PandA
-===========================
+# How to introspect a PandA
Using a combination of `commands ` it is straightforward to query the PandA
-to list all blocks, and all fields inside each block, that exist.
+to list all blocks, and all fields inside each block, that exist.
Call the following script, with the address of the PandA as the first and only command line argument:
+```{literalinclude} ../../examples/introspect_panda.py
+```
-.. literalinclude:: ../../../examples/introspect_panda.py
-
-This script can be found in ``examples/introspect_panda.py``.
+This script can be found in `examples/introspect_panda.py`.
By examining the `BlockInfo` structure returned from `GetBlockInfo` for each Block the number
and description may be acquired for every block.
-By examining the `FieldInfo` structure (which is fully printed in this example) the ``type``,
-``sub-type``, ``description`` and ``label`` may all be found for every field.
+By examining the `FieldInfo` structure (which is fully printed in this example) the `type`,
+`sub-type`, `description` and `label` may all be found for every field.
-Lastly the complete list of every ``BITS`` field in the ``PCAP`` block are gathered and
-printed. See the documentation in the `Field Types `_
+Lastly, the complete list of every `BITS` field in the `PCAP` block is gathered and
+printed. See the documentation in the [Field Types](https://pandablocks-server.readthedocs.io/en/latest/fields.html#field-types)
section of the PandA Server documentation.
diff --git a/docs/how-to/library-hdf.rst b/docs/how-to/library-hdf.md
similarity index 67%
rename from docs/how-to/library-hdf.rst
rename to docs/how-to/library-hdf.md
index 54ebd97a0..b24befe87 100644
--- a/docs/how-to/library-hdf.rst
+++ b/docs/how-to/library-hdf.md
@@ -1,34 +1,34 @@
-.. _library-hdf:
+(library-hdf)=
-How to use the library to capture HDF files
-===========================================
+# How to use the library to capture HDF files
The `commandline-hdf` introduced how to use the commandline to capture HDF files.
The `write_hdf_files` function that is called to do this can also be integrated
into custom Python applications. This guide shows how to do this.
-Approach 1: Call the function directly
---------------------------------------
+## Approach 1: Call the function directly
If you need a one-shot configure and run application, you can use the
function directly:
-.. literalinclude:: ../../../examples/arm_and_hdf.py
+```{literalinclude} ../../examples/arm_and_hdf.py
+```
With the `AsyncioClient` as a `Context Manager `, this code
sets up some fields of a PandA before taking a single acquisition. The code in
`write_hdf_files` is responsible for arming the PandA.
-.. note::
+:::{note}
+There are no log messages emitted like in `commandline-hdf`. This is because
+we have not configured the logging framework in this example. You can get
+these messages by adding a call to `logging.basicConfig` like this:
- There are no log messages emitted like in `commandline-hdf`. This is because
- we have not configured the logging framework in this example. You can get
- these messages by adding a call to `logging.basicConfig` like this::
+```
+logging.basicConfig(level=logging.INFO)
+```
+:::
- logging.basicConfig(level=logging.INFO)
-
-Approach 2: Create the pipeline yourself
-----------------------------------------
+## Approach 2: Create the pipeline yourself
If you need more control over the pipeline, for instance to display progress,
you can create the pipeline yourself, and feed it data from the PandA. This
@@ -36,7 +36,8 @@ means you can make decisions about when to start and stop acquisitions based on
the `Data` objects that go past. For example, if we want to make a progress bar
we could:
-.. literalinclude:: ../../../examples/hdf_queue_reporting.py
+```{literalinclude} ../../examples/hdf_queue_reporting.py
+```
This time, after setting up the PandA, we create the `AsyncioClient.data`
iterator ourselves. Each `Data` object we get is queued on the first `Pipeline`
@@ -46,9 +47,8 @@ update a progress bar, or return as acquisition is complete.
In a `finally ` block we stop the pipeline, which will wait for all data
to flow through the pipeline and close the HDF file.
-Performance
------------
+## Performance
The commandline client and both these approaches use the same core code, so will
give the same performance. The steps to consider in optimising performance are
-outlined in `performance`
\ No newline at end of file
+outlined in `performance`.
diff --git a/docs/how-to/poll-changes.md b/docs/how-to/poll-changes.md
new file mode 100644
index 000000000..bfab9b0e7
--- /dev/null
+++ b/docs/how-to/poll-changes.md
@@ -0,0 +1,3 @@
+# How to efficiently poll for changes
+
+Write something about using `*CHANGES` like Malcolm does.
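+
+In the meantime, a minimal sketch of one possible approach, assuming the server's
+`*CHANGES?` command can be issued through the generic `Get` command (which returns
+a `list` of `bytes` for multi-line responses):
+
+```
+import time
+
+from pandablocks.blocking import BlockingClient
+from pandablocks.commands import Get
+
+with BlockingClient("hostname") as client:
+    while True:
+        # Each *CHANGES? reports only the fields that changed since the last
+        # poll on this connection
+        for line in client.send(Get("*CHANGES")):
+            print(line.decode())
+        time.sleep(1)
+```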
diff --git a/docs/how-to/poll-changes.rst b/docs/how-to/poll-changes.rst
deleted file mode 100644
index 6be376256..000000000
--- a/docs/how-to/poll-changes.rst
+++ /dev/null
@@ -1,4 +0,0 @@
-How to efficiently poll for changes
-===================================
-
-Write something about using ``*CHANGES`` like Malcolm does.
diff --git a/docs/reference/api.md b/docs/reference/api.md
deleted file mode 100644
index 687260d8b..000000000
--- a/docs/reference/api.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# API
-
-```{eval-rst}
-.. automodule:: pandablocks
-
- ``pandablocks``
- -----------------------------------
-```
-
-This is the internal API reference for pandablocks
-
-```{eval-rst}
-.. data:: pandablocks.__version__
- :type: str
-
- Version number as calculated by https://github.com/pypa/setuptools_scm
-```
diff --git a/docs/reference/api.rst b/docs/reference/api.rst
new file mode 100644
index 000000000..9b3957400
--- /dev/null
+++ b/docs/reference/api.rst
@@ -0,0 +1,92 @@
+.. _API:
+
+API
+===
+
+The top level pandablocks module contains a number of packages that can be used
+from code:
+
+- `pandablocks.commands`: The control commands that can be sent to a PandA
+- `pandablocks.responses`: The control and data responses that will be received
+- `pandablocks.connections`: Control and data connections that implement the parsing logic
+- `pandablocks.asyncio`: An asyncio client that uses the control and data connections
+- `pandablocks.blocking`: A blocking client that uses the control and data connections
+- `pandablocks.hdf`: Some helpers for efficiently writing data responses to disk
+- `pandablocks.utils`: General utility methods for use with pandablocks
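+
+For example, a minimal blocking query of a single field value (a sketch only; the
+hostname and field are illustrative, see the tutorials for fuller examples):
+
+.. code-block:: python
+
+    from pandablocks.blocking import BlockingClient
+    from pandablocks.commands import Get
+
+    with BlockingClient("hostname") as client:
+        active = client.send(Get("PCAP.ACTIVE"))
+        print(active)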
+
+
+.. automodule:: pandablocks.commands
+ :members:
+
+ Commands
+ --------
+
+ There is a `Command` subclass for every sort of command that can be sent to
+ the `ControlConnection` of a PandA. Many common actions can be accomplished
+ with a simple `Get` or `Put`, but some convenience commands like
+ `GetBlockInfo`, `GetFieldInfo`, etc. are provided that parse output into
+ specific classes.
+
+
+.. automodule:: pandablocks.responses
+ :members:
+
+ Responses
+ ---------
+
+ Classes used in responses from both the `ControlConnection` and
+ `DataConnection` of a PandA live in this package.
+
+.. automodule:: pandablocks.connections
+ :members:
+
+ Connections
+ -----------
+
+    `Sans-IO <sans-io>` connections for both the Control and Data ports
+    of the PandA TCP server.
+
+.. automodule:: pandablocks.asyncio
+ :members:
+
+ Asyncio Client
+ --------------
+
+ This is an `asyncio` wrapper to the `ControlConnection` and `DataConnection`
+ objects, allowing async calls to ``send(command)`` and iterate over
+ ``data()``.
+
+.. automodule:: pandablocks.blocking
+ :members:
+
+ Blocking Client
+ ---------------
+
+ This is a blocking wrapper to the `ControlConnection` and `DataConnection`
+ objects, allowing blocking calls to ``send(commands)`` and iterate over
+ ``data()``.
+
+.. automodule:: pandablocks.hdf
+ :members:
+
+ HDF Writing
+ -----------
+
+    This package contains components needed to write PCAP data to an HDF file
+    in the most efficient way. The one-shot `write_hdf_files` is exposed in the
+    commandline interface. It assembles a short `Pipeline` of:
+
+ `AsyncioClient` -> `FrameProcessor` -> `HDFWriter`
+
+    The FrameProcessor and HDFWriter run in their own threads as most of the
+    heavy lifting is done by numpy and h5py, so this gives multi-CPU benefits
+    without hitting the limit of the GIL.
+
+ .. seealso:: `library-hdf`, `performance`
+
+.. automodule:: pandablocks.utils
+
+ Utilities
+ ---------
+
+ This package contains general methods for working with pandablocks.
\ No newline at end of file
diff --git a/docs/reference/appendix.rst b/docs/reference/appendix.rst
deleted file mode 100644
index 9c10c37eb..000000000
--- a/docs/reference/appendix.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-:orphan:
-
-Appendix
-========
-
-These definitions are needed to quell sphinx warnings.
-
-.. py:class:: T
- :canonical: pandablocks.commands.T
-
- Parameter for Generic class `Command`, indicating its response type
-
-.. py:class:: socket.socket
-
- The docs for this are `here `
\ No newline at end of file
diff --git a/docs/reference/changelog.rst b/docs/reference/changelog.rst
deleted file mode 100644
index 09929fe43..000000000
--- a/docs/reference/changelog.rst
+++ /dev/null
@@ -1 +0,0 @@
-.. include:: ../../CHANGELOG.rst
diff --git a/docs/reference/contributing.rst b/docs/reference/contributing.rst
deleted file mode 100644
index 65b992f08..000000000
--- a/docs/reference/contributing.rst
+++ /dev/null
@@ -1 +0,0 @@
-.. include:: ../../../.github/CONTRIBUTING.rst
diff --git a/docs/tutorials/commandline-hdf.md b/docs/tutorials/commandline-hdf.md
new file mode 100644
index 000000000..7c835620c
--- /dev/null
+++ b/docs/tutorials/commandline-hdf.md
@@ -0,0 +1,144 @@
+(commandline-hdf)=
+
+# Commandline Capture of HDF Files Tutorial
+
+This tutorial shows how to use the commandline tool to save an HDF file from the PandA
+for each PCAP acquisition. It assumes that you have followed the `tutorial-load-save` tutorial
+to setup the PandA.
+
+## Capturing some data
+
+In one terminal launch the HDF writer client, and tell it to capture 3 frames in a
+location of your choosing:
+
+```
+pandablocks hdf <hostname> --num=3 /tmp/panda-capture-%d.h5
+```
+
+Where `<hostname>` is the hostname or ip address of your PandA. This will connect
+to the data port of the PandA and start listening for up to 3 acquisitions. It will
+then write these into files:
+
+```
+/tmp/panda-capture-1.h5
+/tmp/panda-capture-2.h5
+/tmp/panda-capture-3.h5
+```
+
+In a second terminal you can launch the acquisition:
+
+```
+$ pandablocks control <hostname>
+< *PCAP.ARM=
+OK
+```
+
+This should write 1000 frames at 500Hz, printing in the first terminal window:
+
+```
+INFO:Opened '/tmp/panda-capture-1.h5' with 60 byte samples stored in 11 datasets
+INFO:Closed '/tmp/panda-capture-1.h5' after writing 1000 samples. End reason is 'Ok'
+```
+
+You can then do `PCAP.ARM=` twice more to make the other files.
+
+## Examining the data
+
+You can use your favourite HDF reader to examine the data. It is written in `swmr`
+mode so that you can read partial acquisitions before they are complete.
+
+:::{note}
+Reading SWMR HDF5 files while they are being written to requires the use of a
+Posix compliant filesystem like a local disk or GPFS native client. NFS
+mounts are *not* Posix compliant.
+:::
+
+In the repository `examples/plot_counter_hdf.py` is an example of reading the
+file, listing the datasets, and plotting the counters:
+
+```{literalinclude} ../../examples/plot_counter_hdf.py
+```
+
+Running it on `/tmp/panda-capture-1.h5` will show the three counter values:
+
+```{eval-rst}
+.. plot::
+
+ for i in range(1, 4):
+ plt.plot(np.arange(1, 1001) * i, label=f"Counter {i}")
+ plt.legend()
+ plt.show()
+```
+
+You should see that they are all the same size:
+
+```
+$ ls -s --si /tmp/panda-capture-*.h5
+74k /tmp/panda-capture-1.h5
+74k /tmp/panda-capture-2.h5
+74k /tmp/panda-capture-3.h5
+```
+
+If you have h5diff you can check the contents are the same:
+
+```
+$ h5diff /tmp/panda-capture-1.h5 /tmp/panda-capture-2.h5
+$ h5diff /tmp/panda-capture-1.h5 /tmp/panda-capture-3.h5
+```
+
+## Collecting more data faster
+
+The test data is produced by a SEQ Block, configured to produce a high level
+for 1 prescaled tick, then a low level for 1 prescaled tick. The default
+setting is to produce 1000 repeats of these, with a prescale of 1ms and hence
+a period of 2ms. Each sample is 11 fields, totalling 60 bytes, which means
+that it will produce data at a modest 30kBytes/s for a total of 2s.
+We can increase this to a more taxing 30MBytes/s by reducing the
+prescaler to 1us. If we also increase the repeats to 10 million then we will
+sustain this data rate for 20s and write 600MByte files each time:
+
+```
+$ pandablocks control <hostname>
+< SEQ1.REPEATS?
+OK =1000 # It was doing 1k samples, change to 10M
+< SEQ1.REPEATS=10000000
+OK
+< SEQ1.PRESCALE?
+OK =1000
+< SEQ1.PRESCALE.UNITS?
+OK =us # It was doing 1ms ticks, change to 1us
+< SEQ1.PRESCALE=1
+OK
+```
+
+Let's write a single file this time, telling the command to also arm the PandA:
+
+```
+pandablocks hdf <hostname> --arm /tmp/biggerfile-%d.h5
+```
+
+Twenty seconds later we will get a file:
+
+```
+$ ls -s --si /tmp/biggerfile-*.h5
+602M /tmp/biggerfile-1.h5
+```
+
+Which looks very similar when plotted with the code above, just a bit bigger:
+
+```{eval-rst}
+.. plot::
+
+ for i in range(1, 4):
+ plt.plot(np.arange(1, 10000001) * i, label=f"Counter {i}")
+ plt.legend()
+ plt.show()
+```
+
+## Conclusion
+
+This tutorial has shown how to capture data to an HDF file using the commandline
+client. It is possible to use this commandline interface in production, but it is
+more likely to be integrated in an application that controls the acquisition as well
+as writing the data. This is covered in `library-hdf`. You can explore strategies
+for getting the maximum performance out of a PandA in `performance`.
diff --git a/docs/tutorials/commandline-hdf.rst b/docs/tutorials/commandline-hdf.rst
deleted file mode 100644
index 28b5f0cec..000000000
--- a/docs/tutorials/commandline-hdf.rst
+++ /dev/null
@@ -1,126 +0,0 @@
-.. _commandline-hdf:
-
-Commandline Capture of HDF Files Tutorial
-=========================================
-
-This tutorial shows how to use the commandline tool to save an HDF file from the PandA
-for each PCAP acquisition. It assumes that you have followed the `tutorial-load-save` tutorial
-to setup the PandA.
-
-Capturing some data
--------------------
-
-In one terminal launch the HDF writer client, and tell it to capture 3 frames in a
-location of your choosing::
-
- pandablocks hdf --num=3 /tmp/panda-capture-%d.h5
-
-Where ```` is the hostname or ip address of your PandA. This will connect
-to the data port of the PandA and start listening for up to 3 acquisitions. It will
-then write these into files::
-
- /tmp/panda-capture-1.h5
- /tmp/panda-capture-2.h5
- /tmp/panda-capture-3.h5
-
-In a second terminal you can launch the acquisition::
-
- $ pandablocks control
- < *PCAP.ARM=
- OK
-
-This should write 1000 frames at 500Hz, printing in the first terminal window::
-
- INFO:Opened '/tmp/panda-capture-1.h5' with 60 byte samples stored in 11 datasets
- INFO:Closed '/tmp/panda-capture-1.h5' after writing 1000 samples. End reason is 'Ok'
-
-You can then do ``PCAP.ARM=`` twice more to make the other files.
-
-Examining the data
-------------------
-
-You can use your favourite HDF reader to examine the data. It is written in `swmr`
-mode so that you can read partial acquisitions before they are complete.
-
-.. note::
-
- Reading SWMR HDF5 files while they are being written to require the use of a
- Posix compliant filesystem like a local disk or GPFS native client. NFS
- mounts are *not* Posix compliant.
-
-In the repository ``examples/plot_counter_hdf.py`` is an example of reading the
-file, listing the datasets, and plotting the counters:
-
-.. literalinclude:: ../../../examples/plot_counter_hdf.py
-
-Running it on ``/tmp/panda-capture-1.h5`` will show the three counter values:
-
-.. plot::
-
- for i in range(1, 4):
- plt.plot(np.arange(1, 1001) * i, label=f"Counter {i}")
- plt.legend()
- plt.show()
-
-You should see that they are all the same size::
-
- $ ls -s --si /tmp/panda-capture-*.h5
- 74k /tmp/panda-capture-1.h5
- 74k /tmp/panda-capture-2.h5
- 74k /tmp/panda-capture-3.h5
-
-If you have h5diff you can check the contents are the same::
-
- $ h5diff /tmp/panda-capture-1.h5 /tmp/panda-capture-2.h5
- $ h5diff /tmp/panda-capture-1.h5 /tmp/panda-capture-3.h5
-
-Collecting more data faster
----------------------------
-
-The test data is produced by a SEQ Block, configured to produce a high level
-for 1 prescaled tick, then a low level for 1 prescaled tick. The default
-setting is to produce 1000 repeats of these, with a prescale of 1ms and hence
-a period of 2ms. Each sample is 11 fields, totalling 60 bytes, which means
-that it will produce data at a modest 30kBytes/s for a total of 2s.
-We can increase this to a more taxing 30MBytes/s by reducing the
-prescaler to 1us. If we increase the prescaler to 10 million then we will
-sustain this data rate for 20s and write 600MByte files each time::
-
- $ pandablocks control
- < SEQ1.REPEATS?
- OK =1000 # It was doing 1k samples, change to 10M
- < SEQ1.REPEATS=10000000
- OK
- < SEQ1.PRESCALE?
- OK =1000
- < SEQ1.PRESCALE.UNITS?
- OK =us # It was doing 1ms ticks, change to 1us
- < SEQ1.PRESCALE=1
- OK
-
-Lets write a single file this time, telling the command to also arm the PandA::
-
- pandablocks hdf --arm /tmp/biggerfile-%d.h5
-
-Twenty seconds later we will get a file::
-
- $ ls -s --si /tmp/biggerfile-*.h5
- 602M /tmp/biggerfile-1.h5
-
-Which looks very similar when plotted with the code above, just a bit bigger:
-
-.. plot::
-
- for i in range(1, 4):
- plt.plot(np.arange(1, 10000001) * i, label=f"Counter {i}")
- plt.legend()
- plt.show()
-
-Conclusion
-----------
-
-This tutorial has shown how to capture data to an HDF file using the commandline
-client. It is possible to use this commandline interface in production, but it is
-more likely to be integrated in an application that controls the acquisition as well
-as writing the data. This is covered in `library-hdf`. You can explore strategies
-on getting the maximum performance out of a PandA in `performance`.
diff --git a/docs/tutorials/control.md b/docs/tutorials/control.md
new file mode 100644
index 000000000..bf8cbf592
--- /dev/null
+++ b/docs/tutorials/control.md
@@ -0,0 +1,68 @@
+# Interactive Control Tutorial
+
+This tutorial shows how to use the commandline tool to open an interactive terminal
+to control a PandA.
+
+## Connect
+
+Open a terminal, and type:
+
+```
+pandablocks control <hostname>
+```
+
+Where `<hostname>` is the hostname or ip address of your PandA.
+
+## Type Commands
+
+You should be presented with a prompt where you can type PandABlocks-server
+[commands]. If you are on Linux you can tab complete commands with the TAB key:
+
+```
+< PCAP. # Hit TAB key...
+PCAP.ACTIVE PCAP.BITS1 PCAP.BITS3 PCAP.GATE PCAP.SAMPLES PCAP.TRIG PCAP.TS_END PCAP.TS_TRIG
+PCAP.BITS0 PCAP.BITS2 PCAP.ENABLE PCAP.HEALTH PCAP.SHIFT_SUM PCAP.TRIG_EDGE PCAP.TS_START
+```
+
+Pressing return will send the command to the server and display the response.
+
+## Control an acquisition
+
+You can check if an acquisition is currently in progress by getting the value of the
+`PCAP.ACTIVE` field:
+
+```
+< PCAP.ACTIVE?
+OK =0
+```
+
+You can start and stop acquisitions with special "star" commands. To start an acquisition:
+
+```
+< *PCAP.ARM=
+OK
+```
+
+You can now use the up arrow to recall the previous command, then press return:
+
+```
+< PCAP.ACTIVE?
+OK =1
+```
+
+This means that acquisition is in progress. You can stop it by disarming:
+
+```
+< *PCAP.DISARM=
+OK
+< PCAP.ACTIVE?
+OK =0
+```
+
+## Conclusion
+
+This tutorial has shown how to start and stop an acquisition from the commandline
+client. It can also be used to send any other control [commands] to query and set
+variables on the PandA.
+
+[commands]: https://pandablocks-server.readthedocs.io/en/latest/commands.html
diff --git a/docs/tutorials/control.rst b/docs/tutorials/control.rst
deleted file mode 100644
index 339d1cc91..000000000
--- a/docs/tutorials/control.rst
+++ /dev/null
@@ -1,61 +0,0 @@
-Interactive Control Tutorial
-============================
-
-This tutorial shows how to use the commandline tool to open an interactive terminal
-to control a PandA.
-
-Connect
--------
-
-Open a terminal, and type::
-
- pandablocks control
-
-Where ```` is the hostname or ip address of your PandA.
-
-Type Commands
--------------
-
-You should be presented with a prompt where you can type PandABlocks-server
-commands_. If you are on Linux you can tab complete commands with the TAB key::
-
- < PCAP. # Hit TAB key...
- PCAP.ACTIVE PCAP.BITS1 PCAP.BITS3 PCAP.GATE PCAP.SAMPLES PCAP.TRIG PCAP.TS_END PCAP.TS_TRIG
- PCAP.BITS0 PCAP.BITS2 PCAP.ENABLE PCAP.HEALTH PCAP.SHIFT_SUM PCAP.TRIG_EDGE PCAP.TS_START
-
-Pressing return will send the command to the server and display the response.
-
-Control an acquisition
-----------------------
-
-You can check if an acquisition is currently in progress by getting the value of the
-``PCAP.ACTIVE`` field::
-
- < PCAP.ACTIVE?
- OK =0
-
-You can start and stop acquisitions with special "star" commands. To start an acquisition::
-
- < *PCAP.ARM=
- OK
-
-You can now use the up arrow to recall the previous command, then press return::
-
- < PCAP.ACTIVE?
- OK =1
-
-This means that acquisition is in progress. You can stop it by disarming::
-
- < *PCAP.DISARM=
- OK
- < PCAP.ACTIVE?
- OK =0
-
-Conclusion
-----------
-
-This tutorial has shown how to start and stop an acquisition from the commandline
-client. It can also be used to send any other control commands_ to query and set
-variables on the PandA.
-
-.. _commands: https://pandablocks-server.readthedocs.io/en/latest/commands.html
diff --git a/docs/tutorials/load-save.rst b/docs/tutorials/load-save.md
similarity index 51%
rename from docs/tutorials/load-save.rst
rename to docs/tutorials/load-save.md
index 5134c40f6..1147cc068 100644
--- a/docs/tutorials/load-save.rst
+++ b/docs/tutorials/load-save.md
@@ -1,19 +1,19 @@
-.. _tutorial-load-save:
+(tutorial-load-save)=
-Commandline Load/Save Tutorial
-==============================
+# Commandline Load/Save Tutorial
This tutorial shows how to use the commandline tool to save the state of all the
Blocks and Fields in a PandA, and load a new state from file. It assumes that
you know the basic concepts of a PandA as outlined in the PandABlocks-FPGA
-blinking LEDs tutorial_.
+blinking LEDs [tutorial].
-Save
-----
+## Save
-You can save the current state using the save command as follows::
+You can save the current state using the save command as follows:
- $ pandablocks save
+```
+$ pandablocks save <hostname> <save file>
+```
The save file is a text file containing the sequence of pandablocks control
commands that will set up the PandA to match its state at the time of the save.
@@ -22,35 +22,40 @@ fields.
e.g. the first few lines of the tutorial save file look like this:
-.. literalinclude:: ../../../src/pandablocks/saves/tutorial.sav
- :lines: 1-12
+```{literalinclude} ../../src/pandablocks/saves/tutorial.sav
+:lines: 1-12
+```
-Load
-----
+## Load
-To restore a PandA to a previously saved state use the load command as follows::
+To restore a PandA to a previously saved state use the load command as follows:
- $ pandablocks load
+```
+$ pandablocks load <hostname> <save file>
+```
-This is equivalent to typing the sequence of commands in into the
+This is equivalent to typing the sequence of commands in `<save file>` into the
pandablocks control command line.
-To load the preconfigured tutorial state::
+To load the preconfigured tutorial state:
- $ pandablocks load --tutorial
+```
+$ pandablocks load --tutorial
+```
The tutorial sets up a Seqencer block driving 3 Counter blocks and a Position
Capture block. This configuration is the starting point for the next tutorial:
-:ref:`commandline-hdf`
+{ref}`commandline-hdf`
-.. note::
-
- The Web UI will not change the Blocks visible on the screen when you use
- ``pandablocks load``. If you want all the connected Blocks to appear in the
- UI then restart the services on the PandA (Admin > System > Reboot/Restart)
+:::{note}
+The Web UI will not change the Blocks visible on the screen when you use
+`pandablocks load`. If you want all the connected Blocks to appear in the
+UI then restart the services on the PandA (Admin > System > Reboot/Restart)
+:::
The tutorial blocks are wired up as shown in the following Web UI layout.
-.. image:: tutorial_layout.png
+```{image} tutorial_layout.png
+```
-.. _tutorial: https://pandablocks-fpga.readthedocs.io/en/latest/tutorials/tutorial1_blinking_leds.html
+[tutorial]: https://pandablocks-fpga.readthedocs.io/en/latest/tutorials/tutorial1_blinking_leds.html
diff --git a/src/pandablocks/connections.py b/src/pandablocks/connections.py
index 50e210d71..70206c961 100644
--- a/src/pandablocks/connections.py
+++ b/src/pandablocks/connections.py
@@ -102,8 +102,8 @@ def __iter__(self):
def __next__(self) -> bytes:
try:
return self.read_line()
- except NeedMoreData:
- raise StopIteration()
+ except NeedMoreData as err:
+ raise StopIteration() from err
@dataclass
diff --git a/tests/test_asyncio.py b/tests/test_asyncio.py
index 9e3902a8d..fd1881eda 100644
--- a/tests/test_asyncio.py
+++ b/tests/test_asyncio.py
@@ -54,7 +54,7 @@ async def test_asyncio_data_timeout(dummy_server_async, fast_dump):
dummy_server_async.data = fast_dump
async with AsyncioClient("localhost") as client:
with pytest.raises(asyncio.TimeoutError, match="No data received for 0.1s"):
- async for data in client.data(frame_timeout=0.1):
+ async for _ in client.data(frame_timeout=0.1):
"This goes forever, when it runs out of data we will get our timeout"
diff --git a/tests/test_cli.py b/tests/test_cli.py
index 38f333764..6ce05e585 100644
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -72,8 +72,8 @@ def __call__(self, prompt):
assert prompt == cli.PROMPT
try:
return self._commands.popleft()
- except IndexError:
- raise EOFError()
+ except IndexError as err:
+ raise EOFError() from err
def test_interactive_simple(dummy_server_in_thread, capsys):