diff --git a/.copier-answers.yml b/.copier-answers.yml index f9bf5130..fc993721 100644 --- a/.copier-answers.yml +++ b/.copier-answers.yml @@ -5,11 +5,11 @@ author_email: giles.knap@diamond.ac.uk author_name: Giles Knap component_owner: group:default/sscc description: Documentation for the epics-containers framework -distribution_name: epic-containers +distribution_name: epics-containers docker: false docs_type: sphinx git_platform: github.com github_org: epics-containers package_name: epics-containers -repo_name: epic-containers.github.io +repo_name: epics-containers.github.io type_checker: mypy diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index b735c87e..92bb7e0c 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,27 +1,23 @@ -# Contribute to the project +# Contributing to the project Contributions and issues are most welcome! All issues and pull requests are -handled through [GitHub](https://github.com/epics-containers/epic-containers.github.io/issues). Also, please check for any existing issues before +handled through [GitHub]. Also, please check for any existing issues before filing a new one. If you have a great idea but it involves big changes, please file a ticket before making a pull request! We want to make sure you don't spend your time coding something that might not fit the scope of the project. ## Issue or Discussion? -Github also offers [discussions](https://github.com/epics-containers/epic-containers.github.io/discussions) as a place to ask questions and share ideas. If +Github also offers [discussions] as a place to ask questions and share ideas. If your issue is open ended and it is not obvious when it can be "closed", please raise it as a discussion instead. -## Code Coverage +## Developer guide -While 100% code coverage does not make a library bug-free, it significantly -reduces the number of easily caught bugs! Please make sure coverage remains the -same or is improved by a pull request! 
+The [Developer Guide] contains information on setting up a development +environment, building docs and what standards the documentation +should follow. -## Developer Information - -It is recommended that developers use a [vscode devcontainer](https://code.visualstudio.com/docs/devcontainers/containers). This repository contains configuration to set up a containerized development environment that suits its own needs. - -This project was created using the [Diamond Light Source Copier Template](https://github.com/DiamondLightSource/python-copier-template) for Python projects. - -For more information on common tasks like setting up a developer environment, running the tests, and setting a pre-commit hook, see the template's [How-to guides](https://diamondlightsource.github.io/python-copier-template/1.3.0/how-to.html). +[developer guide]: https://epics-containers.github.io/main/developer/how-to/contribute.html +[discussions]: https://github.com/epics-containers/epics-containers.github.io/discussions +[github]: https://github.com/epics-containers/epics-containers.github.io/issues diff --git a/.github/CONTRIBUTING.rst b/.github/CONTRIBUTING.rst deleted file mode 100644 index 151b2648..00000000 --- a/.github/CONTRIBUTING.rst +++ /dev/null @@ -1,29 +0,0 @@ -Contributing to the project -=========================== - -Contributions and issues are most welcome! All issues and pull requests are -handled through GitHub_. Also, please check for any existing issues before -filing a new one. If you have a great idea but it involves big changes, please -file a ticket before making a pull request! We want to make sure you don't spend -your time coding something that might not fit the scope of the project. - -.. _GitHub: https://github.com/epics-containers/epics-containers.github.io/issues - -Issue or Discussion? --------------------- - -Github also offers discussions_ as a place to ask questions and share ideas. 
If
-your issue is open ended and it is not obvious when it can be "closed", please
-raise it as a discussion instead.
-
-.. _discussions: https://github.com/epics-containers/epics-containers.github.io/discussions
-
-
-Developer guide
----------------
-
-The `Developer Guide`_ contains information on setting up a development
-environment, building docs and what standards the documentation
-should follow.
-
-.. _Developer Guide: https://epics-containers.github.io/main/developer/how-to/contribute.html
diff --git a/.gitignore b/.gitignore
index 2593ec75..e5171d44 100644
--- a/.gitignore
+++ b/.gitignore
@@ -68,3 +68,6 @@ lockfiles/
 
 # ruff cache
 .ruff_cache/
+
+# workspace files
+**/*.code-workspace
\ No newline at end of file
diff --git a/README.md b/README.md
index 1e5f5df3..475501a9 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-[![CI](https://github.com/epics-containers/epic-containers.github.io/actions/workflows/ci.yml/badge.svg)](https://github.com/epics-containers/epic-containers-github-io/actions/workflows/ci.yml)
+[![CI](https://github.com/epics-containers/epics-containers.github.io/actions/workflows/ci.yml/badge.svg)](https://github.com/epics-containers/epics-containers.github.io/actions/workflows/ci.yml)
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
@@ -14,15 +14,13 @@ and the [Getting Started Guide](https://epics-containers.github.io/main/user/tut
 Useful Links
 ============
-Please contribute with comments and suggestions in the wiki or issues pages:
-
 | Item | Link
 | ------------- | ---------------------
-| Documentation | https://epics-containers.github.io
-| Wiki | https://github.com/epics-containers/epics-containers.github.io/wiki
-| Issues | https://github.com/epics-containers/epics-containers.github.io/issues
-| Docs Source | https://github.com/epics-containers/epics-containers.github.io
-| Organization | https://github.com/epics-containers
+| Documentation | <https://epics-containers.github.io>
+| Wiki | <https://github.com/epics-containers/epics-containers.github.io/wiki>
+| Issues | <https://github.com/epics-containers/epics-containers.github.io/issues>
+| Docs Source | <https://github.com/epics-containers/epics-containers.github.io>
+| Organization | <https://github.com/epics-containers>
diff --git a/catalog-info.yaml b/catalog-info.yaml
index fa3d0c87..6ed15e1f 100644
--- a/catalog-info.yaml
+++ b/catalog-info.yaml
@@ -1,10 +1,10 @@
 apiVersion: backstage.io/v1alpha1
 kind: Component
 metadata:
-  name: epic-containers
-  title: epic-containers.github.io
+  name: epics-containers
+  title: epics-containers.github.io
   description: Documentation for the epics-containers framework
 spec:
   type: documentation
   lifecycle: experimental
-  owner: group:default/sscc
\ No newline at end of file
+  owner: group:default/sscc
diff --git a/docs/conf.py b/docs/conf.py
index 5d16914e..c3e895cd 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -14,7 +14,7 @@
 version = "1.0"
 
 # General information about the project.
-project = "epic-containers.github.io"
+project = "epics-containers"
 
 extensions = [
     # Use this for generating API docs
@@ -106,7 +106,7 @@
 # a list of builtin themes.
 #
 html_theme = "pydata_sphinx_theme"
-github_repo = "epic-containers.github.io"
+github_repo = "epics-containers.github.io"
 github_user = "epics-containers"
 switcher_json = f"https://{github_user}.github.io/{github_repo}/switcher.json"
 switcher_exists = requests.get(switcher_json).ok
diff --git a/docs/explanations/decisions/0003-use-substitution-files.md b/docs/explanations/decisions/0003-use-substitution-files.md
new file mode 100644
index 00000000..58cb7bd8
--- /dev/null
+++ b/docs/explanations/decisions/0003-use-substitution-files.md
@@ -0,0 +1,48 @@
+# 3. Use of substitution files to generate EPICS Databases
+
+Date: 2023-11-30
+
+## Status
+
+Accepted
+
+## Context
+
+There are two proposals for how EPICS Databases should be generated:
+
+1. At IOC startup `ibek` should generate a substitution file that describes the
+   required Databases.
+
+   The IOC instance yaml combined with the definitions from support module yaml
+   controls what the generated substitution file will look like.
+
+   `ibek` will then execute `msi` to generate the Databases from the
+   substitution file.
+
+2. The dbLoadRecord calls in the startup script will pass all macro substitutions
+   in-line, removing the need for a substitution file.
+
+## Decision
+
+Proposal 1 is accepted.
+
+Some template files such as those in the `pmac` support module use the
+following pattern:
+
+```
+substitute "P=$(PMAC):, M=CS$(CS):M1, ADDR=1, DESC=CS Motor A"
+include "pmacDirectMotor.template"
+```
+
+This pattern is supported by msi but not by the EPICS dbLoadRecord command which
+does not recognise the `substitute` command.
+
+## Consequences
+
+An extra file `ioc.subst` is seen in the runtime directory. In reality this
+is easier to read than a full Database file, so it can be useful for debugging.
+
+Finally those developers who are unable to use `ibek yaml` for some reason can
+supply their own substitution file and ibek will expand it at runtime. This is
+much more compact than supplying a full Database file and is important due to the
+1MB limit on K8S ConfigMaps.
diff --git a/docs/explanations/decisions/0003-use-substitution-files.rst b/docs/explanations/decisions/0003-use-substitution-files.rst
deleted file mode 100644
index 5f76d5d1..00000000
--- a/docs/explanations/decisions/0003-use-substitution-files.rst
+++ /dev/null
@@ -1,55 +0,0 @@
-3. Use of substitution files to generate EPICS Databases
-========================================================
-
-Date: 2023-11-30
-
-Status
-------
-
-Accepted
-
-Context
--------
-
-There are two proposals for how EPICS Databases should be generated:
-
-1. At IOC startup ``ibek`` should generate a substitution file that describes the
-   required Databases.
-
-   The IOC instance yaml combined with the definitions from support module yaml
-   controls what the generated substitution file will look like.
-
-   ``ibek`` will then execute ``msi`` to generate the Databases from the
-   substitution file.
-
-2. The dbLoadRecord calls in the startup script will pass all macro substitutions
-   in-line. Removing the need for a substitution file.
-
-
-Decision
---------
-
-Proposal 1 is accepted.
-
-Some template files such as those in the ``pmac`` support module use the
-following pattern:
-
-.. code-block::
-
-    substitute "P=$(PMAC):, M=CS$(CS):M1, ADDR=1, DESC=CS Motor A"
-    include "pmacDirectMotor.template"
-
-This pattern is supported by msi but not by the EPICS dbLoadRecord command which
-does not recognise the ``substitute`` command.
-
-
-Consequences
-------------
-
-An extra file ``ioc.subst`` is seen in the runtime directory. In reality this
-is easier to read than a full Database file. So can be useful for debugging.
-
-Finally those developers who are unable to use ``ibek yaml`` for some reason can
-supply their own substitution file and ibek will expand it at runtime. This is
-much more compact that supplying a full Database file and important due to the
-1MB limit on K8S ConfigMaps.
diff --git a/docs/explanations/decisions/0004-autosave-req-files.rst b/docs/explanations/decisions/0004-autosave-req-files.md
similarity index 84%
rename from docs/explanations/decisions/0004-autosave-req-files.rst
rename to docs/explanations/decisions/0004-autosave-req-files.md
index 3e3f6f7c..3cdf886d 100644
--- a/docs/explanations/decisions/0004-autosave-req-files.rst
+++ b/docs/explanations/decisions/0004-autosave-req-files.md
@@ -1,27 +1,23 @@
-4. How to configure autosave for IOCs
-=====================================
+# 4. How to configure autosave for IOCs
 
Date: 2023-11-30
 
-Status
-------
+## Status
 
Accepted
 
-Context
--------
+## Context
 
There is a choice of supplying the list of PVs to autosave by:
 
- adding info tags to the Database Templates
- supplying a raw req file with a list of PVs to autosave
 
-Decision
---------
+## Decision
 
We will go with req files for the following reasons:
 
-- https://epics.anl.gov/tech-talk/2019/msg01600.php
+- <https://epics.anl.gov/tech-talk/2019/msg01600.php>
- adding info tags would require upstream changes to most support modules
- default req files are already supplied in many support modules
- req files are in common use and many facilities may already have their own
@@ -38,8 +34,6 @@ Then override files can exist at the beamline level and / or at the IOC
instance level. These will simply take the form of a req file with the same
name as the one it is overriding.
 
-Consequences
-------------
+## Consequences
 
Everything is nice and simple.
-
diff --git a/docs/explanations/decisions/0005-python-scripting.rst b/docs/explanations/decisions/0005-python-scripting.md
similarity index 73%
rename from docs/explanations/decisions/0005-python-scripting.rst
rename to docs/explanations/decisions/0005-python-scripting.md
index e8a02cc9..cec99b17 100644
--- a/docs/explanations/decisions/0005-python-scripting.rst
+++ b/docs/explanations/decisions/0005-python-scripting.md
@@ -1,21 +1,18 @@
-5. Use Python for scripting inside and outside containers
-=========================================================
+# 5. Use Python for scripting inside and outside containers
 
Date: 2022-11-30
 
-Status
-------
+## Status
 
Accepted
 
-Context
--------
+## Context
 
-Inside the container, we use the ``ibek`` tool for scripting. Outside we
-use ``ec`` from ``epics-containers-cli``.
+Inside the container, we use the `ibek` tool for scripting. Outside we
+use `ec` from `epics-containers-cli`.
Much of what these tools do is -call command line tools like ``docker``, ``helm``, ``kubectl``, compilers, +call command line tools like `docker`, `helm`, `kubectl`, compilers, etc. This seems like a natural fit for bash scripts. These features were originally implemented in bash but were converted to @@ -33,12 +30,10 @@ python for the following reasons: - because the packages can be pip installed they can be used in CI and inside multiple containers without having to copy the scripts around -Decision --------- +## Decision We always prefer Python and keep bash scripts to a minimum -Consequences ------------- +## Consequences Scripting is much easier to maintain and is more reliable. diff --git a/docs/explanations/docs-structure.rst b/docs/explanations/docs-structure.md similarity index 66% rename from docs/explanations/docs-structure.rst rename to docs/explanations/docs-structure.md index f25a09ba..c95c7005 100644 --- a/docs/explanations/docs-structure.rst +++ b/docs/explanations/docs-structure.md @@ -1,11 +1,10 @@ -About the documentation ------------------------ +# About the documentation - :material-regular:`format_quote;2em` - - The Grand Unified Theory of Documentation - - -- David Laing +> {material-regular}`format_quote;2em` +> +> The Grand Unified Theory of Documentation +> +>

+> -- David Laing

There is a secret that needs to be understood in order to write good software
documentation: there isn't one thing called *documentation*, there are four.

They represent four different purposes or functions, and require four different
approaches to their creation. Understanding the implications of this will help
improve most documentation - often immensely.

-`More information on this topic. <https://documentation.divio.com>`_
+[More information on this topic.](https://documentation.divio.com)
diff --git a/docs/explanations/introduction.rst b/docs/explanations/introduction.md
similarity index 68%
rename from docs/explanations/introduction.rst
rename to docs/explanations/introduction.md
index 3c844098..7c8c9f04 100644
--- a/docs/explanations/introduction.rst
+++ b/docs/explanations/introduction.md
@@ -1,20 +1,18 @@
-.. _essential:
+(essential)=
 
-Essential Concepts
-==================
+# Essential Concepts
 
-Overview
---------
+## Overview
 
-.. include:: ../overview.rst
+```{include} ../overview.md
+```
 
See below for more detail on each of these.
 
-Concepts
---------
+## Concepts
+
+### Images and Containers
 
-Images and Containers
-~~~~~~~~~~~~~~~~~~~~~
Containers provide the means to package up IOC software and execute it
in a lightweight virtual environment. These packages are then saved into
public or private image registries such as DockerHub or Github Container
@@ -29,7 +27,7 @@ using docker or podman but the images can be run under Kubernetes' own
container runtime.
 
This article does a good job of explaining the relationship between docker /
-containers and Kubernetes https://semaphoreci.com/blog/kubernetes-vs-docker
+containers and Kubernetes <https://semaphoreci.com/blog/kubernetes-vs-docker>
 
An important outcome of using containers is that you can alter the environment
inside the container to suit the IOC code, instead of altering the
@@ -37,25 +35,18 @@ code to suit your infrastructure. At DLS, this means that we are able to
use vanilla EPICS base and support modules. We no longer require our own forks of
these repositories.
-.. _generic iocs:
+(generic-iocs)=
 
-Generic IOCs and instances
-""""""""""""""""""""""""""
+#### Generic IOCs and instances
 
-An important principal of the approach presented here is that an IOC container
-image represents a 'Generic' IOC. The Generic IOC image is used for all
-IOC instances that connect to a given class of device. For example the
-Generic IOC image here:
-`ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.2
-<https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravis-linux-runtime>`_
-uses the AreaDetector driver ADAravis to connect to GigE cameras.
+An important principle of the approach presented here is that an IOC container image represents a 'Generic' IOC. The Generic IOC image is used for all IOC instances that connect to a given class of device. For example the Generic IOC image here: [ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2024.2.2](https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravis-linux-runtime) uses the AreaDetector driver ADAravis to connect to GigE cameras.
 
An IOC instance runs in a container runtime by loading two things:
 
- The Generic IOC image passed to the container runtime.
- The IOC instance configuration. This is mapped into the container at
  runtime by mounting it into the filesystem. The mount point
-  for this configuration is always /epics/ioc/config.
+  for this configuration is always `/epics/ioc/config`.
 
The configuration will bootstrap the unique properties of that instance.
The following contents for the configuration are supported:
@@ -67,34 +58,29 @@ The following contents for the configuration are supported:
- start.sh a bash script to fully override the startup of the IOC. start.sh
  can refer to any additional files in the configuration directory.
 
-This approach reduces the number of images required and saves disk. It also
-makes for simple configuration management.
+This approach reduces the number of images required and saves disk and memory. It also makes for simpler configuration management.
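To make the fixed mount point above concrete, here is a hypothetical compose file for running one IOC instance in a local container runtime. The image tag is the one used as an example above; the service name and host-side config path are illustrative assumptions, not taken from the project:

```yaml
# Sketch only: run a Generic IOC image as one IOC instance, with the
# instance configuration mounted at the fixed location /epics/ioc/config.
services:
  bl01t-ea-ioc-01:                  # illustrative instance name
    image: ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2024.2.2
    network_mode: host              # Channel Access is simplest with host networking
    volumes:
      # only the config folder is instance specific; the image is generic
      - ./bl01t-ea-ioc-01/config:/epics/ioc/config:ro
```

The same Generic IOC image could then serve any number of instances, each differing only in the folder mounted at `/epics/ioc/config`.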
Throughout this documentation we will use the terms Generic IOC and IOC
Instance. The word IOC without this context is ambiguous.
 
+### Kubernetes
 
-Kubernetes
-~~~~~~~~~~
-https://kubernetes.io/
+<https://kubernetes.io/>
 
Kubernetes easily and efficiently manages containers across clusters of hosts.
When deploying an IOC into a Kubernetes cluster, you request the resources
required by the IOC and Kubernetes will then schedule the IOC onto a suitable host.
 
-It builds upon 15 years of experience of running production workloads at
-Google, combined with best-of-breed ideas and practices from the community,
-since it was open-sourced in 2014.
+It builds upon years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community, since it was open-sourced in 2014.
Today it is by far the dominant orchestration technology for containers.
 
In this project we use Kubernetes and helm to provide a standard way of
implementing these features:
 
-- Auto start IOCs when servers come up
+- Auto start IOCs when the cluster comes up from power off
- Manually Start and Stop IOCs
-- Monitor IOC status and versions
+- Monitor IOC health and versions
- Deploy versioned IOCs to the beamline
- Rollback to a previous IOC version
- Allocate a server with adequate resources on which to run each IOC
@@ -104,9 +90,7 @@ implementing these features:
- Connect to an IOC and interact with its shell
- debug an IOC by starting a bash shell inside its container
 
-
-Kubernetes Alternative
-~~~~~~~~~~~~~~~~~~~~~~
+### Kubernetes Alternative
 
If you do not have the resources to maintain a Kubernetes cluster then this
project supports installing IOC instances directly into the local docker or podman runtime
Kubernetes and Helm in the technology stack.
 
If you choose to use this approach then you may find it useful to have
another tool for viewing and managing the set of containers you have
deployed across your beamline servers.
There are various solutions for this, one that has
-been tested with **epics-containers** is Portainer https://www.portainer.io/.
+been tested with **epics-containers** is Portainer <https://www.portainer.io/>.
Portainer is a paid-for product that provides excellent visibility and
control of your containers through a web interface. It is very easy to install.
@@ -126,9 +110,9 @@ The downside of this approach is that you will need to manually manage the
resources available to each IOC instance and manually decide which server to
run each IOC on.
 
-Helm
-~~~~
-https://helm.sh/
+### Helm
+
+<https://helm.sh/>
 
Helm is the most popular package manager for Kubernetes applications.
@@ -141,11 +125,10 @@ of the chart within the cluster.
 
It also supports registries for storing version history of charts, much like
docker.
 
-In this project we use Helm Charts to define and deploy IOC instances.
-Each beamline (or accelerator domain) has its own git repository that holds
-the domain Helm Chart for its IOC Instances. Each IOC instance need only
-provide a values.yaml file to override the default values in the domain
-Helm Chart and a config folder as described in `generic iocs`.
+In this project we use Helm Charts to define and deploy IOC instances. Each beamline or accelerator area has its own git {any}`ec-services-repo` that holds the Helm Charts for its IOC Instances. Each IOC instance need only provide:
+
+- a values.yaml file to override the default values in the repository's global Helm Chart
+- a config folder as described in {any}`generic-iocs`.
 
**epics-containers** does not use helm repositories for storing IOC instances.
Such repositories only hold a zipped version of the chart and a values.yaml file,
@@ -154,17 +137,11 @@ information. Instead we provide a command line tool for installing and
updating IOCs.
Which performs the following steps: - Clone the beamline repository at a specific tag to a temporary folder -- extract the beamline chart and apply the values.yaml to it -- additionally generate a config map from the config folder files - install the resulting chart into the cluster - remove the temporary folder -This means that we don't store the chart itself but we do store all of the -information required to re-generate it in a version tagged repository. - -Repositories -~~~~~~~~~~~~ +### Repositories All of the assets required to manage a set of IOC Instances for a beamline are held in repositories. @@ -182,49 +159,36 @@ There are many alternative services for storing these repositories, both in the cloud and on premises. Below we list the choices we have tested during the POC. -The 2 classes of repository are as follows: +The classes of repository are as follows: +```{eval-rst} :Source Repository: - - Holds the source code but also provides the - Continuous Integration actions for testing, building and publishing to - the image / helm repositories. - - These have been tested: + Holds the source code but also provides the Continuous Integration actions for testing, building and publishing to the image / helm repositories. These have been tested: - - github - - gitlab (on premises) + - github + - gitlab (on premises) - - epics-containers defines two classes of source repository: +:Generic IOC Source Repositories: - - Generic IOC source. Defines how a Generic IOC image is built, this does - not typically include source code, but instead is a set of instructions - for building the Generic IOC image by compiling source from a number - of upstream support module repositories. Boilerplate IOC source code - is also included in the Generic IOC source repository and can be - customized if needed. 
+    Define how a Generic IOC image is built. This does not typically include source code, but instead is a set of instructions for building the Generic IOC image by compiling source from a number of upstream support module repositories. Boilerplate IOC source code is also included in the Generic IOC source repository and can be customized if needed.
 
-    - Beamline / Accelerator Domain source. Defines the IOC instances for a
-      Domain. This includes the IOC boot scripts and
-      any other configuration required to make the IOC instance unique.
-      For **ibek** based IOCs, each IOC instance is defined by an **ibek**
-      yaml file only.
+:EC Services Source Repositories:
 
-:An OCI image repository:
+    Define the IOC instances for a beamline or accelerator area. This includes the IOC boot scripts and any other configuration required to make the IOC instance unique. For ibek based IOCs, each IOC instance is defined by an ibek yaml file only.
 
-    - Holds the Generic IOC container images and their
-      dependencies. Also used to hold the helm charts that define the shared
-      elements between all domains.
+:An OCI Image Repository:
 
-      The following have been tested:
+    Holds the Generic IOC container images and their dependencies. Also used to hold the helm charts that define the shared elements between all domains.
 
-      - Github Container Registry
-      - DockerHub
-      - Google Cloud Container Registry
+    The following have been tested:
 
+    - Github Container Registry
+    - DockerHub
+    - Google Cloud Container Registry
+```
 
-Continuous Integration
-~~~~~~~~~~~~~~~~~~~~~~
+### Continuous Integration
 
Our examples all use continuous integration to get from pushed source to the
published images, IOC instances helm charts and documentation.
@@ -235,6 +199,8 @@ tags and the tags of their built resources.
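The tag-triggered publishing that this CI performs can be sketched as a GitHub Actions job. This is a hypothetical outline only — the workflow name, the `runtime` stage and the tag scheme are illustrative assumptions, not the project's actual CI configuration:

```yaml
# Sketch: build a Generic IOC image and push it to ghcr.io on tagged commits.
name: build-generic-ioc
on:
  push:
    tags: ["*"]            # publish only when the commit is tagged
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          target: runtime   # assumes a multi-stage Dockerfile with a runtime stage
          push: true
          tags: ghcr.io/${{ github.repository }}-linux-runtime:${{ github.ref_name }}
```

An untagged push would run the same build without publishing, which matches the "only if the commit is tagged" behaviour described for each repository class.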
There are these types of CI:
 
+```{eval-rst}
+
:Generic IOC source:
  - builds a Generic IOC container image
  - runs some tests against that image - these will eventually include
  - publishes the image to github packages (only if the commit is tagged)
    or other OCI registry
 
-:beamline definition source:
+:`ec-services-repo` source:
  - prepares a helm chart from each IOC instance definition
  - tests that the helm chart is deployable (but does not deploy it)
  - locally launches each IOC instance and loads its configuration to
@@ -253,8 +219,15 @@ There are these types of CI:
  - builds the sphinx docs
  - publishes it to github.io pages with version tag or branch tag.
 
-Scope
------
+:global helm chart source:
+  - ``ec-helm-charts`` repo only
+  - packages a helm chart from source
+  - publishes it to github packages (only if the commit is tagged)
+    or other OCI registry
+```
+
+## Scope
+
This project initially targets x86_64 Linux Soft IOCs and RTEMS IOCs running
on MVME5500 hardware. Soft IOCs that require access to hardware on the
server (e.g. USB or PCIe) will be supported by mounting the hardware into
@@ -267,51 +240,44 @@ in the future.
 
Python soft IOCs are also supported.
 
GUI generation for engineering screens will be supported via the PVI project.
-See https://github.com/epics-containers/pvi.
+See <https://github.com/epics-containers/pvi>.
 
+## Additional Tools
 
-Additional Tools
-----------------
+### edge-containers-cli
 
-epics-containers-cli
-~~~~~~~~~~~~~~~~~~~~
This is the developer's 'outside of the container' helper tool. The command
line entry point is **ec**. The project is a python package featuring simple command
-line functions for deploying, monitoring building and debugging
-Generic IOCs and IOC instances. It is a wrapper
+line functions for deploying and monitoring IOC instances. It is a wrapper
around the standard command line tools kubectl, podman/docker, helm, and git
but saves typing and provides help and command line completion.
It also can teach you how to use these tools by showing you the commands it
is running.
 
-See `CLI` for details.
+See {any}`CLI` for more details.
 
+### **ibek**
 
-**ibek**
-~~~~~~~~
IOC Builder for EPICS and Kubernetes is the developer's 'inside the container'
helper tool. It is a python package that is installed into the Generic IOC
container images. It is used:
 
- at container build time: to fetch and build EPICS support modules
-- at container build time: to generate the IOC source code and compile it
- at container run time: to extract all useful build artifacts into a
  runtime image
 
-See https://github.com/epics-containers/ibek.
+See <https://github.com/epics-containers/ibek>.
+
+### PVI
 
-PVI
-~~~
The Process Variables Interface project is a python package that is installed
inside Generic IOC container images. It is used to give structure to the IOC's
Process Variables allowing us to:
 
-- add metadata to the IOCs DB records for use by `Bluesky`_ and `Ophyd`_
+- add metadata to the IOC's DB records for use by [Bluesky] and [Ophyd]
- auto generate screens for the device (as bob, adl or edm files)
 
-.. _Bluesky: https://blueskyproject.io/
-.. _Ophyd: https://github.com/bluesky/ophyd-async
-
-
+[bluesky]: https://blueskyproject.io/
+[ophyd]: https://github.com/bluesky/ophyd-async
diff --git a/docs/explanations/ioc-source.rst b/docs/explanations/ioc-source.md
similarity index 71%
rename from docs/explanations/ioc-source.rst
rename to docs/explanations/ioc-source.md
index b5fa36c2..232b7147 100644
--- a/docs/explanations/ioc-source.rst
+++ b/docs/explanations/ioc-source.md
@@ -1,10 +1,8 @@
-.. _ioc-source:
+(ioc-source)=
 
-Dev Container vs Runtime Container
-==================================
+# Dev Container vs Runtime Container
 
-Introduction
-------------
+## Introduction
 
The dev container is where all development of IOCs and support modules
will take place.
The runtime container is where the IOC will run when deployed
@@ -21,31 +19,29 @@ the following goals:
  they are not lost when the container is rebuilt or deleted
 
The details of which folders are mounted where in the container are
-shown here: `container-layout`.
+shown here: {any}`container-layout`.
 
-The ioc-XXX project folder is found in the container at ``/workspaces/ioc-XXX``,
+The ioc-XXX project folder is found in the container at `/workspaces/ioc-XXX`,
along with all of its peers (because the parent folder is mounted
-at ``/workspaces``).
+at `/workspaces`).
 
-
-The ioc Folder
---------------
+## The ioc Folder
 
The ioc folder contains the Generic IOC source code. It is typically the same
for all Generic IOCs but is included in the ioc-XXX repo in /ioc so that it can
be modified if necessary.
 
At container build time this folder is copied into the container at
-``/epics/generic-source/ioc`` and it is compiled so that the binaries are
+`/epics/generic-source/ioc` and it is compiled so that the binaries are
available at runtime.
 
-In the dev container the ``/epics/generic-source`` folder has the project
+In the dev container the `/epics/generic-source` folder has the project
folder ioc-XXX mounted over the top of it. This means:
 
- the project folder ioc-XXX is mounted in two locations in the container
-  - ``/workspaces/ioc-XXX``
-  - ``/epics/generic-source``
-- the ioc source folder ``/epics/generic-source/ioc`` is also mounted over
+  - `/workspaces/ioc-XXX`
+  - `/epics/generic-source`
+- the ioc source folder `/epics/generic-source/ioc` is also mounted over
  and now contains the source only. The compiled binaries are no longer
  visible inside the dev container.
@@ -53,19 +49,17 @@ It is for this reason that a newly created dev container needs to have the
IOC binaries re-compiled.
But this is a good thing, because now any changes you make to the IOC source code can be compiled and tested, but also those changes are now visible on the host filesystem inside the project folder -``ioc-XXX/ioc``. This avoids loss of work. +`ioc-XXX/ioc`. This avoids loss of work. -Finally the ``ioc`` folder is always soft linked from ``/epics/ioc`` so that +Finally the `ioc` folder is always soft linked from `/epics/ioc` so that the source and binaries are always in a known location. -Summing Up ----------- +## Summing Up The above description makes things sound rather complicated. However, you can for the most part ignore the details and just remember: -- use ``/epics/ioc`` to compile and run the IOC. +- use `/epics/ioc` to compile and run the IOC. - you are free to make changes to the above folder and recompile - the changes you make will be visible on the host filesystem in the original project folder. - diff --git a/docs/explanations/kubernetes_cluster.rst b/docs/explanations/kubernetes_cluster.md similarity index 88% rename from docs/explanations/kubernetes_cluster.rst rename to docs/explanations/kubernetes_cluster.md index f3607260..6c827584 100644 --- a/docs/explanations/kubernetes_cluster.rst +++ b/docs/explanations/kubernetes_cluster.md @@ -1,22 +1,10 @@ -Kubernetes Cluster Config -========================= +# Kubernetes Cluster Config -Cluster Options ---------------- +## Cluster Options Three cluster topologies were considered for this project. -:Cluster per beamline: - This could be as simple as - a single server: the K3S installation described in - `setup_kubernetes` may be sufficient. The documentation at - https://rancher.com/docs/k3s/ also details how to make a high availability - cluster, requiring a minimum of 4 servers. - This approach keeps the configuration of the clusters quite straightforward - but at the cost of having multiple separate clusters to maintain. 
Also - it requires control plane servers for every beamline, whereas a centralized - approach would only need a handful of control plane servers for the entire - facility. +```{eval-rst} :Central Facility Cluster: A central facility cluster that runs @@ -43,6 +31,17 @@ have a cluster per beamline and a cluster for the accelerator. The separate clusters allow us to have separate: +- failure domain +``` + +## Current Approach + +DLS is using Cluster per Beamline as the preferred topology. We +will continue to have a central cluster for shared services but in addition will +have a cluster per beamline and a cluster for the accelerator. + +The separate clusters allow us to have separate: + - failure domain - security domain - administrative domain @@ -53,67 +52,60 @@ tooling available to help with this, as multi-cluster using cloud providers is quite a common pattern. We are currently investigating approaches to multi-cluster management. +(argus)= -.. _argus: - -DLS Argus Cluster ------------------ +## DLS Argus Cluster This section gives details of the topology and special configuration used by the DLS argus cluster to enable running IOCs on a Beamline. -Overview -~~~~~~~~ +### Overview Argus is the production DLS cluster. It comprises 22 bare metal worker nodes, with a 3 node control plane that runs in VMs. The control plane nodes run the K8s master processes such as the API server, controller manager etc. Each control plane node runs an etcd backend. -.. figure:: ../images/clusterHA.png +:::{figure} ../images/clusterHA.png +::: To load balance across the K8s API running on the control plane nodes, there is a haproxy load balancer. The DNS endpoint argus.api.diamond.ac.uk (which all nodes use as the main API endpoint) points to a single haproxy IP. The IP is HA by virtue of a pair of VMs that both run haproxy, bind on all IPs, and use VRRP/keepalived to make sure the IP is always up. Haproxy has the 3 control plane nodes as a target backend. -.. 
figure:: ../images/kubeadm-ha-topology-stacked-etcd.png +:::{figure} ../images/kubeadm-ha-topology-stacked-etcd.png +::: The cluster uses Kubeadm to deploy the K8s control plane in containers. It is provided by K8s upstream, and is architecturally similar to Rancher Kubernetes Engine (RKE). Kubeadm supports upgrades/downgrades and easy provisioning of nodes. The cluster is connected using Weave as the CNI. Weave is the only CNI tested that passes Broadcast/Unicast/Multicast (BUM) traffic through the iptables that control network access for pods. Metallb is used as a component to support K8s loadBalancer Service objects. Ingress nginx from nginxinc is used as an ingress controller. Logs are collected from the stdout of all pods using a fluentd daemonset which ships logs to a centralized graylog server. Cluster authentication is via KeyCloak. The cluster sits in one rack, with a top of rack (TOR) switch/router connecting it to the rest of the network. The cluster nodes sit on the same /22 network which is routable via the TOR router (this router routes the /22 subnet to other racks via OSPF). Metallb pool IPs are allocated from within this /22 to ensure they are globally routable by the OSPF network; the metallb speaker pods respond to ARP requests originating from the TOR router looking for load balanced Service IPs. - **One of the Argus racks** -.. figure:: ../images/argus3.jpg +:::{figure} ../images/argus3.jpg +::: The cluster is built and managed using Ansible. Heavy use of the k8s module enables direct installation of K8s components by talking directly to the K8s API using the k8s module. Ansible also configures the haproxy API load balancer. Prometheus_operator provides the monitoring stack. Argus is a multi-tenant cluster. Namespaces are used to enforce multi-tenancy. A namespace is created on demand for each user, acting as a sandbox for them to get familiar with K8s. Applications deployed in production get their own “project” namespace. 
The project namespace has some associated policy that determines who can run pods in the namespace, what data can be accessed, and if pods can run with elevated privileges. This is enforced by a combination of RBAC and Pod Security Policy (PSP). The latter is a deprecated feature in K8s 1.21 and will soon be replaced with Open Policy Agent. - -Beamline Local Cluster Nodes -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +### Beamline Local Cluster Nodes As part of the investigation work some worker nodes in Argus have been connected that are physically located at the beamline. These nodes do not share the same rack as Argus, and hence are part of a different routed subnet to the /22 that the control plane and main workers are within. This model assumes one centralised control plane (and some Generic workers), and a set of beamline cluster nodes that may be distributed across the network (in different subnets). The beamline cluster nodes require a few interesting sets of configuration to make this architecture work. See the following subheadings for details. -Metallb Pools -+++++++++++++ +#### Metallb Pools Metallb cannot be used to provide loadBalancer services for pods running on the beamline cluster nodes. This is because metallb currently only supports a single pool of IPs to allocate from. In the case of Argus, the pool is allocated from within the /22 in which the control plane (and a few Generic workers) sit. Should a pod with a loadBalancer Service IP get brought up on a beamline cluster node, the traffic would not be routable because the beamline TOR switch does not send ARP messages for subnets that it is not directly connected to. This is not an issue running IOCs since they do not make use of loadBalancer Services. There is a feature request for Metallb to support address pools that is currently pending. 
-Node Labelling and Taints -+++++++++++++++++++++++++ +#### Node Labelling and Taints The beamline cluster worker nodes are labelled and tainted with the name of the beamline. This ensures that only pods running IOCs that are relevant to that beamline can be started on the beamline worker nodes. Pods that are to be scheduled there must tolerate the taint, and use node selection based on the label. Certain utility pods must also tolerate the beamline name taint. Pods such as fluentd (which provides pod log aggregation and shipping to a centralised graylog) need additional tolerations of the taint. However most standard utilities such as Prometheus, Weave (the CNI itself runs in a pod) and kube-proxy all have a toleration of all “noSchedule” taints built in. -Host Network -++++++++++++ +#### Host Network In order for IOCs to work within K8s pods, they typically need to see BUM traffic. This is because EPICS uses UDP Broadcast for IOC discovery. There are also other interesting network quirks that IOCs exhibit that make use of the CNI network overlay unsuitable. To get around this, pods running IOCs make use of the host network namespace. In other words, they see the interfaces on the underlying worker nodes, rather than a virtual interface that is connected to the cluster internal network that normal pods see. This is done by setting hostNetwork =  true in the pod spec. Access to the host network namespace requires privileged pods. Whilst this is allowed (Argus uses pod security policy to enforce the attributes of the pods that are scheduled), we do drop the capabilities that are not needed. This reduces the attack surface somewhat. We drop everything except NET_ADMIN and NET_BROADCAST. -Uses for Argus --------------- +## Uses for Argus The central cluster is used for many services other than EPICS IOCs. 
Below is a list of current and potential use cases: diff --git a/docs/explanations/net_protocols.rst b/docs/explanations/net_protocols.md similarity index 82% rename from docs/explanations/net_protocols.rst rename to docs/explanations/net_protocols.md index 0200a750..5d61bbeb 100644 --- a/docs/explanations/net_protocols.rst +++ b/docs/explanations/net_protocols.md @@ -1,16 +1,14 @@ -Channel Access and Other Protocols -================================== +# Channel Access and Other Protocols Explanations of the challenges and solutions to routing protocols to and from IOCs running under Kubernetes. -Container Network Interface ---------------------------- +## Container Network Interface A Kubernetes cluster will have a CNI (Container Network Interface) that provides some form of virtual network within which Pods communicate. -For a useful discussion of this subject see `Kubernetes CNI providers`_ +For a useful discussion of this subject see [Kubernetes CNI providers] In order to connect to a Pod from outside of the cluster you must configure a Service. A Service can provide an external IP and port to external clients @@ -24,11 +22,7 @@ from the Pod and the external client. Typically CNIs do not support broadcast traffic within their virtual LAN. - -.. _Kubernetes CNI providers: https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/ - -Problems with CNI ------------------ +## Problems with CNI The following two behaviours for network protocols are not suitable for use between an external client and a kubernetes Pod: @@ -48,20 +42,21 @@ Initially we looked into workarounds to these issues. For example the diagram below shows a 'ca-forwarder' that sits on the EPICS client subnet and forwards requests to IOCs in the cluster. -.. figure:: ../images/caforwarder.png +:::{figure} ../images/caforwarder.png +::: However this 2nd diagram shows why this approach fails when the client is in the cluster itself. - -.. 
figure:: ../images/cabackwarder.png +:::{figure} ../images/cabackwarder.png +::: The conclusion of this study was that workarounds were fiddly and needed to be implemented on a per protocol basis, plus there is no guarantee that there is a solution for all protocols we will need. -Solution - hostNetwork ----------------------- +## Solution - hostNetwork + To get round these issues and all possible future network issues we: - Use remote worker nodes that sit in the beamline subnet @@ -76,6 +71,7 @@ knows the port number can connect because no NAT is in the way. The downside of this approach is that Pods need elevated privileges in order to be allowed to use hostNetwork. At DLS the K8S team has implemented a -set of restrictions that mitigate this issue. See `argus` for details +set of restrictions that mitigate this issue. See {any}`argus` for details of the remote worker nodes and suggestions for secure configuration. +[kubernetes cni providers]: https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/ diff --git a/docs/explanations/repositories.rst b/docs/explanations/repositories.md similarity index 63% rename from docs/explanations/repositories.rst rename to docs/explanations/repositories.md index 1c35327c..8784a005 100644 --- a/docs/explanations/repositories.rst +++ b/docs/explanations/repositories.md @@ -1,26 +1,23 @@ -Source and Registry Locations -============================= +# Source and Registry Locations -.. 
note:: +:::{note} +**DLS Users** DLS is currently using these locations for assets: - **DLS Users** DLS is currently using these locations for assets: +- Generic IOC Source: `https://github.com/epics-containers` +- Beamline Source repos: `https://gitlab.diamond.ac.uk/controls/containers/beamline/` +- Accelerator Source repos: `https://gitlab.diamond.ac.uk/controls/containers/accelerator/` +- Generic IOC Container Images: `ghcr.io/epics-containers/` +- epics-containers Helm Charts: `https://github.com/orgs/epics-containers/packages?repo_name=ec-helm-charts` +::: - - Generic IOC Source: ``https://github.com/epics-containers`` - - Beamline Source repos: ``https://gitlab.diamond.ac.uk/controls/containers/beamline/`` - - Accelerator Source repos: ``https://gitlab.diamond.ac.uk/controls/containers/accelerator/`` - - Generic IOC Container Images: ``ghcr.io/epics-containers/`` - - epics-containers Helm Charts: ``https://github.com/orgs/epics-containers/packages?repo_name=ec-helm-charts`` - -Where to Keep Source Code -------------------------- +## Where to Keep Source Code There are two main kinds of source repositories used in epics-containers: - Generic IOC Source - Beamline / Accelerator Domain IOC Instance Source -Generic IOC Source Repositories -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +### Generic IOC Source Repositories For Generic IOCs it is recommended that these be stored in public repositories on GitHub. This allows the community to benefit from the work of others and @@ -39,8 +36,7 @@ Integration files for Generic IOCs work with GitHub actions, but also can work with DLS's internal GitLab instance (this could be adapted for other facilities' internal GitLab instances or alternative CI system). -IOC Instance Domain Repositories -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +### IOC Instance Domain Repositories These repositories are very much specific to a particular domain or beamline in a particular facility. 
For this reason there is no strong reason to make
@@ -53,24 +49,19 @@ The CI for domain repos works both with GitHub actions and with DLS's
internal GitLab instance (this could be adapted for other facilities'
internal GitLab instances or alternative CI system).

-BL45P
-~~~~~
+### BL45P

The test/example beamline at DLS for epics-containers is BL45P. The domain repository for this
-is at https://github.com/epics-containers/bl45p. This will always be
+is at <https://github.com/epics-containers/bl45p>. This will always be
kept in a public repository as it is a live example of a domain repo.

-Where to put Registries
------------------------
+## Where to put Registries

-Generic IOC Container Images and Source Repos
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### Generic IOC Container Images and Source Repos

Usually GHCR, but internal registries are also supported where licensing
requires it, e.g. Nexus Repository Manager.

-IOC Instance Domain Repos
-~~~~~~~~~~~~~~~~~~~~~~~~~
+### IOC Instance Domain Repos

Internal git registry or private GitHub registry.
-
diff --git a/docs/how-to/builder2ibek.md b/docs/how-to/builder2ibek.md
new file mode 100644
index 00000000..3d811db8
--- /dev/null
+++ b/docs/how-to/builder2ibek.md
@@ -0,0 +1,17 @@
+# Builder2ibek Conversion Tool
+
+:::{warning}
+This page is only relevant to DLS users who are converting an XML
+builder beamline to epics-containers, i.e. those whose beamlines
+have a BLxxY-BUILDER project.
+:::
+
+TODO: this page is WIP and will be updated by Feb 2024.
+
+`builder2ibek` is a tool to convert DLS builder XML to ibek instance YAML.
+It is for use when converting IOC instances to epics-containers.
+ +At present (until a new python app distribution mechanism is in place) it +is installed at DLS in the following location: + +`/dls_sw/work/python3/ec-venv/bin/builder2ibek` diff --git a/docs/how-to/builder2ibek.rst b/docs/how-to/builder2ibek.rst deleted file mode 100644 index 0a623cb8..00000000 --- a/docs/how-to/builder2ibek.rst +++ /dev/null @@ -1,18 +0,0 @@ -Builder2ibek Conversion Tool -============================ - -.. warning:: - - This page is only relevant to DLS users who are converting an xml - builder beamline to epics-containers. i.e. those whose beamlines - have a BLxxY-BUILDER project. - -TODO: this page is WIP and will be updated by Feb 2024. - -``builder2ibek`` is a tool to convert DLS builder XML to ibek instance YAML. -It is for working with converting IOC instances to epics-containers. - -At present (until a new python app distribution mechanism is in place) it -is installed at DLS in the following location: - -``/dls_sw/work/python3/ec-venv/bin/builder2ibek`` diff --git a/docs/how-to/builder2ibek.support.md b/docs/how-to/builder2ibek.support.md new file mode 100644 index 00000000..8ab713cc --- /dev/null +++ b/docs/how-to/builder2ibek.support.md @@ -0,0 +1,103 @@ +# Builder2ibek.support Conversion Tool + +:::{warning} +This page is only relevant to DLS users who are converting +a DLS support module with builder support into an epics-containers +Generic IOC. i.e. support modules that have an `etc/builder.py` file. +::: + +TODO: this page is WIP and will be updated by Feb 2024. + +`builder2ibek.support` is a tool to convert DLS builder support modules +into ibek support YAML for the `ibek-support` repository. + +## builder2ibek.support example + +```bash +./builder2ibek.support.py /dls_sw/prod/R3.14.12.7/support/lakeshore340/2-6 ioc-lakeshore340/ibek-support/lakeshore340/lakeshore340.yaml +``` + +```xml + + + + + + + + + +.. 
code:: yaml + + # yaml-language-server: $schema=https://github.com/epics-containers/ibek/releases/download/1.2.0/ibek.support.schema.json + + module: lakeshore340 + + defs: + + - name: lakeshore340 + description: |- + Lakeshore 340 Temperature Controller + Notes: The temperatures in Kelvin are archived once every 10 secs. + args: + + - type: str + name: P + description: |- + Prefix for PV name + + - type: str + name: PORT + description: |- + Bus/Port Address (eg. ASYN Port). + + - type: str + name: ADDR + description: |- + Address on the bus + + - type: str + name: SCAN + description: |- + SCAN rate for non-temperature/voltage parameters. + + - type: str + name: TEMPSCAN + description: |- + SCAN rate for the temperature/voltage readings + + - type: id + name: name + description: |- + Object and gui association name + + - type: str + name: gda_name + description: |- + Name in gda interface file (Default = ) + + - type: str + name: gda_desc + description: |- + Description in gda interface file (Default = ) + + - type: int + name: LOOP + description: |- + Which heater PID loop to control (Default = 1) + default: 1 + + databases: + + - file: $(LAKESHORE340)/db/lakeshore340.template + args: + name: + SCAN: + gda_name: + P: + TEMPSCAN: + gda_desc: + PORT: + LOOP: + ADDR: +``` diff --git a/docs/how-to/builder2ibek.support.rst b/docs/how-to/builder2ibek.support.rst deleted file mode 100644 index d30ceaeb..00000000 --- a/docs/how-to/builder2ibek.support.rst +++ /dev/null @@ -1,108 +0,0 @@ -Builder2ibek.support Conversion Tool -==================================== - -.. warning:: - - This page is only relevant to DLS users who are converting - a DLS support module with builder support into an epics-containers - Generic IOC. i.e. support modules that have an ``etc/builder.py`` file. - -TODO: this page is WIP and will be updated by Feb 2024. - -``builder2ibek.support`` is a tool to convert DLS builder support modules -into ibek support YAML for the ``ibek-support`` repository. 
- - -builder2ibek.support example ----------------------------- - -.. code:: bash - - ./builder2ibek.support.py /dls_sw/prod/R3.14.12.7/support/lakeshore340/2-6 ioc-lakeshore340/ibek-support/lakeshore340/lakeshore340.yaml - - -.. code-block:: xml - - - - - - - - - - - .. code:: yaml - - # yaml-language-server: $schema=https://github.com/epics-containers/ibek/releases/download/1.2.0/ibek.support.schema.json - - module: lakeshore340 - - defs: - - - name: lakeshore340 - description: |- - Lakeshore 340 Temperature Controller - Notes: The temperatures in Kelvin are archived once every 10 secs. - args: - - - type: str - name: P - description: |- - Prefix for PV name - - - type: str - name: PORT - description: |- - Bus/Port Address (eg. ASYN Port). - - - type: str - name: ADDR - description: |- - Address on the bus - - - type: str - name: SCAN - description: |- - SCAN rate for non-temperature/voltage parameters. - - - type: str - name: TEMPSCAN - description: |- - SCAN rate for the temperature/voltage readings - - - type: id - name: name - description: |- - Object and gui association name - - - type: str - name: gda_name - description: |- - Name in gda interface file (Default = ) - - - type: str - name: gda_desc - description: |- - Description in gda interface file (Default = ) - - - type: int - name: LOOP - description: |- - Which heater PID loop to control (Default = 1) - default: 1 - - databases: - - - file: $(LAKESHORE340)/db/lakeshore340.template - args: - name: - SCAN: - gda_name: - P: - TEMPSCAN: - gda_desc: - PORT: - LOOP: - ADDR: - diff --git a/docs/how-to/debug.md b/docs/how-to/debug.md new file mode 100644 index 00000000..7efc52c7 --- /dev/null +++ b/docs/how-to/debug.md @@ -0,0 +1,88 @@ +# Debug an IOC instance locally + +:::{warning} +This is an early draft +::: + +This guide will show you how to debug an IOC instance locally. It will use the example IOC made in the [Create an IOC instance](../tutorials/create_ioc) guide. 
That IOC is called `bl01t-ea-test-02` in the guide but you may have chosen a different name. + +## Setting up + +Get the IOC Instance definition repository and deliberately break the IOC instance so that you can debug it. + +```bash +git clone git@github.com:YOUR_GITHUB_USERNAME/bl01t.git +cd bl01t +source environment.sh +code . +# now edit services/bl01t-ea-test-02/config/ioc.yaml +``` + +## Breaking the IOC instance + +Add the phrase 'deliberate_error' to the top of the `ioc.yaml` file. Then try to launch the IOC instance, but use the `-v` flag to see the underlying commands: + +```bash +ec -v deploy-local services/bl01t-ea-test-02 +``` + +You should see something like this (for docker users - podman users will see something similar): + +
+```
+$ ec -v deploy-local services/bl01t-ea-test-02
+docker --version
+docker buildx version
+Deploy TEMPORARY version 2024.2.17-b8.30 from /home/giles/tutorial/bl01t/services/bl01t-ea-test-02 to the local docker instance
+Are you sure ? [y/N]: y
+docker stop -t0 bl01t-ea-test-02
+docker rm -f bl01t-ea-test-02
+docker volume rm -f bl01t-ea-test-02_config
+docker volume create bl01t-ea-test-02_config
+docker rm -f busybox
+docker container create --name busybox -v bl01t-ea-test-02_config:/copyto busybox
+docker cp /home/giles/tutorial/bl01t/services/bl01t-ea-test-02/config/ioc.yaml busybox:copyto
+docker rm -f busybox
+docker run -dit --net host --restart unless-stopped -l is_IOC=true -l version=2024.2.17-b8.30 -v bl01t-ea-test-02_config:/epics/ioc/config/ -e IOC_NAME=bl01t-ea-test-02  --name bl01t-ea-test-02 ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2024.2.2
+76c2834dac805780b3329af91c332abb90fb2692a510c11b888b82e48f60b44f
+docker ps -f name=bl01t-ea-test-02 --format '{{.Names}}'
+```
+ +Now if you try these commands you should see that the IOC instance keeps restarting and that the logs show an error: + +```bash +ec ps +ec logs bl01t-ea-test-02 +``` + +## Debugging the IOC instance + +Now you can tell `ec` to stop the IOC instance and then run it in a way that you can debug it, by copying the command that `ec` used to run the IOC instance and adding the `--entrypoint bash` and removing `-d` flag and `--restart unless-stopped`. Also change the name to have a `-debug` suffix, like so: + +```bash +ec stop bl01t-ea-test-02 +docker run --entrypoint bash -it --net host -l is_IOC=true -l version=2024.2.17-b8.30 -v bl01t-ea-test-02_config:/epics/ioc/config/ -e IOC_NAME=bl01t-ea-test-02 --name bl01t-ea-test-02-debug ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2024.2.2 +``` + +You should now be in a shell inside the container. You can look at the files and run the IOC instance manually to see what the error is. You can re-run the IOC instance multiple times and you can even install your favourite editor or debugging tools. + +e.g. + +```bash +apt update +apt install vim +ls /epics/ioc/config/ +cat /epics/ioc/config/ioc.yaml +cd /epics/ioc +./start.sh +# ctrl-d to exit +vim /epics/ioc/config/ioc.yaml +# fix the error +./start.sh +``` + +When you are done you can exit the container with `ctrl-d` and then remove it (or you can keep it around for later and restart it with `docker start -i bl01t-ea-test-02-debug`): + +```bash +docker rm -f bl01t-ea-test-02-debug +``` + +You can now apply the fix you made to the local filesystem and retry the deployment. 
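The flag changes described above (swap in `--entrypoint bash`, drop `-d` and `--restart unless-stopped`, add a `-debug` suffix to the name) are mechanical, so they can also be scripted. A sketch, using an abbreviated stand-in for the real command printed by `ec -v` (the string below is illustrative, not a full copy of it):

```shell
# Stand-in for the run command reported by `ec -v` (abbreviated, illustrative)
run_cmd='docker run -dit --net host --restart unless-stopped --name bl01t-ea-test-02 some-image:tag'

# Transform it into an interactive debug command:
# - add --entrypoint bash, drop -d, drop the restart policy, suffix the name
debug_cmd=$(printf '%s\n' "$run_cmd" | sed \
  -e 's/^docker run/docker run --entrypoint bash/' \
  -e 's/ -dit / -it /' \
  -e 's/ --restart unless-stopped//' \
  -e 's/--name \([^ ]*\)/--name \1-debug/')

echo "$debug_cmd"
```

The resulting command can then be pasted into a terminal just as in the manual workflow above.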
\ No newline at end of file diff --git a/docs/how-to/debug.rst b/docs/how-to/debug.rst deleted file mode 100644 index 1c5cddbe..00000000 --- a/docs/how-to/debug.rst +++ /dev/null @@ -1,7 +0,0 @@ -Debug an IOC instance locally -============================= -Todo - -Debug an IOC instance in Kubernetes -=================================== -Todo \ No newline at end of file diff --git a/docs/how-to/ibek-support.rst b/docs/how-to/ibek-support.md similarity index 56% rename from docs/how-to/ibek-support.rst rename to docs/how-to/ibek-support.md index 427527bc..bad231e5 100644 --- a/docs/how-to/ibek-support.rst +++ b/docs/how-to/ibek-support.md @@ -1,9 +1,8 @@ -Updating and Testing ibek-support -================================= +# Updating and Testing ibek-support -.. Warning:: - - This is draft only and out of date. It will be updated soon. +:::{Warning} +This is draft only and out of date. It will be updated soon. +::: The ibek-defs repository contains ibek support yaml. Here is an example procedure for local testing of changes to support yaml in ibek-defs @@ -11,20 +10,20 @@ along side IOC yaml that uses it. (Suggest you do this inside a dev-e7 workspace devcontainer) -.. 
code-block:: bash
-
-   cd my-workspace-folder
+```bash
+cd my-workspace-folder

-   # clone ibek-defs
-   git clone git@github.com:epics-containers/ibek-defs.git
-   # clone an example domain repo with example IOC yaml
-   git clone git@gitlab.diamond.ac.uk:controls/containers/accelerator/acc-psc.git
+# clone ibek-defs
+git clone git@github.com:epics-containers/ibek-defs.git
+# clone an example domain repo with example IOC yaml
+git clone git@gitlab.diamond.ac.uk:controls/containers/accelerator/acc-psc.git

-   # get latest ibek installed
-   pip install ibek
+# get latest ibek installed
+pip install ibek

-   cd acc-psc/iocs/sr25a-ioc-01
-   ibek build-startup config/ioc.boot.yaml ../../../ibek-defs/*/*.yaml
+cd acc-psc/services/sr25a-ioc-01
+ibek build-startup config/ioc.boot.yaml ../../../ibek-defs/*/*.yaml
+```

This will get ibek to generate a startup script and database generation
script in the config folder. It uses config/ioc.boot.yaml as the description of
diff --git a/docs/how-to/own_tools.rst b/docs/how-to/own_tools.md
similarity index 74%
rename from docs/how-to/own_tools.rst
rename to docs/how-to/own_tools.md
index fc55b4a9..bc983eeb 100644
--- a/docs/how-to/own_tools.rst
+++ b/docs/how-to/own_tools.md
@@ -1,24 +1,21 @@
-Choose Your Developer Environment
-=================================
+# Choose Your Developer Environment

The tutorials walk through the use of a standard set of developer tools.
You can use others if you wish but support is limited currently.

-.. _own_editor:
+(own-editor)=

-Working with your own code editor
----------------------------------
+## Working with your own code editor

If you have your own preferred code editor, you can use it instead of
vscode. We recommend developing generic IOCs using a devcontainer.
Devcontainer supporting tools are listed here
-https://containers.dev/supporting.
+<https://containers.dev/supporting>.

epics-containers has been tested with

- vscode
- GitHub Codespaces
-
TODO: add instructions for using other editors.
Potentially we could enhance the epics-containers-cli to support other editors with minimal effort. diff --git a/docs/how-to/phoebus.rst b/docs/how-to/phoebus.md similarity index 65% rename from docs/how-to/phoebus.rst rename to docs/how-to/phoebus.md index cf5867ae..f6606997 100644 --- a/docs/how-to/phoebus.rst +++ b/docs/how-to/phoebus.md @@ -1,12 +1,11 @@ -Viewing Operator Interfaces with Phoebus -======================================== +# Viewing Operator Interfaces with Phoebus Phoebus is a Java application that can be used to view operator interfaces. epics-containers will support auto generation of engineering screens for -Phoebus using `PVI `_. +Phoebus using [PVI](https://github.com/epics-containers/pvi). This is the initial target for epics-containers GUIs, other OPI formats may be supported in the future. OPI file generation is work in progress and this page will be updated when -it is ready (est Feb 2024). \ No newline at end of file +it is ready (est Feb 2024). diff --git a/docs/how-to/useful_k8s.md b/docs/how-to/useful_k8s.md new file mode 100644 index 00000000..f7283b58 --- /dev/null +++ b/docs/how-to/useful_k8s.md @@ -0,0 +1,161 @@ +# Kubernetes Additional How To's + +(install-dashboard)= + +## Install the Kubernetes Dashboard + +The dashboard gives you a nice GUI for exploring and controlling your cluster. +It is very useful for new users to get an understanding of what Kubernetes +has to offer. + +These commands should be run after you have set up your Kubernetes cluster and +setup the environment variables as described in {any}`../tutorials/setup_k8s`. 
+ +Execute this on your workstation: + +```bash +GITHUB_URL=https://github.com/kubernetes/dashboard/releases +VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||') +kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml +``` + +Then create the admin user and role by executing the following: + +```bash +kubectl apply -f - </dev/null):0 +export LIBGL_ALWAYS_INDIRECT=1 + +# IP ADDRESS from above kubectl command +./opi/stexample-gui.sh -e EPICS_CA_ADDR_LIST=192.168.86.33 +``` + +[64bit raspberry pi os]: https://www.raspberrypi.org/forums/viewtopic.php?t=275370 +[dashboard screen url]: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=epics-iocs +[docker for wsl]: https://docs.docker.com/docker-for-windows/wsl/ +[wsl gui support]: https://docs.microsoft.com/en-us/windows/wsl/tutorials/gui-apps +[wsl2 instructions]: https://docs.microsoft.com/en-us/windows/wsl/install-win10 +[x11 server for windows]: https://sourceforge.net/projects/vcxsrv/ diff --git a/docs/how-to/useful_k8s.rst b/docs/how-to/useful_k8s.rst deleted file mode 100644 index 56121a6e..00000000 --- a/docs/how-to/useful_k8s.rst +++ /dev/null @@ -1,161 +0,0 @@ - -Kubernetes Additional How To's -============================== - -.. _install_dashboard: - -Install the Kubernetes Dashboard --------------------------------- - -The dashboard gives you a nice GUI for exploring and controlling your cluster. -It is very useful for new users to get an understanding of what Kubernetes -has to offer. - -These commands should be run after you have set up your Kubernetes cluster and -setup the environment variables as described in `../tutorials/setup_k8s`. - -Execute this on your workstation: - -.. 
code-block:: bash - - GITHUB_URL=https://github.com/kubernetes/dashboard/releases - VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||') - kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml - -Then create the admin user and role by executing the following: - -.. code-block:: bash - - kubectl apply -f - </dev/null):0 - export LIBGL_ALWAYS_INDIRECT=1 - - # IP ADDRESS from above kubectl command - ./opi/stexample-gui.sh -e EPICS_CA_ADDR_LIST=192.168.86.33 - - -.. _WSL2 instructions: https://docs.microsoft.com/en-us/windows/wsl/install-win10 -.. _docker for WSL: https://docs.docker.com/docker-for-windows/wsl/ -.. _X11 Server for Windows: https://sourceforge.net/projects/vcxsrv/ -.. _WSL GUI Support: https://docs.microsoft.com/en-us/windows/wsl/tutorials/gui-apps diff --git a/docs/images/bl01t-actions.png b/docs/images/bl01t-actions.png index 3ed00773..2c837afc 100644 Binary files a/docs/images/bl01t-actions.png and b/docs/images/bl01t-actions.png differ diff --git a/docs/index.md b/docs/index.md index a34247c7..2a0bb795 100644 --- a/docs/index.md +++ b/docs/index.md @@ -5,16 +5,6 @@ html_theme.sidebar_secondary.remove: true ```{include} ../README.md :end-before: 631291db1751 +STEP 15/19: RUN StreamDevice/install.sh 2.8.24 +... etc ... +``` - STEP 14/19: COPY ibek-support/StreamDevice/ StreamDevice/ - --> 631291db1751 - STEP 15/19: RUN StreamDevice/install.sh 2.8.24 - ... etc ... - -- copy the hash of the step you want to debug e.g. ``631291db1751`` in this case +- copy the hash of the step you want to debug e.g. `631291db1751` in this case - podman run -it --entrypoint /bin/bash 631291db1751 # (the hash you copied) Now we have a prompt inside the part-built container and can retry the failed command. -.. 
code-block:: bash - - cd /workspaces/ioc-lakeshore340/ibek-support - StreamDevice/install.sh 2.8.24 +```bash +cd /workspaces/ioc-lakeshore340/ibek-support +StreamDevice/install.sh 2.8.24 +``` You should see the same error again. @@ -93,74 +89,70 @@ when building a new support module. It implies that there is some dependency missing. There is a good chance this is a system dependency, in which case we want to search the Ubuntu repositories for the missing package. -A really good way to investigate this kind of error is with ``apt-file`` -which is a command line tool for searching Debian packages. ``apt-file`` is +A really good way to investigate this kind of error is with `apt-file` +which is a command line tool for searching Debian packages. `apt-file` is not currently installed in the devcontainer. So you have two choices: - Install it in the devcontainer - this is temporary and will be lost when the container is rebuilt. Ideal if you don't have install rights on your workstation. - - Install it on your workstation - ideal if you have rights as you only need to install it once. TODO: consider adding apt-file to the base container developer target. Whether inside the container or in your workstation terminal, install -``apt-file`` like this: +`apt-file` like this: -.. code-block:: bash - - # drop the sudo from the start of the command if using podman - sudo apt update - sudo apt install apt-file +```bash +# drop the sudo from the start of the command if using podman +sudo apt update +sudo apt install apt-file +``` Now we can search for the missing file: -.. code-block:: bash - - apt-file search pcre.h +```bash +apt-file search pcre.h +``` There are a few results, but the most promising is: - libpcre3-dev: /usr/include/pcre.h +> libpcre3-dev: /usr/include/pcre.h Pretty much every time you are missing a header file you will find it in a -system package with a name ending in ``-dev``. +system package with a name ending in `-dev`. 
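If you suspect the missing file may already be installed locally, `dpkg -S` offers a quicker lookup than `apt-file`, since it only searches packages that are already installed. A small illustration (with a guard in case you are not on a Debian-based system):

```bash
# dpkg -S maps an existing file back to the installed package that owns it;
# here we look up the package providing the `ls` binary as a demonstration
dpkg -S "$(command -v ls)" 2>/dev/null || echo "dpkg lookup failed (not a Debian-based system?)"
```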
Now we can install the missing package in the container and retry the build: -.. code-block:: bash - - apt-get install -y libpcre3-dev - StreamDevice/install.sh 2.8.24 +```bash +apt-get install -y libpcre3-dev +StreamDevice/install.sh 2.8.24 +``` You should find the build succeeds. But this is not the whole story. There -is another line in ``install.h`` that I added to make this work: +is another line in `install.sh` that I added to make this work: -.. code-block:: bash +```bash +ibek support add-config-macro ${NAME} PCRE_LIB /usr/lib/x86_64-linux-gnu +``` - ibek support add-config-macro ${NAME} PCRE_LIB /usr/lib/x86_64-linux-gnu - -This added a macro to ``CONFIG_SITE.linux-x86_64.Common`` that tells the +This added a macro to `CONFIG_SITE.linux-x86_64.Common` that tells the Makefiles to add an extra library path to the linker command line. Working out how to do this is a matter of taking a look in the Makefiles. But the nice thing is that you can experiment with things inside the container and get them working without having to keep rebuilding the container. -Note that ``ibek support add-config-macro`` is idempotent, so you can run it -multiple times without getting repeated entries in the CONFIG. All ``ibek`` +Note that `ibek support add-config-macro` is idempotent, so you can run it +multiple times without getting repeated entries in the CONFIG. All `ibek` commands behave this way as far as possible. Once you are happy with your manual changes you can make them permanent by adding to the install.sh or Dockerfile, then try a full rebuild. -Making Changes Inside the Container ----------------------------------- +## Making Changes Inside the Container You will find that the container includes busybox tools, vim and ifconfig. These should provide enough tools to investigate and fix most build problems. You are also free to use apt-get to install any other tools you need as demonstrated above (type busybox to see the list of available tools).
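For reference, the effect of the `ibek support add-config-macro` command described above is a macro definition appended to the CONFIG file. A sketch of the resulting entry (illustrative only — the real file is generated by ibek and may differ in layout):

```makefile
# CONFIG_SITE.linux-x86_64.Common (illustrative fragment)
# written by: ibek support add-config-macro StreamDevice PCRE_LIB /usr/lib/x86_64-linux-gnu
PCRE_LIB = /usr/lib/x86_64-linux-gnu
```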
- - diff --git a/docs/tutorials/deploy_example.md b/docs/tutorials/deploy_example.md new file mode 100644 index 00000000..4b9784a7 --- /dev/null +++ b/docs/tutorials/deploy_example.md @@ -0,0 +1,261 @@ +# Deploying and Managing IOC Instances + +## Introduction + +This tutorial will show you how to deploy and manage the example IOC Instance +that came with the template beamline repository. +You will need to have your own `bl01t` beamline repository +from the previous tutorial. + +For these early tutorials we are not using Kubernetes and instead are deploying +IOCs to the local docker or podman instance. So for these tutorials we +shall pretend that your workstation is one of the IOC servers on the fictitious +beamline `BL01T`. + +## Continuous Integration + +Before we change anything, we shall make sure that the beamline repository CI +is working as expected. To do this go to the following URL (make sure you insert +your GitHub account name where indicated): + +``` +https://github.com/YOUR_GITHUB_ACCOUNT/bl01t/actions +``` + +You should see something like the following: + +:::{figure} ../images/bl01t-actions.png +the GitHub Actions page for the example beamline repository +::: + +This is a list of all the Continuous Integration (CI) jobs that have been +executed (or are executing) for your beamline repository. There should be +two jobs listed, one for when you pushed the main branch and one for when you +tagged with the `CalVer` version number. + +If you click on the most recent job you can drill in and see the steps that +were executed. The most interesting step is `Run IOC checks`. This +is executing the script `.github/workflows/ci_verify.sh`. This goes through +each of the IOC Instances in the `services` folder and checks that they +have valid configuration. + +For the moment just check that your CI passed and if not review that you +have followed the instructions in the previous tutorial correctly. 
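The `Run IOC checks` step described above is essentially a loop over the instance folders. A minimal, hypothetical sketch of such a check (the real `.github/workflows/ci_verify.sh` may do more; demo folders are created here so the snippet is self-contained):

```bash
# create a demo services tree so the sketch can run anywhere
mkdir -p /tmp/ci-demo/services/bl01t-ea-test-01/config
cd /tmp/ci-demo

# check that every IOC instance folder has a config directory
for ioc in services/*/; do
  if [ -d "${ioc}config" ]; then
    echo "OK: ${ioc}"
  else
    echo "FAILED: ${ioc} has no config folder"
  fi
done
```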
+ +## Set up Environment for BL01T Beamline + +The standard way to set up your environment for any ec services repository is to get the environment.sh script from that repository and source it. + +Start this section of the tutorial inside the vscode project that you created in the previous tutorial. Make sure you have a terminal open and the current working directory is your `bl01t` project root folder. + +First make sure you have the local binaries folder in your path by adding +the following to the end of your `$HOME/.bash_profile` file: + +```bash +export PATH="$PATH:~/.local/bin" +``` + +Then follow these steps: + +```bash +# make sure we have the path setup from the bash_profile +source ~/.bash_profile + +mkdir -p ~/.local/bin +# make a copy of the environment.sh script named after the beamline +cp environment.sh ~/.local/bin/bl01t +source bl01t +``` + +Once you have done this and logged out and back in again to pick up your new +profile you should be able to enable the `bl01t` environment as follows: + +```bash +# first make sure you have loaded your virtual environment for the ec tool +source $HOME/ec-venv/bin/activate # DLS users don't need this step +source bl01t +``` + +## Deploy the Example IOC Instance + +For this section we will be making use of the epics-containers-cli tool. +The command line entry point for the tool is `ec`. For more +details see: {any}`CLI` or try `ec --help`. + +The simplest command to check that the tool is working is `ps`, which lists +the IOC Instances that are currently running: + +```bash +ec ps +``` + +You should see no IOCs listed as you have not yet started an IOC Instance. + +The following command will deploy the example IOC instance to your local +machine (unless you have skipped ahead and set up your Kubernetes config +in which case the same command will deploy to your Kubernetes cluster).
+ +```bash +cd bl01t # (if you are not already in your beamline repo) +ec deploy-local services/bl01t-ea-test-01 +``` + +You will be prompted to say that this is a *TEMPORARY* deployment. This is +because we are deploying directly from the local filesystem. You should only +use this for testing purposes because there is no guarantee that you could +ever roll back to this version of the IOC (as it is lost as soon as filesystem +changes are made). Local filesystem deployments are given a beta version +number to indicate that they are not permanent. + +You can now see the beta IOC instance running with: + +
+```bash
+$ ec ps
+            name          version   state                                                             image
+bl01t-ea-test-01 2024.2.16-b15.11 running ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2024.2.1
+```
+ +At the end of the last tutorial we tagged the beamline repository with a +`CalVer` version number and pushed it up to GitHub. This means that we +can now use that tagged release of the IOC instance. First let's +check that the IOC instance version is available as expected. The following +command lists all of the tagged versions of the IOC instance that are +available in the GitHub repository. + +
+```bash
+$ ec instances bl01t-ea-test-01
+Available instances for bl01t-ea-test-01:
+2024.2.1
+```
+
+ +:::{note} +The above command is the first one to look at your GitHub repository. +This is how it finds out the versions +of the IOC instance that are available. If you get an error it may be +because you set EC_SERVICES_REPO incorrectly in environment.sh. Check it +and source it again to pick up any changes. +::: + +:::{hint} +ec supports command line completion, which means that entering ` ` will give hints on the command line: + +```bash +$ ec +attach deploy exec list logs start template +delete deploy-local instances log-history restart stop validate +$ ec instances +$ ec instances bl01t-ea-test-0 +bl01t-ea-test-01 bl01t-ea-test-02 +``` + +To enable this behavior in your shell run the command `ec --install-completion` +::: + +Now that we know the latest version number we can deploy a release version. +This command will extract the IOC instance using the tag from GitHub and deploy +it to your local machine: + +```bash +$ ec deploy bl01t-ea-test-01 2024.2.1 +bdbd155d437361fe88bce0faa0ddd3cd225a9026287ac5e73545aeb4ab3a67e9 + +$ ec ps +IOC NAME VERSION STATUS IMAGE +bl01t-ea-test-01 2024.2.1 Up 4 seconds ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2023.10.5 +``` + +### IMPORTANT: deploy-local vs deploy + +Be aware of the distinction between `deploy-local` and `deploy`. Both of these commands create a running instance of the IOC in the target environment (currently your local machine - later on a Kubernetes Cluster). However, `deploy-local` gets the IOC instance description YAML direct from your local filesystem. This means it is not likely to be available for re-deployment later on. `deploy` gets the IOC instance description YAML from the GitHub repository with a specific tag and therefore is a known state that can be recovered at a later date. + +Always strive to have released versions of IOC instances deployed in your +environments. `deploy-local` is only for temporary testing purposes.
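The release workflow above relies on CalVer git tags pushed to GitHub. If you want to experiment with the tagging mechanics without touching your beamline repository, here is an illustrative sketch using a throwaway repository:

```bash
# illustrative CalVer (year.month.release) tagging in a throwaway repo
demo=$(mktemp -d)
cd "$demo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git tag 2024.2.1   # the kind of tag that a release deployment resolves
git tag --list     # prints: 2024.2.1
```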
+ +## Managing the Example IOC Instance + +### Starting and Stopping IOCs + +To stop / start the example IOC try the following commands. Note that +`ec ps -a` shows you all IOCs including stopped ones. + +```bash +ec ps -a +ec stop bl01t-ea-test-01 +ec ps -a +ec start bl01t-ea-test-01 +ec ps +``` + +:::{note} +Generic IOCs. + +You may have noticed that the IOC instance is showing that it has +an image `ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2024.2.1`. + +This is a Generic IOC image and all IOC Instances must be based upon one +of these images. This IOC instance has no startup script and is therefore +not functional; it could have been based on any Generic IOC. +::: + +### Monitoring and interacting with an IOC shell + +To attach to the IOC shell you can use the following command. HOWEVER, this +will attach you to nothing in the case of this example IOC as it has no +shell. In the next tutorial we will use this command to interact with +iocShell. + +```bash +ec attach bl01t-ea-test-01 +``` + +Use the command sequence ctrl-P then ctrl-Q to detach from the IOC. **However, +there are issues with both VSCode and IOC shells capturing ctrl-P. Until +this is resolved it may be necessary to close the terminal window to detach.** +You can also restart and detach from the IOC using ctrl-D or ctrl-C, or +by typing `exit`. + +To run a bash shell inside the IOC container: + +```bash +ec exec bl01t-ea-test-01 +``` + +Once you have a shell inside the container you could inspect the following +folders: + +```{eval-rst} +=================== ======================================================= +ioc code /epics/ioc +support modules /epics/support +EPICS binaries /epics/epics-base +IOC instance config /epics/ioc/config +IOC startup script /epics/runtime +=================== ======================================================= +``` + +Being at a terminal prompt inside the IOC container can be useful for debugging +and testing.
You will have access to caget and caput, plus other EPICS tools, +and you can inspect files such as the IOC startup script. + +### Logging + +To get the current logs for the example IOC: + +```bash +ec logs bl01t-ea-test-01 +``` + +Or follow the IOC log until you hit ctrl-C: + +```bash +ec logs bl01t-ea-test-01 -f +``` + +You should see the log of ibek loading and generating the IOC startup assets and then the ioc shell startup script log. + +You can also attach to the IOC and check that it has started correctly by using the 'dbl' command to list all the records in its IOC database. + +```bash +ec attach bl01t-ea-test-01 +dbl +# ctrl-p ctrl-q to detach +``` + diff --git a/docs/tutorials/deploy_example.rst b/docs/tutorials/deploy_example.rst deleted file mode 100644 index fc6e1b4a..00000000 --- a/docs/tutorials/deploy_example.rst +++ /dev/null @@ -1,273 +0,0 @@ -Deploying and Managing IOC Instances -==================================== - -Introduction ------------- - -This tutorial will show you how to deploy and manage the example IOC Instance -that came with the template beamline repository. -You will need to have your own ``bl01t`` beamline repository -from the previous tutorial. - -For these early tutorials we are not using Kubernetes and instead are deploying -IOCs to the local docker or podman instance. So for these tutorials we -shall pretend that your workstation is one of the IOC servers on the fictitious -beamline ``BL01T``. - -Continuous Integration ---------------------- - -Before we change anything, we shall make sure that the beamline repository CI -is working as expected. To do this go to the following URL (make sure you insert -your GitHub account name where indicated): - -.. code:: - - https://github.com:**YOUR GITHUB ACCOUNT**/bl01t/actions - -You should see something like the following: - ..
figure:: ../images/bl01t-actions.png - - the GitHub Actions page for the example beamline repository - -This is a list of all the Continuous Integration (CI) jobs that have been -executed (or are executing) for your beamline repository. There should be -two jobs listed, one for when you pushed the main branch and one for when you -tagged with the ``CalVer`` version number. - -If you click on the most recent job you can drill in and see the steps that -were executed. The most interesting step is ``Run bash ./ci_verify.sh``. This -is executing the script in the root of your beamline repository that verifies -each IOC instance in the ``iocs`` folder. In future we can make this script -more sophisticated when we have simulated hardware to test against. - -For the moment just check that your CI passed and if not review that you -have followed the instructions in the previous tutorial correctly. - -Set up Environment for BL01T Beamline -------------------------------------- - -The standard way to set up your environment for any domain is to get -the environment.sh script from the domain repository and source it. - -First make sure you have the local binaries folder in your path by adding -the following to the end of you ``$HOME/.bash_profile`` file: - -.. code-block:: bash - - export PATH="$PATH:~/.local/bin" - -Then follow these steps (make sure you insert your GitHub account name -where indicated): - -.. code-block:: bash - - mkdir -p ~/.local/bin - curl -o ~/.local/bin/bl01t https://raw.githubusercontent.com/**YOUR GITHUB ACCOUNT**/bl01t/main/environment.sh?token=$(date +%s) - source ~/.bash_profile - source bl01t - -Once you have done this and logged out and back in again to pick up your new -profile you should be able to enable the ``bl01t`` environment as follows: - -.. 
code-block:: bash - - # first make sure you have loaded your virtual environment for the ec tool - source $HOME/ec-venv/bin/activate # DLS users don't need this step - source bl01t - - -Deploy the Example IOC Instance -------------------------------- - -For this section we will be making use of the epics-containers-cli tool. -This command line entry point for the tool is ``ec``. For more -details see: `CLI` or try ``ec --help``. - -The simplest command to check that the tool is working is ``ps`` which lists -the IOC Instances that are currently running: - -.. code-block:: bash - - ec ps - -You should see some headings and an empty list as you have not yet started an -IOC Instance. - -The following command will deploy the example IOC instance to your local -machine (unless you have skipped ahead and set up your Kubernetes config -in which case the same command will deploy to your Kubernetes cluster). - -.. code-block:: bash - - cd bl01t # (if you are not already in your beamline repo) - ec ioc deploy-local iocs/bl01t-ea-ioc-01 - -You will be prompted to say that this is a *TEMPORARY* deployment. This is -because we are deploying directly from the local filesystem. You should only -use this for testing purposes because there is no guarantee that you could -ever roll back to this version of the IOC (as it is lost as soon as filesystem -changes are made). Local filesystem deployments are given a beta version -number to indicate that they are not permanent. - -You can now see the beta IOC instance running with: - -.. code-block:: bash - - $ ec ps - IOC NAME VERSION STATUS IMAGE - bl01t-ea-ioc-01 2024.1.19-b11.53 Up 6 minutes ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2024.1.1 - -At the end of the last tutorial we tagged the beamline repository with a -``CalVer`` version number and pushed it up to GitHub. This means that we -can now use that tagged release of the IOC instance. First let's -check that the IOC instance version is available as expected. 
The following -command lists all of the tagged versions of the IOC instance that are -available in the GitHub repository. - -.. code-block:: bash - - $ ec ioc instances bl01t-ea-ioc-01 - Available instance versions for bl01t-ea-ioc-01: - 2024.1.1 - -.. note:: - - The above command is the first one to look at your github repository. - This is how it finds out the versions - of the IOC instance that are available. If you get an error it may be - because you set EC_SERVICES_REPO incorrectly in environment.sh. Check it - and source it again to pick up any changes. - -.. hint:: - - ec supports command line completion, which means that entering `` `` will give hints on the command line: - - .. code-block:: bash - - $ ec ioc - attach deploy exec list logs start template - delete deploy-local instances log-history restart stop validate - $ ec ioc instances - $ ec ioc instances bl01t-ea-ioc-0 - bl01t-ea-ioc-01 bl01t-ea-ioc-02 - - To enable this behavior in your shell run the command ``ec --install-completion`` - -Now that we know the latest version number we can deploy a release version. -This command will extract the IOC instance using the tag from GitHub and deploy -it to your local machine: - -.. code-block:: bash - - $ ec ioc deploy bl01t-ea-ioc-01 2024.1.1 - bdbd155d437361fe88bce0faa0ddd3cd225a9026287ac5e73545aeb4ab3a67e9 - - $ ec ps - IOC NAME VERSION STATUS IMAGE - bl01t-ea-ioc-01 2024.1.1 Up 4 seconds ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2023.10.5 - -IMPORTANT: deploy-local vs deploy -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Be aware of the distinction of ``deploy-local`` vs ``deploy``. Both of these -commands create a running instance of the IOC in the target environment (currently -your local machine - later on a Kubernetes Cluster). However, ``deploy-local`` -gets the IOC instance description YAML direct from your local filesystem. This -means it is not likely to be available for re-deployment later on. 
``deploy`` -gets the IOC instance description YAML from the GitHub repository with able -specific tag and therefore is a known state that can be recovered at a later -date. - -Always strive to have released versions of IOC instances deployed in your -environments. ``deploy-local`` is only for temporary testing purposes. - -Managing the Example IOC Instance ---------------------------------- - -Starting and Stopping IOCs -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To stop / start the example IOC try the following commands. Note that -``ec ps -a`` shows you all IOCs including stopped ones. - -.. code-block:: bash - - ec ps -a - ec ioc stop bl01t-ea-ioc-01 - ec ps -a - ec ioc start bl01t-ea-ioc-01 - ec ps - -.. Note:: - - Generic IOCs. - - You may have noticed that the IOC instance has is showing that it has - an image ``ghcr.io/epics-containers/ioc-adsimdetector-linux-runtime:2024.1.1``. - - This is a Generic IOC image and all IOC Instances must be based upon one - of these images. This IOC instance has no startup script and is therefore - not functional, it could have been based on any Generic IOC. - -Monitoring and interacting with an IOC shell -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To attach to the IOC shell you can use the following command. HOWEVER, this -will attach you to nothing in the case of this example IOC as it has no -shell. In the next tutorial we will use this command to interact with -iocShell. - -.. code-block:: bash - - ec ioc attach bl01t-ea-ioc-01 - -Use the command sequence ctrl-P then ctrl-Q to detach from the IOC. **However, -there are issues with both VSCode and IOC shells capturing ctrl-P. until -this is resolved it may be necessary to close the terminal window to detach.** -You can also restart and detach from the IOC using ctrl-D or ctrl-C, or -by typing ``exit``. - -To run a bash shell inside the IOC container: - -.. 
code-block:: bash - - ec ioc exec bl01t-ea-ioc-01 - -Once you have a shell inside the container you could inspect the following -folders: - -=================== ======================================================= -ioc code /epics/ioc -support modules /epics/support -EPICS binaries /epics/epics-base -IOC instance config /epics/ioc/config -IOC startup script /epics/runtime -=================== ======================================================= - -Being at a terminal prompt inside the IOC container can be useful for debugging -and testing. You will have access to caget and caput, plus other EPICS tools, -and you can can inspect files such as the IOC startup script. - -Logging -~~~~~~~ - -To get the current logs for the example IOC: - -.. code-block:: bash - - ec ioc logs bl01t-ea-ioc-01 - -Or follow the IOC log until you hit ctrl-C: - -.. code-block:: bash - - ec ioc logs bl01t-ea-ioc-01 -f - -You will notice that this IOC simply prints out a message regarding what -you can place in the /epics/ioc/config folder. In the next tutorial -we will look at how to configure a real EPICS IOC. - - - diff --git a/docs/tutorials/dev_container.md b/docs/tutorials/dev_container.md new file mode 100644 index 00000000..b0480633 --- /dev/null +++ b/docs/tutorials/dev_container.md @@ -0,0 +1,365 @@ +# Developer Containers + +(ioc-change-types)= + +## Types of Changes + +Containerized IOCs can be modified in 3 distinct places (in order of decreasing +frequency of change but increasing complexity): + +(changes_1)= +### Changing the IOC instance + +This means making changes to the IOC instance folders +which appear in the `iocs` folder of an {any}`ec-services-repo`. 
e.g.: + +- changing the EPICS DB (or the `ibek` files that generate it) +- altering the IOC boot script (or the `ibek` files that generate it) +- changing the version of the Generic IOC used in values.yaml +- for Kubernetes: the values.yaml can override any settings used by helm + so these can also be adjusted on a per IOC instance basis. +- for Kubernetes: changes to the global values.yaml + file found in `beamline-chart`; these affect all IOCs in the repository. + +(changes_2)= +### Changing the Generic IOC + +This involves altering how the Generic IOC container image +is built. This means making changes to an `ioc-XXX` +source repo and publishing a new version of the container image. +Types of changes include: + + - changing the EPICS base version + - changing the versions of EPICS support modules compiled into the IOC binary + - adding new support modules + - altering the system dependencies installed into the container image + +(changes_3)= +### Changing the dependencies + +Sometimes you will need to alter the support modules used by the Generic IOC. Making use of such changes requires: + +- publishing a new release of the support module +- updating and publishing the Generic IOC +- updating and publishing the IOC instance + +## Need for a Developer Container + +For all of the above types of changes, the epics-containers approach allows local testing of the changes before going through the publishing cycle. This allows us to have a fast 'inner loop' of development and testing. + +Also, epics-containers provides a mechanism for creating a separate workspace for working on all of the above elements in one place. + +The earlier tutorials were firmly in the realm of [](changes_1) above. It was adequate for us to install a container platform, IDE and python and that is all we needed. + +Once you get to the level of [](changes_2) you need to have compilers and build tools installed. You might also require system level dependencies.
AreaDetector, which we used earlier, has a long list of system dependencies that need to be installed in order to compile it. Traditionally we have installed all of these onto developer workstations or separately compiled the dependencies as part of the build. + +These tools and dependencies will differ from one Generic IOC to the next. + +When using epics-containers we don't need to install any of these tools or dependencies on our local machine. Instead we can use a developer container, and in fact our Generic IOC *is* our developer container. + +When the CI builds a Generic IOC it creates [two targets](https://github.com/orgs/epics-containers/packages?repo_name=ioc-adsimdetector): + +| | | +|---|---| +| **developer** | this target installs all the build tools and build time dependencies into the container image. It then compiles the support modules and IOC. | +| **runtime** | this target installs only the runtime dependencies into the container. It also extracts the built runtime assets from the developer target. | + +The developer stage of the build is a necessary step in order to get a +working runtime container. However, we choose to keep this stage as an additional +build target and it then becomes a perfect candidate for a developer container. + +VSCode has excellent support for using a container as a development environment. +The next section will show you how to use this feature. Note that you can use +any IDE that supports remote development in a container; you could also +simply launch the developer container in a shell and use it via CLI only. + +If you want to use the CLI and terminal based editors like `neovim` then +you should use the developer container CLI to get your developer container +started. This means the configuration in `.devcontainer/devcontainer.json` +is used to start the container. This is necessary as that is where the +useful host filesystem mounts and other config items are defined.
See +[devcontainer-cli](https://code.visualstudio.com/docs/devcontainers/devcontainer-cli) +for details. + +## Starting a Developer Container + +:::{Warning} +DLS Users and Redhat Users: + +There is a +[bug in VSCode devcontainers extension](https://github.com/microsoft/vscode-remote-release/issues/8557) +at the time of writing that makes it incompatible with podman and an SELinux +enabled /tmp directory. This will affect most Redhat users and you will see an +error regarding permissions on the /tmp folder when VSCode is building your +devcontainer. + +Here is a workaround that disables SELinux labels in podman. +Paste this into a terminal: + +```bash +sed -i ~/.config/containers/containers.conf -e '/label=false/d' -e '/^\[containers\]$/a label=false' +``` +::: + +### Preparation + +For this section we will work with the ADSimDetector Generic IOC that we used in +previous tutorials. Let's go and fetch a version of the Generic IOC source and +build it locally. + +For the purposes of this tutorial we will place the source in a folder right +next to your test beamline `bl01t`: + +```bash +# starting from folder bl01t so that the clone is next to bl01t +cd .. +git clone git@github.com:epics-containers/ioc-adsimdetector.git +cd ioc-adsimdetector +./build +``` + +This will take a few minutes to complete. A philosophy of epics-containers is +that Generic IOCs build all of their own support. This is to avoid problematic +dependency trees. For this reason building something as complex as AreaDetector +will take a few minutes when you first build it. + +A nice thing about containers is that the build is cached so that a second build +will be almost instant unless you have changed something that requires some +steps to be rebuilt. + +:::{note} +Before continuing this tutorial make sure you have not left the IOC +bl01t-ea-test-02 running from a previous tutorial. 
Execute this command +outside of the devcontainer to stop it: + +```bash +ec stop bl01t-ea-test-02 +``` +::: + +### Launching the Developer Container + +In this section we are going to use vscode to launch a developer container. +This means that all vscode terminals and editors will be running inside a container +and accessing the container filesystem. This is a very convenient way to work +because it makes it possible to archive away the development environment +alongside the source code. It also means that you can easily share the +development environment with other developers. + +For epics-containers the generic IOC *is* the developer container. When +you build the developer target of the container in CI it will contain all the +build tools and dependencies needed to build the IOC. It will also contain +the IOC source code and the support module source code. For this reason +we can also use the same developer target image to make the developer +container itself. We then have an environment that encompasses all the +source you could want to change inside of a Generic IOC, and the +tools to build and test it. + +It is also important to understand that although your vscode session is +entirely inside the container, some of your host folders have been mounted +into the container. This is done so that your important changes to source +code would not be lost if the container were rebuilt. See [](container-layout) +for details of which host folders are mounted into the container. + +Once built, open the project in VSCode: + +```bash +code . +``` + +When it opens, VSCode may prompt you to open in a devcontainer. If not, click +the green icon in the bottom left of the VSCode window and select +`Reopen in Container`. + +You should now be *inside* the container. All terminals started in VSCode will +be running inside the container. Every file that you open with the VSCode editor +will be inside the container.
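A quick way to confirm which side of the boundary a given terminal is on — a hedged heuristic, not an official API: docker creates a `/.dockerenv` marker file, and the cgroup paths of PID 1 often name the container runtime.

```bash
# heuristic check for "am I inside a container?" - not authoritative
if [ -f /.dockerenv ] || grep -qE 'docker|podman|containerd' /proc/1/cgroup 2>/dev/null; then
  echo "inside a container"
else
  echo "on the host"
fi
```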
+ +There are some caveats because some folders are mounted from the host file +system. For example, the `ioc-adsimdetector` project folder +is mounted into the container as a volume. It is mounted under +`/workspaces/ioc-adsimdetector`. This means that you can edit the source code +from your local machine and the changes will be visible inside the container and +outside the container. This is a good thing as you should consider the container +filesystem to be a temporary filesystem that will be destroyed when the container +is rebuilt or deleted. + +### Preparing the IOC for Testing + +:::{note} +Troubleshooting: if you are experiencing problems with the devcontainer you +can try resetting your vscode and vscode server caches on your host machine. +To do this, exit vscode, run the following command, and then restart vscode: + +```bash +rm -rf ~/.vscode/* ~/.vscode-server/* +``` +::: + +Now that you are *inside* the container you have access to the tools built into +it, including `ibek`. + +The first commands you should run are as follows: + +```bash +# open a terminal: Menu -> Terminal -> New Terminal +cd /epics/ioc +make +``` + +It is useful to understand that `/epics/ioc` is a soft link to the IOC source that came with your generic IOC source code. Therefore if you edit this code and recompile it, the changes will be visible inside and outside the container, meaning that the repository `ioc-adsimdetector` now shows your changes in its `ioc` folder and you could push them +up to GitHub if you wanted. + +epics-containers devcontainers have carefully curated host filesystem mounts. This allows the developer environment to look as similar as possible to the runtime container. It will also preserve any important changes that you make in the host file system. This is essential because the container filesystem is temporary and will be destroyed when the container is rebuilt or deleted.
+ +See [](container-layout) for details of which host folders are mounted into the container. + +The IOC source code is entirely boilerplate; `/epics/ioc/iocApp/src/Makefile` determines which dbd and lib files to link by including two files that `ibek` generated during the container build. You can see these files in `/epics/support/configure/lib_list` and `/epics/support/configure/dbd_list`. + +Although all Generic IOCs derived from ioc-template start out with the same generic source, you are free to change them if there is a need for different compilation options etc. + +The Generic IOC should now be ready to run inside of the container. To do this: + +```bash +cd /epics/ioc +./start.sh +``` + +You will just see the default output of a Generic IOC that has no Instance +configuration. Hit `Ctrl-C` to stop this default script. + +Next we will add some instance configuration from one of the +IOC instances in the `bl01t` beamline. + +To do this we will add some other folders to our VSCode workspace to make it +easier to work with `bl01t` and to investigate the container filesystem. + +## Adding the Beamline to the Workspace + +To meaningfully test the Generic IOC we will need an instance to test it +against. We will use the `bl01t` beamline that you already made. The devcontainer +has been configured to mount some useful host folders into the container +including the parent folder of the workspace as `/workspaces` so we can work on +multiple peer projects. + +In VSCode click the `File` menu and select `Add Folder to Workspace`. +Navigate to `/workspaces` and you will see all the peers of your `ioc-adsimdetector` +folder (see {any}`container-layout` below). Choose the `bl01t` folder and add it to the +workspace - you may see an error, but if so clicking "Cancel" will +clear it. + +Also take this opportunity to add the folder `/epics` to the workspace. This +is the root folder in which all of the EPICS source and built files are +located.
+ +:::{note} +Docker Users: your account inside the container will not be the owner of +/epics files. vscode may try to open the repos in epics-base and support/\* +and git will complain about ownership. You can cancel out of these errors +as you should not edit project folders inside of `/epics` - they were +built by the container and should be considered immutable. We will learn +how to work on support modules in later tutorials. This error should only +be seen on first launch. podman users will have no such problem because they +will be root inside the container and root built the container. + +To mitigate this problem you can tell vscode not to look for git repos in subfolders, see [](scm_settings). +::: + +You can now easily browse around the `/epics` folder and see all the +support modules and epics-base. This will give you a feel for the layout of +files in the container. Here is a summary (where WS is your workspace on your +host. i.e. the root folder under which your projects are all cloned): + +(container-layout)= + +```{eval-rst} +.. 
list-table:: Developer Container Layout + :widths: 25 35 45 + :header-rows: 1 + + * - Path Inside Container + - Host Mount Path + - Description + + * - /epics/support + - N/A + - root of compiled support modules + + * - /epics/epics-base + - N/A + - compiled epics-base + + * - /epics/ioc + - WS/ioc-adsimdetector/ioc + - soft link to IOC source tree + + * - /epics/runtime + - N/A + - generated startup script and EPICS database files + + * - /epics/ibek-defs + - N/A + - All ibek *Support yaml* files + + * - /epics/pvi-defs + - N/A + - all PVI definitions from support modules + + * - /epics/opi + - N/A + - all OPI files (generated or copied from support) + + * - /workspaces + - WS + - all peers to Generic IOC source repo + + * - /workspaces/ioc-adsimdetector + - WS/ioc-adsimdetector + - Generic IOC source repo (in this example) + + * - /epics/generic-source + - WS/ioc-adsimdetector + - A second - fixed location mount of the Generic IOC source repo to allow `ibek` to find it easily. +``` + +IMPORTANT: remember that the container filesystem is temporary and will be +destroyed when the container is rebuilt or deleted. All folders above with +`Host Mount Path` `N/A` are in the container filesystem. The devcontainer +has been configured to mount the most useful host folders, but note that +all support modules are in the container filesystem. Later we will learn +how to work on support modules, first ensuring that they are made available +in the host filesystem. + +Also note that VSCode keeps your developer container until you rebuild it +or explicitly delete it. Restarting your PC and coming back to the same +devcontainer does keep all state. This can make you complacent about doing +work in the container filesystem, but it is still not recommended. + +(choose-ioc-instance)= + +## Choose the IOC Instance to Test + +Now that we have the beamline repo visible in our container we can easily supply some instance configuration to the Generic IOC. 
This will use the `ibek` tool convenience function `dev instance`, which declares which IOC instance you want to work on in the developer container. + +Try the following: + +``` +cd /epics/ioc +ibek dev instance /workspaces/bl01t/services/bl01t-ea-test-02 +# check that it worked - should see a symlink to the config folder +ls -l config +./start.sh +``` + +This removed any existing config folder and replaced it with the config from the IOC instance bl01t-ea-test-02 by symlinking to that IOC Instance's config folder. Note that we used a soft link; this means we can edit the config, restart the IOC to test it, and the changes will already be in place in the beamline repository. You could therefore open a shell onto the beamline repository at `/workspaces/bl01t` and commit and push the changes. + +## Wrapping Up + +We now have a tidy development environment for working on the Generic IOC, +IOC Instances and even the support modules inside the Generic IOC, all in one +place. We can easily test our changes in place too. In particular note that +we are able to test changes without having to go through a container build +cycle. + +In the following tutorials we will look at how to make changes at each of the +3 levels listed in {any}`ioc-change-types`. diff --git a/docs/tutorials/dev_container.rst b/docs/tutorials/dev_container.rst deleted file mode 100644 index f8b35e9d..00000000 --- a/docs/tutorials/dev_container.rst +++ /dev/null @@ -1,407 +0,0 @@ -Developer Containers -==================== - -.. _ioc_change_types: - -Types of Changes ----------------- - -Containerized IOCs can be modified in 3 distinct places (in order of decreasing -frequency of change but increasing complexity): - -#. The IOC instance: this means making changes to the IOC instance folders - which appear in the ``iocs`` folder of a domain repository.
e.g.: - - - changing the EPICS DB (or the ``ibek`` files that generate it) - - altering the IOC boot script (or the ``ibek`` files that generate it) - - changing the version of the Generic IOC used in values.yaml - - for Kubernetes: the values.yaml can override any settings used by helm - so these can also be adjusted on a per IOC instance basis. - - for Kubernetes: changes to the global values.yaml - file found in ``beamline-chart``, these affect all IOCs in the domain. - -#. The Generic IOC: i.e. altering how the Generic IOC container image - is built. This means making changes to an ``ioc-XXX`` - source repo and publishing a new version of the container image. - Types of changes include: - - - changing the EPICS base version - - changing the versions of EPICS support modules compiled into the IOC binary - - adding new support modules - - altering the system dependencies installed into the container image - -#. The dependencies - Support modules used by the Generic IOC. Changes to support - module repos. To make use of these changes would require: - - - publishing a new release of the support module, - - updating and publishing the Generic IOC - - updating and publishing the IOC instance - -For all of the above, the epics-containers approach allows -local testing of the changes before going through the publishing cycle. -This allows us to have a fast 'inner loop' of development and testing. - -Also, epics-containers provides a mechanism for creating a separate workspace for -working on all of the above elements in one place. - -Need for a Developer Container ------------------------------- - -The earlier tutorials were firmly in the realm of ``1`` above. -It was adequate for us to install a container platform, IDE and python -and that is all we needed. - -Once you get to level ``2`` changes you need to have compilers and build tools -installed. You might also require system level dependencies. 
AreaDetector, -that we used earlier has a long list of system dependencies that need to be -installed in order to compile it. Traditionally we have installed all of these -onto developer workstations or separately compiled the dependencies as part of -the build. - -These tools and dependencies will differ from one Generic IOC to the next. - -When using epics-containers we don't need to install any of these tools or -dependencies on our local machine. Instead we can use a developer container, -and in fact our Generic IOC *is* our developer container. - -When the CI builds a Generic IOC it creates -`two targets `_ - -:developer: this target installs all the build tools and build time dependencies - into the container image. It then compiles the support modules and IOC. - -:runtime: this target installs only the runtime dependencies into the container. - It also extracts the built runtime assets from the developer target. - -The developer stage of the build is a necessary step in order to get a -working runtime container. However, we choose to keep this stage as an additional -build target and it then becomes a perfect candidate for a developer container. - -VSCode has excellent support for using a container as a development environment. -The next section will show you how to use this feature. Note that you can use -any IDE that supports remote development in a container, you could also -simply launch the developer container in a shell and use it via CLI only. - -If you want to use the CLI and terminal based editors like ``neovim`` then -you should use the developer container CLI to get your developer container -started. This means the configuration in ``.devcontainer/devcontainer.json`` -is used to start the container. This is necessary as that is where the -useful host filesystem mounts and other config items are defined. See -`devcontainer-cli `_ -for details. - - -Starting a Developer Container ------------------------------- - -.. 
Warning:: - - DLS Users and Redhat Users: - - There is a - `bug in VSCode devcontainers extension `_ - at the time of writing that makes it incompatible with podman and an SELinux - enabled /tmp directory. This will affect most Redhat users and you will see an - error regarding permissions on the /tmp folder when VSCode is building your - devcontainer. - - Here is a workaround that disables SELinux labels in podman. - Paste this into a terminal: - - .. code-block:: bash - - sed -i ~/.config/containers/containers.conf -e '/label=false/d' -e '/^\[containers\]$/a label=false' - -Preparation -~~~~~~~~~~~ - -For this section we will work with the ADSimDetector Generic IOC that we used in -previous tutorials. Let's go and fetch a version of the Generic IOC source and -build it locally. - -For the purposes of this tutorial we will place the source in a folder right -next to your test beamline ``bl01t``: - -.. code-block:: bash - - # starting from folder bl01t so that the clone is next to bl01t - cd .. - git clone git@github.com:epics-containers/ioc-adsimdetector.git - cd ioc-adsimdetector - ./build - -This will take a few minutes to complete. A philosophy of epics-containers is -that Generic IOCs build all of their own support. This is to avoid problematic -dependency trees. For this reason building something as complex as AreaDetector -will take a few minutes when you first build it. - -A nice thing about containers is that the build is cached so that a second build -will be almost instant unless you have changed something that requires some -steps to be rebuilt. - -.. note:: - - Before continuing this tutorial make sure you have not left the IOC - bl01t-ea-ioc-02 running from a previous tutorial. Execute this command - outside of the devcontainer to stop it: - - .. code-block:: bash - - ec ioc stop bl01t-ea-ioc-02 - -Launching the Developer Container -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In the this section we are going to use vscode to launch a developer container. 
-This means that all vscode terminals and editors will be running inside a container -and accessing the container filesystem. This is a very convenient way to work -because it makes it possible to archive away the development environment -along side the source code. It also means that you can easily share the -development environment with other developers. - -For epics-containers the generic IOC >>>is<<< the developer container. When -you build the developer target of the container in CI it will contain all the -build tools and dependencies needed to build the IOC. It will also contain -the IOC source code and the support module source code. For this reason -we can also use the same developer target image to make the developer -container itself. We then have an environment that encompasses all the -source you could want to change inside of a Generic IOC, and the -tools to build and test it. - -It is also important to understand that although your vscode session is -entirely inside the container, some of your host folders have been mounted -into the container. This is done so that your important changes to source -code would not be lost if the container were rebuilt. See `container-layout`_ -for details of which host folders are mounted into the container. - -Once built, open the project in VSCode: - -.. code-block:: bash - - code . - -When it opens, VSCode may prompt you to open in a devcontainer. If not then click -the green icon in the bottom left of the VSCode window and select -``Reopen in Container``. - -You should now be *inside* the container. All terminals started in VSCode will -be inside the container. Every file that you open with the VSCode editor -will be inside the container. - - -There are some caveats because some folders are mounted from the host file -system. For example, the ``ioc-adsimdetector`` project folder -is mounted into the container as a volume. It is mounted under -``/epics/ioc-adsimdetector``. 
This means that you can edit the source code -from your local machine and the changes will be visible inside the container and -outside the container. This is a good thing as you should consider the container -filesystem to be a temporary filesystem that will be destroyed when the container -is rebuilt or deleted. - -Preparing the IOC for Testing -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. note:: - - Troubleshooting: if you are experiencing problems with the devcontainer you - can try resetting your vscode and vscode server caches on your host machine. - To do this, exit vscode use the following command and restart vscode: - - .. code-block:: bash - - rm -rf ~/.vscode/* ~/.vscode-server/* - -Now that you are *inside* the container you have access to the tools built into -it, this includes ``ibek``. - -The first commands you should run are as follows: - -.. code-block:: bash - - cd /epics/ioc - make - -It is useful to understand that /epics/ioc is a soft link to the IOC source -that came with your generic IOC source code. Therefore if you edit this -code and recompile it, the changes will be visible inside the container and -outside the container. Meaning that the repository ``ioc-adsimdetector`` is -now showing your changes in it's ``ioc`` folder and you could push them -up to GitHub if you wanted. - -The above is true because your project folder ioc-adsimdetector is mounted into -the container's filesystem with a bind mount at the same place that the -ioc files were originally placed by the container build. - -epics-containers devcontainers have carefully curated host filesystem mounts. -This allows the developer environment to look as similar as possible to the -runtime container. -It also will preserve any important changes that you make in the host file system. -This is essential because the container filesystem is temporary and will be -destroyed when the container is rebuilt or deleted. 
- -See `container-layout`_ for details of which host folders are mounted into the -container. - -The IOC source code is entirely boilerplate, ``/epics/ioc/iocApp/src/Makefile`` -determines which dbd and lib files to link by including two files that -``ibek`` generated during the container build. You can see these files in -``/epics/support/configure/lib_list`` and ``/epics/support/configure/dbd_list``. - -Although all Generic IOCs derived from ioc-template start out with the same -generic source, you are free to change them if there is -a need for different compilation options etc. - -The Generic IOC should now be ready to run inside of the container. To do this: - -.. code-block:: bash - - cd /epics/ioc - ./start.sh - -You will just see the default output of a Generic IOC that has no Instance -configuration. Hit ``Ctrl-C`` to stop the this default script. - -Next we will add some instance configuration from one of the -IOC instances in the ``bl01t`` beamline. - -To do this we will add some other folders to our VSCode workspace to make it -easier to work with ``bl01t`` and to investigate the container filesystem. - -Adding the Beamline to the Workspace ------------------------------------- - -To meaningfully test the Generic IOC we will need an instance to test it -against. We will use the ``bl01t`` beamline that you already made. The devcontainer -has been configured to mount some useful host folders into the container -including the parent folder of the workspace as ``/workspaces`` so we can work on -multiple peer projects. - -In VSCode click the ``File`` menu and select ``Add Folder to Workspace``. -Navigate to ``/workspaces`` and you will see all the peers of your ``ioc-adsimdetector`` -folder (see `container-layout` below). Choose the ``bl01t`` folder and add it to the -workspace - you may see an error but if so clicking "Cancel" will -clear it. - -Also take this opportunity to add the folder ``/epics`` to the workspace. 
This -is the root folder in which all of the EPICS source and built files are -located. - -.. note:: - - Docker Users: your account inside the container will not be the owner of - /epics files. vscode will try to open the repos in epics-base and support/* - and git will complain about ownership. You can cancel out of these errors - as you should not edit project folders inside of ``/epics`` - they were - built by the container and should be considered immutable. We will learn - how to work on support modules in later tutorials. This error should only - be seen on first launch. podman users will have no such problem because they - will be root inside the container and root built the container. - -You can now easily browse around the ``/epics`` folder and see all the -support modules and epics-base. This will give you a feel for the layout of -files in the container. Here is a summary (where WS is your workspace on your -host. i.e. the root folder under which your projects are all cloned): - -.. _container-layout: - -.. 
list-table:: Developer Container Layout - :widths: 25 35 45 - :header-rows: 1 - - * - Path Inside Container - - Host Mount Path - - Description - - * - /epics/support - - N/A - - root of compiled support modules - - * - /epics/epics-base - - N/A - - compiled epics-base - - * - /epics/ioc - - WS/ioc-adsimdetector/ioc - - soft link to IOC source tree - - * - /epics/ibek-defs - - N/A - - All ibek *Support yaml* files - - * - /epics/pvi-defs - - N/A - - all PVI definitions from support modules - - * - /epics/opi - - N/A - - all OPI files (generated or copied from support) - - * - /workspaces - - WS - - all peers to Generic IOC source repo - - * - /workspaces/ioc-adsimdetector - - WS/ioc-adsimdetector - - Generic IOC source repo (in this example) - - * - /epics/generic-source - - WS/ioc-adsimdetector - - A second - fixed location mount of the Generic IOC source repo - -IMPORTANT: remember that the container filesystem is temporary and will be -destroyed when the container is rebuilt or deleted. All folders above with -``Host Mount Path`` ``N/A`` are in the container filesystem. The devcontainer -has been configured to mount the most useful host folders, but note that -all support modules are in the container filesystem. Later we will learn -how to work on support modules, first ensuring that they are made available -in the host filesystem. - -Also note that VSCode keeps your developer container until you rebuild it -or explicitly delete it. Restarting your PC and coming back to the same -devcontainer does keep all state. This can make you complacent about doing -work in the container filesystem, this is not recommended. - -.. _choose-ioc-instance: - -Choose the IOC Instance to Test -------------------------------- - -Now that we have the beamline repo visible in our container we can -easily supply some instance configuration to the Generic IOC. -Try the following: - -.. code:: - - cd /epics/ioc - rm -r config - ln -s /workspaces/bl01t/iocs/bl01t-ea-ioc-02/config . 
- # check the ln worked - ls -l config - ./start.sh - -This removed the boilerplate config and replaced it with the config from -the IOC instance bl01t-ea-ioc-02. Note that we used a soft link, this -means we can edit the config, restart the IOC to test it and the changes -will already be in place in the beamline repo. You can even open a shell -onto the beamline repo and commit and push the changes. - -.. note:: - - The manual steps above were shown to demonstrate the process. In practice - you can use this command to do the same thing: - - .. code-block:: bash - - ibek dev instance /workspaces/bl01t/iocs/bl01t-ea-ioc-02 - -Wrapping Up ------------ - -We now have a tidy development environment for working on the Generic IOC, -IOC Instances and even the support modules inside the Generic IOC, all in one -place. We can easily test our changes in place too. In particular note that -we are able to test changes without having to go through a container build -cycle. - -In the following tutorials we will look at how to make changes at each of the -3 levels listed in `ioc_change_types`. diff --git a/docs/tutorials/generic_ioc.md b/docs/tutorials/generic_ioc.md new file mode 100644 index 00000000..016714ae --- /dev/null +++ b/docs/tutorials/generic_ioc.md @@ -0,0 +1,683 @@ +# Create a Generic IOC + +In this tutorial you will learn how to take an existing support module and +create a Generic IOC that builds it. You will also learn how to embed an +example IOC instance into the Generic IOC for testing and demonstration. + +This is a type 2 change from the list at {any}`ioc-change-types`. + +## Lakeshore 340 Temperature Controller + +The example we will use is a Lakeshore 340 temperature controller. This +is a Stream Device based support module that has historically been internal +to Diamond Light Source.
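Because this is a Stream Device based module, its device support is driven by a protocol file that maps instrument commands onto EPICS records. Purely to illustrate the idiom (a made-up entry, not the real lakeshore340 protocol file), a protocol entry has this general shape, written out here from the shell:

```bash
# Illustration only: the general shape of a StreamDevice protocol entry
cat > /tmp/demo.proto <<'EOF'
Terminator = CR LF;

get_temp {
    out "KRDG? 1";
    in "%f";
}
EOF
grep get_temp /tmp/demo.proto
```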
+ +See details of the device: +[lakeshore 340](https://www.lakeshore.com/products/categories/overview/discontinued-products/discontinued-products/model-340-cryogenic-temperature-controller) + +:::{note} +DLS has an existing IOC building tool `XML Builder` for traditional +IOCs. It has allowed DLS to have a concise way of describing a beamline for many +years. However, it requires some changes to the support modules and for this +reason DLS maintains a fork of all upstream support modules it uses. +epics-containers is intended to remove this barrier to collaboration and +use support modules from public repositories wherever appropriate. This +includes external publishing of previously internal support modules. +::: + +The first step was to publish the support module to a public repository; +it now lives at: + + + +The project required a little genericizing as follows: + +- add an Apache V2 LICENCE file in the root +- make sure that configure/RELEASE has an include of RELEASE.local at the end +- change the makefile to skip the `XML Builder` /etc folder + +The commit where these changes were made is +[0ff410a3e1131](https://github.com/DiamondLightSource/lakeshore340/commit/0ff410a3e1131c96078837424b2dfcdb4af2c356) + +Something like these steps may be required when publishing any +facility's previously internal support modules. + +## Create a New Generic IOC project + +By convention Generic IOC projects are named `ioc-XXX` where `XXX` is the +name of the primary support module. So here we will be building +`ioc-lakeshore340`. + +Much like creating a new beamline, we have a template project that can be used +as the starting point for a new Generic IOC. Again we will create this in +your personal GitHub user space. + + +## Steps + +1. Go to your GitHub account home page. Click on 'Repositories' and then 'New', give your new repository the name `ioc-lakeshore340` plus a description, then click 'Create repository'. + +1. 
From a command line with your virtual environment activated, use copier to make a new repository like this: + + ```bash + pip install copier + # this will create the folder ioc-lakeshore340 in the current directory + copier copy gh:epics-containers/ioc-template --trust ioc-lakeshore340 + ``` +1. Answer the copier template questions as follows: + +
🎤 A name for this project. By convention the name will start with ioc- and
+  have a lower case suffix of the primary support module. e.g.
+  ioc-adsimdetector
+     ioc-lakeshore340
+  🎤 A One line description of the module
+     Generic IOC for the lakeshore 340 temperature controller
+  🎤 Git platform hosting the repository.
+     github.com
+  🎤 The GitHub organisation that will contain this repo.
+     YOUR_GITHUB_ACCOUNT
+  🎤 Remote URI of the repository.
+     git@github.com:YOUR_GITHUB_ACCOUNT/ioc-lakeshore340.git
+  
+ +1. Make the first commit and push the repository to GitHub. + + ```bash + cd ioc-lakeshore340 + git add . + git commit -m "initial commit" + git push -u origin main + ``` + +1. Get the Generic IOC container built, open the project in vscode and launch the devcontainer. + + ```bash + ./build + # DLS users make sure you have done: module load vscode + code . + # reopen in container + ``` + +As soon as you push the project, GitHub Actions CI will start building it. This will make a container image of the template project, but not publish it because there is no release tag as yet. You can watch this by clicking on the `Actions` tab in your new repository. + +You might think building the template project was a waste of GitHub CPU. But this is not so, because of container build caching. The next time you build the project in CI, with your changes, it will re-use most of the steps and be much faster. + +## Prepare the New Repo for Development + +There are only three places where you need to change the Generic IOC template +to make your own Generic IOC. + +1. **Dockerfile** - add in the support modules you need +2. **README.md** - change to describe your Generic IOC +3. **ibek-support** - add new support module recipes into this submodule + +To work on this project we will use a local developer container. All +changes and testing will be performed inside this developer container. + +Once the developer container is running it is always instructive to have the +`/epics` folder added to your workspace: + +- File -> Add Folder to Workspace +- Select `/epics` +- Click ignore if you see an error +- File -> Save Workspace As... +- Choose the default `/workspaces/ioc-lakeshore340/ioc-lakeshore340.code-workspace` + +Note that workspace files are not committed to git. They are specific to your local development environment.
Saving a workspace allows you to reopen the same set of folders in the developer container, using the *Recent* list shown when opening a new VSCode window. + +Now is a good time to edit the README.md file and change it to describe your Generic IOC as you see fit. However the template will have placed some basic information in there for you already. + +## Initial Changes to the Dockerfile + +The Dockerfile is the recipe for building the container image. It is a set of steps that get run inside a container. The starting container filesystem state is determined by a `FROM` line at the top of the Dockerfile. + +In the Generic IOC template the `FROM` line gets a version of the epics-containers base image. It then demonstrates how to add a support module to the container image. The `iocStats` support module is added and built by the template. It is recommended to keep this module as the default +behaviour in Kubernetes is to use `iocStats` to monitor the health of the IOC. + +Thus you can start adding support modules by adding more `COPY` and `RUN` lines to the Dockerfile. Just like those for the `iocStats` module. + +The rest of the Dockerfile is boilerplate and for best results you only need to remove the comment below and replace it with the additional support modules you need. Doing this means it is easy to adopt changes to the original template Dockerfile in the future. + +```dockerfile +################################################################################ +# TODO - Add further support module installations here +################################################################################ +``` + +Because lakeshore340 support is a StreamDevice we will need to add in the +required dependencies. These are `asyn` and `StreamDevice`. 
We will +first install those inside our devcontainer as follows: + +```bash +# open a new terminal in VSCode (Terminal -> New Terminal) +cd /workspaces/ioc-lakeshore340/ibek-support +asyn/install.sh R4-42 +StreamDevice/install.sh 2.8.24 +``` + +This pulls the two support modules from GitHub and builds them in our devcontainer. +Now any IOC instances we run in the devcontainer will be able to use these support +modules. + +Next, make sure that the next build of our `ioc-lakeshore340` container +image will have the same support built in by updating the Dockerfile as follows: + +```dockerfile +COPY ibek-support/asyn/ asyn/ +RUN asyn/install.sh R4-42 + +COPY ibek-support/StreamDevice/ StreamDevice/ +RUN StreamDevice/install.sh 2.8.24 +``` + +The above commands added `StreamDevice` and its dependency `asyn`. +For each support module +we copy its `ibek-support` folder and then run the `install.sh` script. The +only argument to `install.sh` is the git tag for the version of the support +module required. `ibek-support` is a submodule used by all the Generic IOC +projects that contains recipes for building support modules; it will be covered +in more detail as we learn to add our own recipe for lakeshore340 below. + +You may think that there is a lot of duplication here, e.g. `asyn` appears +3 times. However, this is explicitly +done to make the build cache more efficient and speed up development. +For example, we could copy everything out of the ibek-support directory +in a single command, but then if we changed a StreamDevice ibek-support file the +build would have to re-fetch and re-make all the support modules. By +only copying the files we are about to use in the next step we can +massively increase the build cache hit rate. + +:::{note} +These changes to the Dockerfile mean that if we were to exit the devcontainer, +and then run `./build` again, it would add the `asyn` and +`StreamDevice` support modules to the container image.
Re-launching the +devcontainer would then have the new support modules available right away. + +This is a common pattern for working in these devcontainers. You can +try out installing anything you need. Then once happy with it, add the +commands to the Dockerfile, so that these changes become permanent. +::: + +## Prepare The ibek-support Submodule + +Now we are ready to add the lakeshore340 support module to our project. In +order to do so we must first add a recipe for it to `ibek-support`. + +The `ibek-support` submodule is used to share information about how to build +and use support modules. It contains three kinds of files: + +1. install.sh - These are used to fetch and build support modules. They are + run from the Dockerfile as described above. +2. IBEK support module `definitions`: These are used to help IOCs build their + iocShell boot scripts and EPICS Database from YAML descriptions. +3. PVI definitions: These are used to add structure to the set of PVs a + device exposes. This structure allows us to auto-generate engineering + screens for the device. See . + +`ibek-support` is curated for security reasons; therefore we need to work with +a fork of it so we can add our own recipe for lakeshore340. If you make changes +to `ibek-support` that are generally useful you can use a pull request to get them +merged into the main repo. + +Perform the following steps to create a fork and update the submodule: + +- go to +- uncheck `Copy the main branch only` +- click `Create Fork` +- click on `<> Code` and copy the *HTTPS* URL +- cd to the ioc-lakeshore340 directory + +```bash +git submodule set-url ibek-support +git submodule init +git submodule update +cd ibek-support +git fetch +git checkout tutorial-KEEP # see note below +cd .. +``` + +We are using the `tutorial-KEEP` branch, which is a snapshot of the `ibek-support` state +appropriate for this tutorial. Normally you would use the `main` branch and +then create your own branch off of that to work in.
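For real work outside this tutorial, branching off `main` in your fork would look something like the following; the branch name here is purely illustrative:

```bash
# create a working branch off main in your fork of ibek-support
cd ibek-support
git checkout main
git checkout -b add-lakeshore340   # any descriptive branch name
cd ..
```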
:::{note}
IMPORTANT: we used an *HTTPS* URL for the `ibek-support` submodule, not
an *SSH* URL. This is because other clones of `ioc-lakeshore340` are not
guaranteed to have the required SSH keys. HTTPS is fine for reading, but
to write you need SSH. Therefore, add the following to your `~/.gitconfig`:

```
[url "ssh://git@github.com/"]
    insteadOf = https://github.com/
```

This tells git to substitute SSH whenever it sees a GitHub HTTPS URL.
:::

The git submodule allows us to share the `ibek-support` definitions between all
ioc-XXX projects, while also allowing each project to pin its copy to
a particular commit (until updated with `git pull`). See
https://git-scm.com/book/en/v2/Git-Tools-Submodules for more information.

## Create install.sh For The lakeshore340

The first file we will create is the `install.sh` script for the lakeshore340.
This is a simple script that fetches the support module from GitHub and
builds it.

These scripts draw heavily on the `ibek` tool to do tasks that most support
modules require. They are also as close to identical as possible for simple
support modules.

IMPORTANT points to note are:

- Although we are using `ibek`, we are really just automating what an EPICS
  engineer would do manually. This is very much using the vanilla EPICS build
  system that comes with EPICS base, along with the vanilla Make and Config
  files that come with each support module. The steps are:

  - make sure we have the necessary system dependencies installed
  - fetch a version of the support module from GitHub
  - add a RELEASE.local to enable dependency resolution
  - optionally add a CONFIG_SITE.local to apply settings for the build environment
  - run make to build the support module
  - take a note of the dbds and libs that we built so that we can use them
    to make our IOC instance later

- This is a bash script, so although we encourage a very standard structure,
  you can do anything you like.
  For example, the ADAravis support module has to
  compile a third-party library before it can build the support module itself: see
  [ADAravis install.sh](https://github.com/gilesknap/ibek-support/blob/46fd9394f6bf07da97ab7971e6b3f09a623a42f6/ADAravis/install.sh#L17-L44).

To make your lakeshore340 install.sh script:

```bash
cd ibek-support
mkdir lakeshore340
cp iocStats/install.sh lakeshore340/install.sh
code lakeshore340/install.sh
```

Now edit the install.sh script to look like the code block below.

The changes required for any support module you care to build are:

- change the NAME variable to match the name of the support module

- add `ibek support apt-install` lines for any system dependencies.
  These can be for the developer stage, the runtime stage, or both.

- change the `ibek support add-*` lines to declare the libs and DBDs
  that this module will publish.

- add extra release macros for RELEASE.local (the RELEASE macro for
  the current support module is added automatically), or add
  CONFIG entries for CONFIG_SITE.local as required.
  None of these were required for the lakeshore340.
  To see how to use these functions, see:

  - `ibek support add-release-macro --help`
  - `ibek support add-to-config-site --help`

```bash
#!/bin/bash

# ARGUMENTS:
#  $1 VERSION to install (must match repo tag)
VERSION=${1}
NAME=lakeshore340
FOLDER=$(dirname $(readlink -f $0))

# log output and abort on failure
set -xe

# doxygen is used in the documentation build for the developer stage
ibek support apt-install --only=dev doxygen

# get the source and fix up the configure/RELEASE files
ibek support git-clone ${NAME} ${VERSION} --org https://github.com/DiamondLightSource/

ibek support register ${NAME}

# declare the libs and DBDs that are required in ioc/iocApp/src/Makefile
# None required for a StreamDevice ------------------------------------
#ibek support add-libs
#ibek support add-dbds

# compile the support module
ibek support compile ${NAME}
# prepare *.bob, *.pvi, *.ibek.support.yaml for access outside the container
ibek support generate-links ${FOLDER}
```

Having made these changes you can now test the script by running it:

```bash
cd /workspaces/ioc-lakeshore340/ibek-support
chmod +x lakeshore340/install.sh
lakeshore340/install.sh 2-6-2
```

You now have lakeshore340 support in your developer container. Let's go ahead
and add that into the Dockerfile:

```dockerfile
COPY ibek-support/lakeshore340/ lakeshore340/
RUN lakeshore340/install.sh 2-6-2
```

This means you can compile an IOC with lakeshore340 support in this container,
but we don't yet have a way to generate startup scripts and EPICS Databases
for the instances. We will do that next.

## Create Support YAML for the lakeshore340

When making an IOC instance from a Generic IOC, the instance needs to supply
an iocShell startup script and an EPICS Database. You can supply hand-crafted
`st.cmd` and `ioc.subst` files for this purpose. The Generic IOC
we have made above is already capable of using such files.
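For illustration, a hand-crafted startup script for this device might look
something like the sketch below. This is hypothetical (the port address and
macro values are placeholders, not taken from a real deployment), but it uses
only the database template and protocol path that the lakeshore340 module
provides:

```
# st.cmd - hand-crafted sketch, values are placeholders

# let StreamDevice find the module's protocol files
epicsEnvSet "STREAM_PROTOCOL_PATH", "$(LAKESHORE340)/lakeshore340App/protocol/"

# create an asyn IP port for the device
drvAsynIPPortConfigure("p1", "127.0.0.1:5401", 0, 0, 0)

# load the lakeshore340 database with example macro values
dbLoadRecords("$(LAKESHORE340)/db/lakeshore340.template", "P=BL16I-EA-LS340-01,PORT=p1,ADDR=12,SCAN=5,TEMPSCAN=2,LOOP=2,name=lakeshore")

iocInit
```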
For this exercise we will use the full capabilities of `ibek` to generate
these files from a YAML description of the IOC instance. To do this we need
to create a YAML file that describes what the instance YAML is allowed to
make.

TODO: a detailed description of the YAML files' structure and purpose should
be included in the `ibek` documentation and linked here.
The current version of this is
[entities](https://epics-containers.github.io/ibek/main/developer/explanations/entities.html)
but it is rather out of date.

To create an `ibek` support YAML file we need to provide a list of `definitions`.
Each `definition` gives:

- a name and description for the `definition`

- a list of arguments that an
  instance of this `definition` may supply, with each having:

  - a type (string, integer, float, boolean, enum)
  - a name
  - a description
  - optionally a default value

- a list of database templates to instantiate for each instance of this
  `definition`, including values for the macros in the template

- a list of iocShell command line entries to add before or after `iocInit`

In all of these fields Jinja templating can be used to combine the values of
arguments into the final output. At its simplest this is just the name of
an argument in double curly braces, e.g. `{{argument_name}}`. But it can
also be used to do more complex things, because a Python interpreter evaluates
the text inside the curly braces and that interpreter has the values of
all the `definition` arguments available in its context.
See https://jinja.palletsprojects.com/en/3.0.x/templates/.

:::{note}
IMPORTANT: the file created below MUST have the suffix `.ibek.support.yaml`.
This marks it as a support YAML file for `ibek`. This is important because
when `install.sh` calls `ibek support generate-links` it will look for
files with this suffix and make links to them in the `ibek-defs` folder.
In turn, when you run `ibek ioc generate-schema` it will look in the
`ibek-defs` folder for all the support definition YAML files and combine
them into a single schema file.
:::

To make a lakeshore340 YAML file, go to the folder
`/workspaces/ioc-lakeshore340/ibek-support/lakeshore340/`
and create a file called `lakeshore340.ibek.support.yaml`. Add the following
contents:

```yaml
# yaml-language-server: $schema=https://github.com/epics-containers/ibek/releases/download/1.6.2/ibek.support.schema.json

module: lakeshore340

defs:
  - name: lakeshore340
    description: |-
      Lakeshore 340 Temperature Controller
      Notes: The temperatures in Kelvin are archived once every 10 secs.
    args:
      - type: str
        name: P
        description: |-
          Prefix for PV name

      - type: str
        name: PORT
        description: |-
          Bus/Port Address (eg. ASYN Port).

      - type: int
        name: ADDR
        description: |-
          Address on the bus
        default: 0

      - type: int
        name: SCAN
        description: |-
          SCAN rate for non-temperature/voltage parameters.
        default: 5

      - type: int
        name: TEMPSCAN
        description: |-
          SCAN rate for the temperature/voltage readings
        default: 5

      - type: id
        name: name
        description: |-
          Object and gui association name

      - type: int
        name: LOOP
        description: |-
          Which heater PID loop to control (Default = 1)
        default: 1

    databases:
      - file: $(LAKESHORE340)/db/lakeshore340.template
        args:
          name:
          SCAN:
          P:
          TEMPSCAN:
          PORT:
          LOOP:
          ADDR:

    pre_init:
      - value: |
          epicsEnvSet "STREAM_PROTOCOL_PATH", "$(LAKESHORE340)/lakeshore340App/protocol/"
```

This file declares a list of arguments, one for each of the database template
macros that it needs to substitute. It then declares that we need to instantiate
the `lakeshore340.template` database template, and passes all of the arguments
verbatim to the template.
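The argument values are substituted into these fields by the Jinja templating
described above. Conceptually, that step works like the following minimal
Python sketch, which uses a plain `eval` to stand in for Jinja's expression
evaluation (an illustration of the idea only, not `ibek`'s actual
implementation):

```python
import re

# The definition's argument values, as they would come from instance YAML
args = {"P": "BL16I-EA-LS340-01", "name": "lakeshore"}

def expand(text: str, context: dict) -> str:
    # Replace each {{expression}} with the result of evaluating the
    # expression with the definition's arguments in scope
    return re.sub(
        r"\{\{(.*?)\}\}",
        lambda m: str(eval(m.group(1), {}, context)),
        text,
    )

print(expand("{{P + ':' + name + ':'}}", args))  # BL16I-EA-LS340-01:lakeshore:
```

A bare argument reference like `{{name}}` is just the simplest case of the
same mechanism.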
Finally, it declares that we need to add a line to the iocShell startup script
that allows the IOC to find the module's StreamDevice protocol files.

Note that in the list of DB args, or in the startup lines, we can use
combinations of arguments to make the final output.

For example, to make a more descriptive PV prefix we could use:

```yaml
databases:
  - file: $(LAKESHORE340)/db/lakeshore340.template
    args:
      P: "{{P + ':' + name + ':'}}"
```

Lastly, note that the top line refers to a schema file. This is the global
`ibek` schema for support module definition YAML. A single schema is used
for all support modules and is published alongside the latest release of `ibek`.
This means that a schema-aware editor can provide auto-completion and validation
for your support module YAML files. The VSCode extension
https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml
adds this capability.

:::{note}
Because this was originally a DLS module, it has an `etc/builder.py` file
that is used by the `XML Builder` tool. `ibek` has a converter
that will translate this file into an `ibek` YAML file. Only DLS users
can take advantage of this because it needs access to all the dependent
DLS support module forks to work. See {any}`../how-to/builder2ibek.support`.
:::

## Example IOC instance

In order to test our Generic IOC we now require an IOC instance to launch it.
For this exercise we will build an example instance right into the Generic IOC.
This is a great way to allow developers to experiment with the container,
but it is most likely to require a simulation of some kind to take the place
of a real piece of hardware for the instance to talk to.

Before creating the instance it is useful to have a schema for the YAML we
are about to write.
To generate a schema for this specific Generic IOC,
perform the following command:

```bash
ibek ioc generate-schema > /tmp/ibek.ioc.schema.json
```

This will make a schema that allows declaration of instances of the
definitions defined in the support YAML file we made above. But it ALSO
combines in the definitions from the `devIocStats` support module and all
other modules that have been built inside this container.

Once this repository is published to GitHub, the schema will be available
as part of the release at the following URL:

```
https://github.com//ioc-lakeshore340/releases/download//ibek.ioc.schema.json
```

This is the URL you would put at the top of any IOC instances using
your released Generic IOC.

To create the instance, we create a folder:

> /workspaces/ioc-lakeshore340/ioc-examples/bl16i-ea-ioc-07/config/

and create a file in there called:

> bl16i-ea-ioc-07.yaml

with the following contents:

```yaml
# yaml-language-server: $schema=/tmp/ibek.ioc.schema.json

ioc_name: "{{ ioc_yaml_file_name }}"

description: auto-generated by https://github.com/epics-containers/builder2ibek

entities:
  - type: devIocStats.iocAdminSoft
    IOC: "{{ ioc_name | upper }}"

  - type: asyn.AsynIP
    name: p1
    port: 127.0.0.1:5401

  - type: lakeshore340.lakeshore340
    ADDR: 12
    LOOP: 2
    P: BL16I-EA-LS340-01
    PORT: p1
    SCAN: 5
    TEMPSCAN: 2
    name: lakeshore
```

The above YAML file declares an IOC instance that has the following three
`entities` (which is what we call instances of `definitions` in `ibek`):

- a devIocStats object that will supply monitoring PVs
- an asyn IP port that will be used to talk to the simulator
- a lakeshore340 object that will talk to the simulator via the asyn port

This instance is now ready to run inside the developer container.
To do so, perform the following steps:

```bash
cd /epics/support/lakeshore340/etc/simulations/
./lakeshore340_sim.py
```

Now create a new terminal in VSCode (Terminal -> New Terminal) and run:

```bash
ibek dev instance /workspaces/ioc-lakeshore340/ioc-examples/bl16i-ea-ioc-07
cd /epics/ioc
make
./start.sh
```

If all is well then you should see the IOC start up and connect to the
simulator. You will see the simulator logging the queries it receives.

TODO: it is possible to launch the bob file in:

> /epics/support/lakeshore340/lakeshore340App/opi/bob/lakeshore340.bob

to see a GUI for this IOC instance. However, I'm reserving writing about
GUI until I have the PVI integration done on this module and we can see
the auto-generated GUI.

To investigate what `ibek` did to make the Generic IOC binary and the
IOC instance files, take a look at the following:

- `/epics/runtime` - the runtime assets created from a combination of the
  instance YAML and all the referenced support YAML

- `/epics/ioc/iocApp/Makefile` - this picks up the libs and DBDs from the
  support module builds, which record their dbds and libs in:

  - `/epics/support/configure/dbd_list`
  - `/epics/support/configure/lib_list`

- `/epics/ioc/support/configure/RELEASE` - a global release file that contains
  macros for all the support built in the container. This is soft linked
  to `configure/RELEASE.local` in each support module.

- `/epics/support/configure/RELEASE.shell` - created along with the global
  release file. Sets all the release macros as shell environment variables
  for passing into the IOC startup script.

:::{note}
Because this IOC instance is a copy of a real IOC at DLS, it originally comes
from a builder XML file. DLS users with builder beamlines
can use `builder2ibek` to convert their builder XML files into
`ibek` YAML IOC instance files. See {any}`../how-to/builder2ibek`.
Note this is distinct from making support YAML files with
`builder2ibek.support`.
:::

## Experimenting With Changes to the IOC Instance and Generic IOC

Inside the developer container you can add and remove support, change the
IOC instance YAML file, and re-build the IOC instance until everything is
working as you want it to. At that point you can push the changes to GitHub
and the CI should build a container image. Once that has succeeded you can
tag the release and the CI will publish the container image to GHCR.

Note that re-building the IOC binary is required after any change to the set
of support modules inside this container. However, it is not required after
changes to the IOC instance YAML file. If you want to change the instance
you can:

- edit the YAML file
- stop the IOC
- start the IOC with `./start.sh`
- that's it
- -Once this repository is published to GitHub, the schema will be available -as part of the release at the following URL: - -.. code-block:: - - https://github.com//ioc-lakeshore340/releases/download//ibek.ioc.schema.json - -This would then be the URL you would put at the top of any IOC instances using -your released Generic IOC. - -To create the instance we create a folder: - - /workspaces/ioc-lakeshore340/ioc-examples/bl16i-ea-ioc-07/config/ - -and create a file in there called: - - bl16i-ea-ioc-07.yaml - -with the following contents: - -.. code-block:: yaml - - # yaml-language-server: $schema=/tmp/ibek.ioc.schema.json - - ioc_name: "{{ ioc_yaml_file_name }}" - - description: auto-generated by https://github.com/epics-containers/builder2ibek - - entities: - - type: devIocStats.iocAdminSoft - IOC: "{{ ioc_name | upper }}" - - - type: asyn.AsynIP - name: p1 - port: 127.0.0.1:5401 - - - type: lakeshore340.lakeshore340 - ADDR: 12 - LOOP: 2 - P: BL16I-EA-LS340-01 - PORT: p1 - SCAN: 5 - TEMPSCAN: 2 - name: lakeshore - - -The above YAML file declares an IOC instance that has the following 3 -``entities`` (which is what we call instances of ``definitions`` in ``ibek``): - -- A devIocStats object that will supply monitoring PVs -- An asyn IP port that will be used to talk to the simulator -- A lakeshore340 object that will talk to the simulator via the asyn port - -This instance is now ready to run inside the developer container. To do so -perform the following steps: - -.. code-block:: bash - - cd /epics/support/lakeshore340/etc/simulations/ - ./lakeshore340_sim.py - -Now create a new terminal in VSCode (Terminal -> New Terminal) and run: - -.. code-block:: bash - - ibek dev instance /workspaces/ioc-lakeshore340/ioc-examples/bl16i-ea-ioc-07 - cd /epics/ioc - make - ./start.sh - -If all is well then you should see the IOC start up and connect to the -simulator. You will see the simulator logging the queries it receives. 
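To confirm that the generated database actually loaded, you can also list the loaded records from the iocShell prompt using the standard ``dbl`` command (the record names will carry the ``P`` prefix supplied in the instance YAML):

.. code-block:: text

   epics> dbl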
- -TODO: it is possible to launch the bob file in: - - /epics/support/lakeshore340/lakeshore340App/opi/bob/lakeshore340.bob - -to see a GUI for this IOC instance. However, I'm reserving writing about -GUI until I have the PVI integration done on this module and we can see -the auto-generated GUI. - -To investigate what ``ibek`` did to make the Generic IOC binary and the -IOC instance files, take a look at the following files. - -- ``/epics/runtime`` - the runtime assets created from a combination of the - instance YAML and all the referenced support YAML - -- ``/epics/ioc/iocApp/Makefile`` - this picks up the libs and DBDs from the - support module builds which record their dbds and libs in: - - - ``/epics/support/configure/dbd_list`` - - ``/epics/support/configure/lib_list`` - -- ``/epics/ioc/support/configure/RELEASE`` - a global release file that contains - macros for all the support built in the container. This is soft linked - to ``configure/RELEASE.local`` in each support module. - -- ``/epics/support/configure/RELEASE.shell`` - created along with the global - release file. Sets all the release macros as shell environment variables - for passing into the ioc startup script. - -.. note:: - - Because this IOC instance is a copy of a real IOC at DLS it comes - from a builder XML file originally. DLS users with builder beamlines - can use ``builder2ibek`` to convert their builder XML files into - ``ibek`` YAML IOC instance files. See `../how-to/builder2ibek`. - Note this is distinct from making support YAML files with - ``builder2ibek.support``. - -Experimenting With Changes to the IOC Instance and Generic IOC --------------------------------------------------------------- - -Inside the developer container you can add and remove support, change the -IOC instance YAML file and re-build the IOC instance until everything is -working as you want it to. At that point you can push the changes to GitHub -and the CI should build a container image. 
Once that has succeeded you can -tag the release and the CI will publish the container image to GHCR. - -Note that building the IOC binary is required after any change to the set -of support modules inside this container. However it is not required after -changes to the IOC instance YAML file. If you want to change the instance -you can: - -- edit the YAML file -- stop the IOC -- start the IOC with ``./start.sh`` -- that's it - - - - - diff --git a/docs/tutorials/intro.md b/docs/tutorials/intro.md new file mode 100644 index 00000000..7a3e4c73 --- /dev/null +++ b/docs/tutorials/intro.md @@ -0,0 +1,29 @@ +# Tutorials Introduction + +Welcome to the epics-containers tutorial series. + +These tutorials will introduce you to building, deploying and managing +containerized EPICS IOCs. They are intended to be self contained +and should be possible to work through without any prior knowledge. + +However, to get the most out of the tutorials it would be best to start with +some background in the following topics. 
+ +```{eval-rst} +================================================================ ================ +**An introduction to containers** https://www.docker.com/resources/what-container/ +**Managing containers on a workstation: introduction to docker** https://docs.docker.com/get-started/overview/ +**Podman, a recommended docker alternative** https://docs.podman.io/en/latest/Introduction.html +**Orchestrating containers in a cluster with Kubernetes** https://kubernetes.io/docs/concepts/overview/ +**Managing packages in a Kubernetes Cluster with Helm** https://helm.sh/docs/intro/quickstart/ +**Introduction to EPICS** https://docs.epics-controls.org/en/latest/guides/EPICS_Intro.html +================================================================ ================ +``` + +With the above background in hand you should then read the overview of +epics-containers architecture here: {any}`../explanations/introduction` + +To work through the tutorials you will need a workstation which can be +Linux, Mac or Windows. All the software required is open source and available +for download. You will also need to have a GitHub account which you can create +for free. diff --git a/docs/tutorials/intro.rst b/docs/tutorials/intro.rst deleted file mode 100644 index 75ce8aa2..00000000 --- a/docs/tutorials/intro.rst +++ /dev/null @@ -1,31 +0,0 @@ -Tutorials Introduction -====================== - -Welcome to the epics-containers tutorial series. - -These tutorials will introduce you to building, deploying and managing -containerized EPICS IOCs. They are intended to be self contained -and should be possible to work through without any prior knowledge. - -However, to get the most out of the tutorials it would be best to start with -some background in the following topics. 
-
-- An introduction to containers https://www.docker.com/resources/what-container/
-- Managing containers on a workstation: introduction to docker
-  https://docs.docker.com/get-started/overview/
-- Podman, a recommended docker alternative
-  https://docs.podman.io/en/latest/Introduction.html
-- Orchestrating containers in a cluster with Kubernetes
-  https://kubernetes.io/docs/concepts/overview/
-- Managing packages in a Kubernetes Cluster with Helm
-  https://helm.sh/docs/intro/quickstart/
-- Introduction to EPICS
-  https://docs.epics-controls.org/en/latest/guides/EPICS_Intro.html
-
-With the above background in hand you should then read the overview of
-epics-containers architecture here: `../explanations/introduction`
-
-To work through the tutorials you will need a workstation which can be
-Linux, Mac or Windows. All the software required is open source and available
-for download. You will also need to have a GitHub account which you can create
-for free.
diff --git a/docs/tutorials/ioc_changes1.md b/docs/tutorials/ioc_changes1.md
new file mode 100644
index 00000000..ec333e81
--- /dev/null
+++ b/docs/tutorials/ioc_changes1.md
@@ -0,0 +1,175 @@
+# Changing the IOC Instance
+
+This tutorial will make a very simple change to the example IOC `bl01t-ea-test-02`.
+This is a type 1 change from {any}`ioc-change-types`; types 2 and 3 will be covered in the
+following 2 tutorials.
+
+Strictly speaking, Type 1 changes do not require a devcontainer. You created
+and deployed the IOC instance in a previous tutorial without one. It is up to
+you how you choose to make these types of changes. Types 2 and 3 do require a
+devcontainer because they involve compiling Generic IOC / support module code.
+
+These instructions are for running inside the devcontainer. If you closed your developer container from the last tutorial, then open it again now. To do this, open your `bl01t` folder in vscode and then press `Ctrl-Shift-P` and type `Remote-Containers: Reopen in Container`.
+
+We are going to add a hand-crafted EPICS DB file to the IOC instance. This will be a simple record that we will be able to query to verify that the change is working. We will use the version of the IOC instance that used `ioc.yaml`. If you changed to using raw startup assets in the previous tutorial then revert to using `ioc.yaml` for this tutorial or see [](raw-startup-assets).
+
+:::{note}
+Before doing this tutorial make sure you have not left the IOC bl01t-ea-test-02 running from a previous tutorial. Execute this command *outside* of the devcontainer to stop it:
+
+```bash
+ec stop bl01t-ea-test-02
+```
+:::
+
+Make the following changes in your test IOC config folder
+(`bl01t/services/bl01t-ea-test-02/config`):
+
+1. Add a file called `extra.db` with the following contents.
+
+   ```text
+   record(ai, "bl01t-ea-test-02:TEST") {
+       field(DESC, "Test record")
+       field(DTYP, "Soft Channel")
+       field(SCAN, "Passive")
+       field(VAL, "1")
+   }
+   ```
+
+2. Add the following lines to the end of `ioc.yaml` (verify that the indentation
+   matches the above entry so that `- type:` statements line up):
+
+   ```yaml
+   - type: epics.StartupCommand
+     command: dbLoadRecords(config/extra.db)
+   ```
+
+## Locally Testing Your Changes
+
+You can immediately test your changes by running the IOC locally. The following
+command will run the IOC locally using the config files in your test IOC config
+folder:
+
+```bash
+# stop any existing IOC shell by hitting Ctrl-D or typing exit
+cd /epics/ioc
+./start.sh
+```
+
+If all is well you should see your iocShell prompt and the output should
+show `dbLoadRecords(config/extra.db)`.
+
+Test your change
+from another terminal (VSCode menus -> Terminal -> New Terminal) like so:
+
+```bash
+caget bl01t-ea-test-02:TEST
+```
+
+If you see the value 1 then your change is working.
+
+:::{note}
+If you are using podman, you are likely to see *"Identical process variable names on multiple servers"* warnings.
This is because caget can see the PV on the host network and the container network, but as these are the same IOC this is not a problem. + +You can change this and make your devcontainer network isolated by removing the line `"--net=host",` from `.devcontainer/devcontainer.json`, but it is convenient to leave it if you want to run OPI tools locally on the +host. You may want to isolate your development network if multiple developers are working on the same subnet. In this case some other solution is required for running OPI tools on the host (TODO add link to solution - likely to be container networks). +::: + +Because of the symlink between `/epics/ioc/config` and +`/workspaces/bl01t/services/bl01t-ea-test-02/config` the same files you are testing +by launching the IOC inside of the devcontainer are also ready to be +committed and pushed to the bl01t repo. i.e.: + +```bash +# Do this from a host terminal (not a devcontainer terminal) +cd bl01t +git add . +git commit -m "Added extra.db" +git push +# tag a new version of the beamline repo +git tag 2023.11.2 +git push origin 2023.11.2 +# deploy the new version of the IOC to the local docker / podman instance +ec deploy bl01t-ea-test-02 2023.11.2 +``` + +You can now see that the versioned IOC instance is running and loading the extra.db by looking at its log with: + +```bash +ec logs bl01t-ea-test-02 +``` + + +The above steps were performed on a host terminal because we are using `ec`. However all of the steps except for the `ec` command could have been done *inside* the devcontainer starting with `cd /workspaces/bl01t` which is where your project is mounted *inside* the devcontainer. + +We choose not to have `ec` installed inside of the devcontainer because that would involve containers in containers which adds too much complexity. + +If you like working entirely from the vscode window you can open a terminal in vscode *outside* of the devcontainer. 
To do so, press `Ctrl-Shift-P` and choose the command `Terminal: Create New Integrated Terminal (Local)`. This will open a terminal to the host. You can then run `ec` from there.
+
+(raw-startup-assets)=
+## Raw Startup Assets
+
+If you plan not to use `ibek` runtime asset creation you could use the raw
+startup assets from the previous tutorial. If you do this then the process
+above is identical except that you will add the `dbLoadRecords` command to
+the end of `st.cmd`.
+
+## More about ibek Runtime Asset Creation
+
+The set of `entities` that you may create in your ioc.yaml is defined by the
+`ibek` IOC schema that we reference at the top of `ioc.yaml`.
+The schema is in turn defined by the set of support modules that were compiled
+into the Generic IOC (ioc-adsimdetector). Each support module has an
+`ibek` *support YAML* file that contributes to the schema.

+The *support YAML* files are in the folder `/epics/ibek-defs` inside of the
+container. They were placed there during the compilation of the support
+modules at Generic IOC build time.
+
+It can be instructive to look at these files to see what entities are available
+to *IOC instances*. For example the global support YAML file
+`/epics/ibek-defs/epics.ibek.support.yaml` contains the following:
+
+```yaml
+- name: StartupCommand
+  description: Adds an arbitrary command in the startup script before iocInit
+  args:
+    - type: str
+      name: command
+      description: command string
+      default: ""
+  pre_init:
+    - type: text
+      value: "{{ command }}"
+
+- name: PostStartupCommand
+  description: Adds an arbitrary command in the startup script after iocInit
+  args:
+    - type: str
+      name: command
+      description: command string
+      default: ""
+  post_init:
+    - type: text
+      value: "{{ command }}"
+```
+
+These two definitions allow you to add arbitrary commands to the startup script
+before and after iocInit. This is how we added the `dbLoadRecords` command.
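As an aside, the way an entity's argument values are substituted into these `value` templates can be sketched in a few lines of Python. This is only an illustration of the templating idea; the real `ibek` uses the full Jinja engine and its own data model:

```python
import re

def render(template: str, args: dict) -> str:
    # Minimal stand-in for Jinja rendering: replace each {{ name }}
    # placeholder with the matching value from the entity's args.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(args[m.group(1)]), template)

# The StartupCommand definition's pre_init value is the template "{{ command }}"
# and the IOC instance supplies the command argument from ioc.yaml.
line = render("{{ command }}", {"command": "dbLoadRecords(config/extra.db)"})
print(line)  # dbLoadRecords(config/extra.db)
```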
+
+If you want to specify multiple lines in a command you can use the following
+syntax for multi-line strings:
+
+> ```yaml
+> - type: epics.StartupCommand
+>   command: |
+>     # loading extra records
+>     dbLoadRecords(config/extra.db)
+>     # loading even more records
+>     dbLoadRecords(config/extra2.db)
+> ```
+
+This would place the 4 lines verbatim into the startup script (except that
+they would not be indented - the nesting whitespace is stripped).
+
+In later tutorials we will see where the *support YAML* files come from and
+how to add your own.
diff --git a/docs/tutorials/ioc_changes1.rst b/docs/tutorials/ioc_changes1.rst
deleted file mode 100644
index 9fa4f09a..00000000
--- a/docs/tutorials/ioc_changes1.rst
+++ /dev/null
@@ -1,186 +0,0 @@
-Changing the IOC Instance
-=========================
-
-This tutorial will make a very simple change to the example IOC ``bl01t-ea-ioc-02``.
-This is a type 1 change from `ioc_change_types`, types 2, 3 will be covered in the
-following 2 tutorials.
-
-Strictly speaking, Type 1 changes do not require a devcontainer. You created
-and deployed the IOC instance in a previous tutorial without one. It is up to
-you how you choose to make these types of changes. Types 2,3 do require a
-devcontainer because they involve compiling Generic IOC / support module code.
-
-We are going to add a hand crafted EPICS DB file to the IOC instance. This will
-be a simple record that we will be able to query to verify that the change
-is working.
-
-.. note::
-
-   Before doing this tutorial make sure you have not left the IOC
-   bl01t-ea-ioc-02 running from a previous tutorial. Execute this command
-   outside of the devcontainer to stop it:
-
-   .. code-block:: bash
-
-      ec ioc stop bl01t-ea-ioc-02
-
-Make the following changes in your test IOC config folder
-(``bl01t/iocs/bl01t-ea-ioc-02/config``):
-
-1. Add a file called ``extra.db`` with the following contents.
-
-   ..
code-block:: text - - record(ai, "BL01T-EA-IOC-02:TEST") { - field(DESC, "Test record") - field(DTYP, "Soft Channel") - field(SCAN, "Passive") - field(VAL, "1") - } - -2. Add the following lines to the end ``ioc.yaml`` (verify that the indentation - matches the above entry so that ``- type:`` statements line up): - - .. code-block:: yaml - - - type: epics.StartupCommand - command: dbLoadRecords(config/extra.db) - -Locally Testing Your changes ----------------------------- - -You can immediately test your changes by running the IOC locally. The following -command will run the IOC locally using the config files in your test IOC config -folder: - -.. code-block:: bash - - # stop any existing IOC shell by hitting Ctrl-D or typing exit - cd /epics/ioc - ./start.sh - -If all is well you should see your iocShell prompt and the output should -show ``dbLoadRecords(config/extra.db)``. - -Test your change -from another terminal (VSCode menus -> Terminal -> New Terminal) like so: - -.. code-block:: bash - - caget BL01T-EA-IOC-02:TEST - -If you see the value 1 then your change is working. - -.. Note:: - - You are likely to see - *"Identical process variable names on multiple servers"* warnings. This is - because caget can see the PV on the host network and the container network, - but as these are the same IOC this is not a problem. - - You can change this and make your devcontainer network isolated by removing - the line ``"--net=host",`` from ``.devcontainer/devcontainer.json``, but - it is convenient to leave it if you want to run OPI tools locally on the - host. You may want to isolate your development network if multiple - developers are working on the same subnet. In this case some other solution - is required for running OPI tools on the host (TODO add link to solution). 
- -Because of the symlink between ``/epics/ioc/config`` and -``/workspaces/bl01t/iocs/bl01t-ea-ioc-02/config`` the same files you are testing -by launching the IOC inside of the devcontainer are also ready to be -committed and pushed to the bl01t repo. i.e.: - -.. code-block:: bash - - # Do this from a host terminal (not a devcontainer terminal) - cd bl01t - git add . - git commit -m "Added extra.db" - git push - # tag a new version of the beamline repo - git tag 2023.11.2 - git push origin 2023.11.2 - # deploy the new version of the IOC to the local docker / podman instance - ec ioc deploy bl01t-ea-ioc-02 2023.11.2 - -The above steps were performed on a host terminal because we are using ``ec``. -However all of the steps except for the ``ec`` command could have been done -*inside* the devcontainer starting with ``cd /workspaces/bl01t``. - -We choose not to have ``ec`` installed inside of the devcontainer because -that would involve containers in containers which adds too much complexity. - -If you like working entirely from the vscode window you can open a terminal -in vscode *outside* of the devcontainer. To do so, press ``Ctrl-Shift-P`` and -choose the commnd ``Terminal: Create New Integrated Terminal (Local)``. -This will open a terminal to the host. You can then run ``ec`` from there. - -Raw Startup Assets ------------------- - -If you plan not to use ``ibek`` runtime asset creation you could use the raw -startup assets from the previous tutorial. If you do this then the process -above is identical except that you will add the ``dbLoadRecords`` command to -the end of ``st.cmd``. - -More about ibek Runtime Asset Creation --------------------------------------- - -The set of ``entities`` that you may create in your ioc.yaml is defined by the -``ibek`` IOC schema that we reference at the top of ``ioc.yaml``. -The schema is in turn defined by the set of support modules that were compiled -into the Generic IOC (ioc-adsimdetector). 
Each support module has an -``ibek`` *support YAML* file that contributes to the schema. - -The *Support yaml* files are in the folder ``/epics/ibek-defs`` inside of the -container. They were placed there during the compilation of the support -modules at Generic IOC build time. - -It can be instructive to look at these files to see what entities are available -to *IOC instances*. For example the global support yaml file -``/epics/ibek-defs/epics.ibek.support.yaml`` contains the following: - -.. code:: yaml - - - name: StartupCommand - description: Adds an arbitrary command in the startup script before iocInit - args: - - type: str - name: command - description: command string - default: "" - pre_init: - - type: text - value: "{{ command }}" - - - name: PostStartupCommand - description: Adds an arbitrary command in the startup script after iocInit - args: - - type: str - name: command - description: command string - default: "" - post_init: - - type: text - value: "{{ command }}" - -These two definitions allow you to add arbitrary commands to the startup script -before and after iocInit. This is how we added the ``dbLoadRecords`` command. - -If you want to specify multiple lines in a command you can use the following -syntax for multi-line stings: - - .. code-block:: yaml - - - type: epics.StartupCommand - command: | - # loading extra records - dbLoadRecords(config/extra.db) - # loading even more records - dbLoadRecords(config/extra2.db) - -This would place the 4 lines verbatim into the startup script (except that -they would not be indented - the nesting whitespace is stripped). - -In later tutorials we will see where the *Support yaml* files come from and -how to add your own. diff --git a/docs/tutorials/ioc_changes2.md b/docs/tutorials/ioc_changes2.md new file mode 100644 index 00000000..cfb621da --- /dev/null +++ b/docs/tutorials/ioc_changes2.md @@ -0,0 +1,208 @@ +# Changing a Generic IOC + +This is a type 2 change from {any}`ioc-change-types`. 
+
+The changes that you can make in an IOC instance are limited to what
+the author of the associated Generic IOC has made configurable.
+Therefore you will
+occasionally need to update the Generic IOC that your instance is using.
+Some of the reasons for doing this are:
+
+- Update one or more support modules to new versions
+- Add additional support such as autosave or iocStats
+- For ibek-generated IOC instances, you may need to add or change functionality
+  in the support YAML file.
+
+:::{note}
+If you are considering making a change to a Generic IOC because you
+want to add support for a second device, this is allowed but you should
+consider the alternative of creating a new Generic IOC.
+If you keep your Generic IOCs simple and focused on a single device, they
+will be smaller and there will be fewer of them. IOCs' records can still be
+linked via CA links and this is preferable to recompiling a Generic IOC
+for every possible combination of devices. Using Kubernetes to
+manage multiple small services is cleaner than having a handful of
+monolithic services.
+:::
+
+This tutorial will make some changes to the generic IOC `ioc-adsimdetector`
+that you already used in earlier tutorials.
+
+For this exercise we will work locally inside the `ioc-adsimdetector`
+developer container. Following tutorials will show how to fork repositories
+and push changes back to GitHub.
+
+For this exercise we will be using an example IOC Instance to test our changes. Instead of working with a beamline repository, we will use the example IOC instance inside `ioc-adsimdetector`. It is a good idea for Generic IOC authors to include an example IOC Instance in their repository for testing changes in isolation. Obviously, this is easy for a simulation IOC; for IOCs that normally connect to real hardware it would require an additional simulator of some kind.
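The CA-link alternative mentioned in the note above can be illustrated with a small database fragment: a record in one IOC can follow a PV served by a completely separate IOC just by naming that PV in an input link. The record and PV names below are hypothetical:

```text
# Hypothetical record in a "combined view" IOC that follows a PV
# served by a separate temperature-controller IOC.
# CP makes the link monitor the remote PV; MS propagates its alarm severity.
record(ai, "BL01T-EA-COMBI-01:TEMP") {
    field(DESC, "Mirror of a PV from another IOC")
    field(INP, "BL01T-EA-TEMP-01:TEMP CP MS")
}
```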
+
+## Preparation
+
+First, clone the `ioc-adsimdetector` repository and make sure the container
+build is working:
+
+```console
+git clone git@github.com:epics-containers/ioc-adsimdetector.git
+cd ioc-adsimdetector
+./build
+code .
+# Choose "Reopen in Container"
+# Or ctrl+shift+p and choose "Remote-Containers: Reopen in Container"
+```
+
+Note that if you do not see the prompt to reopen in container, you can open
+the `Remote` menu with `Ctrl+Alt+O` and select `Reopen in Container`.
+
+The `build` script does two things:
+
+- it fetches the git submodule called `ibek-support`. This submodule is shared between all the Generic IOC container images and contains the support YAML files that tell `ibek` how to build support modules inside the container environment and how to use them at runtime.
+- it builds the Generic IOC container image developer target locally using podman or docker.
+
+## Verify the Example IOC Instance is working
+
+When a new Generic IOC developer container is opened, there are two things
+that need to be done before you can run an IOC instance inside of it.
+
+- Build the IOC binary
+- Select an IOC instance definition to run
+
+The folder `ioc` inside of the `ioc-adsimdetector` repository is where the IOC source code resides. However our containers always make a symlink to this folder at `/epics/ioc`. This is so that it is always in the same place and can easily be found by ibek (and the developer!). Therefore you can build the IOC binary with the
+following command:
+
+```console
+cd /epics/ioc
+make
+```
+
+The IOC instance definition is a YAML file that tells `ibek` what the runtime assets (i.e. EPICS DB and startup script) should look like. Previous tutorials selected the IOC instance definition from a beamline repository. In this case we will use the example IOC instance that comes with `ioc-adsimdetector`.
The following command will select the example IOC instance:
+
+```console
+ibek dev instance /workspaces/ioc-adsimdetector/ioc_examples/bl01t-ea-test-02
+```
+
+The above command removes the existing config folder `/epics/ioc/config` and
+symlinks in the chosen IOC instance definition's `config` folder.
+
+Now run the IOC:
+
+```console
+cd /epics/ioc
+./start.sh
+```
+
+You should see an iocShell prompt and no error messages above.
+
+Let us also make sure we can see the simulation images that the IOC is
+producing. For this we need the `c2dv` tool that we used earlier. You
+can use the same virtual environment that you created earlier, or create
+a new one and install again. Note that these commands are to be run
+in a terminal outside of the developer container.
+
+```console
+python3 -m venv ~/c2dv
+source ~/c2dv/bin/activate
+pip install c2dataviewer
+```
+
+Run the `c2dv` tool and connect it to our IOC's PVA output:
+
+```console
+c2dv --pv BL01T-EA-TST-02:PVA:OUTPUT &
+```
+
+Back inside the developer container, you can now start the detector and
+the PVA plugin by opening a new terminal and running the following:
+
+```console
+caput BL01T-EA-TST-02:PVA:EnableCallbacks 1
+caput BL01T-EA-TST-02:DET:Acquire 1
+```
+
+You should see the moving image in the `c2dv` window. We now have a working
+IOC instance that we can use to test our changes.
+
+## Making a change to the Generic IOC
+
+One interesting way of changing a Generic IOC is to modify the support YAML
+for one of the support modules. The support YAML describes the `entities` that
+an IOC instance can make use of in its instance YAML file. This will be
+covered in much more detail in {any}`generic_ioc`.
+
+For this exercise we will make a change to the `ioc-adsimdetector` support
+YAML file. We will change the startup script that it generates so that the
+simulation detector is automatically started when the IOC starts.
+ +To make this change we just need to have the startup script set the values +of the records `BL01T-EA-TST-02:DET:Acquire` and +`BL01T-EA-TST-02:PVA:EnableCallbacks` to 1. + +To make this change, open the file +`ibek-support/ADSimDetector/ADSimDetector.ibek.support.yaml` +and add a `post_init` section just after the `pre_init` section: + +```yaml +post_init: + - type: text + value: | + dbpf {{P}}{{R}}Acquire 1 +``` + +Next make a change to the file `ibek-support/ADCore/ADCore.ibek.support.yaml`. +Find the NDPvaPlugin section and also add a `post_init` section: + +```yaml +post_init: + - type: text + value: | + dbpf {{P}}{{R}}EnableCallbacks 1 +``` + +If you now go to the terminal where you ran your IOC, you can stop it with +`Ctrl+C` and then start it again with `./start.sh`. You should see the +following output at the end of the startup log: + +```console +dbpf BL01T-EA-TST-02:DET:Acquire 1 +DBF_STRING: "Acquire" +dbpf BL01T-EA-TST-02:PVA:EnableCallbacks 1 +DBF_STRING: "Enable" +epics> +``` + +You should also see the `c2dv` window update with the moving image again. + +If you wanted to publish these changes you would have to commit both the +`ibek-support` submodule and the `ioc-adsimdetector` repository and push +them in that order because of the sub-module dependency. But we won't be +pushing these changes as they are just for demonstration purposes. In later +tutorials we will cover making forks and doing pull requests for when you have +changes to share back with the community. + +Note: this is a slightly artificial example, as it would change the behaviour +for all instances of a PVA plugin and a simDetector. In a real IOC you would +do this on a per instance basis. + +Let us quickly do the instance YAML change to demonstrate the correct approach +to this auto-starting detector. 
+
+Undo the support YAML changes:
+
+```console
+cd /workspaces/ioc-adsimdetector/ibek-support
+git reset --hard
+```
+
+Add the following to
+`/workspaces/ioc-adsimdetector/ioc_examples/bl01t-ea-test-02/config/ioc.yaml`:
+
+```yaml
+- type: epics.dbpf
+  pv: BL01T-EA-TST-02:DET:Acquire
+  value: "1"
+
+- type: epics.dbpf
+  pv: BL01T-EA-TST-02:PVA:EnableCallbacks
+  value: "1"
+```
+
+Now restart the IOC and you should see the same behaviour as before. Here
+we have made the change on a per-instance basis, and used the `dbpf` entity
+declared globally in `ibek-support/_global/epics.ibek.support.yaml`.
diff --git a/docs/tutorials/ioc_changes2.rst b/docs/tutorials/ioc_changes2.rst
deleted file mode 100644
index e9c940c4..00000000
--- a/docs/tutorials/ioc_changes2.rst
+++ /dev/null
@@ -1,239 +0,0 @@
-Changing a Generic IOC
-======================
-
-This is a type 2 change from `ioc_change_types`.
-
-The changes that you can make in an IOC instance are limited to what
-the author of the associated Generic IOC has made configurable.
-Therefore you will
-occasionally need to update the Generic IOC that your instance is using.
-Some of the reasons for doing this are:
-
-- Update one or more support modules to new versions
-- Add additional support such as autosave or iocStats
-- For ibek generated IOC instances, you may need to add or change functionality
-  in the support YAML file.
-
-.. note::
-
-   If you are considering making a change to a Generic IOC because you
-   want to add support for a second device, this is allowed but you should
-   consider the alternative of creating a new Generic IOC.
-   If you keep your Generic IOCs simple and focused on a single device, they
-   will be smaller and there will be less of them. IOCs' records can still be
-   linked via CA links and this is preferable to recompiling a Generic IOC
-   for every possible combination of devices. Using Kubernetes to
-   manage multiple small services is cleaner than having a handful of
-   monolithic services.
- - -This tutorial will make some changes to the generic IOC ``ioc-adsimdetector`` -that you already used in earlier tutorials. - -For this exercise we will work locally inside the ``ioc-adsimdetector`` -developer container. Following tutorials will show how to fork repositories -and push changes back to GitHub - -For this exercise we will be using an example IOC Instance to test our changes. -Instead of working with a beamline repository, we will use the example ioc instance -inside ``ioc-adsimdetector``. It is a good idea for Generic IOC authors to -include an example IOC Instance in their repository for testing changes in -isolation. - -Preparation ------------ - -First, clone the ``ioc-adsimdetector`` repository and make sure the container -build is working: - -.. code-block:: console - - git clone git@github.com:epics-containers/ioc-adsimdetector.git - cd ioc-adsimdetector - ./build - code . - # Choose "Reopen in Container" - -Note that if you do not see the prompt to reopen in container, you can open -the ``Remote`` menu with ``Ctrl+Alt+O`` and select ``Reopen in Container``. - -The ``build`` script does two things. - -- it fetches the git submodule called ``ibek-support``. This submodule is shared - between all the EPICS IOC container images and contains the support YAML files - that tell ``ibek`` how to build support modules inside the container - environment and how to use them at runtime. -- it builds the Generic IOC container image developer target locally using - podman or docker. - -Verify the Example IOC Instance is working ------------------------------------------- - -When a new Generic IOC developer container is opened, there are two things -that need to be done before you can run an IOC instance inside of it. - -- Build the IOC binary -- Select an IOC instance definition to run - -The folder ``ioc`` inside of the ``ioc-adsimdetector`` is where the IOC source code -resided. 
However our containers always make a symlink to this folder at -``/epics/ioc``. This is so that it is always in the same place and can easily be -found by ibek (and the developer!). Therefore you can build the binary with the -following command: - -.. code-block:: console - - cd /epics/ioc - make - -.. note:: - - Note that we are required to build the IOC. - This is even though the container you are using already had the IOC - source code built by its Dockerfile (``ioc-adsimdetector/Dockerfile`` - contains the same command). - - For a detailed explanation of why this is the case see `ioc-source` - -The IOC instance definition is a YAML file that tells ``ibek`` what the runtime -assets (ie. EPICS DB and startup script) should look like. Previous tutorials -selected the IOC instance definition from a beamline repository. In this case -we will use the example IOC instance that comes with ``ioc-adsimdetector``. The -following command will select the example IOC instance: - -.. code-block:: console - - ibek dev instance /workspaces/ioc-adsimdetector/ioc_examples/bl01t-ea-ioc-02 - -The above command removes the existing config folder ``/epics/ioc/config`` and -symlinks in the chosen IOC instance definition's ``config`` folder. - -Now run the IOC: - -.. code-block:: console - - cd /epics/ioc - ./start.sh - -You should see an iocShell prompt and no error messages above. - -Let us also make sure we can see the simulation images that the IOC is -producing. For this we need the ``c2dv`` tool that we used earlier. You -can use the same virtual environment that you created earlier, or create -a new one and install again. Note that these commands are to be run -in a terminal outside of the developer container. - -.. code-block:: console - - python3 -m venv c2dv - source ~/c2dv/bin/activate - pip install c2dataviewer - -Run the ``c2dv`` tool and connect it to our IOCs PVA output: - -.. 
code-block:: console - - c2dv --pv BL01T-EA-TST-03:PVA:OUTPUT & - - -Back inside the developer container, you can now start the detector and -the PVA plugin, by opening a new terminal and running the following: - -.. code-block:: console - - caput BL01T-EA-TST-03:PVA:EnableCallbacks 1 - caput BL01T-EA-TST-03:CAM:Acquire 1 - -You should see the moving image in the ``c2dv`` window. We now have a working -IOC instance that we can use to test our changes. - -Making a change to the Generic IOC ----------------------------------- - -One interesting way of changing a Generic IOC is to modify the support YAML -for one of the support modules. The support YAML describes the ``entities`` that -an IOC instance can make use of in its instance YAML file. This will be -covered in much more detail in `generic_ioc`. - -For this exercise we will make a change to the ``ioc-adsimdetector`` support -YAML file. We will change the startup script that it generates so that the -simulation detector is automatically started when the IOC starts. - -To make this change we just need to have the startup script set the values -of the records ``BL01T-EA-TST-03:CAM:Acquire`` and -``BL01T-EA-TST-03:PVA:EnableCallbacks`` to 1. - -To make this change, open the file -``ibek-support/ADSimDetector/ADSimDetector.ibek.support.yaml`` -and add a ``post_init`` section just after the ``pre_init`` section: - -.. code-block:: yaml - - post_init: - - type: text - value: | - dbpf {{P}}{{R}}Acquire 1 - -Next make a change to the file ``ibek-support/ADCore/ADCore.ibek.support.yaml``. -Find the NDPvaPlugin section and also add a ``post_init`` section: - -.. code-block:: yaml - - post_init: - - type: text - value: | - dbpf {{P}}{{R}}EnableCallbacks 1 - - -If you now go to the terminal where you ran your IOC, you can stop it with -``Ctrl+C`` and then start it again with ``./start.sh``. You should see the -following output at the end of the startup log: - -.. 
code-block:: console - - dbpf BL01T-EA-TST-03:CAM:Acquire 1 - DBF_STRING: "Acquire" - dbpf BL01T-EA-TST-03:PVA:EnableCallbacks 1 - DBF_STRING: "Enable" - epics> - -You should also see the ``c2dv`` window update with the moving image again. - -If you wanted to publish these changes you would have to commit both the -``ibek-support`` submodule and the ``ioc-adsimdetector`` repository and push -them in that order because of the sub-module dependency. But we won't be -pushing these changes as they are just for demonstration purposes. In later -tutorials we will cover making forks and doing pull requests for when you have -changes to share back with the community. - -Note: this is a slightly artificial example, as it would change the behaviour -for all instances of a PVA plugin and a simDetector. In a real IOC you would -do this on a per instance basis. - -Let us quickly do the instance YAML change to demonstrate the correct approach -to this auto-starting detector. - -Undo the support yaml changes: - -.. code-block:: console - - cd /workspaces/ioc-adsimdetector/ibek-support - git reset --hard - -Add the following to -``/workspaces/ioc-adsimdetector/ioc_examples/bl01t-ea-ioc-02/config/ioc.yaml``: - -.. code-block:: yaml - - - type: epics.dbpf - pv: BL01T-EA-TST-03:CAM:Acquire - value: "1" - - - type: epics.dbpf - pv: BL01T-EA-TST-03:PVA:EnableCallbacks - value: "1" - -Now restart the IOC and you should see the same behaviour as before. Here -we have made the change on a per instance basis, and used the ``dbpf`` entity -declared globally in ``ibek-support/_global/epics.ibek.support.yaml``. - diff --git a/docs/tutorials/rtems_ioc.md b/docs/tutorials/rtems_ioc.md new file mode 100644 index 00000000..4aeb0233 --- /dev/null +++ b/docs/tutorials/rtems_ioc.md @@ -0,0 +1,312 @@ +# RTEMS - Deploying an Example IOC + +:::{Warning} +This tutorial is out of date and will be updated in December 2023. 
+:::
+
+The previous tutorials walked through how to create a Generic linux soft
+IOC and how to deploy an IOC instance using that Generic IOC.
+
+epics-containers also supports RTEMS 5 running on MVME5500. We will
+now look at the differences for this architecture. Further
+architectures will be supported in the future.
+
+Each beamline or accelerator domain will require a server for
+serving the IOC binaries and instance files to the RTEMS devices. This
+needs to be set up for your test beamline before proceeding,
+see {any}`rtems_setup`.
+
+Once you have the file server set up, deploying an IOC instance that uses
+an RTEMS Generic IOC is very similar to {any}`deploy_example`.
+
+We will be adding
+a new IOC instance to the `bl01t` beamline that we created in the previous
+tutorials. You will need to have worked through the previous tutorials in
+order to complete this one.
+
+## Preparing the RTEMS Boot loader
+
+To try this tutorial you will need a VME crate with an MVME5500 processor card
+installed. You will also need access to the serial console over ethernet
+using a terminal server or similar.
+
+:::{note}
+**DLS Users**: for details of setting up RTEMS on your VME crates see
+this [internal link](https://confluence.diamond.ac.uk/pages/viewpage.action?spaceKey=CNTRLS&title=RTEMS)
+
+The following crate is already running RTEMS and can be used for this
+tutorial, but check with the accelerator controls team before using it:
+
+```{eval-rst}
+
+:console: ts0001 7007
+:crate monitor: ts0001 7008
+```
+
+It is likely already set up as per the example below.
+:::
+
+Use telnet to connect to the console of your target IOC, e.g.
+`telnet ts0001 7007`. We want to get to the MOTLoad prompt, which should look
+like `MVME5500>`.
If you see an IOC Shell prompt instead hit `Ctrl-D` to +exit and then `Esc` when you see +`Boot Script - Press to Bypass, to Continue` + +Now you want to set the boot script to load the IOC binary from the network via +TFTP and mount the instance files from the network via NFS. The command +`gevShow` will show you the current state of the global environment variables. +e.g. + +``` +MVME5500> gevShow +mot-/dev/enet0-cipa=172.23.250.15 +mot-/dev/enet0-snma=255.255.240.0 +mot-/dev/enet0-gipa=172.23.240.254 +mot-boot-device=/dev/em1 +rtems-client-name=bl01t-ea-test-02 +epics-script=172.23.168.203:/iocs:bl01t/bl01t-ea-test-02/config/st.cmd +mot-script-boot +dla=malloc 0x230000 +tftpGet -d/dev/enet1 -fbl01t/bl01t-ea-test-02/bin/RTEMS-beatnik/ioc.boot -m255.255.240.0 -g172.23.240.254 -s172.23.168.203 -c172.23.250.15 -adla +go -a0095F000 + +Total Number of GE Variables =7, Bytes Utilized =427, Bytes Free =3165 +``` + +Now use `gevEdit` to change the global variables to the values you need. +For this tutorial we will create an IOC called bl01t-ea-test-02 and for the +example we assume the file server is on 172.23.168.203. For the details of +setting up these parameters see your site documentation but the important +values to change for this tutorial IOC would be: + +```{eval-rst} + +:rtems-client-name: bl01t-ea-test-02 +:epics-script: 172.23.168.203:/iocs:bl01t/bl01t-ea-test-02/config/st.cmd +:mot-script-boot (2nd line): tftpGet -d/dev/enet1 -fbl01t/bl01t-ea-test-02/bin/RTEMS-beatnik/ioc.boot -m255.255.240.0 -g172.23.240.254 -s172.23.168.203 -c172.23.250.15 -adla +``` + +Now your `gevShow` should look similar to the example above. + +Meaning of the parameters: + +```{eval-rst} + +:rtems-client-name: a name for the IOC crate +:epics-script: an NFS address for the IOC's root folder +:mot-script-boot: a TFTP address for the IOC's binary boot file +``` + +Note that the IP parameters to the tftpGet command are respectively: +net mask, gateway, server address, client address. 
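To keep the `tftpGet` flags straight, here is a small illustrative helper (hypothetical Python, not part of MOTLoad or epics-containers) that assembles the boot line from the parameters listed above:

```python
def tftpget_line(device: str, boot_file: str, netmask: str,
                 gateway: str, server: str, client: str) -> str:
    """Assemble a MOTLoad tftpGet boot line (illustrative only).

    Flag meanings, taken from the example above: -d network device,
    -f boot file path, -m net mask, -g gateway, -s server address,
    -c client address, -a destination variable for the load address.
    """
    return (
        f"tftpGet -d{device} -f{boot_file} -m{netmask} "
        f"-g{gateway} -s{server} -c{client} -adla"
    )

# Reproduce the mot-script-boot line from the gevShow example above:
line = tftpget_line(
    "/dev/enet1",
    "bl01t/bl01t-ea-test-02/bin/RTEMS-beatnik/ioc.boot",
    "255.255.240.0",
    "172.23.240.254",
    "172.23.168.203",
    "172.23.250.15",
)
print(line)
```

The printed line should match the second line of the `mot-script-boot` variable shown in the `gevShow` output above.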
+
+## Creating an RTEMS IOC Instance
+
+We will be adding a new IOC instance to the `bl01t` beamline that we created in
+{doc}`create_beamline`. The first step is to make a copy of our existing IOC instance
+and make some modifications to it. We will call this new IOC instance
+`bl01t-ea-test-02`.
+
+```bash
+cd bl01t
+cp -r services/bl01t-ea-test-01 services/bl01t-ea-test-02
+# don't need this file for the new IOC
+rm services/bl01t-ea-test-02/config/extra.db
+```
+
+We are going to make a very basic IOC with a hand-coded database containing
+a couple of simple records. Therefore the Generic IOC that we use can just
+be ioc-template.
+
+Generic IOCs have multiple targets: they always have a
+`developer` target, which is used for building and debugging the Generic IOC, and
+a `runtime` target, which is lightweight and usually used when running the IOC
+in the cluster. The matrix of targets also includes an architecture dimension;
+at present the ioc-template supports two architectures, `linux` and
+`rtems`, so there are 4 targets in total as follows:
+
+- ghcr.io/epics-containers/ioc-template-linux-runtime
+- ghcr.io/epics-containers/ioc-template-linux-developer
+- ghcr.io/epics-containers/ioc-template-rtems-runtime
+- ghcr.io/epics-containers/ioc-template-rtems-developer
+
+We want to run the RTEMS runtime target on the cluster, so this will appear
+at the top of the `values.yaml` file. In addition, there are a number of
+environment variables required for the RTEMS target that we also specify in
+`values.yaml`.
+
+Edit the file
+`services/bl01t-ea-test-02/values.yaml` to look like this:
+
+```yaml
+base_image: ghcr.io/epics-containers/ioc-template-rtems-runtime:23.4.2
+
+env:
+# This is used to set EPICS_IOC_ADDR_LIST in the liveness probe client
+# It is only needed if auto addr list discovery would fail
+- name: K8S_IOC_ADDRESS
+  value: 172.23.250.15
+
+# RTEMS console connection details
+- name: RTEMS_VME_CONSOLE_ADDR
+  value: ts0001.cs.diamond.ac.uk
+- name: RTEMS_VME_CONSOLE_PORT
+  value: "7007"
+- name: RTEMS_VME_AUTO_REBOOT
+  value: "true"
+- name: RTEMS_VME_AUTO_PAUSE
+  value: "true"
+```
+
+If you are not at DLS you will need to change the above to match the
+parameters of your RTEMS crate. The environment variables are:
+
+```{eval-rst}
+.. list-table:: RTEMS Environment Variables
+   :widths: 30 70
+   :header-rows: 1
+
+   * - Variable
+     - Description
+   * - K8S_IOC_ADDRESS
+     - The IP address of the IOC (mot-/dev/enet0-cipa above)
+   * - RTEMS_VME_CONSOLE_ADDR
+     - Address of terminal server for console access
+   * - RTEMS_VME_CONSOLE_PORT
+     - Port of terminal server for console access
+   * - RTEMS_VME_AUTO_REBOOT
+     - true to reboot the hard IOC when the IOC container changes
+   * - RTEMS_VME_AUTO_PAUSE
+     - true to pause/unpause when the IOC container stops/starts
+```
+
+Edit the file `services/bl01t-ea-test-02/Chart.yaml` and change the first four
+lines to represent this new IOC (the rest of the file is boilerplate):
+
+```yaml
+apiVersion: v2
+name: bl01t-ea-test-02
+description: |
+  example RTEMS IOC for bl01t
+```
+
+For configuration we will create a simple database with a few records and
+a basic startup script. Add the following files to the
+`services/bl01t-ea-test-02/config` directory.
+ +```{code-block} +:caption: bl01t-ea-test-02.db + +record(calc, "bl01t-ea-test-02:SUM") { + field(DESC, "Sum A and B") + field(CALC, "A+B") + field(SCAN, ".1 second") + field(INPA, "bl01t-ea-test-02:A") + field(INPB, "bl01t-ea-test-02:B") +} + +record(ao, "bl01t-ea-test-02:A") { + field(DESC, "A voltage") + field(EGU, "Volts") + field(VAL, "0.0") +} + +record(ao, "bl01t-ea-test-02:B") { + field(DESC, "B voltage") + field(EGU, "Volts") + field(VAL, "0.0") +} +``` + +```{code-block} +:caption: st.cmd + +# RTEMS Test IOC bl01t-ea-test-02 + +dbLoadDatabase "/services/bl01t/bl01t-ea-test-02/dbd/ioc.dbd" +ioc_registerRecordDeviceDriver(pdbbase) + +# db files from the support modules are all held in this folder +epicsEnvSet(EPICS_DB_INCLUDE_PATH, "/services/bl01t/bl01t-ea-test-02/support/db") + +# load our hand crafted database +dbLoadRecords("/services/bl01t/bl01t-ea-test-02/config/bl01t-ea-test-02.db") +# also make Database records for DEVIOCSTATS +dbLoadRecords(iocAdminSoft.db, "IOC=bl01t-ea-test-02") +dbLoadRecords(iocAdminScanMon.db, "IOC=bl01t-ea-test-02") + +iocInit +``` + +You now have a new helm chart in services/bl01t-ea-test-02 that describes an IOC +instance for your RTEMS device. Recall that this is not literally where the IOC +runs, it deploys a kubernetes pod that manages the RTEMS IOC. It does contain +the IOC's configuration and the IOC's binary code, which it will copy to the +file-server on startup. + +Finally you will need to tell the IOC to mount the Persistent Volume Claim +that the bl01t-ioc-files service is serving over NFS and TFTP. To do this +add the following lines to `services/bl01t-ea-test-02/values.yaml`: + +```yaml +# for RTEMS IOCS this is the PVC name for the filesystem where RTEMS +# IOCs look for their files - enable this in RTEMS IOCs only +nfsv2TftpClaim: bl01t-ioc-files-claim +``` + +You are now ready to deploy the IOC instance to the cluster and test it out. 
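Before deploying, it may help to picture what the `calc` record above does: on every scan it re-evaluates its `CALC` expression (`A+B`) against the values read through its input links. The toy Python model below (a hypothetical sketch, not EPICS itself, and supporting far less than real CALC expressions) illustrates that arithmetic:

```python
def calc_record(calc: str, **fields: float) -> float:
    """Evaluate a CALC-style expression against input fields A, B, ...

    A toy model of the SUM record above -- real CALC expressions are
    parsed by EPICS itself and have a much richer operator set.
    """
    # Restrict evaluation to the supplied input fields only.
    return eval(calc, {"__builtins__": {}}, dict(fields))

# The SUM record with A=12 and B=13, as in the final check of this tutorial:
print(calc_record("A+B", A=12, B=13))  # 25
```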
+ +## Deploying an RTEMS IOC Instance + +To deploy an IOC instance to the cluster you can use one of two approaches: + +- push your beamline repo to GitHub and tag it. Then use `ec deploy` to + deploy the resulting versioned IOC instance. This was covered for linux IOCs + in {any}`deploy_example`. +- use `ec deploy-local` to directly deploy the local copy of the IOC + instance helm chart to kubernetes as a beta version. This was covered for + linux IOCs in --local_deploy_ioc--. + +Both types of deployment of IOC instances above work exactly the same for +linux and RTEMS IOCs. We will do the latter as it is quicker for +the purposes of the tutorial. + +Execute the following commands: + +```bash +cd bl01t +ec deploy-local services/bl01t-ea-test-02 +``` + +When an RTEMS Kubernetes pod runs up it will make a telnet connection to +the hard IOC's console and present the console as stdin/stdout of the +container. This means once you have done the above deployment the command: + +```bash +ec logs bl01t-ea-test-02 -f +``` + +will show the RTEMS console output, and follow it along (`-f`) as the IOC +starts up. You can hit `^C` to stop following the logs. + +You can also attach to the container and interact with the RTEMS console via +the telnet connection with: + +```bash +ec attach bl01t-ea-test-02 +``` + +Most likely for the first deploy your IOC will still be sitting at the +`MVME5500>` prompt. If you see this prompt when you attach then you need +to type `reset` to restart the boot-loader. This should then go through +the boot-loader startup and eventually start the IOC. 
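The pod's console handling can be pictured as a small line-based client that forwards commands such as `reset` over the terminal-server connection. The sketch below is a hypothetical stand-in (the real automation lives inside the epics-containers runtime) and is demonstrated against a local echo server rather than a real terminal server:

```python
import socket
import threading

def console_send(host: str, port: int, command: str) -> str:
    """Send one command to a line-based console and return the reply.

    Hypothetical sketch only -- not the actual epics-containers code.
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode() + b"\n")
        return sock.recv(1024).decode().strip()

# Demo against a local echo "console" standing in for the terminal server:
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

def echo_once() -> None:
    # Accept one connection and echo whatever arrives back to the client.
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once, daemon=True).start()
reply = console_send("127.0.0.1", server.getsockname()[1], "reset")
print(reply)
```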
+
+## Checking your RTEMS IOC
+
+To verify that your RTEMS IOC is working you should be able to execute the
+following commands and get the correct sum of the A and B values:
+
+```bash
+caput bl01t-ea-test-02:A 12
+caput bl01t-ea-test-02:B 13
+caget bl01t-ea-test-02:SUM
+```
diff --git a/docs/tutorials/rtems_ioc.rst b/docs/tutorials/rtems_ioc.rst
deleted file mode 100644
index 7ca8bf1b..00000000
--- a/docs/tutorials/rtems_ioc.rst
+++ /dev/null
@@ -1,307 +0,0 @@
-RTEMS - Deploying an Example IOC
-================================
-
-.. Warning::
-
-   This tutorial is out of date and will be updated in December 2023.
-
-The previous tutorials walked through how to create a Generic linux soft
-IOC and how to deploy an IOC instance using that Generic IOC.
-
-epics-containers also supports RTEMS 5 running on MVVME5500. We will
-now will look at the differences for this architecture. Further
-architectures will be supported in future.
-
-Each beamline or accelerator domain will require a server for
-serving the IOC binaries and instance files to the RTEMS devices. This
-needs to be set up for your test beamline before proceeding,
-see `rtems_setup`.
-
-Once you have the file server set up, deploying an IOC instance that uses
-an RTEMS Generic IOC is very similar to `deploy_example`.
-
-We will be adding
-a new IOC instance to the ``bl01t`` beamline that we created in the previous
-tutorials. You will need to have worked through the previous tutorials in
-order to complete this one.
-
-Preparing the RTEMS Boot loader
--------------------------------
-
-To try this tutorial you will need a VME crate with an MVVME5500 processor card
-installed. You will also need access to the serial console over ethernet
-using a terminal server or similar.
-
-..
note:: - - **DLS Users** for details of setting up RTEMS on your VME crates see - this `internal link `_ - - The following crate is already running RTEMS and can be used for this - tutorial, but check with the accelerator controls team before using it: - - :console: ts0001 7007 - :crate monitor: ts0001 7008 - - It is likely already set up as per the example below. - -Use telnet to connect to the console of your target IOC. e.g. -``telnet ts0001 7007``. We want to get to the MOTLoad prompt which should look -like ``MVME5500>``. If you see an IOC Shell prompt instead hit ``Ctrl-D`` to -exit and then ``Esc`` when you see -``Boot Script - Press to Bypass, to Continue`` - -Now you want to set the boot script to load the IOC binary from the network via -TFTP and mount the instance files from the network via NFS. The command -``gevShow`` will show you the current state of the global environment variables. -e.g. - -.. code-block:: - - MVME5500> gevShow - mot-/dev/enet0-cipa=172.23.250.15 - mot-/dev/enet0-snma=255.255.240.0 - mot-/dev/enet0-gipa=172.23.240.254 - mot-boot-device=/dev/em1 - rtems-client-name=bl01t-ea-ioc-02 - epics-script=172.23.168.203:/iocs:bl01t/bl01t-ea-ioc-02/config/st.cmd - mot-script-boot - dla=malloc 0x230000 - tftpGet -d/dev/enet1 -fbl01t/bl01t-ea-ioc-02/bin/RTEMS-beatnik/ioc.boot -m255.255.240.0 -g172.23.240.254 -s172.23.168.203 -c172.23.250.15 -adla - go -a0095F000 - - Total Number of GE Variables =7, Bytes Utilized =427, Bytes Free =3165 - -Now use ``gevEdit`` to change the global variables to the values you need. -For this tutorial we will create an IOC called bl01t-ea-ioc-02 and for the -example we assume the file server is on 172.23.168.203. 
For the details of -setting up these parameters see your site documentation but the important -values to change for this tutorial IOC would be: - -:rtems-client-name: bl01t-ea-ioc-02 -:epics-script: 172.23.168.203:/iocs:bl01t/bl01t-ea-ioc-02/config/st.cmd -:mot-script-boot (2nd line): tftpGet -d/dev/enet1 -fbl01t/bl01t-ea-ioc-02/bin/RTEMS-beatnik/ioc.boot -m255.255.240.0 -g172.23.240.254 -s172.23.168.203 -c172.23.250.15 -adla - -Now your ``gevShow`` should look similar to the example above. - -Meaning of the parameters: - -:rtems-client-name: a name for the IOC crate -:epics-script: an NFS address for the IOC's root folder -:mot-script-boot: a TFTP address for the IOC's binary boot file - -Note that the IP parameters to the tftpGet command are respectively: -net mask, gateway, server address, client address. - - -Creating an RTEMS IOC Instance ------------------------------- - -We will be adding a new IOC instance to the ``bl01t`` beamline that we created in -:doc:`create_beamline`. The first step is to make a copy of our existing IOC instance -and make some modifications to it. We will call this new IOC instance -``bl01t-ea-ioc-02``. - -.. code-block:: bash - - cd bl01t - cp -r iocs/bl01t-ea-ioc-01 iocs/bl01t-ea-ioc-02 - # don't need this file for the new IOC - rm iocs/bl01t-ea-ioc-02/config/extra.db - -We are going to make a very basic IOC with some hand coded database with -a couple of simple records. Therefore the Generic IOC that we use can just -be ioc-template. - -Generic IOCs have multiple targets, they always have a -``developer`` target which is used for building and debugging the Generic IOC and -a ``runtime`` target which is lightweight and usually used when running the IOC -in the cluster. 
The matrix of targets also includes an architecture dimension, -at present the ioc-template supports two architectures, ``linux`` and -``rtems``, thus there are 4 targets in total as follows: - -- ghcr.io/epics-containers/ioc-template-linux-runtime -- ghcr.io/epics-containers/ioc-template-linux-developer -- ghcr.io/epics-containers/ioc-template-rtems-runtime -- ghcr.io/epics-containers/ioc-template-rtems-developer - -We want to run the RTEMS runtime target on the cluster so this will appear -at the top of the ``values.yaml`` file. In addition there are a number of -environment variables required for the RTEMS target that we also specify in -``values.yaml``. -Edit the file -``iocs/bl01t-ea-ioc-02/values.yaml`` to look like this: - -.. code-block:: yaml - - base_image: ghcr.io/epics-containers/ioc-template-rtems-runtime:23.4.2 - - env: - # This is used to set EPICS_IOC_ADDR_LIST in the liveness probe client - # It is only needed if auto addr list discovery would fail - - name: K8S_IOC_ADDRESS - value: 172.23.250.15 - - # RTEMS console connection details - - name: RTEMS_VME_CONSOLE_ADDR - value: ts0001.cs.diamond.ac.uk - - name: RTEMS_VME_CONSOLE_PORT - value: "7007" - - name: RTEMS_VME_AUTO_REBOOT - value: true - - name: RTEMS_VME_AUTO_PAUSE - value: true - -If you are not at DLS you will need to change the above to match the -parameters of your RTEMS Crate. The environment variables are: - - -.. 
list-table:: RTEMS Environment Variables - :widths: 30 70 - :header-rows: 1 - - * - Variable - - Description - * - K8S_IOC_ADDRESS - - The IP address of the IOC (mot-/dev/enet0-cipa above) - * - RTEMS_VME_CONSOLE_ADDR - - Address of terminal server for console access - * - RTEMS_VME_CONSOLE_PORT - - Port of terminal server for console access - * - RTEMS_VME_AUTO_REBOOT - - true to reboot the hard IOC when the IOC container changes - * - RTEMS_VME_AUTO_PAUSE - - true to pause/unpause when the IOC container stops/starts - -Edit the file ``iocs/bl01t-ea-ioc-02/Chart.yaml`` and change the 1st 4 lines -to represent this new IOC (the rest of the file is boilerplate): - -.. code-block:: yaml - - apiVersion: v2 - name: bl01t-ea-ioc-02 - description: | - example RTEMS IOC for bl01t - -For configuration we will create a simple database with a few of records and -a basic startup script. Add the following files to the -``iocs/bl01t-ea-ioc-02/config`` directory. - -.. code-block:: :caption: bl01t-ea-ioc-02.db - - record(calc, "bl01t-ea-ioc-02:SUM") { - field(DESC, "Sum A and B") - field(CALC, "A+B") - field(SCAN, ".1 second") - field(INPA, "bl01t-ea-ioc-02:A") - field(INPB, "bl01t-ea-ioc-02:B") - } - - record(ao, "bl01t-ea-ioc-02:A") { - field(DESC, "A voltage") - field(EGU, "Volts") - field(VAL, "0.0") - } - - record(ao, "bl01t-ea-ioc-02:B") { - field(DESC, "B voltage") - field(EGU, "Volts") - field(VAL, "0.0") - } - -.. 
code-block:: :caption: st.cmd - - # RTEMS Test IOC bl01t-ea-ioc-02 - - dbLoadDatabase "/iocs/bl01t/bl01t-ea-ioc-02/dbd/ioc.dbd" - ioc_registerRecordDeviceDriver(pdbbase) - - # db files from the support modules are all held in this folder - epicsEnvSet(EPICS_DB_INCLUDE_PATH, "/iocs/bl01t/bl01t-ea-ioc-02/support/db") - - # load our hand crafted database - dbLoadRecords("/iocs/bl01t/bl01t-ea-ioc-02/config/bl01t-ea-ioc-02.db") - # also make Database records for DEVIOCSTATS - dbLoadRecords(iocAdminSoft.db, "IOC=bl01t-ea-ioc-02") - dbLoadRecords(iocAdminScanMon.db, "IOC=bl01t-ea-ioc-02") - - iocInit - -You now have a new helm chart in iocs/bl01t-ea-ioc-02 that describes an IOC -instance for your RTEMS device. Recall that this is not literally where the IOC -runs, it deploys a kubernetes pod that manages the RTEMS IOC. It does contain -the IOC's configuration and the IOC's binary code, which it will copy to the -file-server on startup. - -Finally you will need to tell the IOC to mount the Persistent Volume Claim -that the bl01t-ioc-files service is serving over NFS and TFTP. To do this -add the following lines to ``iocs/bl01t-ea-ioc-02/values.yaml``: - -.. code-block:: yaml - - # for RTEMS IOCS this is the PVC name for the filesystem where RTEMS - # IOCs look for their files - enable this in RTEMS IOCs only - nfsv2TftpClaim: bl01t-ioc-files-claim - -You are now ready to deploy the IOC instance to the cluster and test it out. - - -Deploying an RTEMS IOC Instance -------------------------------- - -To deploy an IOC instance to the cluster you can use one of two approaches: - -- push your beamline repo to GitHub and tag it. Then use ``ec ioc deploy`` to - deploy the resulting versioned IOC instance. This was covered for linux IOCs - in `deploy_example`. - -- use ``ec ioc deploy-local`` to directly deploy the local copy of the IOC - instance helm chart to kubernetes as a beta version. This was covered for - linux IOCs in --local_deploy_ioc--. 
- -Both types of deployment of IOC instances above work exactly the same for -linux and RTEMS IOCs. We will do the latter as it is quicker for -the purposes of the tutorial. - -Execute the following commands: - -.. code-block:: bash - - cd bl01t - ec ioc deploy-local iocs/bl01t-ea-ioc-02 - -When an RTEMS Kubernetes pod runs up it will make a telnet connection to -the hard IOC's console and present the console as stdin/stdout of the -container. This means once you have done the above deployment the command: - - -.. code-block:: bash - - ec logs bl01t-ea-ioc-02 -f - -will show the RTEMS console output, and follow it along (``-f``) as the IOC -starts up. You can hit ``^C`` to stop following the logs. - -You can also attach to the container and interact with the RTEMS console via -the telnet connection with: - -.. code-block:: bash - - ec attach bl01t-ea-ioc-02 - -Most likely for the first deploy your IOC will still be sitting at the -``MVME5500>`` prompt. If you see this prompt when you attach then you need -to type ``reset`` to restart the boot-loader. This should then go through -the boot-loader startup and eventually start the IOC. - -Checking your RTEMS IOC ------------------------ - -To verify that your RTEMS IOC is working you should be able to execute the -following commands and get correct sum of the A and B values: - -.. code-block:: bash - - caput bl01t-ea-ioc-02:A 12 - caput get bl01t-ea-ioc-02:B 13 - caget get bl01t-ea-ioc-02:SUM diff --git a/docs/tutorials/rtems_setup.rst b/docs/tutorials/rtems_setup.md similarity index 55% rename from docs/tutorials/rtems_setup.rst rename to docs/tutorials/rtems_setup.md index 4d7eab07..b77b7d86 100644 --- a/docs/tutorials/rtems_setup.rst +++ b/docs/tutorials/rtems_setup.md @@ -1,12 +1,10 @@ -RTEMS - Creating a File Server -============================== +# RTEMS - Creating a File Server -.. Warning:: +:::{Warning} +This tutorial is out of date and will be updated in December 2023. 
+::: - This tutorial is out of date and will be updated in December 2023. - -Introduction ------------- +## Introduction RTEMS IOCs are an example of a 'hard' IOC. Each IOC is a physical crate that contains a number of I/O cards and a processor card. @@ -28,8 +26,7 @@ At present epics-containers supports the MVVME5500 processor card running RTEMS 5. The same model as described above can be used for other 'hard' IOC types in future. -Create a File Server Service ----------------------------- +## Create a File Server Service When an RTEMS 5 IOC boots the bootloader loads the IOC binary from a TFTP address, this binary is then given access to a filesystem over NFS V2, this is @@ -39,51 +36,51 @@ Therefore we need a TFTP server and an NFS V2 server to serve the files to the IOC. For each EPICS domain a single service running in Kubernetes will supply a TFTP and NFS V2 server for all the IOCs in that domain. -In the tutorial :doc:`create_beamline` we created a beamline repository that -defines the IOC instances in the beamline ``bl01t``. The template project -that we copied contains a folder called ``services/nfsv2-tftp``. The folder +In the tutorial {doc}`create_beamline` we created a beamline repository that +defines the IOC instances in the beamline `bl01t`. The template project +that we copied contains a folder called `services/nfsv2-tftp`. The folder is a helm chart that will deploy a TFTP and NFS V2 server to Kubernetes. Before deploying the service we need to configure it. Make the following changes: -- Change the ``name`` value in ``Chart.yaml`` to ``bl01t-ioc-files`` -- Change the ``loadBalancerIP`` value in ``values.yaml`` to a free IP address +- Change the `name` value in `Chart.yaml` to `bl01t-ioc-files` +- Change the `loadBalancerIP` value in `values.yaml` to a free IP address in your cluster's Static Load Balancer range. This IP address will be used to access the TFTP and NFS V2 servers from the IOC. -.. 
note:: - - **DLS Users** The load balancer IP range on Pollux is - ``172.23.168.201-172.23.168.222``. Please use ``172.23.168.203``. The test - RTEMS crate is likely to already be set up to point at this address. There - are a limited number of addresses available, hence we have reserved a single - address for the training purposes. +:::{note} +**DLS Users** The load balancer IP range on Pollux is +`172.23.168.201-172.23.168.222`. Please use `172.23.168.203`. The test +RTEMS crate is likely to already be set up to point at this address. There +are a limited number of addresses available, hence we have reserved a single +address for the training purposes. - Also note that ``bl01t`` is a shared resource so if there is already a - ``bl01t-ioc-files`` service running then you could just use the existing - service. +Also note that `bl01t` is a shared resource so if there is already a +`bl01t-ioc-files` service running then you could just use the existing +service. +::: You can verify if the service is already running using kubectl. The command -shown below will list all the services in the ``bl01t`` namespace, and the -example output shows that there is already a ``bl01t-ioc-files`` service -using the IP address ``172.23.168.203``. +shown below will list all the services in the `bl01t` namespace, and the +example output shows that there is already a `bl01t-ioc-files` service +using the IP address `172.23.168.203`. -.. code-block:: bash - - $ kubectl get services -n bl01t - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - bl01t-ioc-files LoadBalancer 10.108.219.193 172.23.168.203 111:31491/UDP,2049:30944/UDP,20048:32277/UDP,69:32740/UDP 32d +```bash +$ kubectl get services -n bl01t +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +bl01t-ioc-files LoadBalancer 10.108.219.193 172.23.168.203 111:31491/UDP,2049:30944/UDP,20048:32277/UDP,69:32740/UDP 32d +``` Once you have made the changes to the helm chart you can deploy it to the cluster using the following command: -.. 
code-block:: bash - - cd bl01t - helm upgrade --install bl01t-ioc-files services/nfsv2-tftp -n bl01t +```bash +cd bl01t +helm upgrade --install bl01t-ioc-files services/nfsv2-tftp -n bl01t +``` -Now if you run the ``kubectl get services`` command again you should see the +Now if you run the `kubectl get services` command again you should see the new service. Once you have this service up and running you can leave it alone. It will @@ -93,6 +90,3 @@ persistent volume is shared with hard IOC pods so that they can place the files they need to serve to the IOC. See the next tutorial for how to deploy a hard IOC pod to the cluster. - - - diff --git a/docs/tutorials/setup_k8s.rst b/docs/tutorials/setup_k8s.md similarity index 61% rename from docs/tutorials/setup_k8s.rst rename to docs/tutorials/setup_k8s.md index 54b37a4a..06865b3c 100644 --- a/docs/tutorials/setup_k8s.rst +++ b/docs/tutorials/setup_k8s.md @@ -1,29 +1,25 @@ -.. _setup_kubernetes: +(setup-kubernetes)= +# Setup a Kubernetes Server -Setup a Kubernetes Server -========================= +:::{Note} +**DLS Users**: DLS already has the test cluster `Pollux` which includes +the test beamline p45 and the training beamlines p46 through to p49. -.. Note:: +We have also started to roll out production clusters for some of our +beamlines. To date we have clusters for p38, i20, i22 and c01. - **DLS Users**: DLS already has the test cluster ``Pollux`` which includes - the test beamline p45 and the training beamlines p46 through to p49. +For this reason DLS users should skip this tutorial unless you have a +spare linux machine with root access and an interest in how Clusters +are created. +::: - We have also started to roll out production clusters for some of our - beamlines. To date we have clusters for p38, i20, i22 and c01. +## Introduction - For this reason DLS users should skip this tutorial unless you have a - spare linux machine with root access and an interest in how Clusters - are created. 
-
-Introduction
-------------

This is a very easy set of instructions for setting up an experimental
single-node Kubernetes cluster, ready to test deployment of EPICS IOCs.

-
-Bring Your Own Cluster
-----------------------
+## Bring Your Own Cluster

If you already have a Kubernetes cluster then you can skip this section and
go straight to the next tutorial.
@@ -38,17 +34,18 @@ namespace and service account as long as it has network=host capability.

Cloud based K8S offerings may not be appropriate because of the Channel Access
routing requirement.

-Platform Choice
----------------
+## Platform Choice

These instructions have been tested on the following platforms. The simplest
option is to use a linux distribution that is supported by k3s.

+```{eval-rst}
========================== ============================================
Ubuntu 20.10               any modern linux distro should also work
Raspberry Pi OS 2021-05-07 See `raspberry`
Windows WSL2               See `wsl`
========================== ============================================
+```

Note that K3S provides a good uninstaller that will clean up your system
if you decide to back out. So there is no harm in trying it out.
@@ -56,65 +53,61 @@ if you decide to back out. So there is no harm in trying it out.

If you prefer to investigate other implementations there are also these easy
to install, lightweight Kubernetes implementations:

-  - kind https://kind.sigs.k8s.io/docs/user/quick-start/
-  - microk8s https://microk8s.io/
-  - minikube https://minikube.sigs.k8s.io/docs/start/
+> - kind <https://kind.sigs.k8s.io/docs/user/quick-start/>
+> - microk8s <https://microk8s.io/>
+> - minikube <https://minikube.sigs.k8s.io/docs/start/>

-For k3s documentation see https://k3s.io/.
+For k3s documentation see <https://k3s.io/>.

-Installation Steps
-------------------
+## Installation Steps

These instructions work with a single machine or with a server running k3s
and a workstation running the client CLI. The client CLI commands should
-all be run inside the devcontainer (at an [E7] prompt).
-
+all be run inside the devcontainer (at an \[E7\] prompt).
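Since the remaining steps mix commands run on the server with client CLI commands run in the devcontainer, a small illustrative shell check (the tool names `kubectl` and `helm` are the ones used later in these tutorials) confirms the client tools are visible in your current shell:

```bash
# report which of the client CLI tools used in these tutorials are on the PATH
missing=""
for tool in kubectl helm; do
    if command -v "$tool" > /dev/null 2>&1; then
        echo "found $tool: $(command -v "$tool")"
    else
        missing="$missing $tool"
    fi
done
if [ -n "$missing" ]; then
    echo "not found:$missing (these should be available inside the devcontainer)"
fi
```

This is only a convenience sketch; if a tool is missing, re-open the repository in the devcontainer before continuing.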
-Install K3S lightweight Kubernetes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### Install K3S lightweight Kubernetes

This command should be run OUTSIDE of the devcontainer.

Execute this command on your server to set up the cluster master
-(aka K3S Server node)::
+(aka K3S Server node):

-    curl -sfL https://get.k3s.io | sh -
+```
+curl -sfL https://get.k3s.io | sh -
+```

-.. _install_kubectl:
+(install-kubectl)=

-Configure kubectl
-~~~~~~~~~~~~~~~~~
+### Configure kubectl

Kubectl is the command line tool for interacting with Kubernetes Clusters. It
is already installed inside the devcontainer. It uses a configuration file in
-$HOME/.kube to connect to the cluster. Here we will copy the configuration file
+\$HOME/.kube to connect to the cluster. Here we will copy the configuration file
from the server to the workstation.

These commands should be run OUTSIDE of the devcontainer.

If you have one machine only then copy the k3s kubectl configuration:

-.. code-block:: bash
-
-    mkdir ~/.kube
-    sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
-    sudo chown ~/.kube/config
+```bash
+mkdir ~/.kube
+sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
+sudo chown $USER ~/.kube/config
+```

If you have a separate server then from the server machine copy over the k3s
kubectl configuration:

-.. code-block:: bash
-
-    mkdir ~/.kube
-    sudo scp /etc/rancher/k3s/k3s.yaml @:.kube/config
+```bash
+mkdir ~/.kube
+sudo scp /etc/rancher/k3s/k3s.yaml @:.kube/config
+```

If you do have a separate workstation then edit the file .kube/config
replacing 127.0.0.1 with your server's IP Address. For a single machine the
file is left as is.

-
-Create an epics IOCs namespace and context
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### Create an epics IOCs namespace and context

For each beamline or EPICS domain there will be a kubernetes namespace. A
namespace is a virtual cluster within a Kubernetes cluster. Namespaces allow
@@ -131,56 +124,52 @@ DLS and non-DLS users.
From the workstation INSIDE the devcontainer execute the following:

-.. code-block:: bash
+```bash
+kubectl create namespace bl46p
+kubectl config set-context bl46p --namespace=bl46p --user=default --cluster=default
+kubectl config use-context bl46p
+```

-    kubectl create namespace bl46p
-    kubectl config set-context bl46p --namespace=bl46p --user=default --cluster=default
-    kubectl config use-context bl46p
-
-Create a service account to run the IOCs
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### Create a service account to run the IOCs

Inside of our new namespace we will create a service account that will be
used to run the IOCs. Create the account:

-.. code-block:: bash
-
-    kubectl apply -f - <
+
+HOWEVER, these instructions can also be used to set up any
+new beamline at DLS - just substitute the beamline name where appropriate.
+You will need to have a beamline cluster already created for the
+beamline by the cloud team and have requested access via the URL above.
+:::
+
+## Create a new beamline repository
+
+To create a new beamline repository, use the template repository at
+<https://github.com/epics-containers/blxxi-template>. Click on the green
+"Use this template" button to create a new repository. Name the repository
+bl46p (or choose your own name and remember to substitute it in the rest of
+this tutorial). Create this repository in your own GitHub account.
+
+:::{note}
+DLS users: if this is a real beamline then it needs to be
+created in our internal GitLab registry at
+<https://gitlab.diamond.ac.uk/controls/containers/beamline>.
+For this purpose use the template description for [bl38p](https://github.com/epics-containers/bl38p?tab=readme-ov-file#how-to-create-a-new-beamline--accelerator-domain).
+
+For test DLS beamlines these should still be created in github
+as per the below instructions.
+:::
+
+Clone the new repository to your local machine and change directory into it.
+
+```bash
+git clone https://github.com/YOUR_GITHUB_ACCOUNT/bl46p.git
+cd bl46p
+```
+
+Next make some changes to the repository to customise it for your beamline.
+Cut and paste the following script to do so. + +```bash +BEAMLINE=bl46p + +# update the readme +echo "Beamline repo for the beamline $BEAMLINE" > README.md + +# remove the sample IOC directory +rm -r services/blxxi-ea-ioc-01 +# change the services setup scripts to use the new beamline name +sed -i "s/blxxi/$BEAMLINE/g" services/* beamline-chart/values.yaml +``` + +## Cluster Topologies + +There are two supported topologies for beamline clusters: + +- shared cluster with multiple beamlines' IOCs running in the same cluster +- dedicated cluster with a single beamline's IOCs running in the cluster + +If you are working with the single node k3s cluster set up in the previous +tutorial then this will be considered a dedicated cluster. + +If you are creating a real DLS beamline or accelerator domain then this will +also be a dedicated cluster. You will need to make sure the cloud team has +created the cluster for the beamline and you have permissions to use it. + +If you are working with one of the test beamlines at DLS then these are usually +shared topology and are set up as nodes on the Pollux cluster. + +Other facilities are free to choose the topology that best suits their needs. + +### Shared Clusters + +In the shared cluster topology we would usually want IOCs to run on the +servers that are closest to the beamline. This is important for Channel Access +because it is a broadcast protocol and by default only works on a single +subnet. + +To facilitate this we use `node affinity rules` to ensure that IOCs +run on the beamline's specific nodes. `Node affinity` can look for a `label` +on the node to say that it belongs to a beamline. +We can also use `taints` to stop other pods from +running on our beamline nodes. A `taint` will stop pods from being scheduled +on a node unless the pod has a matching toleration. 
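As a sketch of how this looks for an IOC pod, the fragment below shows a matching toleration and node affinity. This is illustrative rather than taken from the beamline chart: the field names come from the standard Kubernetes pod scheduling API, and the `beamline=bl46p` label and taint values mirror the p46 example used in this tutorial.

```yaml
# illustrative pod spec fragment - schedule only onto bl46p beamline nodes
spec:
  # the toleration lets the pod schedule onto nodes tainted beamline=bl46p:NoSchedule
  tolerations:
  - key: beamline
    operator: "Equal"
    value: bl46p
    effect: "NoSchedule"
  # the node affinity requires nodes carrying the label beamline=bl46p
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: beamline
            operator: In
            values:
            - bl46p
```

Without the toleration the pod is kept off the tainted beamline nodes; without the affinity it could land anywhere else in the shared cluster.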
+
+For example the test beamline p46 at DLS has the following `taints` and
+`labels`:
+
+```
+Labels:             beamline=bl46p
+                    nodetype=test-rig
+
+Taints:             beamline=bl46p:NoSchedule
+                    nodetype=test-rig:NoSchedule
+```
+
+If you are working with your facility cluster then you may not
+have permission to set up these labels and taints. In this case, your
+administrator will need to do this for you. At DLS, you should expect that
+this is already set up for you.
+
+For an explanation of these K8S concepts see
+
+- [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
+- [Node Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity-beta-feature)
+
+### Dedicated Clusters
+
+In the dedicated cluster topology we would usually want to let the IOCs
+run on all of the worker nodes in the cluster. In this case the only thing
+that is required is a namespace in which to run your IOCs.
+
+By convention we use a namespace like `bl46p-iocs` for this purpose. This
+namespace will need the appropriate permissions to allow the IOCs to run
+with network host.
+
+## Environment Setup
+
+Every beamline repository has an `environment.sh` file used to configure
+your shell so that the command line tools know which cluster to talk to.
+Up to this point we have been using the local docker or podman instance,
+but here we will configure it to use the beamline cluster.
+
+For the detail of what goes into `environment.sh` see
+{any}`../reference/environment`.
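As a rough sketch, a completed `environment.sh` for a beamline on a shared DLS test cluster might contain the following. The values are illustrative and match the examples used in this tutorial; the commands that need the `ec` tool or site modules are left commented so the sketch can be sourced anywhere.

```bash
# environment.sh - example shell configuration for beamline bl46p (illustrative)

# Section 1: registry mapping, Kubernetes namespace and beamline repository
export EC_REGISTRY_MAPPING='github.com=ghcr.io'
export EC_K8S_NAMESPACE=p46-iocs
export EC_SERVICES_REPO=git@github.com:YOUR_GITHUB_ACCOUNT/bl46p.git

# Section 2: enable the ec CLI and its command line completion
# (commented so this sketch can be sourced without ec installed)
# source <(ec --show-completion ${SHELL})

# Section 3: site specific cluster access; blank for a local k3s cluster
# module load pollux
```

The three sections are explained one by one in the rest of this tutorial.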
+
+Now edit `environment.sh` and make changes as follows:
+
+### Section 1
+
+Change this section to set the following variables:
+
+```bash
+export EC_REGISTRY_MAPPING='github.com=ghcr.io'
+export EC_K8S_NAMESPACE=p46-iocs
+export EC_SERVICES_REPO=git@github.com:YOUR_GITHUB_ACCOUNT/bl46p.git
+```
+
+This tells the `ec` command line tool to use the GitHub container registry
+when it sees github projects, the name of the Kubernetes namespace to use and
+the location of the beamline repository.
+
+### Section 2
+
+The script should also make sure that the `ec` CLI is available; it is also
+useful to set up command line completion. The simplest way to do this is:
+
+```bash
+set -e # exit on error
+source <(ec --show-completion ${SHELL})
+```
+
+For a review of how to set up the epics-containers-cli tool `ec` see
+{any}`python-setup` and {any}`ec`.
+
+### Section 3
+
+This is where you make sure the cluster is contactable. For the k3s cluster
+we set up the default `~/.kube/config` file to point to the local cluster.
+So we can leave this section blank.
+
+At DLS you would need to load a module to set up the environment for the
+beamline cluster. For example:
+
+```bash
+module load pollux # for all test beamlines
+module load k8s-i22 # for the real beamline i22
+```
+
+Once `environment.sh` is set up, source it to set up your shell.
+
+```bash
+source environment.sh
+```
+
+You are now ready to start talking to the cluster. You can verify this with
+the following command that should list all the nodes on the cluster. You
+will be asked for your credentials if required.
+
+```bash
+kubectl get nodes
+```
+
+## Setting up the Beamline Helm Chart Defaults
+
+The beamline helm chart is used to deploy IOCs to the cluster. Each IOC instance
+gets to override any of the settings available in the chart. This is done
+in `services//values.yaml` for each IOC instance. However, all
+settings except `image` have default values supplied at the beamline level.
+For this reason most IOC instances only need to supply the `image` setting
+which specifies the Generic IOC container image to use.
+
+Before making the first IOC instance we need to set up the beamline defaults.
+These are all held in the file `beamline-chart/values.yaml`.
+
+Open this file and make the following changes depending on your beamline
+type.
+
+### All cluster types
+
+```yaml
+beamline: bl46p
+namespace: p46-iocs
+hostNetwork: true # required for Channel Access on the host
+
+opisClaim: bl46p-opi-claim
+runtimeClaim: bl46p-runtime-claim
+autosaveClaim: bl46p-autosave-claim
+```
+
+### k3s single server cluster
+
+```yaml
+dataVolume:
+  pvc: true
+  # point at a PVC created by kubernetes
+  hostPath: /data/
+```
+
+### DLS test beamlines
+
+```yaml
+dataVolume:
+  pvc: true
+  # point at local disk on the server
+  hostPath: /exports/mybeamline
+
+# extra tolerations for the training rigs
+tolerations:
+- key: nodetype
+  operator: "Equal"
+  value: training-rig
+  effect: "NoSchedule"
+```
+
+### DLS real beamlines
+
+```yaml
+dataVolume:
+  pvc: true
+  # point at the shared filesystem data folder for the beamline
+  hostPath: /dls/p46/data
+```
+
+## Set Up The One Time Only Beamline Resources
+
+There are two scripts in the `services` directory that set up some initial
+resources. You should run each of these in order:
+
+- `services/install-pvcs.sh`: this sets up some persistent volume claims for
+  the beamline. PVCs are Kubernetes managed chunks of storage that can be
+  shared between pods if required. The 3 PVCs created here relate to the
+  `Claim` entries in the `beamline-chart/values.yaml` file. These are
+  places to store:
+  - autosave files
+  - runtime generated startup scripts and EPICS database files
+  - OPI screens (usually auto generated)
+- `services/install-opi.sh`: this sets up an nginx web server for the
+  beamline. It serves the OPI screens from the `opisClaim` PVC.
Each IOC + instance will place its OPI screens in a subdirectory of this PVC. + OPI clients like phoebus can then retrieve these files via HTTP. + +## Create a Test IOC to Deploy + +TODO: WIP (but this looks just like it did in the first IOC deployment tutorial) + +:::{note} +At DLS you can get to a Kubernetes Dashboard for your beamline via +a landing page `https://pollux.diamond.ac.uk` for test beamlines on +`Pollux` - remember to select the namespace `p46-iocs` for example. + +For real beamlines dedicated clusters, you can find the landing page for example: +`https://k8s-i22.diamond.ac.uk/` for BL22I. +`https://k8s-b01-1.diamond.ac.uk/` for the 2nd branch of BL01B. +::: diff --git a/docs/tutorials/setup_k8s_new_beamline.rst b/docs/tutorials/setup_k8s_new_beamline.rst deleted file mode 100644 index f4fc676f..00000000 --- a/docs/tutorials/setup_k8s_new_beamline.rst +++ /dev/null @@ -1,326 +0,0 @@ -.. _setup_k8s_beamline: - - -Create a New Kubernetes Beamline -================================ - -.. warning:: - - This is a first draft that has been tested against a DLS test beamline - only. I will remove this warning once it has been tested against: - - - the k3s example cluster described in the previous tutorial - - a real DLS beamline. - - TODO: would it be better to have a separate tutorial for each of these? - -Up until now the tutorials have been deploying IOCs to the local docker or -podman instance on your workstation. In this tutorial we look into setting -up a Kubernetes cluster for a beamline and deploying a test IOC there. - -The advantage of using Kubernetes is that it is a production grade container -orchestration system. It will manage the CPU, disk and memory available across -your cluster of nodes, scheduling your IOCs and other services accordingly. -It will also restart them if they fail and monitor their health. -It can provide centralised logging and monitoring -of all of your services including IOCs. 
- - -In this tutorial we will create a new beamline in the Kubernetes cluster. -Here we assume that the cluster is already setup and that there is -a namespace configured for use by the beamline. See the previous tutorial -for how to set one up if you do not have this already. - -.. note:: - - DLS users: these instructions are for the BL46P beamline. This beamline - already exists at DLS, so you could just skip ahead to creating the - example IOC. You will need to ask the cloud team for permission on - cluster ``pollux``, namespace ``p46-iocs`` to do this. - Go to this URL to request access: - https://jira.diamond.ac.uk/servicedesk/customer/portal/2/create/92 - - HOWEVER, these instructions can be also used to setup any - new beamline at DLS - just substitute the beamline name where appropriate. - You will need to have a beamline cluster already created for the - beamline by the cloud team and have requested access via the URL above. - -Create a new beamline repository --------------------------------- - -To create a new beamline repository, use the template repository at -https://github.com/epics-containers/blxxi-template. Click on the green -"Use this template" button to create a new repository. Name the repository -bl46p (or choose your own name and remember to substitute it in the rest of -this tutorial). Create this repository in your own GitHub account. - -.. note:: - - DLS users: if this is real beamline then it needs to be - created in our internal GitLab registry at - https://gitlab.diamond.ac.uk/controls/containers/beamline. - For this purpose use the template description for `bl38p - `_. - - For test DLS beamlines these should still be created in github - as per the below instructions. - -Clone the new repository to your local machine and change directory into it. - -.. code-block:: bash - - git clone https://github.com/YOUR_GITHUB_ACCOUNT/bl46p.git - cd bl46p - -Next make some changes to the repository to customise it for your beamline. 
-Cut and paste the following script to do so. - -.. code-block:: bash - - BEAMLINE=bl46p - - # update the readme - echo "Beamline repo for the beamline $BEAMLINE" > README.md - - # remove the sample IOC directory - rm -r iocs/blxxi-ea-ioc-01 - # change the services setup scripts to use the new beamline name - sed -i "s/blxxi/$BEAMLINE/g" services/* beamline-chart/values.yaml - -Cluster Topologies ------------------- - -There are two supported topologies for beamline clusters: - -- shared cluster with multiple beamlines' IOCs running in the same cluster -- dedicated cluster with a single beamline's IOCs running in the cluster - -If you are working with the single node k3s cluster set up in the previous -tutorial then this will be considered a dedicated cluster. - -If you are creating a real DLS beamline or accelerator domain then this will -also be a dedicated cluster. You will need to make sure the cloud team has -created the cluster for the beamline and you have permissions to use it. - -If you are working with one of the test beamlines at DLS then these are usually -shared topology and are set up as nodes on the Pollux cluster. - -Other facilities are free to choose the topology that best suits their needs. - -Shared Clusters -~~~~~~~~~~~~~~~ - -In the shared cluster topology we would usually want IOCs to run on the -servers that are closest to the beamline. This is important for Channel Access -because it is a broadcast protocol and by default only works on a single -subnet. - -To facilitate this we use ``node affinity rules`` to ensure that IOCs -run on the beamline's specific nodes. ``Node affinity`` can look for a ``label`` -on the node to say that it belongs to a beamline. -We can also use ``taints`` to stop other pods from -running on our beamline nodes. A ``taint`` will stop pods from being scheduled -on a node unless the pod has a matching toleration. - -For example the test beamline p46 at DLS has the following ``taints`` and -``labels``: - -.. 
code-block:: - - Labels: beamline=bl46p - nodetype=test-rig - - Taints: beamline=bl46p:NoSchedule - nodetype=test-rig:NoSchedule - -If you are working with your facility cluster then, you are may not to -have permission to set up these labels and taints. In this case, your -administrator will need to do this for you. At DLS, you should expect that -this is already set up for you. - -For an explanation of these K8S concepts see - -- `Taints and Tolerances `_ -- `Node Affinity `_ - -Dedicated Clusters -~~~~~~~~~~~~~~~~~~ - -In the dedicated cluster topology we would usually want to let the IOCs -run on all of the worker nodes in the cluster. In this case the only thing -that is required is a namespace in which to run your IOCs. - -By convention we use a namespace like ``bl46p-iocs`` for this purpose. This -namespace will need the appropriate permissions to allow the IOCs to run -with network host. - -Environment Setup ------------------ - -Every beamline repository has an ``environment.sh`` file used to configure -your shell so that the command line tools know which cluster to talk to. -Up to this point we have been using the local docker or podman instance, -but here we will configure it to use the beamline cluster. - -For the detail of what goes into ``environment.sh`` see -`../reference/environment`. - -Now edit ``environment.sh`` make changes as follows: - -Section 1 -~~~~~~~~~ - -Change this section to set the following variables: - -.. code-block:: bash - - export EC_REGISTRY_MAPPING='github.com=ghcr.io' - export EC_K8S_NAMESPACE=p46-iocs - export EC_SERVICES_REPO=git@github.com:YOUR_GITHUB_ACCOUNT/bl46p.git - -This tells the ``ec`` command line tool to use the GitHub container registry -when it sees github projects, the name of the Kubernetes namespace to use and -the location of the beamline repository. - -Section 2 -~~~~~~~~~ - -The script should also make sure that ``ec`` CLI is available and it is also -useful to set up command line completion up. 
The simplest way to do this is: - -.. code-block:: bash - - set -e # exit on error - source <(ec --show-completion ${SHELL}) - -For a review of how to set up the epics-containers-cli tool ``ec`` see -`python_setup` and `ec`. - -Section 3 -~~~~~~~~~ - -This is where you make sure the cluster is contactable. For the k3s cluster -we set up the default ``~/.kube/config`` file to point to the local cluster. -So we can leave this section blank. - -At DLS you would need to load a module to set up the environment for the -beamline cluster. For example: - -.. code-block:: bash - - module load pollux # for all test beamlines - module load k8s-i22 # for the real beamline i22 - -Once ``environment.sh`` is set up, source it to set up your shell. - -.. code-block:: bash - - source environment.sh - -You are now ready to start talking to the cluster. You can verify this with -the following command that should list all the nodes on the cluster. You -will be asked for your credentials if required. - -.. code-block:: bash - - kubectl get nodes - -Setting up the Beamline Helm Chart Defaults -------------------------------------------- - -The beamline helm chart is used to deploy IOCs to the cluster. Each IOC instance -gets to override any of the settings available in the chart. This is done -in ``iocs//values.yaml`` for each IOC instance. However, all -settings except ``image`` have default values supplied at the beamline level. -For this reason most IOC instances only need supply the ``image`` setting -which specifies the Generic IOC container image to use. - -Before making the first IOC instance we need to set up the beamline defaults. -These are all held in the file ``beamline-chart/values.yaml``. - -Open this file and make the following changes depending on your beamline -type. - -All cluster types -~~~~~~~~~~~~~~~~~ - -.. 
code-block:: yaml - - beamline: bl46p - namespace: p46-iocs - hostNetwork: true # required for channel access access on the host - - opisClaim: bl46p-opi-claim - runtimeClaim: bl46p-runtime-claim - autosaveClaim: bl46p-autosave-claim - -k3s single server cluster -~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. code-block:: yaml - - dataVolume: - pvc: true - # point at a PVC created by kubernetes - hostPath: /data/ - -DLS test beamlines -~~~~~~~~~~~~~~~~~~ - -.. code-block:: yaml - - dataVolume: - pvc: true - # point at local disk on the server - hostPath: /exports/mybeamline - - # extra tolerations for the training rigs - tolerations: - - key: nodetype - operator: "Equal" - value: training-rig - effect: "NoSchedule" - -DLS real beamlines -~~~~~~~~~~~~~~~~~~ - -.. code-block:: yaml - - dataVolume: - pvc: true - # point at the shared filesystem data folder for the beamline - hostPath: /dls/p46/data - -Set Up The One Time Only Beamline Resources -------------------------------------------- - -There are two scripts in the ``services`` directory that set up some initial -resources. You should run each of these in order: - -- ``services/install-pvcs.sh``: this sets up some persistent volume claims for - the beamline. PVCS are Kubernetes managed chunks of storage that can be - shared between pods if required. The 3 PVCS created here relate to the - ``Claim`` entries in the ``beamline-chart/values.yaml`` file. These are - places to store: - - autosave files - - runtime generated startup scripts and EPICS database files - - OPI screens (usually auto generated) -- ``services/install-opi.sh``: this sets up an nginx web server for the - beamline. It serves the OPI screens from the ``opisClaim`` PVC. Each IOC - instance will place its OPI screens in a subdirectory of this PVC. - OPI clients like phoebus can then retrieve these files via HTTP. - -Create a Test IOC to Deploy ---------------------------- - -TODO: WIP (but this looks just like it did in the first IOC deployment tutorial) - -.. 
note:: - - At DLS you can get to a Kubernetes Dashboard for your beamline via - a landing page ``https://pollux.diamond.ac.uk`` for test beamlines on - ``Pollux`` - remember to select the namespace ``p46-iocs`` for example. - - For real beamlines dedicated clusters, you can find the landing page for example: - ``https://k8s-i22.diamond.ac.uk/`` for BL22I. - ``https://k8s-b01-1.diamond.ac.uk/`` for the 2nd branch of BL01B. diff --git a/docs/tutorials/setup_workstation.rst b/docs/tutorials/setup_workstation.md similarity index 56% rename from docs/tutorials/setup_workstation.rst rename to docs/tutorials/setup_workstation.md index c91a9224..08859b8c 100644 --- a/docs/tutorials/setup_workstation.rst +++ b/docs/tutorials/setup_workstation.md @@ -1,5 +1,4 @@ -Set up a Developer Workstation -============================== +# Set up a Developer Workstation This page will guide you through the steps to set up a developer workstation in readiness for the remaining tutorials. @@ -14,43 +13,37 @@ Visual Studio Code is recommended because it has excellent integration with devcontainers. It also has useful extensions for working with Kubernetes, EPICS, WSL2 and more. -Options -------- +## Options You are not required to use VSCode to develop with epics-containers. If you have your own preferred code editor you can use that. See these how-to pages for more information: -- `own_editor` +- {any}`own-editor` -Platform Support ----------------- +## Platform Support epics-containers can use Linux, Windows or MacOS as the host operating system for the developer workstation. If you are using Windows then you must first install WSL2 and then work within the Linux subsystem. see -`WSL2 installation instructions`_. +[WSL2 installation instructions]. Ubuntu is recommended as the Linux distribution for WSL2. -.. 
_WSL2 installation instructions: https://docs.microsoft.com/en-us/windows/wsl/install-win10 +## Installation Steps -Installation Steps ------------------- +### Setup VSCode -Setup VSCode -~~~~~~~~~~~~ - -.. Note:: - - **DLS Users**: You can access VSCode with ``module load vscode``. +:::{Note} +**DLS Users**: You can access VSCode with `module load vscode`. +::: First download and install Visual Studio Code. -- `Download Visual Studio Code`_ -- `Setup Visual Studio Code`_ +- [Download Visual Studio Code] +- [Setup Visual Studio Code] VSCode has a huge library of extensions. The following list of extensions are useful for working with epics-containers. You will need to install the *Required* @@ -60,33 +53,23 @@ on how to do this. The recommended extensions will be installed for you when you launch the devcontainer in the next tutorial. -- Required: `Remote Development`_ -- Required for Windows: `VSCode WSL2`_ (see `How to use WSL2 and Visual Studio Code`_) -- Recommended: `VSCode EPICS`_ -- Recommended: `Kubernetes`_ - -.. _VSCode WSL2: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl -.. _How to use WSL2 and Visual Studio Code: https://code.visualstudio.com/blogs/2019/09/03/wsl2 -.. _Kubernetes: https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools -.. _VSCode EPICS: https://marketplace.visualstudio.com/items?itemName=nsd.vscode-epics -.. _Remote Development: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack -.. _Setup Visual Studio Code: https://code.visualstudio.com/learn/get-started/basics -.. _Download Visual Studio Code: https://code.visualstudio.com/download +- Required: [Remote Development] +- Required for Windows: [VSCode WSL2] (see [How to use WSL2 and Visual Studio Code]) +- Recommended: [VSCode EPICS] +- Recommended: [Kubernetes] +### Setup Docker or Podman -Setup Docker or Podman -~~~~~~~~~~~~~~~~~~~~~~ - -.. 
Note::
-
-   **DLS Users**: RHEL 8 Workstations at DLS have podman 4.4.1 installed by default.
-   RHEL 7 Workstations are not supported.
+:::{Note}
+**DLS Users**: RHEL 8 Workstations at DLS have podman 4.4.1 installed by default.
+RHEL 7 Workstations are not supported.
+:::

Next install docker or podman as your container platform. epics-containers
has been tested with podman 4.4.1 on RedHat 8, and Docker 24.0.5 on
Ubuntu 22.04.

-If you are using docker, simply replace ``podman`` with ``docker`` in the
+If you are using docker, simply replace `podman` with `docker` in the
commands listed in these tutorials.

The podman version required is 4.0 or later. Any version of docker since 20.10
@@ -98,83 +81,75 @@ which epics-containers has had the most testing.

The links below have details of how to install your choice of container
platform:

-- `Install docker`_
-- `Install podman`_
+- [Install docker]
+- [Install podman]

The docker install page encourages you to install Docker Desktop. This is a
paid-for product and is not required for this tutorial. You can install the
free Linux CLI tools by clicking on the appropriate Linux distribution link.

-.. _Install docker: https://docs.docker.com/engine/install/
-.. _Install podman: https://podman.io/getting-started/installation
-
-.. _python_setup:
+(python-setup)=

-Install Python
-~~~~~~~~~~~~~~
+### Install Python

-.. Note::
-
-   **DLS Users**: use ``module load python/3.11``
+:::{Note}
+**DLS Users**: use `module load python/3.11`
+:::

Go ahead and install Python 3.10 or later. 3.11 is recommended as this is the
highest version that epics-containers has been tested with.

There are instructions for installing Python on all platforms here:
-https://docs.python-guide.org/starting/installation/
-
+<https://docs.python-guide.org/starting/installation/>

Once you have Python, set up a virtual environment for your epics-containers
work.
In the examples we will use `$HOME/ec-venv` as the virtual environment
but you can choose any folder.

-.. code-block:: bash
-
-    python -m venv $HOME/ec-venv
-    source $HOME/ec-venv/bin/activate
-    python -m pip install --upgrade pip
+```bash
+python -m venv $HOME/ec-venv
+source $HOME/ec-venv/bin/activate
+python -m pip install --upgrade pip
+```

Note that each time you open a new shell you will need to activate the
virtual environment again. (Or place its bin folder in your path permanently).

+(ec)=

-.. _ec:
-
-epics-containers-cli
-~~~~~~~~~~~~~~~~~~~~
+### edge-containers-cli

Above we set up a Python virtual environment. Now we will install
-the epics-containers-cli python tool into that environment.
-
-.. code-block:: bash
+the {any}`edge-containers-cli` python tool into that environment.

-    pip install epics-containers-cli
+```bash
+pip install edge-containers-cli
+```

This is the developer's 'outside of the container' helper tool. The command
line entry point is `ec`. We will be using many `ec` command line
functions in the next tutorial.

-See `CLI` for more details.
-
-.. note::
+See {any}`CLI` for more details.

-   DLS Users: ``ec`` is already installed for you on ``dls_sw`` just do the
-   following to make sure it is always available:
+:::{note}
+DLS Users: `ec` is already installed for you on `dls_sw`; just do the
+following to make sure it is always available:

-   .. code:: bash
+```bash
+# use the ec version from dls_sw/work/python3
+mkdir -p $HOME/.local/bin
+ln -fs /dls_sw/work/python3/ec-venv/bin/ec $HOME/.local/bin/ec
+```
+:::

-   # use the ec version from dls_sw/work/python3
-   mkdir -p $HOME/.local/bin
-   ln -fs /dls_sw/work/python3/ec-venv/bin/ec $HOME/.local/bin/ec

+## Git

-Git
----

If you don't already have git installed see
-https://git-scm.com/book/en/v2/Getting-Started-Installing-Git. Any recent
+<https://git-scm.com/book/en/v2/Getting-Started-Installing-Git>. Any recent
version of git will work.
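With the container platform, Python environment and `ec` tool in place, a quick sanity check from a fresh shell confirms everything is on the path. This is a suggested check, not part of the official setup; substitute `docker` for `podman` if that is your platform:

```bash
# confirm each tool is installed and on the PATH
git --version
python3 --version
command -v podman || command -v docker || echo "no container platform found"
command -v ec || echo "ec not found - activate your virtual environment first"
```

If any line prints a "not found" message, revisit the corresponding installation step above.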
-Kubernetes
-~~~~~~~~~~
+### Kubernetes

You don't need Kubernetes yet.

@@ -186,17 +161,27 @@ will deploy containers to the local workstation's docker or podman instance.

However, everything in these tutorials would also work with Kubernetes. If
you are particularly interested in Kubernetes then you can jump to
-`setup_kubernetes` and follow the instructions there. Then come back to this
+{any}`setup-kubernetes` and follow the instructions there. Then come back to this
point and continue with the tutorials. If you do this, just be aware that
-we use the beamline name ``bl01t`` for local deployment examples and
-``bl46p`` for Kubernetes examples so you will need to substitute the
+we use the beamline name `bl01t` for local deployment examples and
+`bl46p` for Kubernetes examples so you will need to substitute the
appropriate beamline name for your environment. All the local deployment
examples should also deploy to a Kubernetes cluster.

If you are planning not to use Kubernetes at all then now might be a good
time to install an alternative container management platform such
-as `Portainer `_. Such tools will help you
+as [Portainer](https://www.portainer.io/). Such tools will help you
visualise and manage your local containers. They are not required and you
could just manage everything from the epics-containers command line interface
if you prefer.
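Because the remaining tutorials use `bl01t` in local examples and `bl46p` in Kubernetes examples, one low-tech way to avoid repeated editing is to keep your choice in a shell variable and paste commands unchanged. The variable name here is purely illustrative; nothing in epics-containers reads it:

```bash
# record the beamline name for your environment once per shell session
BEAMLINE=bl01t                # use bl46p if you followed the Kubernetes setup
echo "using beamline ${BEAMLINE}"   # -> using beamline bl01t
```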
+[download visual studio code]: https://code.visualstudio.com/download
+[how to use wsl2 and visual studio code]: https://code.visualstudio.com/blogs/2019/09/03/wsl2
+[install docker]: https://docs.docker.com/engine/install/
+[install podman]: https://podman.io/getting-started/installation
+[kubernetes]: https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools
+[remote development]: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack
+[setup visual studio code]: https://code.visualstudio.com/learn/get-started/basics
+[vscode epics]: https://marketplace.visualstudio.com/items?itemName=nsd.vscode-epics
+[vscode wsl2]: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl
+[wsl2 installation instructions]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
diff --git a/docs/tutorials/support_module.rst b/docs/tutorials/support_module.md
similarity index 67%
rename from docs/tutorials/support_module.rst
rename to docs/tutorials/support_module.md
index fc053dfc..ee3740a5 100644
--- a/docs/tutorials/support_module.rst
+++ b/docs/tutorials/support_module.md
@@ -1,9 +1,8 @@
-Working with Support Modules
-=============================
+# Working with Support Modules

-.. Warning::
-
-   This tutorial is out of date and will be updated soon.
+:::{Warning}
+This tutorial is out of date and will be updated soon.
+:::

TODO: this is currently a stub with some pointers.

TODO: suggest that we will make a new Stream Device that will be a simple
echo server. Use this to step through the process of creating a new support
module.

-This is a type 3. change from the list at `ioc_change_types`.
+This is a type 3 change from the list at {any}`ioc-change-types`.

If you are starting a new support module then the preceding tutorials
have covered all of the skills you will need.
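The echo server mentioned in the TODO above does not need any EPICS code to prototype: a plain TCP echo service is enough for a StreamDevice to talk to. A minimal sketch, assuming `socat` is installed (the port number 9999 is arbitrary):

```bash
# skip gracefully on machines without socat
command -v socat >/dev/null || { echo "socat not installed"; exit 0; }

# start a trivial TCP "device" that echoes every line back, in the background
socat TCP-LISTEN:9999,reuseaddr,fork EXEC:/bin/cat &
ECHO_PID=$!
sleep 1

# talk to it the way a StreamDevice protocol would: send a line, read the reply
printf 'hello device\n' | socat -t 1 - TCP:127.0.0.1:9999

# tidy up
kill $ECHO_PID
```

Pointing a StreamDevice asyn port at `localhost:9999` then lets you exercise the full support-module workflow against a device you fully control.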
To work on a new support module you will need a Generic IOC project to
work inside. You could choose to create two new projects:

-:MyNewDeviceSupport:
-
-    a traditional EPICS Support module,
+```{eval-rst}
:ioc-MyNewDeviceSupport:
@@ -29,4 +26,7 @@ work inside. You could choose to create two new projects:

Once you have created the project(s), working on the support module will
look very similar to the procedures set out here `debug_generic_ioc`
+```

+Once you have created the project(s), working on the support module will
+look very similar to the procedures set out here {any}`debug_generic_ioc`
diff --git a/docs/tutorials/test_generic_ioc.md b/docs/tutorials/test_generic_ioc.md
new file mode 100644
index 00000000..bfe3a731
--- /dev/null
+++ b/docs/tutorials/test_generic_ioc.md
@@ -0,0 +1,160 @@
+# Testing and Deploying a Generic IOC
+
+:::{Warning}
+This tutorial is out of date and will be updated soon.
+:::
+
+## Continuous Integration
+
+An important feature of epics-containers is CI. The ioc-template that we
+based ioc-adurl upon has built-in CI for GitHub (and DLS internal GitLab).
+
+The first thing we will do is get the project pushed up to GitHub and
+verify that the CI is working.
+
+Before pushing the project we must push our changes to the submodule ibek-defs,
+then we can push the main project to GitHub too:
+
+```bash
+cd ibek-defs
+git add .
+git commit -m "Added ADURL"
+git push
+cd ..
+git add .
+git commit -m "update the template to ADUrl"
+git push
+```
+
+If you now go to the GitHub page for the project and click on `Actions`,
+then on the latest build, you should see something like this:
+
+:::{figure} ../images/github_actions2.png
+GitHub Actions for ioc-adurl
+:::
+
+This build should succeed.
+
+## Publishing the Generic IOC Container to GHCR
+
+Now give your Generic IOC a version tag and push it to GitHub.
+
+```bash
+git tag 23.4.1
+git push origin 23.4.1
+```
+
+You will see the CI rebuild the container and push it to the GitHub container
+registry at `ghcr.io/<your-organization>/ioc-adurl:<tag>`.
+
+This time the build will take 2 minutes. Last time it took 10 minutes
+(because we were building areaDetector and graphicsMagick). The speed-up
+is because we keep a build cache in GitHub Actions.
+
+Once the build is complete, if you go to your project page on GitHub
+at `github.com/<your-organization>/ioc-adurl` you should now see a
+`Packages` tab. Click on the developer package and you should see
+something like this:
+
+:::{figure} ../images/ghcr.png
+The ioc-adurl container on GHCR
+:::
+
+This means that the container is now available for anyone to use with a
+`podman/docker pull` command as described in the `Installation` tab.
+
+## Making the Tests Relevant to our Generic IOC
+
+You may notice that some tests were run as part of the CI. These generic
+tests are used by the ioc-template and are defined in the `tests` directory.
+We need to update them to be relevant to our new Generic IOC.
+
+TODO: I just noticed that the tests do not use the build cache (but also
+they are not required to push the built container so don't hold up
+development).
+
+In the `ioc/config` folder we have some default config that is used by the
+Generic IOC if no config is provided. We can use this for testing and need
+to update the tests to use it too.
+
+To do this, remove the file `ioc/config/ioc.db` and replace the contents
+of `ioc/config/st.cmd` with:
+
+```
+cd "$(TOP)"
+
+dbLoadDatabase "dbd/ioc.dbd"
+ioc_registerRecordDeviceDriver(pdbbase)
+
+URLDriverConfig("EXAMPLE.CAM", 0, 0)
+
+# NDPvaConfigure(portName, queueSize, blockingCallbacks, NDArrayPort, NDArrayAddr, pvName, maxMemory, priority, stackSize)
+NDPvaConfigure("EXAMPLE.PVA", 2, 0, "EXAMPLE.CAM", 0, "EXAMPLE:IMAGE", 0, 0, 0)
+startPVAServer
+
+# instantiate Database records for Url Detector
+dbLoadRecords("URLDriver.template","P=EXAMPLE, R=:CAM:, PORT=EXAMPLE.CAM, TIMEOUT=1, ADDR=0")
+dbLoadRecords("NDPva.template", "P=EXAMPLE, R=:PVA:, PORT=EXAMPLE.PVA, ADDR=0, TIMEOUT=1, NDARRAY_PORT=EXAMPLE.CAM, NDARRAY_ADR=0, ENABLED=1")
+
+# start IOC shell
+iocInit
+
+# poke some records
+dbpf "EXAMPLE:CAM:AcquirePeriod", "0.1"
+```
+
+Next, remove the folders `tests/example-config` and `tests/example-ibek-config`.
+
+Then edit the `tests/run_tests.sh` file. Remove the test blocks titled
+`Test an ibek IOC` and `Test a hand coded st.cmd IOC`, leaving just the
+block called `Test the default example IOC`. Finally, edit the block to
+look like this:
+
+```bash
+...
+fi
+podman run ${ioc_args}
+check_pv 'EXAMPLE:CAM:AcquirePeriod' '0.1'
+```
+
+Now try out the test with the following command:
+
+```bash
+./tests/run_tests.sh
+```
+
+We have made a very simple test that only checks one PV value, but that is
+good enough to validate that the IOC is running and that the config is
+being loaded. You can add more sophisticated tests as needed to your
+own Generic IOCs.
+
+If you had any issues with getting this tutorial working, you can get a
+fully working version of the ioc-adurl project from the following link:
+
+> <https://github.com/epics-containers/ioc-adurl>
+
+## Try out some GUI
+
+Now let us verify that this is really working beyond serving a single PV. For this purpose I have made some edm screens to try out. Using these screens you could attach ADUrl to your own video stream.
A still image example is supplied as well. Unfortunately ADUrl does not support HTTPS so there are no public feeds we could use to demo this.
+
+After running the tests in the previous section you should have a running
+container still active. You can see this using `podman ps`. You should
+see that `ioc-template-test-container` is still running. You can start it
+again with `tests/run_tests.sh` if it is not.
+
+Now get the edm screens and launch them as follows:
+
+```bash
+cd /tmp
+git clone git@github.com:epics-containers/ioc-adurl.git
+cd ioc-adurl
+opi/example.sh
+```
+
+You should see the C2DataViewer. Click on the auto button and you should see:
+
+:::{figure} ../images/millie.png
+Millie the Labradoodle
+:::
+
+To work out how Millie got into the viewer, take a look at example.sh.
diff --git a/docs/tutorials/test_generic_ioc.rst b/docs/tutorials/test_generic_ioc.rst
deleted file mode 100644
index 8454614b..00000000
--- a/docs/tutorials/test_generic_ioc.rst
+++ /dev/null
@@ -1,170 +0,0 @@
-Testing and Deploying a Generic IOC
-===================================
-
-.. Warning::
-
-   This tutorial is out of date and will be updated soon.
-
-Continuous Integration
-----------------------
-
-An important feature of epics-containers is CI. The ioc-template that we
-based ioc-adurl upon has built-in CI for GitHub (and DLS internal GitLab).
-
-The first thing we will do is get the project pushed up to GitHub and
-verify that the CI is working.
-
-Before pushing the project we must push our changes to the submodule ibek-defs,
-then we can push the main project to GitHub too:
-
-.. code-block:: bash
-
-   cd ibek-defs
-   git add .
-   git commit -m "Added ADURL"
-   git push
-   cd ..
-   git add .
-   git commit -m "update the template to ADUrl"
-   git push
-
-If you now go up to the GitHub page for the project and click on ``Actions``
-and click on the latest build you should see something like this:
-
-..
figure:: ../images/github_actions2.png - - GitHub Actions for ioc-adurl - -This build should succeed. - -Publishing the Generic IOC Container to GHCR --------------------------------------------- - -Now give your GenericIOC a version tag and push it to GitHub. - -.. code-block:: bash - - git tag 23.4.1 - git push origin 23.4.1 - -You will see the CI rebuild the container and push it to the GitHub container -registry at github.io//ioc-adurl:. - -This time the build will take 2 minutes. Last time it took 10 minutes -(because we were building areaDetector and graphicsMagick). The speed up -is because we keep a build cache in GitHub actions. - -Once the build is complete, if you go to your project page on GitHub -https://github.com//ioc-adurl you should now see a -``Packages`` tab. Click on the developer package and and you should see -something like this: - -.. figure:: ../images/ghcr.png - - The ioc-adurl container on GHCR - -This means that the container is now available for anyone to use with a -``podman/docker pull`` command as described in the ``Installation`` tab. - - -Making the Tests Relevant to our Generic IOC --------------------------------------------- - -You may notice that some tests were run as part of the CI. These Generic -tests are used by the ioc-template and are defined in the ``tests`` directory. -We need to update them to be relevant to our new Generic IOC. - -TODO: I just noticed that the tests do not use the build cache (but also -they are not required to push the built container so don't hold up -development). - -In the ``ioc/config`` folder we have some default config that is used by the -Generic IOC if no config is provided. We can use this for testing and need -to update the tests to use it too. - -To do this remove the file ``ioc/config/ioc.db`` and replace the contents -of ``ioc/config/st.cmd`` with: - -.. 
code-block:: - - cd "$(TOP)" - - dbLoadDatabase "dbd/ioc.dbd" - ioc_registerRecordDeviceDriver(pdbbase) - - URLDriverConfig("EXAMPLE.CAM", 0, 0) - - # NDPvaConfigure(portName, queueSize, blockingCallbacks, NDArrayPort, NDArrayAddr, pvName, maxMemory, priority, stackSize) - NDPvaConfigure("EXAMPLE.PVA", 2, 0, "EXAMPLE.CAM", 0, "EXAMPLE:IMAGE", 0, 0, 0) - startPVAServer - - # instantiate Database records for Url Detector - dbLoadRecords("URLDriver.template","P=EXAMPLE, R=:CAM:, PORT=EXAMPLE.CAM, TIMEOUT=1, ADDR=0") - dbLoadRecords("NDPva.template", "P=EXAMPLE, R=:PVA:, PORT=EXAMPLE.PVA, ADDR=0, TIMEOUT=1, NDARRAY_PORT=EXAMPLE.CAM, NDARRAY_ADR=0, ENABLED=1") - - # start IOC shell - iocInit - - # poke some records - dbpf "EXAMPLE:CAM:AcquirePeriod", "0.1" - -Next, remove the folders ``tests/example-config`` and ``tests/example-ibek-config``. - -Then edit the ``tests/run_tests.sh`` file. Remove the test blocks titled -``Test an ibek IOC`` and ``Test a hand coded st.cmd IOC`` leaving just the -block called ``Test the default example IOC``. Finally edit the block to -look like this: - -.. code-block:: bash - - - ... - fi - podman run ${ioc_args} - check_pv 'EXAMPLE:CAM:AcquirePeriod' '0.1' - -Now try out the test with the following command: - -.. code-block:: bash - - ./tests/run_tests.sh - -We have made a very simple test that only checks one PV value, but that is -good enough to validate that the IOC is running and that the config is -being loaded. You can add more sophisticated tests as needed to your -own Generic IOCs. - -If you had any issues with getting this tutorial working, you can get a -fully working version of the ioc-adurl project from the following link: - - https://github.com/epics-containers/ioc-adurl - -Try out some GUI ----------------- - -Now let us verify that this is really working other than serving a single PV. For this purpose I have made some edm screens to try out. Using these screens you could attach ADUrl to your own video stream. 
A still image example is supplied as well. Unfortunately ADUrl dies not support HTTPS so there are no public feeds we could use to demo this. - -After running the tests in the previous section you should have a running -container still active. You can see this using ``podman ps``. You should -see that ``ioc-template-test-container`` is still running. You can start it -again with ``tests/run_tests.sh`` if it is not. - -Now get the edm screens and launch them as follows. - -.. code-block:: bash - - cd /tmp - git clone git@github.com:epics-containers/ioc-adurl.git - cd ioc-adurl - opi/example.sh - -You should see the C2DataViewer. Click on auto button and you should see: - -.. figure:: ../images/millie.png - - Millie the Labradoodle - -To work out how Millie got into the viewer, take a look at example.sh. - - -