Documentation fixes and improvements #466

Merged
25 commits merged on Mar 4, 2022

Commits (25)
0a4012f
Grafana instructions for adding influxdb datasource
ukkopahis Dec 12, 2021
f897bf1
Pi-hole: docs to setup DNS for esphome devices
ukkopahis Dec 16, 2021
c06c08a
Fix docs on how to update containers
ukkopahis Dec 17, 2021
9e97ee3
docs: fix syntax and cleanup
ukkopahis Dec 11, 2021
2459ef9
docs: move developer documentation to subfolder
ukkopahis Jan 15, 2022
067995b
docs: add dark and light theme
Willem-Dekker Jul 12, 2020
a81573f
docs: fix unsupported absolute links
ukkopahis Jan 15, 2022
1eacd40
docs: Add how to write documentation
ukkopahis Jan 16, 2022
6be71a5
docs: Add top navigation tabs
ukkopahis Jan 25, 2022
1fc5105
docs: autogenerate heading link anchors
ukkopahis Jan 25, 2022
d38a122
docs: keep top tabs always visible and hide footer
ukkopahis Jan 25, 2022
b05029c
homeassistant: add docs for https reverse proxy setup
ukkopahis Jan 20, 2022
118648d
docs: fix to reflect network change
ukkopahis Jan 29, 2022
0d9b982
Wireguard: better document how PEERDNS works with host resolv.conf
ukkopahis Jan 29, 2022
4f52cf0
docs: fix container menu order
ukkopahis Jan 30, 2022
c614c20
influxdb: document basic usage
ukkopahis Feb 2, 2022
383d213
Merge remote-tracking branch 'upstream/master' into HEAD
ukkopahis Feb 24, 2022
a15ae1f
Pi-hole: improve docs
Paraphraser Feb 18, 2022
6e499db
Octoprint: change doc to use shorter menu title
ukkopahis Feb 24, 2022
40d17ec
docs: fix edit_uri
ukkopahis Feb 24, 2022
519aaee
docs: define mkdocs dependencies in requirements-mkdocs.txt
ukkopahis Feb 24, 2022
179c633
docs: add "stack" logo and favicon
ukkopahis Feb 24, 2022
fd0340c
docs: improve Wiki home page friendliness
ukkopahis Feb 23, 2022
3f9bcea
docs: move Updates/ from subfolder to top-level tab
ukkopahis Feb 24, 2022
4d69183
docs: improve "Getting Started"
ukkopahis Feb 24, 2022
docs: autogenerate heading link anchors
docs: autogenerate heading link anchors
Remove custom anchor links and generate them automatically using
the toc markdown extension. Links updated to match new anchors.

This fixes the custom links coloring the heading blue,
which isn't ideal if the user selects the dark theme.

Heading changes done by:
cd docs && sed -i -r 's/(#*).*> (.*?) <\/a>/\1 \2/g' *md */*md */*/*md
ukkopahis committed Jan 28, 2022
commit 1fc5105aee1fa7dc435c51f6f9436147e0d08441
10 changes: 5 additions & 5 deletions docs/Basic_setup/Accessing-your-Device-from-the-internet.md
@@ -6,14 +6,14 @@ From time to time the IP address that your ISP assigns changes and it's difficul

Secondly, how do you get into your home network? Your router has a firewall that is designed to keep the rest of the internet out of your network to protect you. The solution to that is a Virtual Private Network (VPN) or "tunnel".

## <a name="dynamicDNS"> Dynamic DNS </a>
## Dynamic DNS

There are two parts to a Dynamic DNS service:

1. You have to register with a Dynamic DNS service provider and obtain a domain name that is not already taken by someone else.
2. Something on your side of the network needs to propagate updates so that your chosen domain name remains in sync with your router's dynamically-allocated public IP address.

### <a name="registerDDNS"> Register with a Dynamic DNS service provider </a>
### Register with a Dynamic DNS service provider

The first part is fairly simple and there are quite a few Dynamic DNS service providers including:

@@ -24,7 +24,7 @@ The first part is fairly simple and there are quite a few Dynamic DNS service pr
Some router vendors also provide their own built-in Dynamic DNS capabilities for registered customers so it's a good idea to check your router's capabilities before you plough ahead.

### <a name="propagateDDNS"> Dynamic DNS propagation </a>
### Dynamic DNS propagation

The "something" on your side of the network propagating WAN IP address changes can be either:

@@ -39,7 +39,7 @@ A behind-the-router technique usually relies on sending updates according to a s

> This seems to be a problem for DuckDNS which takes a beating because almost every person using it is sending an update bang-on five minutes.
### <a name="duckDNSclient"> DuckDNS client </a>
### DuckDNS client

IOTstack provides a solution for DuckDNS. The best approach to running it is:

@@ -99,7 +99,7 @@ A null result indicates failure so check your work.

Remember, the Domain Name System is a *distributed* database. It takes *time* for changes to propagate. The response you get from directing a query to ns1.duckdns.org may not be the same as the response you get from any other DNS server. You often have to wait until cached records expire and a recursive query reaches the authoritative DuckDNS name-servers.
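
A hedged example of such a check, querying the DuckDNS name-server directly with `dig` (substitute your own subdomain; the address shown is only a placeholder):

```
$ dig +short «yourdomain».duckdns.org @ns1.duckdns.org
203.0.113.10
```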

#### <a name="duckDNSauto"> Running the DuckDNS client automatically </a>
#### Running the DuckDNS client automatically

The recommended arrangement for keeping your Dynamic DNS service up-to-date is to invoke `duck.sh` from `cron` at five minute intervals.
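
As a rough illustration, a `crontab -e` entry of this shape fires every five minutes (the path to `duck.sh` is illustrative; use wherever you installed the script):

```
# run the DuckDNS updater every five minutes; adjust the path to wherever duck.sh lives
*/5 * * * * /home/pi/duckdns/duck.sh >/dev/null 2>&1
```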

42 changes: 21 additions & 21 deletions docs/Basic_setup/Updates/gcgarner-migration.md
@@ -6,9 +6,9 @@ Migrating to SensorsIot/IOTstack was fairly easy when this repository was first

The probability of conflicts developing increases as a function of time since the fork. Conflicts were and are pretty much inevitable so a more involved procedure is needed.

## <a name="migrationSteps"> Migration Steps </a>
## Migration Steps

### <a name="checkAssumptions"> Step 1 – Check your assumptions </a>
### Step 1 – Check your assumptions

Make sure that you are, *actually*, on gcgarner. Don't assume!

@@ -20,7 +20,7 @@ origin https://github.com/gcgarner/IOTstack.git (push)

Do not proceed if you don't see those URLs!
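
As a sketch, the check is the standard `git remote -v`, which should report the gcgarner URL for both fetch and push:

```
$ cd ~/IOTstack
$ git remote -v
origin  https://github.com/gcgarner/IOTstack.git (fetch)
origin  https://github.com/gcgarner/IOTstack.git (push)
```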

### <a name="downStack"> Step 2 – Take IOTstack down </a>
### Step 2 – Take IOTstack down

Take your stack down. This is not *strictly* necessary but we'll be moving the goalposts a bit so it's better to be on the safe side.

@@ -29,22 +29,22 @@ $ cd ~/IOTstack
$ docker-compose down
```

### <a name="chooseMigrationMethod"> Step 3 – Choose your migration method </a>
### Step 3 – Choose your migration method

There are two basic approaches to switching from gcgarner/IOTstack to SensorsIot/IOTstack:

- [Migration by changing upstream repository](#migrateChangeUpstream)
- [Migration by clone and merge](#migrateCloneMerge)
- [Migration by changing upstream repository](#migration-option-1-change-upstream-repository)
- [Migration by clone and merge](#migration-option-2-clone-and-merge)

You can think of the first as "working *with* git" while the second is "using brute force".

The first approach will work if you haven't tried any other migration steps and/or have not made too many changes to items in your gcgarner/IOTstack that are under git control.

If you are already stuck or you try the first approach and get a mess, or it all looks far too hard to sort out, then try the [Migration by clone and merge](#migrateCloneMerge) approach.
If you are already stuck or you try the first approach and get a mess, or it all looks far too hard to sort out, then try the [Migration by clone and merge](#migration-option-2-clone-and-merge) approach.

#### <a name="migrateChangeUpstream"> Migration Option 1 – change upstream repository </a>
#### Migration Option 1 – change upstream repository

##### <a name="checkLocalChanges"> Check for local changes </a>
##### Check for local changes

Make sure you are on the master branch (you probably are so this is just a precaution), and then see if Git thinks you have made any local changes:

@@ -93,15 +93,15 @@ The simplest way to deal with modified files is to rename them to move them out
menu.sh.jqh
```
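
As a minimal sketch of that pattern, list what Git considers modified and move each offender aside (the `.jqh` suffix matches the example output above; any suffix will do):

```
$ cd ~/IOTstack
$ git status --short          # lists files git thinks you have modified
$ mv menu.sh menu.sh.jqh      # example: rename a modified file out of git's way
```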

##### <a name="synchroniseGcgarner"> Synchronise with gcgarner on GitHub </a>
##### Synchronise with gcgarner on GitHub

Make sure your local copy of gcgarner is in sync with GitHub.

```
$ git pull
```

##### <a name="removeUpstream"> Get rid of any upstream reference </a>
##### Get rid of any upstream reference

There may or may not be any "upstream" set. The most likely reason for this to happen is if you used your local copy as the basis of a Pull Request.

@@ -111,15 +111,15 @@ The next command will probably return an error, which you should ignore. It's ju
$ git remote remove upstream
```

##### <a name="pointToSensorsIoT"> Point to SensorsIot </a>
##### Point to SensorsIot

Change your local repository to point to SensorsIot.

```
$ git remote set-url origin https://github.com/SensorsIot/IOTstack.git
```

##### <a name="syncSensorsIoT"> Synchronise with SensorsIot on GitHub </a>
##### Synchronise with SensorsIot on GitHub

This is where things can get a bit tricky so please read these instructions carefully **before** you proceed.

@@ -174,19 +174,19 @@ Auto-merging .templates/someRandomService/service.yml

If you don't use `someRandomService` then you could safely ignore this on the basis that it was "probably right". However, if you did use that service and it started to misbehave after migration, you would know that the `service.yml` file was a good place to start looking for explanations.

##### <a name="finishWithPull"> Finish with a pull </a>
##### Finish with a pull

At this point, only the migrated master branch is present on your local copy of the repository. The next command brings you fully in-sync with GitHub:

```
$ git pull
```

#### <a name="migrateCloneMerge"> Migration Option 2 – clone and merge </a>
#### Migration Option 2 – clone and merge

If you have been following the process correctly, your IOTstack will already be down.

##### <a name="renameOldIOTstack"> Rename your existing IOTstack folder </a>
##### Rename your existing IOTstack folder

Move your old IOTstack folder out of the way, like this:
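
The exact command is a simple rename; for example (the `.old` suffix is illustrative, any name that keeps the folder out of the way will do):

```
$ cd ~
$ mv IOTstack IOTstack.old
```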

@@ -199,7 +199,7 @@ Note:

* You should not need `sudo` for the `mv` command but it is OK to use it if necessary.

##### <a name="fetchCleanClone"> Fetch a clean clone of SensorsIot/IOTstack </a>
##### Fetch a clean clone of SensorsIot/IOTstack

```
$ git clone https://github.com/SensorsIot/IOTstack.git ~/IOTstack
@@ -240,7 +240,7 @@ Observe what is **not** there:

From this, it should be self-evident that a clean checkout from GitHub is the factory for *all* IOTstack installations, while the contents of `backups`, `services`, `volumes` and `docker-compose.yml` represent each user's individual choices, configuration options and data.

##### <a name="mergeOldWithNew"> Merge old into new </a>
##### Merge old into new

Execute the following commands:

@@ -272,7 +272,7 @@ There is no need to migrate the `backups` directory. You are better off creating
$ mkdir ~/IOTstack/backups
```

### <a name="chooseMenu"> Step 4 – Choose your menu </a>
### Step 4 – Choose your menu

If you have reached this point, you have migrated to SensorsIot/IOTstack where you are on the "master" branch. This implies "new menu".

@@ -353,15 +353,15 @@ Although you can freely change branches, it's probably not a good idea to try to

Even so, nothing will change **until** you run your chosen menu to completion and allow it to generate a new `docker-compose.yml`.

### <a name="upStack"> Step 5 – Bring up your stack </a>
### Step 5 – Bring up your stack

Unless you have gotten ahead of yourself and have already run the menu (old or new) then nothing will have changed in the parts of your `~/IOTstack` folder that define your IOTstack implementation. You can safely:

```
$ docker-compose up -d
```

## <a name="seeAlso"> See also </a>
## See also

There is another gist [Installing Docker for IOTstack](https://gist.github.com/Paraphraser/d119ae81f9e60a94e1209986d8c9e42f) which explains how to overcome problems with outdated Docker and Docker-Compose installations.

92 changes: 46 additions & 46 deletions docs/Basic_setup/index.md
@@ -1,6 +1,6 @@
# Getting Started

## <a name="introAndVideos"> introduction to IOTstack - videos </a>
## introduction to IOTstack - videos

Andreas Spiess Video #295: Raspberry Pi Server based on Docker, with VPN, Dropbox backup, Influx, Grafana, etc: IOTstack

@@ -10,7 +10,7 @@ Andreas Spiess Video #352: Raspberry Pi4 Home Automation Server (incl. Docker, O

[![#352 Raspberry Pi4 Home Automation Server (incl. Docker, OpenHAB, HASSIO, NextCloud)](http://img.youtube.com/vi/KJRMjUzlHI8/0.jpg)](https://www.youtube.com/watch?v=KJRMjUzlHI8)

## <a name="assumptions"> assumptions </a>
## assumptions

IOTstack makes the following assumptions:

@@ -46,15 +46,15 @@ If the first three assumptions hold, assumptions four through six are Raspberry

Please don't read these assumptions as saying that IOTstack will not run on other hardware, other operating systems, or as a different user. It is just that IOTstack gets most of its testing under these conditions. The further you get from these implicit assumptions, the more your mileage may vary.

### <a name="otherPlatforms"> other platforms </a>
### other platforms

Users have reported success on other platforms, including:

* [Orange Pi WinPlus](https://github.com/SensorsIot/IOTstack/issues/375)

## <a name="newInstallation"> new installation </a>
## new installation

### <a name="autoInstall"> automatic (recommended) </a>
### automatic (recommended)

1. Install `curl`:

@@ -82,7 +82,7 @@ Users have reported success on other platforms, including:
$ docker-compose up -d
```

### <a name="manualInstall"> manual </a>
### manual

1. Install `git`:

@@ -122,21 +122,21 @@ Users have reported success on other platforms, including:
$ docker-compose up -d
```

### <a name="scriptedInstall"> scripted </a>
### scripted

If you prefer to automate your installations using scripts, see:

* [Installing Docker for IOTstack](https://gist.github.com/Paraphraser/d119ae81f9e60a94e1209986d8c9e42f#scripting-iotstack-installations).

## <a name="gcgarnerMigrate"> migrating from the old repo (gcgarner)? </a>
## migrating from the old repo (gcgarner)?

If you are still running on gcgarner/IOTstack and need to migrate to SensorsIot/IOTstack, see:

* [Migrating IOTstack from gcgarner to SensorsIot](Updates/gcgarner-migration.md).

## <a name="recommendedPatches"> recommended system patches </a>
## recommended system patches

### <a name="patch1DHCP"> patch 1 – restrict DHCP </a>
### patch 1 – restrict DHCP

Run the following commands:

@@ -147,7 +147,7 @@ $ sudo reboot

See [Issue 219](https://github.com/SensorsIot/IOTstack/issues/219) and [Issue 253](https://github.com/SensorsIot/IOTstack/issues/253) for more information.

### <a name="patch2DHCP"> patch 2 – update libseccomp2 </a>
### patch 2 – update libseccomp2

This patch is **ONLY** for Raspbian Buster. Do **NOT** install this patch if you are running Raspbian Bullseye.

@@ -189,7 +189,7 @@ Enable by running (takes effect after reboot):
echo $(cat /boot/cmdline.txt) cgroup_memory=1 cgroup_enable=memory | sudo tee /boot/cmdline.txt
```

## <a name="aboutSudo"> a word about the `sudo` command </a>
## a word about the `sudo` command

Many first-time users of IOTstack get into difficulty by misusing the `sudo` command. The problem is best understood by example. In the following, you would expect `~` (tilde) to expand to `/home/pi`. It does:
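
As a rough illustration of the underlying pitfall: the tilde expands to your home directory in your own shell, but to `/root` inside a root login shell started with `sudo -i`, which is how files and folders end up created in the wrong place with the wrong ownership:

```
$ echo ~
/home/pi
$ sudo -i
# echo ~
/root
```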

@@ -237,13 +237,13 @@ Please try to minimise your use of `sudo` when you are working with IOTstack. He

It takes time, patience and practice to learn when `sudo` is **actually** needed. Over-using `sudo` out of habit, or because you were following a bad example you found on the web, is a very good way to find that you have created so many problems for yourself that you will need to reinstall your IOTstack. *Please* err on the side of caution!

## <a name="theMenu"> the IOTstack menu </a>
## the IOTstack menu

The menu is used to install Docker and then build the `docker-compose.yml` file which is necessary for starting the stack.

> The menu is only an aid. It is a good idea to learn the `docker` and `docker-compose` commands if you plan on using Docker in the long run.
### <a name="menuInstallDocker"> menu item: Install Docker </a> (old menu only)
### menu item: Install Docker (old menu only)

Please do **not** try to install `docker` and `docker-compose` via `sudo apt install`. There's more to it than that. Docker needs to be installed by `menu.sh`. The menu will prompt you to install docker if it detects that docker is not already installed. You can manually install it from within the `Native Installs` menu:

@@ -260,7 +260,7 @@ Note:

* New menu (master branch) automates this step.

### <a name="menuBuildStack"> menu item: Build Stack </a>
### menu item: Build Stack

`docker-compose` uses a `docker-compose.yml` file to configure all your services. The `docker-compose.yml` file is created by the menu:
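
In practice that means launching the menu from the project root and choosing the "Build Stack" option; a minimal sketch:

```
$ cd ~/IOTstack
$ ./menu.sh
```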

@@ -292,15 +292,15 @@ Some containers also need to be built locally. Node-RED is an example. Depending

Be patient (and ignore the huge number of warnings).

### <a name="menuDockerCommands"> menu item: Docker commands </a>
### menu item: Docker commands

The commands in this menu execute shell scripts in the root of the project.

### <a name="otherMenuItems"> other menu items </a>
### other menu items

The old and new menus differ in the options they offer. You should come back and explore them once your stack is built and running.

## <a name="switchingMenus"> switching menus </a>
## switching menus

At the time of writing, IOTstack supports three menus:

@@ -334,7 +334,7 @@ $ git checkout -- .templates/mosquitto/Dockerfile

When `git status` reports no more "modified" files, it is safe to switch your branch.

### <a name="menuMasterBranch"> current menu (master branch) </a>
### current menu (master branch)

```bash
$ cd ~/IOTstack/
@@ -352,7 +352,7 @@ $ git checkout old-menu
$ ./menu.sh
```

### <a name="menuExperimentalBranch"> experimental branch </a>
### experimental branch

Switch to the experimental branch to try the latest and greatest features.

@@ -377,14 +377,14 @@ Notes:

* The way back is to take down your stack, restore a backup, and bring up your stack again.

## <a name="dockerAndCompose"> useful commands: docker & docker-compose </a>
## useful commands: docker & docker-compose

Handy rules:

* `docker` commands can be executed from anywhere, but
* `docker-compose` commands need to be executed from within `~/IOTstack`

### <a name="upIOTstack"> starting your IOTstack </a>
### starting your IOTstack

To start the stack:

@@ -395,7 +395,7 @@ $ docker-compose up -d

Once the stack has been brought up, it will stay up until you take it down. This includes shutdowns and reboots of your Raspberry Pi. If you do not want the stack to start automatically after a reboot, you need to stop the stack before you issue the reboot command.

#### <a name="journaldErrors"> logging journald errors </a>
#### logging journald errors

If you get a Docker logging error like:

@@ -425,7 +425,7 @@ Logging limits were added to prevent Docker using up lots of RAM if log2ram is e

You can also turn logging off or set it to use another option for any service by using the IOTstack `docker-compose-override.yml` file mentioned at [IOTstack/Custom](Custom.md).

### <a name="upContainer"> starting an individual container </a>
### starting an individual container

To start a particular container:

@@ -434,7 +434,7 @@ $ cd ~/IOTstack
$ docker-compose up -d «container»
```

### <a name="downIOTstack"> stopping your IOTstack </a>
### stopping your IOTstack

Stopping aka "downing" the stack stops and deletes all containers, and removes the internal network:
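
That is the behaviour of `docker-compose down`, run from the project root:

```
$ cd ~/IOTstack
$ docker-compose down
```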

@@ -450,7 +450,7 @@ $ cd ~/IOTstack
$ docker-compose stop
```

### <a name="downContainer"> stopping an individual container </a>
### stopping an individual container

`stop` can also be used to stop individual containers, like this:

@@ -480,7 +480,7 @@ $ cd ~/IOTstack
$ docker-compose up -d «container»
```

### <a name="dockerPS"> checking container status </a>
### checking container status

You can check the status of containers with:

@@ -495,7 +495,7 @@ $ cd ~/IOTstack
$ docker-compose ps
```

### <a name="dockerLogs"> viewing container logs </a>
### viewing container logs

You can inspect the logs of most containers like this:

@@ -517,7 +517,7 @@ $ docker logs -f nodered

Terminate with a Control+C. Note that restarting a container will also terminate a followed log.

### <a name="restartContainer"> restarting a container </a>
### restarting a container

You can restart a container in several ways:

@@ -544,9 +544,9 @@ $ cd ~/IOTstack
$ docker-compose up -d --force-recreate «container»
```

See also [updating images built from Dockerfiles](#updateDockerfile) if you need to force `docker-compose` to notice a change to a Dockerfile.
See also [updating images built from Dockerfiles](#updating-images-built-from-dockerfiles) if you need to force `docker-compose` to notice a change to a Dockerfile.

## <a name="persistentStore"> persistent data </a>
## persistent data

Docker allows a container's designer to map folders inside a container to a folder on your disk (SD, SSD, HD). This is done with the "volumes" key in `docker-compose.yml`. Consider the following snippet for Node-RED:

@@ -588,7 +588,7 @@ is mirrored at the same relative path **inside** the container at:
/data
```

### <a name="deletePersistentStore"> deleting persistent data </a>
### deleting persistent data

If you need a "clean slate" for a container, you can delete its volumes. Using InfluxDB as an example:
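
A sketch of that process, assuming the conventional IOTstack layout where InfluxDB's persistent store lives at `./volumes/influxdb` (check the path on your system before deleting anything):

```
$ cd ~/IOTstack
$ docker-compose stop influxdb
$ docker-compose rm -f influxdb
$ sudo rm -rf ./volumes/influxdb
$ docker-compose up -d influxdb
```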

@@ -616,9 +616,9 @@ When InfluxDB starts, it sees that the folder on right-hand-side of the volumes

This is how **most** containers behave. There are exceptions so it's always a good idea to keep a backup.

## <a name="stackMaintenance"> stack maintenance </a>
## stack maintenance

### <a name="raspbianUpdates"> update Raspberry Pi OS </a>
### update Raspberry Pi OS

You should keep your Raspberry Pi up-to-date. Despite the word "container" suggesting that *containers* are fully self-contained, they sometimes depend on operating system components ("WireGuard" is an example).

@@ -627,7 +627,7 @@ $ sudo apt update
$ sudo apt upgrade -y
```

### <a name="gitUpdates"> git pull </a>
### git pull

Although the menu will generally do this for you, it does not hurt to keep your local copy of the IOTstack repository in sync with the master version on GitHub.

@@ -636,7 +636,7 @@ $ cd ~/IOTstack
$ git pull
```

### <a name="imageUpdates"> container image updates </a>
### container image updates

There are two kinds of images used in IOTstack:

@@ -650,7 +650,7 @@ The easiest way to work out which type of image you are looking at is to inspect
* `image:` keyword then the image is **not** built using a Dockerfile.
* `build:` keyword then the image **is** built using a Dockerfile.

#### <a name="updateNonDockerfile"> updating images not built from Dockerfiles </a>
#### updating images not built from Dockerfiles

If new versions of this type of image become available on DockerHub, your local IOTstack copies can be updated by a `pull` command:
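
A typical sequence, with a prune added afterwards to reclaim space from superseded images:

```
$ cd ~/IOTstack
$ docker-compose pull
$ docker-compose up -d
$ docker system prune
```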

@@ -665,7 +665,7 @@ The `pull` downloads any new images. It does this without disrupting the running

The `up -d` notices any newly-downloaded images, builds new containers, and swaps old-for-new. There is barely any downtime for affected containers.

#### <a name="updateDockerfile"> updating images built from Dockerfiles </a>
#### updating images built from Dockerfiles

Containers built using Dockerfiles have a two-step process:

@@ -685,7 +685,7 @@ Note:

* You can also add nodes to Node-RED using Manage Palette.

##### <a name="buildDockerfile"> when Dockerfile changes (*local* image only) </a>
##### when Dockerfile changes (*local* image only)

When your Dockerfile changes, you need to rebuild like this:

@@ -697,7 +697,7 @@ $ docker system prune

This only rebuilds the *local* image and, even then, only if `docker-compose` senses a *material* change to the Dockerfile.

If you are trying to force the inclusion of a later version of an add-on node, you need to treat it like a [DockerHub update](#rebuildDockerfile).
If you are trying to force the inclusion of a later version of an add-on node, you need to treat it like a [DockerHub update](#when-dockerhub-updates-base-and-local-images).

Key point:

@@ -712,7 +712,7 @@ Note:
$ docker-compose up --build -d nodered
```

##### <a name="rebuildDockerfile"> when DockerHub updates (*base* and *local* images) </a>
##### when DockerHub updates (*base* and *local* images)

When a newer version of the *base* image appears on DockerHub, you need to rebuild like this:
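
A sketch of such a rebuild, using Node-RED as the example container (`--pull` asks Docker to check DockerHub for a newer *base* image, `--no-cache` forces the Dockerfile to be re-run in full):

```
$ cd ~/IOTstack
$ docker-compose build --no-cache --pull nodered
$ docker-compose up -d nodered
$ docker system prune
```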

@@ -728,7 +728,7 @@ This causes DockerHub to be checked for the later version of the *base* image, d

Then, the Dockerfile is run to produce a new *local* image. The Dockerfile run happens even if a new *base* image was not downloaded in the previous step.

### <a name="dockerPrune"> deleting unused images </a>
### deleting unused images

As your system evolves and new images come down from DockerHub, you may find that more disk space is being occupied than you expected. Try running:
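
The usual clean-up command is `docker system prune`; individual images can also be removed by Image ID, as shown below:

```
$ docker system prune
```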

@@ -762,9 +762,9 @@ $ docker rmi dbf28ba50432

In general, you can use the repository name to remove an image but the Image ID is sometimes needed. The most common situation where you are likely to need the Image ID is after an image has been updated on DockerHub and pulled down to your Raspberry Pi. You will find two images with the same name. One will be tagged "latest" (the running version) while the other will be tagged "\<none\>" (the prior version). You use the Image ID to resolve the ambiguity.

### <a name="versionPinning"> pinning to specific versions </a>
### pinning to specific versions

See [container image updates](#imageUpdates) to understand how to tell the difference between images that are used "as is" from DockerHub versus those that are built from local Dockerfiles.
See [container image updates](#container-image-updates) to understand how to tell the difference between images that are used "as is" from DockerHub versus those that are built from local Dockerfiles.

Note:

@@ -820,7 +820,7 @@ To pin an image to a specific version:
$ docker-compose up -d --build mosquitto
```

## <a name="nuclearOption"> the nuclear option - use with caution </a>
## the nuclear option - use with caution

If you create a mess and can't see how to recover, try proceeding like this:

@@ -840,7 +840,7 @@ In words:
4. Move your existing IOTstack directory out of the way. If you get a permissions problem:

* Re-try the command with `sudo`; and
* Read [a word about the `sudo` command](#aboutSudo). Needing `sudo` in this situation is an example of over-using `sudo`.
* Read [a word about the `sudo` command](#a-word-about-the-sudo-command). Needing `sudo` in this situation is an example of over-using `sudo`.

5. Check out a clean copy of IOTstack.

8 changes: 4 additions & 4 deletions docs/Containers/AdGuardHome.md
@@ -9,7 +9,7 @@

AdGuard Home and PiHole perform similar functions. They use the same ports so you can **not** run both at the same time. You must choose one or the other.

## <a name="quickStart"> Quick Start </a>
## Quick Start

When you first install AdGuard Home:

@@ -34,7 +34,7 @@ When you first install AdGuard Home:

Port 8089 is the default administrative user interface for AdGuard Home running under IOTstack.

Port 8089 is not active until you have completed the [Quick Start](#quickStart) procedure. You must start by connecting to port 3001.
Port 8089 is not active until you have completed the [Quick Start](#quick-start) procedure. You must start by connecting to port 3001.

Because of AdGuard Home limitations, you must take special precautions if you decide to change to a different port number:

@@ -50,11 +50,11 @@ Because of AdGuard Home limitations, you must take special precautions if you de
$ docker-compose up -d adguardhome
```

3. Repeat the [Quick Start](#quickStart) procedure, this time substituting the new Admin Web Interface port where you see "8089".
3. Repeat the [Quick Start](#quick-start) procedure, this time substituting the new Admin Web Interface port where you see "8089".

## About port 3001:3000

Port 3001 (external, 3000 internal) is only used during [Quick Start](#quickStart) procedure. Once port 8089 becomes active, port 3001 ceases to be active.
Port 3001 (external, 3000 internal) is only used during the [Quick Start](#quick-start) procedure. Once port 8089 becomes active, port 3001 ceases to be active.

In other words, you need to keep port 3001 reserved even though it is only ever used to set up port 8089.

34 changes: 17 additions & 17 deletions docs/Containers/Blynk_server.md
@@ -2,7 +2,7 @@

This document discusses an IOTstack-specific version of Blynk-Server. It is built on top of an [Ubuntu](https://hub.docker.com/_/ubuntu) base image using a *Dockerfile*.

## <a name="references"> References </a>
## References

- [Ubuntu base image](https://hub.docker.com/_/ubuntu) at DockerHub
- [Peter Knight Blynk-Server fork](https://github.com/Peterkn2001/blynk-server) at GitHub (includes documentation)
@@ -18,7 +18,7 @@ Acknowledgement:

- Original writeup from @877dev

## <a name="significantFiles"> Significant directories and files </a>
## Significant directories and files

```
~/IOTstack
@@ -56,19 +56,19 @@ Everything in ❽:
* will be replaced if it is not present when the container starts; but
* will never be overwritten if altered by you.

## <a name="howBlynkServerIOTstackGetsBuilt"> How Blynk Server gets built for IOTstack </a>
## How Blynk Server gets built for IOTstack

### <a name="dockerHubImages"> GitHub Updates </a>
### GitHub Updates

Periodically, the source code is updated and a new version is released. You can check for the latest version at the [releases page](https://github.com/Peterkn2001/blynk-server/releases/).

### <a name="iotstackMenu"> IOTstack menu </a>
### IOTstack menu

When you select Blynk Server in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under old menu, it is also copied to the *working service definition* and then not really used.
### <a name="iotstackFirstRun"> IOTstack first run </a>
### IOTstack first run

On a first install of IOTstack, you run the menu, choose your containers, and are told to do this:

@@ -129,15 +129,15 @@ ubuntu latest 897590a6c564 7 days ago 49.8MB

You will see the same pattern in *Portainer*, which reports the ***base image*** as "unused". You should not remove the ***base*** image, even though it appears to be unused.

## <a name="logging"> Logging </a>
## Logging

You can inspect Blynk Server's log by:

```
$ docker logs blynk_server
```

## <a name="editConfiguration"> Changing Blynk Server's configuration </a>
## Changing Blynk Server's configuration

The first time you launch the `blynk_server` container, the following structure will be created in the persistent storage area:

@@ -156,7 +156,7 @@ $ cd ~/IOTstack
$ docker-compose restart blynk_server
```

## <a name="cleanSlate"> Getting a clean slate </a>
## Getting a clean slate

Erasing Blynk Server's persistent storage area triggers self-healing and restores known defaults:

@@ -176,7 +176,7 @@ Note:
$ docker-compose restart blynk_server
```

## <a name="upgradingBlynkServer"> Upgrading Blynk Server </a>
## Upgrading Blynk Server

To find out when a new version has been released, you need to visit the [Blynk-Server releases](https://github.com/Peterkn2001/blynk-server/releases/) page at GitHub.

@@ -216,11 +216,11 @@ At the time of writing, version 0.41.16 was the most up-to-date. Suppose that ve
$ docker system prune -f
```

## <a name="usingBlynkServer"> Using Blynk Server </a>
## Using Blynk Server

See the [References](#references) for documentation links.

### <a name="blynkAdmin"> Connecting to the administrative UI </a>
### Connecting to the administrative UI

To connect to the administrative interface, navigate to:

@@ -233,7 +233,7 @@ You may encounter browser security warnings which you will have to acknowledge i
- username = `admin@blynk.cc`
- password = `admin`

### <a name="changePassword"> Change username and password </a>
### Change username and password

1. Click on Users > "email address" and edit email, name and password.
2. Save changes.
@@ -244,19 +244,19 @@ You may encounter browser security warnings which you will have to acknowledge i
$ docker-compose restart blynk_server
```

### <a name="gmailSetup"> Setup gmail </a>
### Setup gmail

Optional step, useful for getting the auth token emailed to you.
(To be added once confirmed working....)

### <a name="mobileSetup"> iOS/Android app setup </a>
### iOS/Android app setup

1. When setting up the application on your mobile be sure to select "custom" setup [see](https://github.com/Peterkn2001/blynk-server#app-and-sketch-changes).
2. Press "New Project"
3. Give it a name, choose device "Raspberry Pi 3 B" so you have plenty of [virtual pins](http://help.blynk.cc/en/articles/512061-what-is-virtual-pins) available, and lastly select WiFi.
4. Create project and the [auth token](https://docs.blynk.cc/#getting-started-getting-started-with-the-blynk-app-4-auth-token) will be emailed to you (if emails configured). You can also find the token in app under the phone app settings, or in the admin web interface by clicking Users>"email address" and scroll down to token.

### <a name="quickAppGuide"> Quick usage guide for app </a>
### Quick usage guide for app

1. Press on the empty page, the widgets will appear from the right.
2. Select your widget, let's say a button.
@@ -269,7 +269,7 @@ Optional step, useful for getting the auth token emailed to you.

Enter Node-Red.....

### <a name="enterNodeRed"> Node-RED </a>
### Node-RED

1. Install `node-red-contrib-blynk-ws` from Manage Palette.
2. Drag a "write event" node into your flow, and connect to a debug node
8 changes: 4 additions & 4 deletions docs/Containers/Chronograf.md
@@ -1,12 +1,12 @@
# Chronograf

## <a name="references"> References </a>
## References

- [*influxdata Chronograf* documentation](https://docs.influxdata.com/chronograf/)
- [*GitHub*: influxdata/influxdata-docker/chronograf](https://github.com/influxdata/influxdata-docker/tree/master/chronograf)
- [*DockerHub*: influxdata Chronograf](https://hub.docker.com/_/chronograf)

## <a name="kapacitorIntegration"> Kapacitor integration </a>
## Kapacitor integration

If you selected Kapacitor in the menu and want Chronograf to be able to interact with it, you need to edit `docker-compose.yml` to un-comment the lines which are commented-out in the following:

@@ -28,7 +28,7 @@ $ cd ~/IOTstack
$ docker-compose up -d chronograf
```

## <a name="upgradingChronograf"> Upgrading Chronograf </a>
## Upgrading Chronograf

You can update the container via:

@@ -45,7 +45,7 @@ In words:
* `docker-compose up -d` causes any newly-downloaded images to be instantiated as containers (replacing the old containers); and
* the `prune` gets rid of the outdated images.

### <a name="versionPinning"> Chronograf version pinning </a>
### Chronograf version pinning

If you need to pin to a particular version:

40 changes: 20 additions & 20 deletions docs/Containers/Home-Assistant.md
@@ -2,7 +2,7 @@

Home Assistant is a home automation platform running on Python 3. It is able to track and control all devices at your home and offer a platform for automating control.

## <a name="references"> References </a>
## References

- [Home Assistant home page](https://www.home-assistant.io/)

@@ -31,7 +31,7 @@ Note:

* Technically, both versions can **run** at the same time but it is not **supported**. Each version runs in "host mode" and binds to port 8123 so, in practice, the first version to start will claim the port and the second version will then be blocked.

### <a name="versionHassio"> Hass.io </a>
### Hass.io

Hass.io uses its own orchestration:

@@ -45,21 +45,21 @@ Hass.io uses its own orchestration:

IOTstack can only offer limited configuration of Hass.io since it is its own platform.

### <a name="versionHAContainer"> Home Assistant Container </a>
### Home Assistant Container

Home Assistant Container runs as a single Docker container, and doesn't support all the features that Hass.io does (such as add-ons).

## <a name="menuInstallation"> Menu installation </a>
## Menu installation

### <a name="installHassio"> Installing Hass.io </a>
### Installing Hass.io

Hass.io creates a conundrum:

* If you are definitely going to install Hass.io then you **must** install its dependencies **before** you install Docker.
* One of Hass.io's dependencies is [Network Manager](https://wiki.archlinux.org/index.php/NetworkManager). Network Manager makes **serious** changes to your operating system, with side-effects you may not expect such as giving your Raspberry Pi's WiFi interface a random MAC address both during the installation and, then, each time you reboot. You are in for a world of pain if you install Network Manager without first understanding what is going to happen and planning accordingly.
* If you don't install Hass.io's dependencies before you install Docker, you will either have to uninstall Docker or rebuild your system. This is because both Docker and Network Manager adjust your Raspberry Pi's networking. Docker is happy to install after Network Manager, but the reverse is not true.

#### <a name="uninstallDocker"> Step 1: If Docker is already installed, uninstall it </a>
#### Step 1: If Docker is already installed, uninstall it

```bash
$ sudo apt -y purge docker-ce docker-ce-cli containerd.io
@@ -71,21 +71,21 @@ Note:

* Removing Docker does **not** interfere with your existing `~/IOTstack` folder.

#### <a name="aptUpdate"> Step 2: Ensure your system is fully up-to-date </a>
#### Step 2: Ensure your system is fully up-to-date

```bash
$ sudo apt update
$ sudo apt upgrade -y
```

#### <a name="hassioDependencies1"> Step 3: Install Hass.io dependencies (stage 1) </a>
#### Step 3: Install Hass.io dependencies (stage 1)

```bash
$ sudo apt install -y apparmor apparmor-profiles apparmor-utils
$ sudo apt install -y software-properties-common apt-transport-https ca-certificates dbus
```

#### <a name="useEthernet"> Step 4: Connect to your Raspberry Pi via Ethernet </a>
#### Step 4: Connect to your Raspberry Pi via Ethernet

You can skip this step if you interact with your Raspberry Pi via a screen connected to its HDMI port, along with a keyboard and mouse.

@@ -127,17 +127,17 @@ You *may* be able to re-connect after the WiFi interface acquires a new IP addre

The advice about using Ethernet is well-intentioned. You should heed this advice even if means you need to temporarily relocate your Raspberry Pi just so you can attach it via Ethernet for the next few steps. You can go back to WiFi later, once everything is set up. You have been warned!

#### <a name="hassioDependencies2"> Step 5: Install Hass.io dependencies (stage 2) </a>
#### Step 5: Install Hass.io dependencies (stage 2)

Install Network Manager:

```bash
$ sudo apt install -y network-manager
```

#### <a name="disableRandomMac1"> Step 6: Consider disabling random MAC address allocation </a>
#### Step 6: Consider disabling random MAC address allocation

To understand why you should consider disabling random MAC address allocation, see [why random MACs are such a hassle ](#aboutRandomMACs).
To understand why you should consider disabling random MAC address allocation, see [why random MACs are such a hassle](#why-random-macs-are-such-a-hassle).

You can stop Network Manager from allocating random MAC addresses to your WiFi interface by running the following commands:

@@ -150,7 +150,7 @@ Acknowledgement:

* This tip came from [@steveatk on Discord](https://discordapp.com/channels/638610460567928832/638610461109256194/758825690715652116).

#### <a name="reinstallDocker"> Step 7: Re-install Docker </a>
#### Step 7: Re-install Docker

You can re-install Docker using the IOTstack menu or one of the scripts provided with IOTstack but the following commands guarantee an up-to-date version of `docker-compose` and also include a dependency needed if you want to run with the 64-bit kernel:

@@ -169,7 +169,7 @@ Note:

* Installing or re-installing Docker does **not** interfere with your existing `~/IOTstack` folder.

#### <a name="runHassioInstall"> Step 8: Run the Hass.io installation </a>
#### Step 8: Run the Hass.io installation

Start at:

@@ -184,7 +184,7 @@ The installation of Hass.io takes up to 20 minutes (depending on your internet c

Hass.io installation is provided as a convenience. It is independent of IOTstack: it is not maintained by IOTstack and does not appear in IOTstack's `docker-compose.yml`. Hass.io has its own service for maintaining its uptime.

#### <a name="disableRandomMac2"> Re-check random MAC address allocation </a>
#### Re-check random MAC address allocation

Installing Hass.io can re-enable random MAC address allocation. You should check this via:

@@ -195,9 +195,9 @@ wifi.scan-rand-mac-address=no

```

If you do **NOT** see `wifi.scan-rand-mac-address=no`, repeat [Step 6](#disableRandomMac1).
If you do **NOT** see `wifi.scan-rand-mac-address=no`, repeat [Step 6](#step-6-consider-disabling-random-mac-address-allocation).

### <a name="installHAContainer"> Installing Home Assistant Container </a>
### Installing Home Assistant Container

Home Assistant can be found in the `Build Stack` menu. Selecting it in this menu results in a service definition being added to:

@@ -222,7 +222,7 @@ $ cd ~/IOTstack
$ docker-compose up -d
```

## <a name="deactivateHassio"> Deactivating Hass.io </a>
## Deactivating Hass.io

Because Hass.io is independent of IOTstack, you can't deactivate it with any of the commands you normally use for IOTstack.

@@ -249,15 +249,15 @@ You can use Portainer to view what is running and clean up the unused images.
At this point, Hass.io is stopped and will not start again after a reboot. Your options are:

* Leave things as they are; or
* Re-install Hass.io by starting over at [Installing Hass.io](#installHassio); or
* Re-install Hass.io by starting over at [Installing Hass.io](#installing-hassio); or
* Re-activate Hass.io by:

```bash
$ sudo systemctl enable hassio-supervisor.service
$ sudo systemctl start hassio-supervisor.service
```

## <a name="aboutRandomMACs"> Why random MACs are such a hassle </a>
## Why random MACs are such a hassle

> This material was originally posted as part of [Issue 312](https://github.com/SensorsIot/IOTstack/issues/312). It was moved here following a suggestion by [lole-elol](https://github.com/lole-elol).
6 changes: 3 additions & 3 deletions docs/Containers/Kapacitor.md
@@ -1,12 +1,12 @@
# Kapacitor

## <a name="references"> References </a>
## References

- [*influxdata Kapacitor* documentation](https://docs.influxdata.com/kapacitor/)
- [*GitHub*: influxdata/influxdata-docker/kapacitor](https://github.com/influxdata/influxdata-docker/tree/master/kapacitor)
- [*DockerHub*: influxdata Kapacitor](https://hub.docker.com/_/kapacitor)

## <a name="upgradingKapacitor"> Upgrading Kapacitor </a>
## Upgrading Kapacitor

You can update the container via:

@@ -23,7 +23,7 @@ In words:
* `docker-compose up -d` causes any newly-downloaded images to be instantiated as containers (replacing the old containers); and
* the `prune` gets rid of the outdated images.

### <a name="versionPinning"> Kapacitor version pinning </a>
### Kapacitor version pinning

If you need to pin to a particular version:

10 changes: 5 additions & 5 deletions docs/Containers/MariaDB.md
@@ -67,9 +67,9 @@ To close the terminal session, either:
* type "exit" and press <kbd>return</kbd>; or
* press <kbd>control</kbd>+<kbd>d</kbd>.

## <a name="healthCheck"> Container health check </a>
## Container health check

### <a name="healthCheckTheory"> theory of operation </a>
### theory of operation

A script, or "agent", to assess the health of the MariaDB container has been added to the *local image* via the *Dockerfile*. In other words, the script is specific to IOTstack.

@@ -87,11 +87,11 @@ The agent is invoked 30 seconds after the container starts, and every 30 seconds
mysqld is alive
```

3. If the command returned the expected response, the agent tests the responsiveness of the TCP port the `mysqld` daemon should be listening on (see [customising health-check](#healthCheckCustom)).
3. If the command returned the expected response, the agent tests the responsiveness of the TCP port the `mysqld` daemon should be listening on (see [customising health-check](#customising-health-check)).

4. If all of those steps succeed, the agent concludes that MariaDB is functioning properly and returns "healthy".

### <a name="healthCheckMonitor"> monitoring health-check </a>
### monitoring health-check

Portainer's *Containers* display contains a *Status* column which shows health-check results for all containers that support the feature.

@@ -124,7 +124,7 @@ Possible reply patterns are:
mariadb Up About a minute (unhealthy)
```

### <a name="healthCheckCustom"> customising health-check </a>
### customising health-check

You can customise the operation of the health-check agent by editing the `mariadb` service definition in your *Compose* file:

68 changes: 34 additions & 34 deletions docs/Containers/Mosquitto.md
@@ -6,15 +6,15 @@ This document discusses an IOTstack-specific version of Mosquitto built on top o
<hr>

## <a name="references"> References </a>
## References

- [*Eclipse Mosquitto* home](https://mosquitto.org)
- [*GitHub*: eclipse/mosquitto](https://github.com/eclipse/mosquitto)
- [*DockerHub*: eclipse-mosquitto](https://hub.docker.com/_/eclipse-mosquitto)
- [Setting up passwords](https://www.youtube.com/watch?v=1msiFQT_flo) (video)
- [Tutorial: from MQTT to InfluxDB via Node-Red](https://gist.github.com/Paraphraser/c9db25d131dd4c09848ffb353b69038f)

## <a name="significantFiles"> Significant directories and files </a>
## Significant directories and files

```
~/IOTstack
@@ -57,23 +57,23 @@ This document discusses an IOTstack-specific version of Mosquitto built on top o
* You will normally need `sudo` to make changes in this area.
* Each time Mosquitto starts, it automatically replaces anything originating in ❹ that has gone missing from ❼. This "self-repair" function is intended to provide reasonable assurance that Mosquitto will at least **start** instead of going into a restart loop.

## <a name="howMosquittoIOTstackGetsBuilt"> How Mosquitto gets built for IOTstack </a>
## How Mosquitto gets built for IOTstack

### <a name="githubSourceCode"> Mosquitto source code ([*GitHub*](https://github.com)) </a>
### Mosquitto source code ([*GitHub*](https://github.com))

The source code for Mosquitto lives at [*GitHub* eclipse/mosquitto](https://github.com/eclipse/mosquitto).

### <a name="dockerHubImages"> Mosquitto images ([*DockerHub*](https://hub.docker.com)) </a>
### Mosquitto images ([*DockerHub*](https://hub.docker.com))

Periodically, the source code is recompiled and the resulting image is pushed to [eclipse-mosquitto](https://hub.docker.com/_/eclipse-mosquitto?tab=tags&page=1&ordering=last_updated) on *DockerHub*.

### <a name="iotstackMenu"> IOTstack menu </a>
### IOTstack menu

When you select Mosquitto in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under old menu, it is also copied to the *working service definition* and then not really used.
### <a name="iotstackFirstRun"> IOTstack first run </a>
### IOTstack first run

On a first install of IOTstack, you run the menu, choose Mosquitto as one of your containers, and are told to do this:

@@ -82,7 +82,7 @@ $ cd ~/IOTstack
$ docker-compose up -d
```

> See also the [Migration considerations](#migration) (below).
> See also the [Migration considerations](#migration-considerations) (below).
`docker-compose` reads the *Compose* file. When it arrives at the `mosquitto` fragment, it finds:

@@ -107,7 +107,7 @@ The *Dockerfile* begins with:
FROM eclipse-mosquitto:latest
```

> If you need to pin to a particular version of Mosquitto, the *Dockerfile* is the place to do it. See [Mosquitto version pinning](#versionPinning).
> If you need to pin to a particular version of Mosquitto, the *Dockerfile* is the place to do it. See [Mosquitto version pinning](#mosquitto-version-pinning).
The `FROM` statement tells the build process to pull down the ***base image*** from [*DockerHub*](https://hub.docker.com).

@@ -142,7 +142,7 @@ eclipse-mosquitto latest 46ad1893f049 4 weeks ago 8.31MB

You will see the same pattern in Portainer, which reports the *base image* as "unused". You should not remove the *base* image, even though it appears to be unused.

### <a name="migration"> Migration considerations </a>
### Migration considerations

Under the original IOTstack implementation of Mosquitto (just "as it comes" from *DockerHub*), the service definition expected the configuration files to be at:

@@ -203,7 +203,7 @@ Using `mosquitto.conf` as the example, assume you wish to use your existing file

5. If necessary, repeat these steps with `filter.acl`.

## <a name="logging"> Logging </a>
## Logging

Mosquitto logging is controlled by `mosquitto.conf`. This is the default configuration:

@@ -246,9 +246,9 @@ $ sudo tail ~/IOTstack/volumes/mosquitto/log/mosquitto.log
Logs written to `mosquitto.log` do not disappear when your IOTstack is restarted. They persist until you take action to prune the file.

## <a name="security"> Security </a>
## Security

### <a name="securityConfiguration"> Configuring security </a>
### Configuring security

Mosquitto security is controlled by `mosquitto.conf`. These are the relevant directives:

@@ -267,7 +267,7 @@ enabled | true | credentials optional | |
enabled | false | credentials required | |


### <a name="passwordManagement"> Password file management </a>
### Password file management

The password file for Mosquitto is part of a mapped volume:

@@ -285,7 +285,7 @@ The Mosquitto container performs self-repair each time the container is brought

* If `false` then **all** MQTT requests will be rejected.

#### <a name="passwordCreation"> create username and password </a>
#### create username and password

To create a username and password, use the following as a template.

@@ -301,9 +301,9 @@ $ docker exec mosquitto mosquitto_passwd -b /mosquitto/pwfile/pwfile hello world

Note:

* See also [customising health-check](#healthCheckCustom). If you are creating usernames and passwords, you may also want to create credentials for the health-check agent.
* See also [customising health-check](#customising-health-check). If you are creating usernames and passwords, you may also want to create credentials for the health-check agent.

#### <a name="checkPasswordFile"> check password file </a>
#### check password file

There are two ways to verify that the password file exists and has the expected content:

@@ -327,15 +327,15 @@ Each credential starts with the username and occupies one line in the file:
hello:$7$101$ZFOHHVJLp2bcgX+h$MdHsc4rfOAhmGG+65NpIEJkxY0beNeFUyfjNAGx1ILDmI498o4cVOaD9vDmXqlGUH9g6AgHki8RPDEgjWZMkDA==
```

#### <a name="deletePassword"> remove entry from password file </a>
#### remove entry from password file

To remove an entry from the password file:

```
$ docker exec mosquitto mosquitto_passwd -D /mosquitto/pwfile/pwfile «username»
```

#### <a name="resetPasswordFile"> reset the password file </a>
#### reset the password file

There are several ways to reset the password file. Your options are:

@@ -366,7 +366,7 @@ There are several ways to reset the password file. Your options are:

The result is an empty password file.

### <a name="activateSecurity"> Activate Mosquitto security </a>
### Activate Mosquitto security

1. Use `sudo` and your favourite text editor to open the following file:

@@ -409,23 +409,23 @@ There are several ways to reset the password file. Your options are:
$ docker-compose restart mosquitto
```

### <a name="testSecurity"> Testing Mosquitto security </a>
### Testing Mosquitto security

#### <a name="testAssumptions"> assumptions </a>
#### assumptions

1. You have created at least one username ("hello") and password ("world").
2. `password_file` is enabled.
3. `allow_anonymous` is `false`.

#### <a name="installTestTools"> install testing tools </a>
#### install testing tools

If you do not have the Mosquitto clients installed on your Raspberry Pi (ie `$ which mosquitto_pub` does not return a path), install them using:

```
$ sudo apt install -y mosquitto-clients
```

#### <a name="anonymousDenied"> test: *anonymous access is prohibited* </a>
#### test: *anonymous access is prohibited*

Test **without** providing credentials:
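
A sketch of such a test, reusing the host, port and topic from the round-trip example further down (the refusal is the expected outcome; adding `-u «username» -P «password»` turns it into the credentialed test of the next section):

```
$ mosquitto_pub -h 127.0.0.1 -p 1883 -t "/password/test" -m "hello"
```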

@@ -439,7 +439,7 @@ Note:

* The error is the expected result and shows that Mosquitto will not allow anonymous access.

#### <a name="pubPermitted"> test: *access with credentials is permitted* </a>
#### test: *access with credentials is permitted*

Test with credentials

@@ -452,7 +452,7 @@ Note:

* The absence of any error message means the message was sent. Silence = success!

#### <a name="pubSubPermitted"> test: *round-trip with credentials is permitted* </a>
#### test: *round-trip with credentials is permitted*

Prove round-trip connectivity will succeed when credentials are provided. First, set up a subscriber as a background process. This mimics the role of a process like Node-Red:

@@ -480,9 +480,9 @@ $
[1]+ Terminated mosquitto_sub -v -h 127.0.0.1 -p 1883 -t "/password/test" -F "%I %t %p" -u hello -P world
```

## <a name="healthCheck"> Container health check </a>
## Container health check

### <a name="healthCheckTheory"> theory of operation </a>
### theory of operation

A script, or "agent", to assess the health of the Mosquitto container has been added to the *local image* via the *Dockerfile*. In other words, the script is specific to IOTstack.

@@ -497,7 +497,7 @@ The agent is invoked 30 seconds after the container starts, and every 30 seconds
* Subscribes to the same broker for the same topic for a single message event.
* Compares the payload sent with the payload received. If the payloads (ie time-stamps) match, the agent concludes that the Mosquitto broker (the process running inside the same container) is functioning properly for round-trip messaging.

### <a name="healthCheckMonitor"> monitoring health-check </a>
### monitoring health-check

Portainer's *Containers* display contains a *Status* column which shows health-check results for all containers that support the feature.

@@ -543,7 +543,7 @@ Notes:
* If you enable authentication for your Mosquitto broker, you will need to add `-u «user»` and `-P «password»` parameters to this command.
* You should expect to see a new message appear approximately every 30 seconds. That indicates the health-check agent is functioning normally. Use <kbd>control</kbd>+<kbd>c</kbd> to terminate the command.

### <a name="healthCheckCustom"> customising health-check </a>
### customising health-check

You can customise the operation of the health-check agent by editing the `mosquitto` service definition in your *Compose* file:

@@ -563,7 +563,7 @@ You can customise the operation of the health-check agent by editing the `mosqui

Note:

* You will also need to use the same topic string in the `mosquitto_sub` command shown at [monitoring health-check](#healthCheckMonitor).
* You will also need to use the same topic string in the `mosquitto_sub` command shown at [monitoring health-check](#monitoring-health-check).

3. If you have enabled authentication for your Mosquitto broker service, you will need to provide appropriate credentials for your health-check agent:

@@ -592,7 +592,7 @@ You can customise the operation of the health-check agent by editing the `mosqui

You must remove the entire `healthcheck:` clause.

## <a name="upgradingMosquitto"> Upgrading Mosquitto </a>
## Upgrading Mosquitto

You can update most containers like this:
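
The commands themselves are elided from this hunk; the usual IOTstack pattern is a sketch along these lines:

```
$ cd ~/IOTstack
$ docker-compose pull
$ docker-compose up -d
$ docker system prune
```

Because Mosquitto is built from a local Dockerfile, the `pull` approach alone is not enough, which is what the rebuild discussed below addresses.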

@@ -634,7 +634,7 @@ Your existing Mosquitto container continues to run while the rebuild proceeds. O

The `prune` is the simplest way of cleaning up. The first call removes the old *local image*. The second call cleans up the old *base image*.

### <a name="versionPinning"> Mosquitto version pinning </a>
### Mosquitto version pinning

If you need to pin Mosquitto to a particular version:

@@ -670,7 +670,7 @@ Note:

* As well as preventing Docker from updating the *base image*, pinning will also block incoming updates to the *Dockerfile* from a `git pull`. Nothing will change until you decide to remove the pin.

## <a name="aboutPort9001"> About Port 9001 </a>
## About Port 9001

Earlier versions of the IOTstack service definition for Mosquitto included two port mappings:

14 changes: 7 additions & 7 deletions docs/Containers/NextCloud.md
@@ -1,6 +1,6 @@
# Nextcloud

## <a name="serviceDefinition"> Service definition </a>
## Service definition

This is the **core** of the IOTstack Nextcloud service definition:

@@ -54,7 +54,7 @@ Under new-menu, the menu can generate random passwords for you. You can either u

The passwords need to be set before you bring up the Nextcloud service for the first time but the following initialisation steps assume you might not have done that and always start over from a clean slate.

## <a name="initialisation"> Initialising Nextcloud </a>
## Initialising Nextcloud

1. Be in the correct directory:

@@ -108,7 +108,7 @@ The passwords need to be set before you bring up the Nextcloud service for the f

* You **can't** use a multicast domain name (eg `myrpi.local`). An mDNS name will not work until Nextcloud has been initialised!
* Once you have picked a connection method, **STICK TO IT**.
* You are only stuck with this restriction until Nextcloud has been initialised. You **can** (and should) fix it later by completing the steps in ["Access through untrusted domain"](#untrustedDomain).
* You are only stuck with this restriction until Nextcloud has been initialised. You **can** (and should) fix it later by completing the steps in ["Access through untrusted domain"](#access-through-untrusted-domain).

7. On a computer that is **not** the Raspberry Pi running Nextcloud, launch a browser and point to the Raspberry Pi running Nextcloud using your chosen connection method. Examples:

@@ -243,7 +243,7 @@ See also:

* [Nextcloud documentation - trusted domains](https://docs.nextcloud.com/server/21/admin_manual/installation/installation_wizard.html#trusted-domains).

### <a name="dnsAlias"> Using a DNS alias for your Nextcloud service </a>
### Using a DNS alias for your Nextcloud service

The examples above include using a DNS alias (a CNAME record) for your Nextcloud service. If you decide to do that, you may see this warning in the log:

@@ -261,13 +261,13 @@ You can silence the warning by editing the Nextcloud service definition in `dock

Nextcloud traffic is not encrypted. Do **not** expose it to the web by opening a port on your home router. Instead, use a VPN like Wireguard to provide secure access to your home network, and let your remote clients access Nextcloud over the VPN tunnel.

## <a name="healthCheck"> Container health check </a>
## Container health check

A script, or "agent", to assess the health of the MariaDB container has been added to the *local image* via the *Dockerfile*. In other words, the script is specific to IOTstack.

Because it is an instance of MariaDB, Nextcloud_DB inherits the health-check agent. See the [IOTstack MariaDB](MariaDB.md) documentation for more information.

## <a name="updatingNextcloud"> Keeping Nextcloud up-to-date </a>
## Keeping Nextcloud up-to-date

To update the `nextcloud` container:

@@ -290,7 +290,7 @@ $ docker system prune

The first "prune" removes the old *local* image, the second removes the old *base* image.

## <a name="backups"> Backups </a>
## Backups

Nextcloud is currently excluded from the IOTstack-supplied backup scripts due to its potential size.

80 changes: 40 additions & 40 deletions docs/Containers/Node-RED.md

Large diffs are not rendered by default.

18 changes: 9 additions & 9 deletions docs/Containers/Portainer-ce.md
@@ -1,27 +1,27 @@
# Portainer CE

## <a name="references"> References </a>
## References

- [Docker](https://hub.docker.com/r/portainer/portainer-ce/)
- [Website](https://www.portainer.io/portainer-ce/)

## <a name="definitions"> Definition </a>
## Definition

- "#yourip" means any of the following:

- the IP address of your Raspberry Pi (eg `192.168.1.10`)
- the multicast domain name of your Raspberry Pi (eg `iot-hub.local`)
- the domain name of your Raspberry Pi (eg `iot-hub.mydomain.com`)

## <a name="about"> About *Portainer CE* </a>
## About *Portainer CE*

*Portainer CE* (Community Edition) is an application for managing Docker. It is a successor to *Portainer*. According to [the *Portainer CE* documentation](https://www.portainer.io/2020/08/portainer-ce-2-0-what-to-expect/)

> Portainer 1.24.x will continue as a separate code branch, released as portainer/portainer:latest, and will receive ongoing security updates until at least 1st Sept 2021. No new features will be added beyond what was available in 1.24.1.
From that it should be clear that *Portainer* is deprecated and that *Portainer CE* is the way forward.

## <a name="installation"> Installing *Portainer CE* </a>
## Installing *Portainer CE*

Run the menu:

@@ -40,7 +40,7 @@ Ignore any message like this:

> WARNING: Found orphan containers (portainer) for this project …
## <a name="firstRun"> First run of *Portainer CE* </a>
## First run of *Portainer CE*

In your web browser navigate to `#yourip:9000/`:

@@ -51,7 +51,7 @@ From there, you can click on the "Local" group and take a look around. One of th

There are 'Quick actions' to view logs and other stats. This can all be done from terminal commands but *Portainer CE* makes it easier.

## <a name="setPublicIP"> Setting the Public IP address for your end-point </a>
## Setting the Public IP address for your end-point

If you click on a "Published Port" in the "Containers" list, your browser may return an error saying something like "can't connect to server" associated with an IP address of "0.0.0.0".

@@ -79,7 +79,7 @@ Keep in mind that clicking on a "Published Port" does not guarantee that your br

> All things considered, you will get more consistent behaviour if you simply bookmark the URLs you want to use for your IOTstack services.
## <a name="forgotPassword"> If you forget your password </a>
## If you forget your password

If you forget the password you created for *Portainer CE*, you can recover by doing the following:

@@ -92,5 +92,5 @@ $ docker-compose start portainer-ce

Then, follow the steps in:

1. [First run of *Portainer CE*](#firstRun); and
2. [Setting the Public IP address for your end-point](#setPublicIP).
1. [First run of *Portainer CE*](#first-run-of-portainer-ce); and
2. [Setting the Public IP address for your end-point](#setting-the-public-ip-address-for-your-end-point).
54 changes: 27 additions & 27 deletions docs/Containers/Prometheus.md
@@ -1,6 +1,6 @@
# Prometheus

## <a name="references"> References </a>
## References

* [*Prometheus* home](https://prometheus.io)
* *GitHub*:
@@ -15,19 +15,19 @@
- [*CAdvisor*](https://hub.docker.com/r/zcube/cadvisor)
- [*Node Exporter*](https://hub.docker.com/r/prom/node-exporter)

## <a name="overview"> Overview </a>
## Overview

Prometheus is a collection of three containers:

* *Prometheus*
* *CAdvisor*
* *Node Exporter*

The [default configuration](#activeConfig) for *Prometheus* supplied with IOTstack scrapes information from all three containers.
The [default configuration](#active-configuration-file) for *Prometheus* supplied with IOTstack scrapes information from all three containers.

## <a name="installProm"> Installing Prometheus </a>
## Installing Prometheus

### <a name="installPromNewMenu"> *if you are running New Menu …* </a>
### *if you are running New Menu …*

When you select *Prometheus* in the IOTstack menu, you must also select:

@@ -36,15 +36,15 @@ When you select *Prometheus* in the IOTstack menu, you must also select:

If you do not select all three containers, Prometheus will not start.

### <a name="installPromOldMenu"> *if you are running Old Menu …* </a>
### *if you are running Old Menu …*

When you select *Prometheus* in the IOTstack menu, the service definition includes the three containers:

* *Prometheus*
* *CAdvisor*
* *Node Exporter*

## <a name="significantFiles"> Significant directories and files </a>
## Significant directories and files

```
~/IOTstack
@@ -75,25 +75,25 @@ When you select *Prometheus* in the IOTstack menu, the service definition includ
5. The *working service definition* (only relevant to old-menu, copied from ❶).
6. The *Compose* file (includes ❶).
7. The *persistent storage area*.
8. The [configuration directory](#configDir).
8. The [configuration directory](#configuration-directory).

## <a name="howPrometheusIOTstackGetsBuilt"> How *Prometheus* gets built for IOTstack </a>
## How *Prometheus* gets built for IOTstack

### <a name="githubSourceCode"> *Prometheus* source code ([*GitHub*](https://github.com)) </a>
### *Prometheus* source code ([*GitHub*](https://github.com))

The source code for *Prometheus* lives at [*GitHub* prometheus/prometheus](https://github.com/prometheus/prometheus).

### <a name="dockerHubImages"> *Prometheus* images ([*DockerHub*](https://hub.docker.com)) </a>
### *Prometheus* images ([*DockerHub*](https://hub.docker.com))

Periodically, the source code is recompiled and the resulting image is pushed to [prom/prometheus](https://hub.docker.com/r/prom/prometheus) on *DockerHub*.

### <a name="iotstackMenu"> IOTstack menu </a>
### IOTstack menu

When you select *Prometheus* in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under old menu, it is also copied to the *working service definition* and then not really used.
### <a name="iotstackFirstRun"> IOTstack first run </a>
### IOTstack first run

On a first install of IOTstack, you run the menu, choose *Prometheus* as one of your containers, and are told to do this:

@@ -124,7 +124,7 @@ The *Dockerfile* begins with:
FROM prom/prometheus:latest
```

> If you need to pin to a particular version of *Prometheus*, the *Dockerfile* is the place to do it. See [*Prometheus* version pinning](#versionPinning).
> If you need to pin to a particular version of *Prometheus*, the *Dockerfile* is the place to do it. See [*Prometheus* version pinning](#prometheus-version-pinning).
The `FROM` statement tells the build process to pull down the ***base image*** from [*DockerHub*](https://hub.docker.com).

@@ -156,15 +156,15 @@ prom/prometheus latest 3f9575991a6c 3 days ago 169MB

You will see the same pattern in Portainer, which reports the *base image* as "unused". You should not remove the *base* image, even though it appears to be unused.

### <a name="dependencies"> Dependencies: *CAdvisor* and *Node Exporter* </a>
### Dependencies: *CAdvisor* and *Node Exporter*

The *CAdvisor* and *Node Exporter* are included in the *Prometheus* service definition as dependent containers. What that means is that each time you start *Prometheus*, `docker-compose` ensures that *CAdvisor* and *Node Exporter* are already running, and keeps them running.

The [default configuration](#activeConfig) for *Prometheus* assumes *CAdvisor* and *Node Exporter* are running and starts scraping information from those targets as soon as it launches.
The [default configuration](#active-configuration-file) for *Prometheus* assumes *CAdvisor* and *Node Exporter* are running and starts scraping information from those targets as soon as it launches.

## <a name="configuringPrometheus"> Configuring **Prometheus** </a>
## Configuring **Prometheus**

### <a name="configDir"> Configuration directory </a>
### Configuration directory

The configuration directory for the IOTstack implementation of *Prometheus* is at the path:

@@ -179,9 +179,9 @@ That directory contains two files:

If you delete either file, *Prometheus* will replace it with a default the next time the container starts. This "self-repair" function is intended to provide reasonable assurance that *Prometheus* will at least **start** instead of going into a restart loop.

Unless you [decide to change it](#environmentVars), the `config` folder and its contents are owned by "pi:pi". This means you can edit the files in the configuration directory without needing the `sudo` command. Ownership is enforced each time the container restarts.
Unless you [decide to change it](#environment-variables), the `config` folder and its contents are owned by "pi:pi". This means you can edit the files in the configuration directory without needing the `sudo` command. Ownership is enforced each time the container restarts.

#### <a name="activeConfig"> Active configuration file </a>
#### Active configuration file

The file named `config.yml` is the active configuration. This is the file you should edit if you want to make changes. The default structure of the file is:

@@ -211,7 +211,7 @@ Note:

* The YAML parser used by *Prometheus* seems to be ***exceptionally*** sensitive to syntax errors (far less tolerant than `docker-compose`). For this reason, you should **always** check the *Prometheus* log after any configuration change.

#### <a name="referenceConfig"> Reference configuration file </a>
#### Reference configuration file

The file named `prometheus.yml` is a reference configuration. It is a **copy** of the original configuration file that ships inside the *Prometheus* container at the path:

@@ -229,7 +229,7 @@ $ docker-compose restart prometheus
$ docker logs prometheus
```

### <a name="environmentVars"> Environment variables </a>
### Environment variables

The IOTstack implementation of *Prometheus* supports two environment variables:

@@ -239,11 +239,11 @@ environment:
- IOTSTACK_GID=1000
```
Those variables control ownership of the [Configuration directory](#configDir) and its contents. Those environment variables are present in the standard IOTstack service definition for *Prometheus* and have the effect of assigning ownership to "pi:pi".
Those variables control ownership of the [Configuration directory](#configuration-directory) and its contents. Those environment variables are present in the standard IOTstack service definition for *Prometheus* and have the effect of assigning ownership to "pi:pi".
If you delete those environment variables from your *Compose* file, the [Configuration directory](#configDir) will be owned by "nobody:nobody"; otherwise the directory and its contents will be owned by whatever values you pass for those variables.
If you delete those environment variables from your *Compose* file, the [Configuration directory](#configuration-directory) will be owned by "nobody:nobody"; otherwise the directory and its contents will be owned by whatever values you pass for those variables.
### <a name="migration"> Migration considerations </a>
### Migration considerations
Under the original IOTstack implementation of *Prometheus* (just "as it comes" from *DockerHub*), the service definition expected the configuration file to be at:
@@ -274,7 +274,7 @@ Note:

* The YAML parser used by *Prometheus* is very sensitive to syntax errors. Always check the *Prometheus* log after any configuration change.

## <a name="upgradingPrometheus"> Upgrading *Prometheus* </a>
## Upgrading *Prometheus*

You can update `cadvisor` and `nodeexporter` like this:

@@ -316,7 +316,7 @@ Your existing *Prometheus* container continues to run while the rebuild proceeds

The `prune` is the simplest way of cleaning up. The first call removes the old *local image*. The second call cleans up the old *base image*.

### <a name="versionPinning"> *Prometheus* version pinning </a>
### *Prometheus* version pinning

If you need to pin *Prometheus* to a particular version:
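
The pinning steps are elided here, but the general idea (the version number and Dockerfile path are illustrative) is to change the `FROM prom/prometheus:latest` line in the Prometheus Dockerfile and rebuild:

```
$ cd ~/IOTstack
$ sed -i 's|^FROM prom/prometheus:.*|FROM prom/prometheus:v2.33.4|' ./.templates/prometheus/Dockerfile
$ docker-compose up -d --build prometheus
```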

42 changes: 21 additions & 21 deletions docs/Containers/Python.md
@@ -1,12 +1,12 @@
# Python

## <a name="references"> references </a>
## references

* [Python.org](https://www.python.org)
* [Dockerhub image library](https://hub.docker.com/_/python)
* [GitHub docker-library/python](https://github.com/docker-library/python)

## <a name="menuPython"> selecting Python in the IOTstack menu </a>
## selecting Python in the IOTstack menu

When you select Python in the menu:

@@ -48,7 +48,7 @@ When you select Python in the menu:

* This service definition is for "new menu" (master branch). The only difference with "old menu" (old-menu branch) is the omission of the last two lines.

### <a name="customisingPython"> customising your Python service definition </a>
### customising your Python service definition

The service definition contains a number of customisation points:

@@ -76,7 +76,7 @@ $ cd ~/IOTstack
$ docker-compose up -d python
```

## <a name="firstLaunchPython"> Python - first launch </a>
## Python - first launch

After running the menu, you are told to run the commands:

@@ -145,7 +145,7 @@ This is what happens:

Pressing <kbd>control</kbd>+<kbd>c</kbd> terminates the log display but does not terminate the running container.

## <a name="stopPython"> stopping the Python service </a>
## stopping the Python service

To stop the container from running, either:

@@ -163,7 +163,7 @@ To stop the container from running, either:
$ docker-compose rm --force --stop -v python
```

## <a name="startPython"> starting the Python service </a>
## starting the Python service

To bring up the container again after you have stopped it, either:

@@ -181,23 +181,23 @@ To bring up the container again after you have stopped it, either:
$ docker-compose up -d python
```

## <a name="reLaunchPython"> Python - second-and-subsequent launch </a>
## Python - second-and-subsequent launch

Each time you launch the Python container *after* the first launch:

1. The existing local image (`iotstack_python`) is instantiated to become the running container.
2. The `docker-entrypoint.sh` script runs and performs "self-repair" by replacing any files that have gone missing from the persistent storage area. Self-repair does **not** overwrite existing files!
3. The `app.py` Python script is run.

## <a name="debugging"> when things go wrong - check the log </a>
## when things go wrong - check the log

If the container misbehaves, the log is your friend:

```
$ docker logs python
```

## <a name="yourPythonScript"> project development life-cycle </a>
## project development life-cycle

It is **critical** that you understand that **all** of your project development should occur within the folder:

@@ -207,7 +207,7 @@ It is **critical** that you understand that **all** of your project development

So long as you are performing some sort of routine backup (either with a supplied script or a third party solution like [Paraphraser/IOTstackBackup](https://github.com/Paraphraser/IOTstackBackup)), your work will be protected.

### <a name="gettingStarted"> getting started </a>
### getting started

Start by editing the file:

@@ -228,7 +228,7 @@ $ cd ~/IOTstack
$ docker-compose restart python
```

### <a name="persistentStorage"> reading and writing to disk </a>
### reading and writing to disk

Consider this line in the service definition:

@@ -255,7 +255,7 @@ What it means is that:

If your script writes into any other directory inside the container, the data will be lost when the container re-launches.

### <a name="cleanSlate"> getting a clean slate </a>
### getting a clean slate

If you make a mess of things and need to start from a clean slate, erase the persistent storage area:

@@ -268,7 +268,7 @@ $ docker-compose up -d python

The container will re-initialise the persistent storage area from its defaults.

### <a name="addingPackages"> adding packages </a>
### adding packages

As you develop your project, you may find that you need to add supporting packages. For this example, we will assume you want to add "[Flask](https://pypi.org/project/Flask/)" and "[beautifulsoup4](https://pypi.org/project/beautifulsoup4/)".
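
A quick way to trial extra packages inside the running container (a sketch only; the steps elided from this hunk may differ) is:

```
$ docker exec python pip3 install Flask beautifulsoup4
```

Packages added this way are typically lost when the container is re-created, which is why the steps below bake them into `requirements.txt`.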

@@ -322,7 +322,7 @@ To make *Flask* and *beautifulsoup4* a permanent part of your container:
Flask==2.0.1
```

5. Continue your development work by returning to [getting started](#gettingStarted).
5. Continue your development work by returning to [getting started](#getting-started).

Note:

@@ -346,11 +346,11 @@ Note:

The `requirements.txt` file will be recreated and it will be a copy of the version in the *services* directory as of the last image rebuild.

### <a name="scriptBaking"> making your own Python script the default </a>
### making your own Python script the default

Suppose the Python script you have been developing reaches a major milestone and you decide to "freeze dry" your work up to that point so that it becomes the default when you ask for a [clean slate](#cleanSlate). Proceed like this:
Suppose the Python script you have been developing reaches a major milestone and you decide to "freeze dry" your work up to that point so that it becomes the default when you ask for a [clean slate](#getting-a-clean-slate). Proceed like this:

1. If you have added any packages by following the steps in [adding packages](#addingPackages), run the following command:
1. If you have added any packages by following the steps in [adding packages](#adding-packages), run the following command:

```bash
$ docker exec python bash -c 'pip3 freeze >requirements.txt'
@@ -412,11 +412,11 @@ Suppose the Python script you have been developing reaches a major milestone and
$ docker system prune -f
```

### <a name="scriptCanning"> canning your project </a>
### canning your project

Suppose your project has reached the stage where you wish to put it into production as a service under its own name. Make two further assumptions:

1. You have gone through the steps in [making your own Python script the default](#scriptBaking) and you are **certain** that the content of `./services/python/app` correctly captures your project.
1. You have gone through the steps in [making your own Python script the default](#making-your-own-python-script-the-default) and you are **certain** that the content of `./services/python/app` correctly captures your project.
2. You want to give your project the name "wishbone".

Proceed like this:
@@ -479,7 +479,7 @@ Remember:
~/IOTstack/volumes/wishbone/app
```

## <a name="routineMaintenance"> routine maintenance </a>
## routine maintenance

To make sure you are running from the most-recent **base** image of Python from Dockerhub:
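
The elided commands follow the same rebuild pattern used elsewhere in IOTstack; a sketch is:

```
$ cd ~/IOTstack
$ docker-compose build --no-cache --pull python
$ docker-compose up -d python
$ docker system prune -f
$ docker system prune -f
```

The second `prune` is there because, as noted below, the old base image can only be removed after the old local image has gone.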

@@ -503,4 +503,4 @@ The old base image can't be removed until the old local image has been removed,

Note:

* If you have followed the steps in [canning your project](#scriptCanning) and your service has a name other than `python`, just substitute the new name where you see `python` in the two `docker-compose` commands.
* If you have followed the steps in [canning your project](#canning-your-project) and your service has a name other than `python`, just substitute the new name where you see `python` in the two `docker-compose` commands.
52 changes: 26 additions & 26 deletions docs/Containers/Telegraf.md
@@ -7,13 +7,13 @@ The purpose of the Dockerfile is to:
* tailor the default configuration to be IOTstack-ready; and
* enable the container to perform self-repair if essential elements of the persistent storage area disappear.

## <a name="references"> References </a>
## References

- [*influxdata Telegraf* home](https://www.influxdata.com/time-series-platform/telegraf/)
- [*GitHub*: influxdata/influxdata-docker/telegraf](https://github.com/influxdata/influxdata-docker/tree/master/telegraf)
- [*DockerHub*: influxdata Telegraf](https://hub.docker.com/_/telegraf)

## <a name="significantFiles"> Significant directories and files </a>
## Significant directories and files

```
~/IOTstack
@@ -38,34 +38,34 @@ The purpose of the Dockerfile is to:

1. The *Dockerfile* used to customise Telegraf for IOTstack.
2. A replacement for the `telegraf` container script of the same name, extended to handle container self-repair.
3. The *additions folder*. See [Applying optional additions](#optionalAdditions).
3. The *additions folder*. See [Applying optional additions](#applying-optional-additions).
4. The *auto_include folder*. Additions automatically applied to
`telegraf.conf`. See [Automatic includes to telegraf.conf](#autoInclude).
`telegraf.conf`. See [Automatic includes to telegraf.conf](#automatic-includes-to-telegrafconf).
5. The *template service definition*.
6. The *working service definition* (only relevant to old-menu, copied from ❹).
7. The *persistent storage area* for the `telegraf` container.
8. A working copy of the *additions folder* (copied from ❸). See [Applying optional additions](#optionalAdditions).
9. The *reference configuration file*. See [Changing Telegraf's configuration](#editConfiguration).
8. A working copy of the *additions folder* (copied from ❸). See [Applying optional additions](#applying-optional-additions).
9. The *reference configuration file*. See [Changing Telegraf's configuration](#changing-telegrafs-configuration).
10. The *active configuration file*. A subset of ➒ altered to support communication with InfluxDB running in a container in the same IOTstack instance.

Everything in the persistent storage area ❼:

* will be replaced if it is not present when the container starts; but
* will never be overwritten if altered by you.

## <a name="howTelegrafIOTstackGetsBuilt"> How Telegraf gets built for IOTstack </a>
## How Telegraf gets built for IOTstack

### <a name="dockerHubImages"> Telegraf images ([*DockerHub*](https://hub.docker.com)) </a>
### Telegraf images ([*DockerHub*](https://hub.docker.com))

Periodically, the source code is recompiled and the resulting image is pushed to [influxdata Telegraf](https://hub.docker.com/_/telegraf?tab=tags&page=1&ordering=last_updated) on *DockerHub*.

### <a name="iotstackMenu"> IOTstack menu </a>
### IOTstack menu

When you select Telegraf in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under old menu, it is also copied to the *working service definition* and then not really used.
### <a name="iotstackFirstRun"> IOTstack first run </a>
### IOTstack first run

On a first install of IOTstack, you run the menu, choose your containers, and are told to do this:

@@ -74,7 +74,7 @@ $ cd ~/IOTstack
$ docker-compose up -d
```

> See also the [Migration considerations](#migration) (below).
> See also the [Migration considerations](#migration-considerations) (below).
`docker-compose` reads the *Compose* file. When it arrives at the `telegraf` fragment, it finds:

@@ -99,7 +99,7 @@ The *Dockerfile* begins with:
FROM telegraf:latest
```

> If you need to pin to a particular version of Telegraf, the *Dockerfile* is the place to do it. See [Telegraf version pinning](#versionPinning).
> If you need to pin to a particular version of Telegraf, the *Dockerfile* is the place to do it. See [Telegraf version pinning](#telegraf-version-pinning).
The `FROM` statement tells the build process to pull down the ***base image*** from [*DockerHub*](https://hub.docker.com).

@@ -132,7 +132,7 @@ telegraf latest a721ac170fad 3 days ago 273MB

You will see the same pattern in *Portainer*, which reports the ***base image*** as "unused". You should not remove the ***base*** image, even though it appears to be unused.

### <a name="migration"> Migration considerations </a>
### Migration considerations

Under the original IOTstack implementation of Telegraf (just "as it comes" from *DockerHub*), the service definition expected `telegraf.conf` to be at:

@@ -152,9 +152,9 @@ With one exception, all prior and current versions of the default configuration

> In other words, once you strip away comments and blank lines, and remove any "active" configuration options that simply repeat their default setting, you get the same subset of "active" configuration options. The default configuration file supplied with gcgarner/IOTstack is available [here](https://github.com/gcgarner/IOTstack/blob/master/.templates/telegraf/telegraf.conf) if you wish to refer to it.
The exception is `[[inputs.mqtt_consumer]]` which is now provided as an optional addition. If your existing Telegraf configuration depends on that input, you will need to apply it. See [applying optional additions](#optionalAdditions).
The exception is `[[inputs.mqtt_consumer]]` which is now provided as an optional addition. If your existing Telegraf configuration depends on that input, you will need to apply it. See [applying optional additions](#applying-optional-additions).

## <a name="logging"> Logging </a>
## Logging

You can inspect Telegraf's log by:

@@ -164,7 +164,7 @@ $ docker logs telegraf

These logs are ephemeral and will disappear when your Telegraf container is rebuilt.

### <a name="logTelegrafDB"> log message: *database "telegraf" creation failed* </a>
### log message: *database "telegraf" creation failed*

The following log message can be misleading:

@@ -176,7 +176,7 @@ If InfluxDB is not running when Telegraf starts, the `depends_on:` clause in Tel

What this error message *usually* means is that Telegraf has tried to communicate with InfluxDB before the latter is ready to accept connections. Telegraf typically retries after a short delay and is then able to communicate with InfluxDB.

## <a name="editConfiguration"> Changing Telegraf's configuration </a>
## Changing Telegraf's configuration

The first time you launch the Telegraf container, the following structure will be created in the persistent storage area:

@@ -202,7 +202,7 @@ The file:
- is created by removing all comment lines and blank lines from `telegraf-reference.conf`, leaving only the "active" configuration options, and then adding options necessary for IOTstack.
- is less than 30 lines and is significantly easier to understand than `telegraf-reference.conf`.

* `inputs.docker.conf` – see [Applying optional additions](#optionalAdditions) below.
* `inputs.docker.conf` – see [Applying optional additions](#applying-optional-additions) below.

The intention of this structure is that you:

@@ -217,7 +217,7 @@ $ cd ~/IOTstack
$ docker-compose restart telegraf
```

### <a name="autoInclude"> Automatic includes to telegraf.conf </a>
### Automatic includes to telegraf.conf

* `inputs.docker.conf` instructs Telegraf to collect metrics from Docker. Requires kernel control
groups to be enabled to collect memory usage data. If not done during initial installation,
@@ -227,9 +227,9 @@ $ docker-compose restart telegraf
```
* `inputs.cpu_temp.conf` collects CPU temperature.

### <a name="optionalAdditions"> Applying optional additions </a>
### Applying optional additions

The *additions folder* (see [Significant directories and files](#significantFiles)) is a mechanism for additional *IOTstack-ready* configuration options to be provided for Telegraf.
The *additions folder* (see [Significant directories and files](#significant-directories-and-files)) is a mechanism for additional *IOTstack-ready* configuration options to be provided for Telegraf.

Currently there is one addition:

@@ -247,9 +247,9 @@ $ docker-compose restart telegraf

The `grep` strips comment lines and the `sudo tee` is a safe way of appending the result to `telegraf.conf`. The `restart` causes Telegraf to notice the change.
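
The full command is not shown in this hunk; a sketch (the addition's file name and the paths are assumptions) looks like:

```
$ cd ~/IOTstack
$ grep -v '^[[:space:]]*#' ./volumes/telegraf/additions/inputs.mqtt_consumer.conf | sudo tee -a ./volumes/telegraf/telegraf.conf
$ docker-compose restart telegraf
```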

## <a name="cleanSlate"> Getting a clean slate </a>
## Getting a clean slate

### <a name="resetDB"> Erasing the persistent storage area </a>
### Erasing the persistent storage area

Erasing Telegraf's persistent storage area triggers self-healing and restores known defaults:

@@ -270,7 +270,7 @@ Note:
$ docker-compose restart telegraf
```

### <a name="resetDB"> Resetting the InfluxDB database </a>
### Resetting the InfluxDB database

To reset the InfluxDB database that Telegraf writes into, proceed like this:

@@ -291,7 +291,7 @@ In words:
* Delete the `telegraf` database, and then exit the CLI.
* Start the Telegraf container. This re-creates the database automatically.
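
Putting that together, a hedged sketch of the whole sequence (the stop/start steps are assumptions) is:

```
$ cd ~/IOTstack
$ docker-compose stop telegraf
$ docker exec -it influxdb influx
> drop database telegraf
> exit
$ docker-compose start telegraf
```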

## <a name="upgradingTelegraf"> Upgrading Telegraf </a>
## Upgrading Telegraf

You can update most containers like this:

@@ -333,7 +333,7 @@ Your existing Telegraf container continues to run while the rebuild proceeds. On

The `prune` is the simplest way of cleaning up. The first call removes the old ***local image***. The second call cleans up the old ***base image***.

### <a name="versionPinning"> Telegraf version pinning </a>
### Telegraf version pinning

If you need to pin Telegraf to a particular version:

64 changes: 32 additions & 32 deletions docs/Containers/WireGuard.md
@@ -11,27 +11,27 @@ Assumptions:
* These instructions assume that you have privileges to configure your network's gateway (router). If you are not able to make changes to your network's firewall settings, then you will not be able to finish this setup.
* In common with most VPN technologies, WireGuard assumes that the WAN side of your network's gateway has a public IP address which is reachable directly. WireGuard may not work if that assumption does not hold. If you strike this problem, you have to take it up with your ISP.

## <a name="installWireguard"> Installing WireGuard under IOTstack </a>
## Installing WireGuard under IOTstack

You increase your chances of a trouble-free installation by performing the installation steps in the following order.

### <a name="updateRaspbian"> Step 1: Update your Raspberry Pi OS </a>
### Step 1: Update your Raspberry Pi OS

To be able to run WireGuard successfully, your Raspberry Pi needs to be **fully** up-to-date. If you want to understand why, see [the read only flag](#readOnlyFlag).
To be able to run WireGuard successfully, your Raspberry Pi needs to be **fully** up-to-date. If you want to understand why, see [the read only flag](#the-read-only-flag).

```bash
$ sudo apt update
$ sudo apt upgrade -y
```

### <a name="obtainDDNS"> Step 2: Set up a Dynamic DNS name </a>
### Step 2: Set up a Dynamic DNS name

Before you can use WireGuard (or any VPN solution), you need a mechanism for your remote clients to reach your home router. You have two choices:

1. Obtain a permanent IP address for your home router from your Internet Service Provider (ISP). Approach your ISP if you wish to pursue this option. It generally involves additional charges.
2. Use a Dynamic DNS service. See IOTstack documentation [Accessing your device from the internet](../Basic_setup/Accessing-your-Device-from-the-internet.md). The rest of this documentation assumes you have chosen this option.

### <a name="serviceDefinition"> Step 3: Understand the Service Definition </a>
### Step 3: Understand the Service Definition

This is the service definition *template* that IOTstack uses for WireGuard:

@@ -69,9 +69,9 @@ Key points:
* Everything in the `environment:` section from `SERVERURL=` down to `PEERDNS=` (inclusive) affects WireGuard's generated configurations (the QR codes). In other words, any time you change any of those values, any existing QR codes will stop working.
* WireGuard does not need to communicate directly with other Docker containers so there is no need for it to join `iotstack_nw`.

### <a name="configureWhat"> Step 4: Decide what to configure </a>
### Step 4: Decide what to configure

With most containers, you can continue to tweak environment variables and settings without upsetting the container's basic behaviour. WireGuard is a little different. You really need to think, carefully, about how you want to configure the service before you start. If you change your mind later, you generally have to [start from a clean slate](#cleanSlate).
With most containers, you can continue to tweak environment variables and settings without upsetting the container's basic behaviour. WireGuard is a little different. You really need to think, carefully, about how you want to configure the service before you start. If you change your mind later, you generally have to [start from a clean slate](#getting-a-clean-slate).

#### <a name="configureAlways">Fields that you should always configure </a>

@@ -101,15 +101,15 @@ With most containers, you can continue to tweak environment variables and settin

- Many examples on the web use "PEERS=n" where "n" is a number. In practice, that approach seems to be a little fragile and is not recommended for IOTstack.

#### <a name="configurePeerDNS"> Optional configuration - DNS resolution for peers </a>
#### Optional configuration - DNS resolution for peers

You have several options for how your remote peers resolve DNS requests:

* `PEERDNS=auto`

The default value of `auto` instructs the WireGuard *service* running within the WireGuard *container* to use the same DNS as the WireGuard *container* when resolving requests from connected peers. In practice, that means the *service* directs queries to 127.0.0.11, which Docker intercepts and forwards to whichever resolvers are specified in the Raspberry Pi's `/etc/resolv.conf`.

* <a name="customContInit"> `PEERDNS=auto` with `custom-cont-init` </a>
* `PEERDNS=auto` with `custom-cont-init` <a name="customContInit"></a>

This configuration instructs WireGuard to forward DNS queries from remote peers to any **container** which is listening on port 53. This is the option you will want to choose if you are running an ad-blocking DNS server (eg *PiHole* or *AdGuardHome*) in a container on the same host as WireGuard, and you want your remote clients to obtain DNS resolution via the ad-blocker.

@@ -162,7 +162,7 @@ You have several options for how your remote peers resolve DNS requests:
- PEERDNS=192.168.203.65
```

#### <a name="configurePorts"> Optional configuration - WireGuard ports </a>
#### Optional configuration - WireGuard ports

The WireGuard service definition template follows the convention of using UDP port "51820" in three places. You can leave it like that and it will just work. There is no reason to change the defaults unless you want to.

@@ -189,15 +189,15 @@ Rule #1:

Rule #2:

* The *«public»* port forms part of the QR codes. If you decide to change the *«public»* port after you generate the QR codes, you will have to [start over from a clean slate](#cleanSlate).
* The *«public»* port forms part of the QR codes. If you decide to change the *«public»* port after you generate the QR codes, you will have to [start over from a clean slate](#getting-a-clean-slate).

Rule #3:

* Your router needs to know about both the *«public»* and *«external»* ports so, if you decide to change either of those, you must also reconfigure your router.

See [Understanding WireGuard's port numbers](#understandingPorts) if you want more information on how the various port numbers are used.
See [Understanding WireGuard's port numbers](#understanding-wireguards-port-numbers) if you want more information on how the various port numbers are used.

### <a name="configureWireGuard"> Step 5: Configure WireGuard </a>
### Step 5: Configure WireGuard

There are two approaches:

@@ -206,7 +206,7 @@ There are two approaches:

Of the two, the first is generally the simpler and means you don't have to re-run the menu whenever you want to change WireGuard's configuration.

#### <a name="editCompose"> Method 1: Configure WireGuard by editing `docker-compose.yml` </a>
#### Method 1: Configure WireGuard by editing `docker-compose.yml`

1. Run the menu:

@@ -221,10 +221,10 @@ Of the two, the first is generally the simpler and means you don't have to re-ru
5. Choose Exit.
6. Open `docker-compose.yml` in your favourite text editor.
7. Navigate to the WireGuard service definition.
8. Implement the decisions you took in [decide what to configure](#configureWhat).
8. Implement the decisions you took in [decide what to configure](#step-4-decide-what-to-configure).
9. Save your work.

#### <a name="editOverride"> Method 2: Configure WireGuard using `compose-override.yml` </a>
#### Method 2: Configure WireGuard using `compose-override.yml`

The [Custom services and overriding default settings for IOTstack](../Basic_setup/Custom.md) page describes how to use an override file to allow the menu to incorporate your custom configurations into the final `docker-compose.yml` file.

@@ -236,7 +236,7 @@ You will need to create the `compose-override.yml` **before** running the menu t
~/IOTstack/compose-override.yml
```

2. Define overrides to implement the decisions you took in [Decide what to configure](#configureWhat). For example:
2. Define overrides to implement the decisions you took in [Decide what to configure](#step-4-decide-what-to-configure). For example:

```yml
services:
@@ -277,7 +277,7 @@ You will need to create the `compose-override.yml` **before** running the menu t

and verify that the `wireguard` service definition is as you expect.

### <a name="startWireGuard"> Step 6: Start WireGuard </a>
### Step 6: Start WireGuard

1. To start WireGuard, bring up your stack:

@@ -298,7 +298,7 @@ You will need to create the `compose-override.yml` **before** running the menu t
$ docker logs wireguard
```

See also discussion of [the read-only flag](#readOnlyFlag).
See also discussion of [the read-only flag](#the-read-only-flag).

3. Confirm that WireGuard has generated the expected configurations. For example, given the following setting in `docker-compose.yml`:

@@ -341,7 +341,7 @@ You will need to create the `compose-override.yml` **before** running the menu t

Notice how each element in the `PEERS=` list is represented by a sub-directory prefixed with `peer_`. You should expect the same pattern for your peers.

### <a name="clientQRcodes"> Step 7: Save your WireGuard client configuration files (QR codes) </a>
### Step 7: Save your WireGuard client configuration files (QR codes)

The first time you launch WireGuard, it generates cryptographically protected configurations for your remote clients and encapsulates those configurations in QR codes. You can see the QR codes by running:
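
The command is elided from this hunk; in practice (a sketch) the QR codes appear in the container's log:

```
$ docker logs wireguard
```

The matching `.png` files are saved in WireGuard's persistent storage area, one per peer.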

@@ -382,7 +382,7 @@ In this case:

Keep in mind that each QR code contains everything needed for **any** device to access your home network via WireGuard. Treat your `.png` files as "sensitive documents".

### <a name="routerNATConfig"> Step 8: Configure your router with a NAT rule </a>
### Step 8: Configure your router with a NAT rule

A typical home network will have a firewall that effectively blocks all incoming attempts from the Internet to open a new connection with a device on your network.

@@ -427,7 +427,7 @@ A typical configuration process goes something like this:
* *Public Port* or *External Port* needs to be the value you chose for «public» in the WireGuard service definition (51820 if you didn't change it).
* *Service Name* (or *Service Type*) is typically a text field, an editable menu (where you can either make a choice or type your own value), or a button approximating an editable menu. If you are given the option of choosing "WireGuard", do that, otherwise just type that name into the field. It has no significance other than reminding you what the rule is for.

### <a name="configureClients"> Step 9: Configure your remote WireGuard clients </a>
### Step 9: Configure your remote WireGuard clients

This is a massive topic and one which is well beyond the scope of this guide. You really will have to work it out for yourself. Start by Googling:

@@ -443,7 +443,7 @@ For portable devices (eg iOS and Android) it usually boils down to:
4. Point the device's camera at the QR code.
5. Follow your nose.

## <a name="understandingPorts"> Understanding WireGuard's port numbers </a>
## Understanding WireGuard's port numbers

Here's a concrete example configuration using three different port numbers:

@@ -466,7 +466,7 @@ You also need to make a few assumptions:
1. The host running the remote WireGuard client (eg a mobile phone with the WireGuard app installed) has been allocated the IP address 55.66.77.88 when it connected to the Internet over 3G/4G/5G.
2. When the remote WireGuard client initiated the session, it chose UDP port 44524 as its source port. The actual number chosen is (essentially) random and only significant to the client.
3. Your Internet Service Provider allocated the IP address 12.13.14.15 to the WAN side of your router.
4. You have done all the steps in [Set up a Dynamic DNS name](#obtainDDNS) and your WAN IP address (12.13.14.15) is being propagated to your Dynamic DNS service provider.
4. You have done all the steps in [Set up a Dynamic DNS name](#step-2-set-up-a-dynamic-dns-name) and your WAN IP address (12.13.14.15) is being propagated to your Dynamic DNS service provider.

Here's a reference model to help explain what occurs:

@@ -494,9 +494,9 @@ Even if you use port 51820 everywhere (the default), all this Network Address Tr

This model is a slight simplification because the remote client may also be also operating behind a router performing Network Address Translation. It is just easier to understand the basic concepts if you assume the remote client has a publicly-routable IP address.

## <a name="debugging"> Debugging techniques </a>
## Debugging techniques

### <a name="tcpdumpExternal"> Monitor WireGuard traffic between your router and your Raspberry Pi </a>
### Monitor WireGuard traffic between your router and your Raspberry Pi

If `tcpdump` is not installed on your Raspberry Pi, you can install it by:

@@ -512,7 +512,7 @@ $ sudo tcpdump -i eth0 -n udp port «external»

Press <kbd>ctrl</kbd><kbd>c</kbd> to terminate the capture.

### <a name="tcpdumpInternal"> Monitor WireGuard traffic between your Raspberry Pi and the WireGuard container </a>
### Monitor WireGuard traffic between your Raspberry Pi and the WireGuard container

First, you need to add `tcpdump` to the container. You only need to do this once per debugging session. The package will remain in place until the next time you re-create the container.

@@ -528,7 +528,7 @@ $ docker exec wireguard tcpdump -i eth0 -n udp port «internal»

Press <kbd>ctrl</kbd><kbd>c</kbd> to terminate the capture.

### <a name="listenExternal"> Is Docker listening on the Raspberry Pi's «external» port? </a>
### Is Docker listening on the Raspberry Pi's «external» port?

```bash
$ PORT=«external»; sudo nmap -sU -p $PORT 127.0.0.1 | grep "$PORT/udp"
@@ -541,7 +541,7 @@ There will be a short delay. The expected answer is either:

Success implies that the container is also listening.

### <a name="listenPublic"> Is your router listening on the «public» port? </a>
### Is your router listening on the «public» port?

```bash
$ PORT=«public»; sudo nmap -sU -p $PORT downunda.duckdns.org | grep "$PORT/udp"
@@ -552,7 +552,7 @@ There will be a short delay. The expected answer is either:
* `«public»/udp open|filtered unknown` = router is listening
* `«public»/udp closed unknown` = router is not listening

## <a name="readOnlyFlag"> The read-only flag </a>
## The read-only flag

The `:ro` at the end of the following line in WireGuard's service definition means "read only":

@@ -568,7 +568,7 @@ Writing into `/lib/modules` is not needed on a Raspberry Pi, providing that Rasp

If WireGuard refuses to install and you have good reason to suspect that WireGuard may be trying to write to `/lib/modules` then you can *consider* removing the `:ro` flag and re-trying. Just be aware that WireGuard will likely be modifying your operating system.

## <a name="pullWireguard"> Updating WireGuard </a>
## Updating WireGuard

To update the WireGuard container:

@@ -584,7 +584,7 @@ $ docker-compose up -d wireguard
$ docker system prune
```

## <a name="cleanSlate"> Getting a clean slate </a>
## Getting a clean slate

If WireGuard misbehaves, you can start over from a clean slate. You *may* also need to do this if you change any of the following environment variables:

2 changes: 2 additions & 0 deletions mkdocs.yml
@@ -52,3 +52,5 @@ plugins:
markdown_extensions:
  - admonition
  - pymdownx.superfences
  - toc:
      permalink: true