Documentation fixes and improvements #466

Merged Mar 4, 2022 (25 commits)
0a4012f
Grafana instructions for adding influxdb datasource
ukkopahis Dec 12, 2021
f897bf1
Pi-hole: docs to setup DNS for esphome devices
ukkopahis Dec 16, 2021
c06c08a
Fix docs on how to update containers
ukkopahis Dec 17, 2021
9e97ee3
docs: fix syntax and cleanup
ukkopahis Dec 11, 2021
2459ef9
docs: move developer documentation to subfolder
ukkopahis Jan 15, 2022
067995b
docs: add dark and light theme
Willem-Dekker Jul 12, 2020
a81573f
docs: fix unsupported absolute links
ukkopahis Jan 15, 2022
1eacd40
docs: Add how to write documentation
ukkopahis Jan 16, 2022
6be71a5
docs: Add top navigation tabs
ukkopahis Jan 25, 2022
1fc5105
docs: autogenerate heading link anchors
ukkopahis Jan 25, 2022
d38a122
docs: keep top tabs always visible and hide footer
ukkopahis Jan 25, 2022
b05029c
homeassistant: add docs for https reverse proxy setup
ukkopahis Jan 20, 2022
118648d
docs: fix to reflect network change
ukkopahis Jan 29, 2022
0d9b982
Wireguard: better document how PEERDNS works with host resolv.conf
ukkopahis Jan 29, 2022
4f52cf0
docs: fix container menu order
ukkopahis Jan 30, 2022
c614c20
influxdb: document basic usage
ukkopahis Feb 2, 2022
383d213
Merge remote-tracking branch 'upstream/master' into HEAD
ukkopahis Feb 24, 2022
a15ae1f
Pi-hole: improve docs
Paraphraser Feb 18, 2022
6e499db
Octoprint: change doc to use shorter menu title
ukkopahis Feb 24, 2022
40d17ec
docs: fix edit_uri
ukkopahis Feb 24, 2022
519aaee
docs: define mkdocs dependencies in requirements-mkdocs.txt
ukkopahis Feb 24, 2022
179c633
docs: add "stack" logo and favicon
ukkopahis Feb 24, 2022
fd0340c
docs: improve Wiki home page friendliness
ukkopahis Feb 23, 2022
3f9bcea
docs: move Updates/ from subfolder to top-level tab
ukkopahis Feb 24, 2022
4d69183
docs: improve "Getting Started"
ukkopahis Feb 24, 2022
13 changes: 6 additions & 7 deletions .github/workflows/main.yml
@@ -9,10 +9,9 @@ jobs:
name: Deploy docs
runs-on: ubuntu-latest
steps:
- name: Checkout master
uses: actions/checkout@v1

- name: Deploy docs
uses: mhausenblas/mkdocs-deploy-gh-pages@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.x
- run: pip3 install -r requirements-mkdocs.txt
- run: mkdocs gh-deploy --force
7 changes: 4 additions & 3 deletions .templates/wireguard/use-container-dns.sh
@@ -1,8 +1,9 @@
# Forward DNS requests from remote WireGuard clients to the default
# gateway on the internal bridged network that the WireGuard container
# is attached to. This results in queries being sent to any other
# container on the same internal bridged network that is listening
# on port 53 (eg PiHole, AdGuardHome or bind9).
# is attached to. The gateway routes queries out from the bridged network to
# the host's network. This results in queries being sent to any daemon or
# container that is listening on host port 53 (eg PiHole, AdGuardHome, dnsmasq
# or bind9).
#
# Acknowledgement: @ukkopahis

@@ -6,14 +6,14 @@ From time to time the IP address that your ISP assigns changes and it's difficul

Secondly, how do you get into your home network? Your router has a firewall that is designed to keep the rest of the internet out of your network to protect you. The solution to that is a Virtual Private Network (VPN) or "tunnel".

## <a name="dynamicDNS"> Dynamic DNS </a>
## Dynamic DNS

There are two parts to a Dynamic DNS service:

1. You have to register with a Dynamic DNS service provider and obtain a domain name that is not already taken by someone else.
2. Something on your side of the network needs to propagate updates so that your chosen domain name remains in sync with your router's dynamically-allocated public IP address.

### <a name="registerDDNS"> Register with a Dynamic DNS service provider </a>
### Register with a Dynamic DNS service provider

The first part is fairly simple and there are quite a few Dynamic DNS service providers including:

@@ -24,7 +24,7 @@ The first part is fairly simple and there are quite a few Dynamic DNS service pr
Some router vendors also provide their own built-in Dynamic DNS capabilities for registered customers so it's a good idea to check your router's capabilities before you plough ahead.

### <a name="propagateDDNS"> Dynamic DNS propagation </a>
### Dynamic DNS propagation

The "something" on your side of the network propagating WAN IP address changes can be either:

@@ -39,7 +39,7 @@ A behind-the-router technique usually relies on sending updates according to a s

> This seems to be a problem for DuckDNS which takes a beating because almost every person using it is sending an update bang-on five minutes.
### <a name="duckDNSclient"> DuckDNS client </a>
### DuckDNS client

IOTstack provides a solution for DuckDNS. The best approach to running it is:

@@ -99,7 +99,7 @@ A null result indicates failure so check your work.

Remember, the Domain Name System is a *distributed* database. It takes *time* for changes to propagate. The response you get from directing a query to ns1.duckdns.org may not be the same as the response you get from any other DNS server. You often have to wait until cached records expire and a recursive query reaches the authoritative DuckDNS name-servers.

#### <a name="duckDNSauto"> Running the DuckDNS client automatically </a>
#### Running the DuckDNS client automatically

The recommended arrangement for keeping your Dynamic DNS service up-to-date is to invoke `duck.sh` from `cron` at five minute intervals.
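For example, a crontab entry like the following (the path to `duck.sh` is an assumption; adjust it to wherever you installed the script). Using `2-57/5` rather than `*/5` keeps your updates off the exact five-minute mark that the DuckDNS servers get hammered on:

```
# crontab -e entry: update DuckDNS every five minutes, offset by two minutes
2-57/5 * * * * /home/pi/duck.sh >/dev/null 2>&1
```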

@@ -152,7 +152,7 @@ $ cat /dev/null >~/Logs/duck.log

### WireGuard

WireGuard is supplied as part of IOTstack. See [WireGuard documentation](https://sensorsiot.github.io/IOTstack/Containers/WireGuard.html).
WireGuard is supplied as part of IOTstack. See [WireGuard documentation](../Containers/WireGuard.md).

### PiVPN

@@ -8,6 +8,7 @@ The backup command can be executed from IOTstack's menu, or from a cronjob.
To ensure that all your data is saved correctly, the stack should be brought down. This is mainly due to databases potentially being in a state that could cause data loss.

There are 2 ways to run backups:

* From the menu: `Backup and Restore` > `Run backup`
* Running the following command: `bash ./scripts/backup.sh`
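The cronjob route can be a single crontab entry. A sketch only (the IOTstack path and the 02:00 schedule are assumptions); the `cd` matters because the script expects to be run from IOTstack's directory:

```
# run a backup every night at 02:00 (add via crontab -e)
0 2 * * * cd /home/pi/IOTstack && bash ./scripts/backup.sh
```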

@@ -21,6 +22,7 @@ The current directory of bash must be in IOTstack's directory, to ensure that it
```
./scripts/backup.sh {TYPE=3} {USER=$(whoami)}
```

* Types:
* 1 = Backup with Date
* A tarball file will be created that contains the date and time the backup was started, in the filename.
@@ -33,10 +35,12 @@ The current directory of bash must be in IOTstack's directory, to ensure that it
If this parameter is not supplied when run as root, the script will ask for the username as input

Backups:

* You can find the backups in the ./backups/ folder. With rolling being in ./backups/rolling/ and date backups in ./backups/backup/
* Log files can also be found in the ./backups/logs/ directory.

### Examples:

* `./scripts/backup.sh`
* `./scripts/backup.sh 3`

@@ -52,6 +56,7 @@ This will only produce a backup in the rolling folder and change all the permissions

## Restore
There are 2 ways to run a restore:

* From the menu: `Backup and Restore` > `Restore from backup`
* Running the following command: `bash ./scripts/restore.sh`

@@ -64,6 +69,7 @@ There are 2 ways to run a restore:
./scripts/restore.sh {FILENAME=backup.tar.gz} {noask}
```
The restore script takes 2 arguments:

* Filename: The name of the backup file. The file must be present in the `./backups/` directory, or a subfolder in it. That means it should be moved from `./backups/backup` to `./backups/`, or that you need to specify the `backup` portion of the directory (see examples)
* NoAsk: If a second parameter is present, it acts as setting the no-ask flag to true.

2 changes: 1 addition & 1 deletion docs/Custom.md → docs/Basic_setup/Custom.md
@@ -125,7 +125,7 @@ services:
environment:
```

This will remove the default environment variables set in the template, and tell docker-compose to use the variables specified in your file. It is not mandatory that the *.env file be placed in the service's service directory, but is strongly suggested. Keep in mind the [PostBuild Script](https://sensorsiot.github.io/IOTstack/PostBuild-Script) functionality to automatically copy your *.env files into their directories on successful build if you need to.
This will remove the default environment variables set in the template, and tell docker-compose to use the variables specified in your file. It is not mandatory that the *.env file be placed in the service's service directory, but is strongly suggested. Keep in mind the [PostBuild Script](../Developers/PostBuild-Script.md) functionality to automatically copy your *.env files into their directories on successful build if you need to.

### Adding custom services

@@ -1,4 +1,4 @@
# Build Stack Default Passwords for Services
# Default Passwords and ports

Here you can find a list of the default configurations for IOTstack for quick reference.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
47 changes: 47 additions & 0 deletions docs/Basic_setup/What-is-sudo.md
@@ -0,0 +1,47 @@
# What is sudo?

Many first-time users of IOTstack get into difficulty by misusing the `sudo` command. The problem is best understood by example. In the following, you would expect `~` (tilde) to expand to `/home/pi`. It does:

```bash
$ echo ~/IOTstack
/home/pi/IOTstack
```

The command below sends the same `echo` command to `bash` for execution. This is what happens when you type the name of a shell script. You get a new instance of `bash` to run the script:

```bash
$ bash -c 'echo ~/IOTstack'
/home/pi/IOTstack
```

Same answer. Again, this is what you expect. But now try it with `sudo` on the front:

```bash
$ sudo bash -c 'echo ~/IOTstack'
/root/IOTstack
```

Different answer. It is different because `sudo` means "become root, and then run the command". The process of becoming root changes the home directory, and that changes the definition of `~`.

Any script designed for working with IOTstack assumes `~` (or the equivalent `$HOME` variable) expands to `/home/pi`. That assumption is invalidated if the script is run by `sudo`.

Of necessity, any script designed for working with IOTstack will have to invoke `sudo` **inside** the script **when it is required**. You do not need to second-guess the script's designer.

Please try to minimise your use of `sudo` when you are working with IOTstack. Here are some rules of thumb:

1. Is what you are about to run a script? If yes, check whether the script already contains `sudo` commands. Using `menu.sh` as the example:

```bash
$ grep -c 'sudo' ~/IOTstack/menu.sh
28
```

There are numerous uses of `sudo` within `menu.sh`. That means the designer thought about when `sudo` was needed.

2. Did the command you **just executed** work without `sudo`? Note the emphasis on the past tense. If yes, then your work is done. If no, and the error suggests elevated privileges are necessary, then re-execute the last command like this:

```bash
$ sudo !!
```

It takes time, patience and practice to learn when `sudo` is **actually** needed. Over-using `sudo` out of habit, or because you were following a bad example you found on the web, is a very good way to create so many problems for yourself that you will need to reinstall your IOTstack. *Please* err on the side of caution!
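The tilde behaviour is easy to demonstrate for yourself: `~` expands to whatever `$HOME` is, and becoming root changes `$HOME`. You can simulate the effect without using `sudo` at all:

```bash
# ~ follows $HOME, which is exactly what sudo changes when you become root
HOME=/home/pi bash -c 'echo ~/IOTstack'   # prints /home/pi/IOTstack
HOME=/root bash -c 'echo ~/IOTstack'      # prints /root/IOTstack
```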
169 changes: 55 additions & 114 deletions docs/Getting-Started.md → docs/Basic_setup/index.md

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions docs/Containers/AdGuardHome.md
@@ -9,7 +9,7 @@

AdGuard Home and PiHole perform similar functions. They use the same ports so you can **not** run both at the same time. You must choose one or the other.

## <a name="quickStart"> Quick Start </a>
## Quick Start

When you first install AdGuard Home:

@@ -34,7 +34,7 @@ When you first install AdGuard Home:

Port 8089 is the default administrative user interface for AdGuard Home running under IOTstack.

Port 8089 is not active until you have completed the [Quick Start](#quickStart) procedure. You must start by connecting to port 3001.
Port 8089 is not active until you have completed the [Quick Start](#quick-start) procedure. You must start by connecting to port 3001.

Because of AdGuard Home limitations, you must take special precautions if you decide to change to a different port number:

@@ -50,11 +50,11 @@ Because of AdGuard Home limitations, you must take special precautions if you de
$ docker-compose up -d adguardhome
```

3. Repeat the [Quick Start](#quickStart) procedure, this time substituting the new Admin Web Interface port where you see "8089".
3. Repeat the [Quick Start](#quick-start) procedure, this time substituting the new Admin Web Interface port where you see "8089".

## About port 3001:3000

Port 3001 (external, 3000 internal) is only used during [Quick Start](#quickStart) procedure. Once port 8089 becomes active, port 3001 ceases to be active.
Port 3001 (external, 3000 internal) is only used during [Quick Start](#quick-start) procedure. Once port 8089 becomes active, port 3001 ceases to be active.

In other words, you need to keep port 3001 reserved even though it is only ever used to set up port 8089.

34 changes: 17 additions & 17 deletions docs/Containers/Blynk_server.md
@@ -2,7 +2,7 @@

This document discusses an IOTstack-specific version of Blynk-Server. It is built on top of an [Ubuntu](https://hub.docker.com/_/ubuntu) base image using a *Dockerfile*.

## <a name="references"> References </a>
## References

- [Ubuntu base image](https://hub.docker.com/_/ubuntu) at DockerHub
- [Peter Knight Blynk-Server fork](https://github.com/Peterkn2001/blynk-server) at GitHub (includes documentation)
@@ -18,7 +18,7 @@ Acknowledgement:

- Original writeup from @877dev

## <a name="significantFiles"> Significant directories and files </a>
## Significant directories and files

```
~/IOTstack
@@ -56,19 +56,19 @@ Everything in ❽:
* will be replaced if it is not present when the container starts; but
* will never be overwritten if altered by you.

## <a name="howBlynkServerIOTstackGetsBuilt"> How Blynk Server gets built for IOTstack </a>
## How Blynk Server gets built for IOTstack

### <a name="dockerHubImages"> GitHub Updates </a>
### GitHub Updates

Periodically, the source code is updated and a new version is released. You can check for the latest version at the [releases page](https://github.com/Peterkn2001/blynk-server/releases/).

### <a name="iotstackMenu"> IOTstack menu </a>
### IOTstack menu

When you select Blynk Server in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under the old menu, it is also copied to the *working service definition* and then not really used.
### <a name="iotstackFirstRun"> IOTstack first run </a>
### IOTstack first run

On a first install of IOTstack, you run the menu, choose your containers, and are told to do this:

@@ -131,15 +131,15 @@ You *may* see the same pattern in *Portainer*, which reports the ***base image**

> Whether you see one or two rows depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.
## <a name="logging"> Logging </a>
## Logging

You can inspect Blynk Server's log by:

```
$ docker logs blynk_server
```

## <a name="editConfiguration"> Changing Blynk Server's configuration </a>
## Changing Blynk Server's configuration

The first time you launch the `blynk_server` container, the following structure will be created in the persistent storage area:

@@ -158,7 +158,7 @@ $ cd ~/IOTstack
$ docker-compose restart blynk_server
```

## <a name="cleanSlate"> Getting a clean slate </a>
## Getting a clean slate

Erasing Blynk Server's persistent storage area triggers self-healing and restores known defaults:

@@ -178,7 +178,7 @@ Note:
$ docker-compose restart blynk_server
```

## <a name="upgradingBlynkServer"> Upgrading Blynk Server </a>
## Upgrading Blynk Server

To find out when a new version has been released, you need to visit the [Blynk-Server releases](https://github.com/Peterkn2001/blynk-server/releases/) page at GitHub.

@@ -220,11 +220,11 @@ At the time of writing, version 0.41.16 was the most up-to-date. Suppose that ve

The second `prune` will only be needed if there is an old *base image* and that, in turn, depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.

## <a name="usingBlynkServer"> Using Blynk Server </a>
## Using Blynk Server

See the [References](#references) for documentation links.

### <a name="blynkAdmin"> Connecting to the administrative UI </a>
### Connecting to the administrative UI

To connect to the administrative interface, navigate to:

@@ -237,7 +237,7 @@ You may encounter browser security warnings which you will have to acknowledge i
- username = `admin@blynk.cc`
- password = `admin`

### <a name="changePassword"> Change username and password </a>
### Change username and password

1. Click on Users > "email address" and edit email, name and password.
2. Save changes.
@@ -248,19 +248,19 @@ You may encounter browser security warnings which you will have to acknowledge i
$ docker-compose restart blynk_server
```

### <a name="gmailSetup"> Setup gmail </a>
### Setup gmail

Optional step, useful for getting the auth token emailed to you.
(To be added once confirmed working....)

### <a name="mobileSetup"> iOS/Android app setup </a>
### iOS/Android app setup

1. When setting up the application on your mobile, be sure to select "custom" setup ([see](https://github.com/Peterkn2001/blynk-server#app-and-sketch-changes)).
2. Press "New Project"
3. Give it a name, choose device "Raspberry Pi 3 B" so you have plenty of [virtual pins](http://help.blynk.cc/en/articles/512061-what-is-virtual-pins) available, and lastly select WiFi.
4. Create project and the [auth token](https://docs.blynk.cc/#getting-started-getting-started-with-the-blynk-app-4-auth-token) will be emailed to you (if emails configured). You can also find the token in app under the phone app settings, or in the admin web interface by clicking Users>"email address" and scroll down to token.

### <a name="quickAppGuide"> Quick usage guide for app </a>
### Quick usage guide for app

1. Press on the empty page, the widgets will appear from the right.
2. Select your widget, let's say a button.
@@ -273,7 +273,7 @@ Optional step, useful for getting the auth token emailed to you.

Enter Node-Red.....

### <a name="enterNodeRed"> Node-RED </a>
### Node-RED

1. Install `node-red-contrib-blynk-ws` from Manage Palette.
2. Drag a "write event" node into your flow, and connect to a debug node
8 changes: 4 additions & 4 deletions docs/Containers/Chronograf.md
@@ -1,12 +1,12 @@
# Chronograf

## <a name="references"> References </a>
## References

- [*influxdata Chronograf* documentation](https://docs.influxdata.com/chronograf/)
- [*GitHub*: influxdata/influxdata-docker/chronograf](https://github.com/influxdata/influxdata-docker/tree/master/chronograf)
- [*DockerHub*: influxdata Chronograf](https://hub.docker.com/_/chronograf)

## <a name="kapacitorIntegration"> Kapacitor integration </a>
## Kapacitor integration

If you selected Kapacitor in the menu and want Chronograf to be able to interact with it, you need to edit `docker-compose.yml` to un-comment the lines which are commented-out in the following:

@@ -28,7 +28,7 @@ $ cd ~/IOTstack
$ docker-compose up -d chronograf
```

## <a name="upgradingChronograf"> Upgrading Chronograf </a>
## Upgrading Chronograf

You can update the container via:

@@ -45,7 +45,7 @@ In words:
* `docker-compose up -d` causes any newly-downloaded images to be instantiated as containers (replacing the old containers); and
* the `prune` gets rid of the outdated images.

### <a name="versionPinning"> Chronograf version pinning </a>
### Chronograf version pinning

If you need to pin to a particular version:

File renamed without changes.
File renamed without changes.
10 changes: 10 additions & 0 deletions docs/Containers/Grafana.md
@@ -15,6 +15,16 @@ The default *~/IOTstack/services/grafana/grafana.env* contains this line:

Uncomment that line and change the right hand side to [your own timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).

## Adding InfluxDB datasource

Select Data Sources -> Add data source -> InfluxDB.

Set options:

* HTTP / URL: `http://influxdb:8086`
* InfluxDB Details / Database: `telegraf`
* InfluxDB Details / User: `nodered`
* InfluxDB Details / Password: `nodered`
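
If you would rather not click through the UI each time the container is rebuilt, the same settings can be captured in a Grafana datasource provisioning file. This is a sketch only: the host path is hypothetical, and it assumes the directory is mounted at `/etc/grafana/provisioning/datasources` inside the container.

```yaml
# ./volumes/grafana/provisioning/datasources/influxdb.yaml (hypothetical path)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    database: telegraf
    user: nodered
    secureJsonData:
      password: nodered
```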

## Security

131 changes: 123 additions & 8 deletions docs/Containers/Home-Assistant.md
@@ -2,7 +2,7 @@

Home Assistant is a home automation platform. It is able to track and control all devices at your home and offer a platform for automating control.

## <a name="references"> References </a>
## References

- [Home Assistant home page](https://www.home-assistant.io/)

@@ -13,7 +13,7 @@ Home Assistant is a home automation platform. It is able to track and control al
- [DockerHub](https://hub.docker.com/r/homeassistant/home-assistant/)


## <a name="twoVersions">Home Assistant: two versions</a>
## Home Assistant: two versions

There are two versions of Home Assistant:

@@ -31,7 +31,7 @@ Technically, both versions of Home Assistant can be installed on your Raspberry

IOTstack used to offer a menu entry leading to a convenience script that could install Supervised Home Assistant but that stopped working when Home Assistant changed their approach. Now, the only method supported by IOTstack is Home Assistant Container.

### <a name="installHAContainer"> Installing Home Assistant Container </a>
### Installing Home Assistant Container

Home Assistant (Container) can be found in the `Build Stack` menu. Selecting it in this menu results in a service definition being added to:

@@ -56,11 +56,13 @@ $ cd ~/IOTstack
$ docker-compose up -d
```

### <a name="installHASupervised"> Installing Supervised Home Assistant </a>
### Installing Supervised Home Assistant

The direction being taken by the Home Assistant folks is to supply a ready-to-run image for your Raspberry Pi. That effectively dedicates your Raspberry Pi to Home Assistant and precludes the possibility of running alongside IOTstack and containers like Mosquitto, InfluxDB, Node-RED, Grafana, PiHole and WireGuard.

It is possible to run Supervised Home Assistant on the same Raspberry Pi as IOTstack. The recommended approach is to start from a clean slate and use [PiBuilder](https://github.com/Paraphraser/PiBuilder).
Alternatively you can try to manually install Supervised Home Assistant using their [installation instructions for advanced users](https://github.com/home-assistant/supervised-installer) and when it works, install IOTstack. In theory this should work, but isn't tested or supported.

The recommended approach is to start from a clean slate and use [PiBuilder](https://github.com/Paraphraser/PiBuilder).

When you visit the PiBuilder link you may well have a reaction like "all far too complicated" but you should try to get past that. PiBuilder has two main use-cases:

@@ -104,12 +106,12 @@ The first time you use PiBuilder, the process boils down to:

where «name» is the name you give to your Raspberry Pi (eg "iot-hub").

After step 9, Supervised Home Assistant will be running. The `04_setup.sh` script also deals with the [random MACs](#aboutRandomMACs) problem. After step 11, you'll be able to either:
After step 9, Supervised Home Assistant will be running. The `04_setup.sh` script also deals with the [random MACs](#why-random-macs-are-such-a-hassle) problem. After step 11, you'll be able to either:

1. Restore a backup; or
2. Run the IOTstack menu and choose your containers.

## <a name="aboutRandomMACs"> Why random MACs are such a hassle </a>
## Why random MACs are such a hassle

> This material was originally posted as part of [Issue 312](https://github.com/SensorsIot/IOTstack/issues/312). It was moved here following a suggestion by [lole-elol](https://github.com/lole-elol).
@@ -169,7 +171,7 @@ Random MACs are not a problem for a **client** device like a phone, tablet or la
It is not just configuration-time SSH sessions that break. If you decide to leave Raspberry Pi random WiFi MAC active **and** you have other clients (eg IoT devices) communicating with the Pi over WiFi, you will wrong-foot those clients each time the Raspberry Pi reboots. Data communications services from those clients will be impacted until those client devices time-out and catch up.

# Using bluetooth from the container
## Using bluetooth from the container
In order to be able to use BT & BLE devices from HA integrations, make sure that bluetooth is enabled and powered on when the (RPi) host boots by editing `/etc/bluetooth/main.conf`:

```conf
@@ -187,3 +189,116 @@ UP
...
```
ref: https://scribles.net/auto-power-on-bluetooth-adapter-on-boot-up/

## HTTPS with a valid SSL certificate

Some HA integrations (e.g. Google Assistant) require your HA API to be
accessible via https with a valid certificate. You can configure HA to do this:
[docs](https://www.home-assistant.io/docs/configuration/remote/) /
[guide](https://www.home-assistant.io/docs/ecosystem/certificates/lets_encrypt/)
or use a reverse proxy container, as described below.

The linuxserver Secure Web Access Gateway container
([swag](https://docs.linuxserver.io/general/swag)) ([Docker hub
docs](https://hub.docker.com/r/linuxserver/swag)) will automatically generate
an SSL certificate, renew it before it expires, and act as a
reverse proxy.

1. First test your HA is working correctly: `http://raspberrypi.local:8123/` (assuming
your RPi hostname is raspberrypi)
2. Make sure you have duckdns working.
3. On your internet router, forward public port 443 to the RPi port 443
4. Add swag to ~/IOTstack/docker-compose.yml beneath the `services:`-line:
```
swag:
image: ghcr.io/linuxserver/swag
cap_add:
- NET_ADMIN
environment:
- PUID=1000
- PGID=1000
- TZ=Etc/UTC
- URL=<yourdomain>.duckdns.org
- SUBDOMAINS=wildcard
- VALIDATION=duckdns
- DUCKDNSTOKEN=<token>
- CERTPROVIDER=zerossl
- EMAIL=<e-mail> # required when using zerossl
volumes:
- ./volumes/swag/config:/config
ports:
- 443:443
restart: unless-stopped
```
Replace the bracketed values. Do NOT use quote (`"`) characters to enclose the values.

5. Start the swag container; this creates the file to be edited in the next step:
```
cd ~/IOTstack && docker-compose up -d
```

Check it starts up OK: `docker-compose logs -f swag`. It will take a minute or two before it finally logs "Server ready".

6. Enable the reverse proxy for `raspberrypi.local` (`homeassistant.*` is already enabled by default) and fix the homeassistant container name ("upstream_app"):
```
sed -e 's/server_name/server_name *.local/' \
volumes/swag/config/nginx/proxy-confs/homeassistant.subdomain.conf.sample \
> volumes/swag/config/nginx/proxy-confs/homeassistant.subdomain.conf
```
7. Forward to correct IP when target is a container running in "network_mode:
host" (like Home Assistant does):
```
cat << 'EOF' | sudo tee volumes/swag/config/custom-cont-init.d/add-host.docker.internal.sh
#!/bin/sh
DOCKER_GW=$(ip route | awk 'NR==1 {print $3}')

sed -i -e "s/upstream_app .*/upstream_app ${DOCKER_GW};/" \
/config/nginx/proxy-confs/homeassistant.subdomain.conf
EOF
sudo chmod u+x volumes/swag/config/custom-cont-init.d/add-host.docker.internal.sh
```
(This needs to be copy-pasted/entered as-is; ignore any "> " prefixes printed
by bash.)
8. (optional) Add reverse proxy password protection if you don't want to rely
on the HA login for security; this doesn't affect API access:
```
sed -i -e 's/#auth_basic/auth_basic/' \
volumes/swag/config/nginx/proxy-confs/homeassistant.subdomain.conf
docker-compose exec swag htpasswd -c /config/nginx/.htpasswd anyusername
```
9. Add `use_x_forwarded_for` and `trusted_proxies` to your homeassistant [http
config](https://www.home-assistant.io/integrations/http). The configuration
file is at `volumes/home_assistant/configuration.yaml`. For a default install
the resulting http-section should be:
```
http:
use_x_forwarded_for: true
trusted_proxies:
- 192.168.0.0/16
- 172.16.0.0/12
- 10.77.0.0/16
```
10. Refresh the stack: `cd ~/IOTstack && docker-compose stop && docker-compose
up -d` (again may take 1-3 minutes for swag to start if it recreates
certificates)
11. Test homeassistant is still working correctly:
`http://raspberrypi.local:8123/` (assuming your RPi hostname is
raspberrypi)
12. Test the reverse proxy https is working correctly:
`https://raspberrypi.local/` (the browser will warn about a wrong
certificate domain, as the certificate is issued for your duckdns-domain; we
are just testing).
Or from the command line in the RPi:
```
curl --resolve homeassistant.<yourdomain>.duckdns.org:443:127.0.0.1 \
https://homeassistant.<yourdomain>.duckdns.org/
```
(output should end in `if (!window.latestJS) { }</script></body></html>`)
13. And finally test your router forwards correctly by accessing it from
outside your LAN (e.g. using a mobile phone):
`https://homeassistant.<yourdomain>.duckdns.org/` Now the certificate
should work without any warnings.
97 changes: 96 additions & 1 deletion docs/Containers/InfluxDB.md
@@ -1,9 +1,104 @@
# InfluxDB
A time series database.

InfluxDB has configurable aggregation and retention policies, allowing you to
reduce measurement resolution over time: all data points are kept for recent
data, but only aggregated values for older data.

To connect, use:

| Field | Default |
| --------- | ---------- |
| User | nodered |
| Password | nodered |
| URL (from other services) | http://influxdb:8086 |
| URL (on the host machine) | http://localhost:8086 |
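
As a sketch of what a write looks like (the measurement and field names here are made up), several fields can be combined into a single InfluxDB 1.x line-protocol point, so the database sees one write instead of three:

```bash
# one line-protocol point carrying three fields (hypothetical values)
POINT='system,host=raspberrypi cpu_temp=48.3,disk_used=61.2,load1=0.42'
echo "$POINT"
# to actually submit it once the stack is up, POST to the 1.x write endpoint:
#   curl -XPOST 'http://localhost:8086/write?db=telegraf&u=nodered&p=nodered' \
#     --data-binary "$POINT"
```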

## References
- [Docker](https://hub.docker.com/_/influxdb)
- [Website](https://www.influxdata.com/)

## Security
## Setup

To access the influx console, show current databases and database measurements:
```
pi@raspberrypi:~/IOTstack $ docker-compose exec influxdb bash
root@6bca535a945f:/# influx
Connected to http://localhost:8086 version 1.8.10
InfluxDB shell version: 1.8.10
> show databases
name: databases
name
----
_internal
telegraf
> use telegraf
Using database telegraf
> show measurements
name: measurements
name
----
cpu
cpu_temperature
disk
diskio
etc...
```

To create a new database and set a limited retention policy (in this example,
any data older than 52 weeks is deleted):

```
> create database mydb
> show retention policies on mydb
name duration shardGroupDuration replicaN default
---- -------- ------------------ -------- -------
autogen 0s 168h0m0s 1 true
> alter retention policy "autogen" on "mydb" duration 52w shard duration 1w replication 1 default
> show retention policies on mydb
name duration shardGroupDuration replicaN default
---- -------- ------------------ -------- -------
autogen 8736h0m0s 168h0m0s 1 true
```
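As a cross-check, the `8736h0m0s` duration reported above is just `52w`
converted into hours:

```shell
# 52 weeks, expressed in hours, matches the 8736h0m0s duration
# reported by "show retention policies"
echo "$((52 * 7 * 24))h0m0s"
# → 8736h0m0s
```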

Aggregation, on the other hand, is where you keep your relevant statistics, but
decrease their time-resolution and lose individual data-points. This is a much
more complicated topic and harder to configure. As such it is well outside the
scope of this guide.
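For orientation only, a downsampling continuous query in InfluxQL looks roughly
like the following sketch (untested here; the target measurement name
`cpu_temperature_1h` and the `value` field are hypothetical):

```
CREATE CONTINUOUS QUERY "cq_cpu_1h" ON "telegraf" BEGIN
  SELECT mean("value") AS "value"
  INTO "cpu_temperature_1h"
  FROM "cpu_temperature"
  GROUP BY time(1h), *
END
```

Consult the InfluxDB documentation on continuous queries before relying on
anything like this.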


## Reducing flash wear-out

SSD drives have pretty good controllers spreading out writes, so this isn't
really a concern for them. But if you store data on an SD card,
flash wear may cause the card to fail prematurely. Flash memory has a limited
number of erase-write cycles per physical block. These blocks may be multiple
megabytes. You can use `sudo lsblk -D` to see how big the erase granularity is
on your card. The goal is to avoid writing lots of small changes targeting the
same physical blocks. Here are some tips to mitigate SD-card wear:

* Don't use short retention policies. This may mask heavy disk IO without
increasing disk space usage. Depending on the file system used, new data may
be written to the same flash blocks that were freed by expiration, wearing
them out.
* Take care not to add measurements too often; if possible, no more often than
once a minute. Add all measurements in one operation.
* Adding measurements directly to InfluxDB will cause a write on every
operation. If your client code can't aggregate multiple measurements into one
write, consider routing them via Telegraf. It has the
`flush_interval`-option, which will combine the measurements into one write.
* All InfluxDB queries are logged by default and logs are written to the
SD-card. To disable this, add to docker-compose.yml, next to the other
INFLUXDB_\* entries:
```
- INFLUXDB_DATA_QUERY_LOG_ENABLED=false
- INFLUXDB_HTTP_LOG_ENABLED=false
```
This is especially important if you plan on having Grafana or Chronograf
displaying up-to-date data on a dashboard.
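If you do route measurements through Telegraf as suggested above, the relevant
batching knobs live in the `[agent]` section of `telegraf.conf`. A sketch (the
values shown are examples only, not recommendations):

```toml
[agent]
  interval = "60s"        # how often inputs are sampled
  flush_interval = "60s"  # how often buffered points are flushed, in one write
```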

## Old-menu branch
The credentials and default database name for influxdb are stored in the file `influxdb/influx.env`. The default username and password are both set to "nodered". It is HIGHLY recommended that you change them. The environment file contains several commented-out options, allowing you to set access options such as the default admin user credentials as well as the default database name. Any change to the environment file will require a restart of the service.

To access the terminal for influxdb execute `./services/influxdb/terminal.sh`. Here you can set additional parameters or create other databases.
6 changes: 3 additions & 3 deletions docs/Containers/Kapacitor.md
@@ -1,12 +1,12 @@
# Kapacitor

## References

- [*influxdata Kapacitor* documentation](https://docs.influxdata.com/kapacitor/)
- [*GitHub*: influxdata/influxdata-docker/kapacitor](https://github.com/influxdata/influxdata-docker/tree/master/kapacitor)
- [*DockerHub*: influxdata Kapacitor](https://hub.docker.com/_/kapacitor)

## Upgrading Kapacitor

You can update the container via:

@@ -23,7 +23,7 @@ In words:
* `docker-compose up -d` causes any newly-downloaded images to be instantiated as containers (replacing the old containers); and
* the `prune` gets rid of the outdated images.

### Kapacitor version pinning

If you need to pin to a particular version:

13 changes: 8 additions & 5 deletions docs/Containers/MariaDB.md
@@ -1,3 +1,4 @@
# MariaDB
## Source

* [Docker hub](https://hub.docker.com/r/linuxserver/mariadb/)
@@ -59,14 +60,16 @@ You can open a terminal session within the mariadb container via:
$ docker exec -it mariadb bash
```

To connect to the database: `mysql -uroot -p`

To close the terminal session, either:

* type "exit" and press <kbd>return</kbd>; or
* press <kbd>control</kbd>+<kbd>d</kbd>.

## Container health check

### theory of operation

A script, or "agent", to assess the health of the MariaDB container has been added to the *local image* via the *Dockerfile*. In other words, the script is specific to IOTstack.

@@ -84,11 +87,11 @@ The agent is invoked 30 seconds after the container starts, and every 30 seconds
mysqld is alive
```

3. If the command returned the expected response, the agent tests the responsiveness of the TCP port the `mysqld` daemon should be listening on (see [customising health-check](#customising-health-check)).

4. If all of those steps succeed, the agent concludes that MariaDB is functioning properly and returns "healthy".

### monitoring health-check

Portainer's *Containers* display contains a *Status* column which shows health-check results for all containers that support the feature.

@@ -121,7 +124,7 @@ Possible reply patterns are:
mariadb Up About a minute (unhealthy)
```

### customising health-check

You can customise the operation of the health-check agent by editing the `mariadb` service definition in your *Compose* file:

68 changes: 34 additions & 34 deletions docs/Containers/Mosquitto.md
@@ -6,15 +6,15 @@ This document discusses an IOTstack-specific version of Mosquitto built on top o
<hr>

## References

- [*Eclipse Mosquitto* home](https://mosquitto.org)
- [*GitHub*: eclipse/mosquitto](https://github.com/eclipse/mosquitto)
- [*DockerHub*: eclipse-mosquitto](https://hub.docker.com/_/eclipse-mosquitto)
- [Setting up passwords](https://www.youtube.com/watch?v=1msiFQT_flo) (video)
- [Tutorial: from MQTT to InfluxDB via Node-Red](https://gist.github.com/Paraphraser/c9db25d131dd4c09848ffb353b69038f)

## Significant directories and files

```
~/IOTstack
@@ -57,23 +57,23 @@ This document discusses an IOTstack-specific version of Mosquitto built on top o
* You will normally need `sudo` to make changes in this area.
* Each time Mosquitto starts, it automatically replaces anything originating in ❹ that has gone missing from ❼. This "self-repair" function is intended to provide reasonable assurance that Mosquitto will at least **start** instead of going into a restart loop.

## How Mosquitto gets built for IOTstack

### Mosquitto source code ([*GitHub*](https://github.com))

The source code for Mosquitto lives at [*GitHub* eclipse/mosquitto](https://github.com/eclipse/mosquitto).

### Mosquitto images ([*DockerHub*](https://hub.docker.com))

Periodically, the source code is recompiled and the resulting image is pushed to [eclipse-mosquitto](https://hub.docker.com/_/eclipse-mosquitto?tab=tags&page=1&ordering=last_updated) on *DockerHub*.

### IOTstack menu

When you select Mosquitto in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under old menu, it is also copied to the *working service definition* and then not really used.
### IOTstack first run

On a first install of IOTstack, you run the menu, choose Mosquitto as one of your containers, and are told to do this:

@@ -82,7 +82,7 @@ $ cd ~/IOTstack
$ docker-compose up -d
```

> See also the [Migration considerations](#migration-considerations) (below).
`docker-compose` reads the *Compose* file. When it arrives at the `mosquitto` fragment, it finds:

@@ -107,7 +107,7 @@ The *Dockerfile* begins with:
FROM eclipse-mosquitto:latest
```

> If you need to pin to a particular version of Mosquitto, the *Dockerfile* is the place to do it. See [Mosquitto version pinning](#mosquitto-version-pinning).
The `FROM` statement tells the build process to pull down the ***base image*** from [*DockerHub*](https://hub.docker.com).

@@ -144,7 +144,7 @@ You *may* see the same pattern in Portainer, which reports the *base image* as "

> Whether you see one or two rows depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.
### Migration considerations

Under the original IOTstack implementation of Mosquitto (just "as it comes" from *DockerHub*), the service definition expected the configuration files to be at:

@@ -205,7 +205,7 @@ Using `mosquitto.conf` as the example, assume you wish to use your existing file

5. If necessary, repeat these steps with `filter.acl`.

## Logging

Mosquitto logging is controlled by `mosquitto.conf`. This is the default configuration:

@@ -248,9 +248,9 @@ $ sudo tail ~/IOTstack/volumes/mosquitto/log/mosquitto.log
Logs written to `mosquitto.log` do not disappear when your IOTstack is restarted. They persist until you take action to prune the file.

## Security

### Configuring security

Mosquitto security is controlled by `mosquitto.conf`. These are the relevant directives:

@@ -269,7 +269,7 @@ enabled | true | credentials optional | |
enabled | false | credentials required | |


### Password file management

The password file for Mosquitto is part of a mapped volume:

@@ -287,7 +287,7 @@ The Mosquitto container performs self-repair each time the container is brought

* If `false` then **all** MQTT requests will be rejected.

#### create username and password

To create a username and password, use the following as a template.

@@ -303,9 +303,9 @@ $ docker exec mosquitto mosquitto_passwd -b /mosquitto/pwfile/pwfile hello world

Note:

* See also [customising health-check](#customising-health-check). If you are creating usernames and passwords, you may also want to create credentials for the health-check agent.

#### check password file

There are two ways to verify that the password file exists and has the expected content:

@@ -329,15 +329,15 @@ Each credential starts with the username and occupies one line in the file:
hello:$7$101$ZFOHHVJLp2bcgX+h$MdHsc4rfOAhmGG+65NpIEJkxY0beNeFUyfjNAGx1ILDmI498o4cVOaD9vDmXqlGUH9g6AgHki8RPDEgjWZMkDA==
```
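Each line is of the form «username»:«hashed-password», so (as a quick sketch)
the username part can be recovered with plain shell parameter expansion:

```shell
# given a line from pwfile (hash shortened here for illustration),
# everything before the first ":" is the username
entry='hello:$7$101$ZFOHHVJLp2bcgX+h$MdHsc4rfOAhm=='
echo "${entry%%:*}"
# → hello
```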

#### remove entry from password file

To remove an entry from the password file:

```
$ docker exec mosquitto mosquitto_passwd -D /mosquitto/pwfile/pwfile «username»
```

#### reset the password file

There are several ways to reset the password file. Your options are:

@@ -368,7 +368,7 @@ There are several ways to reset the password file. Your options are:

The result is an empty password file.

### Activate Mosquitto security

1. Use `sudo` and your favourite text editor to open the following file:

@@ -411,23 +411,23 @@ There are several ways to reset the password file. Your options are:
$ docker-compose restart mosquitto
```

### Testing Mosquitto security

#### assumptions

1. You have created at least one username ("hello") and password ("world").
2. `password_file` is enabled.
3. `allow_anonymous` is `false`.

#### install testing tools

If you do not have the Mosquitto clients installed on your Raspberry Pi (ie `$ which mosquitto_pub` does not return a path), install them using:

```
$ sudo apt install -y mosquitto-clients
```

#### test: *anonymous access is prohibited*

Test **without** providing credentials:

@@ -441,7 +441,7 @@ Note:

* The error is the expected result and shows that Mosquitto will not allow anonymous access.

#### test: *access with credentials is permitted*

Test with credentials

@@ -454,7 +454,7 @@ Note:

* The absence of any error message means the message was sent. Silence = success!

#### test: *round-trip with credentials is permitted*

Prove round-trip connectivity will succeed when credentials are provided. First, set up a subscriber as a background process. This mimics the role of a process like Node-Red:

@@ -482,9 +482,9 @@ $
[1]+ Terminated mosquitto_sub -v -h 127.0.0.1 -p 1883 -t "/password/test" -F "%I %t %p" -u hello -P world
```

## Container health check

### theory of operation

A script, or "agent", to assess the health of the Mosquitto container has been added to the *local image* via the *Dockerfile*. In other words, the script is specific to IOTstack.

@@ -499,7 +499,7 @@ The agent is invoked 30 seconds after the container starts, and every 30 seconds
* Subscribes to the same broker for the same topic for a single message event.
* Compares the payload sent with the payload received. If the payloads (ie time-stamps) match, the agent concludes that the Mosquitto broker (the process running inside the same container) is functioning properly for round-trip messaging.

### monitoring health-check

Portainer's *Containers* display contains a *Status* column which shows health-check results for all containers that support the feature.

@@ -545,7 +545,7 @@ Notes:
* If you enable authentication for your Mosquitto broker, you will need to add `-u «user»` and `-P «password»` parameters to this command.
* You should expect to see a new message appear approximately every 30 seconds. That indicates the health-check agent is functioning normally. Use <kbd>control</kbd>+<kbd>c</kbd> to terminate the command.

### customising health-check

You can customise the operation of the health-check agent by editing the `mosquitto` service definition in your *Compose* file:

@@ -565,7 +565,7 @@ You can customise the operation of the health-check agent by editing the `mosqui

Note:

* You will also need to use the same topic string in the `mosquitto_sub` command shown at [monitoring health-check](#monitoring-health-check).

3. If you have enabled authentication for your Mosquitto broker service, you will need to provide appropriate credentials for your health-check agent:

@@ -594,7 +594,7 @@ You can customise the operation of the health-check agent by editing the `mosqui

You must remove the entire `healthcheck:` clause.

## Upgrading Mosquitto

You can update most containers like this:

@@ -636,7 +636,7 @@ Your existing Mosquitto container continues to run while the rebuild proceeds. O

The `prune` is the simplest way of cleaning up. The first call removes the old *local image*. The second call cleans up the old *base image*. Whether an old *base image* exists depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.

### Mosquitto version pinning

If you need to pin Mosquitto to a particular version:

@@ -672,7 +672,7 @@ Note:

* As well as preventing Docker from updating the *base image*, pinning will also block incoming updates to the *Dockerfile* from a `git pull`. Nothing will change until you decide to remove the pin.

## About Port 9001

Earlier versions of the IOTstack service definition for Mosquitto included two port mappings:

16 changes: 8 additions & 8 deletions docs/Containers/NextCloud.md
@@ -1,6 +1,6 @@
# Nextcloud

## Service definition

This is the **core** of the IOTstack Nextcloud service definition:

@@ -54,7 +54,7 @@ Under new-menu, the menu can generate random passwords for you. You can either u

The passwords need to be set before you bring up the Nextcloud service for the first time but the following initialisation steps assume you might not have done that and always start over from a clean slate.

## Initialising Nextcloud

1. Be in the correct directory:

@@ -108,7 +108,7 @@ The passwords need to be set before you bring up the Nextcloud service for the f

* You **can't** use a multicast domain name (eg `myrpi.local`). An mDNS name will not work until Nextcloud has been initialised!
* Once you have picked a connection method, **STICK TO IT**.
* You are only stuck with this restriction until Nextcloud has been initialised. You **can** (and should) fix it later by completing the steps in ["Access through untrusted domain"](#access-through-untrusted-domain).

7. On a computer that is **not** the Raspberry Pi running Nextcloud, launch a browser and point to the Raspberry Pi running Nextcloud using your chosen connection method. Examples:

@@ -243,7 +243,7 @@ See also:

* [Nextcloud documentation - trusted domains](https://docs.nextcloud.com/server/21/admin_manual/installation/installation_wizard.html#trusted-domains).

### Using a DNS alias for your Nextcloud service

The examples above include using a DNS alias (a CNAME record) for your Nextcloud service. If you decide to do that, you may see this warning in the log:

@@ -261,13 +261,13 @@ You can silence the warning by editing the Nextcloud service definition in `dock

Nextcloud traffic is not encrypted. Do **not** expose it to the web by opening a port on your home router. Instead, use a VPN like Wireguard to provide secure access to your home network, and let your remote clients access Nextcloud over the VPN tunnel.

## Container health check

A script, or "agent", to assess the health of the MariaDB container has been added to the *local image* via the *Dockerfile*. In other words, the script is specific to IOTstack.

Because it is an instance of MariaDB, Nextcloud_DB inherits the health-check agent. See the [IOTstack MariaDB](MariaDB.md) documentation for more information.

## Keeping Nextcloud up-to-date

To update the `nextcloud` container:

@@ -290,7 +290,7 @@ $ docker system prune

The first "prune" removes the old *local* image, the second removes the old *base* image. Whether an old *base image* exists depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.

## Backups

Nextcloud is currently excluded from the IOTstack-supplied backup scripts due to its potential size.

80 changes: 40 additions & 40 deletions docs/Containers/Node-RED.md

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions docs/Containers/Octoprint.md
@@ -1,3 +1,6 @@
---
title: Octoprint
---
# OctoPrint – the snappy web interface for your 3D printer

## References
File renamed without changes.
127 changes: 123 additions & 4 deletions docs/Containers/Pi-hole.md
@@ -1,8 +1,127 @@
# Pi-hole
Pi-hole is a fantastic utility to reduce ads.

The web interface can be found at `http://«your_ip»:8089/admin`
where «your_ip» can be:

* The IP address of the Raspberry Pi running Pi-hole.
* The domain name of the Raspberry Pi running Pi-hole.
* The multicast DNS name (eg "raspberrypi.local") of the Raspberry Pi running
Pi-hole.

The default password is random. It can be changed by running:
```
docker-compose exec pihole pihole -a -p myNewPassword
```

References:

* [Pi-hole on GitHub](https://github.com/pi-hole/docker-pi-hole)
* [Pi-hole on Dockerhub](https://hub.docker.com/r/pihole/pihole)

## Environment variables

Environment variables govern much of Pi-hole's behaviour. If you are running new
menu (master branch), the variables are inline in `docker-compose.yml`. If you
are running old menu, the variables will be in:
`~/IOTstack/services/pihole/pihole.env`

The first time Pi-hole is launched, it checks for the `WEBPASSWORD` environment
variable. If found, it is used to set the initial password.

Pi-hole supports a [long list of environment
variables](https://github.com/pi-hole/docker-pi-hole#environment-variables).

## Using Pi-hole as your DNS resolver

In order for Pi-hole to ad-block or resolve anything, it needs to be defined as
the DNS server. This can either be done manually on each device, or you can
define Pi-hole as the DNS nameserver for the whole LAN.

Note that using Pi-hole for clients on your network pretty much **requires** the
Raspberry Pi running Pi-hole to have a fixed IP address.

Assuming your RPi hostname is `raspberrypi` and has the static IP
`192.168.1.10`:

1. Go to your network's DHCP server; usually this is your Wireless Access Point
/ WLAN router.
* Login into its web-interface
* Find where DNS servers are defined
* Change all DNS fields to `192.168.1.10`
2. All local machines have to be rebooted. Without this they will continue to
use the old DNS setting from an old DHCP lease for quite some time.

## Adding domain names

Log in to the Pi-hole web interface at `http://raspberrypi.local:8089/admin`:

1. Select from the left menu: Local DNS -> DNS Records
2. Enter Domain: `raspberrypi.home.arpa` and IP Address: `192.168.1.10`. Press
Add.

Now you can use `raspberrypi.home.arpa` as the domain name for the Raspberry Pi
in your whole local network. You can also add domain names for your other
devices, provided they too have static IPs.

The Raspberry Pi itself must also be configured to use the Pi-hole DNS
server. This is especially important when you add your own domain names,
otherwise DNS may work differently on the Pi than on other devices. Configure
this by running:
```bash
echo "name_servers=127.0.0.1" | sudo tee -a /etc/resolvconf.conf
echo "name_servers_append=8.8.8.8" | sudo tee -a /etc/resolvconf.conf
echo "resolv_conf_local_only=NO" | sudo tee -a /etc/resolvconf.conf
sudo resolvconf -u # Ignore "Too few arguments."-complaint
```
Quick explanation: `resolv_conf_local_only` is disabled and a public nameserver
is added, so that if the Pi-hole container is stopped, the Raspberry Pi won't
lose DNS functionality. It will just fall back to 8.8.8.8.

### Testing & Troubleshooting

Install dig:
```
sudo apt install dnsutils
```

Test that Pi-hole is correctly configured (should respond 192.168.1.10):
```
dig raspberrypi.home.arpa @192.168.1.10
```

To test on your desktop that your network configuration is correct, and that an
ESP will resolve its DNS queries correctly, restart your desktop machine to
ensure the DNS changes are picked up, and then use:
```
dig raspberrypi.home.arpa
```
This should produce the same result as the previous command.

If this fails to resolve the IP, check that the server in the response is
`192.168.1.10`. If it's `127.0.0.xx` check `/etc/resolv.conf` begins with
`nameserver 192.168.1.10`.

## Why .home.arpa?

Instead of `.home.arpa` (the real standard, but a mouthful) you may use
`.internal`. Using `.local` would technically also work, but it should be
reserved only for mDNS use.

## Microcontrollers

If you want to avoid hardcoding your Raspberry Pi IP into your ESPhome devices,
you need a DNS server that will do the resolving. This can be done using the
Pi-hole container as described above.

!!! info "`*.local` won't work for ESPhome"

There is a special case for resolving `*.local` addresses. If you do a
`ping raspberrypi.local` on your desktop linux or the RPI, it will first
try using mDNS/bonjour to resolve the IP address raspberrypi.local. If this
fails it will then ask the DNS server. Esphome devices can't use mDNS to
resolve an IP address. You need a proper DNS server to respond to queries
made by an ESP. As such, `dig raspberrypi.local` will fail, simulating
ESPhome device behavior. This is as intended, and you should use
raspberrypi.home.arpa as the address on your ESP-device.

18 changes: 9 additions & 9 deletions docs/Containers/Portainer-ce.md
@@ -1,27 +1,27 @@
# Portainer CE

## References

- [Docker](https://hub.docker.com/r/portainer/portainer-ce/)
- [Website](https://www.portainer.io/portainer-ce/)

## Definition

- "#yourip" means any of the following:

- the IP address of your Raspberry Pi (eg `192.168.1.10`)
- the multicast domain name of your Raspberry Pi (eg `iot-hub.local`)
- the domain name of your Raspberry Pi (eg `iot-hub.mydomain.com`)

## About *Portainer CE*

*Portainer CE* (Community Edition) is an application for managing Docker. It is a successor to *Portainer*. According to [the *Portainer CE* documentation](https://www.portainer.io/2020/08/portainer-ce-2-0-what-to-expect/)

> Portainer 1.24.x will continue as a separate code branch, released as portainer/portainer:latest, and will receive ongoing security updates until at least 1st Sept 2021. No new features will be added beyond what was available in 1.24.1.
From that it should be clear that *Portainer* is deprecated and that *Portainer CE* is the way forward.

## Installing *Portainer CE*

Run the menu:

@@ -40,7 +40,7 @@ Ignore any message like this:

> WARNING: Found orphan containers (portainer) for this project …
## First run of *Portainer CE*

In your web browser navigate to `#yourip:9000/`:

@@ -51,7 +51,7 @@ From there, you can click on the "Local" group and take a look around. One of th

There are 'Quick actions' to view logs and other stats. This can all be done from terminal commands but *Portainer CE* makes it easier.

## Setting the Public IP address for your end-point

If you click on a "Published Port" in the "Containers" list, your browser may return an error saying something like "can't connect to server" associated with an IP address of "0.0.0.0".

@@ -79,7 +79,7 @@ Keep in mind that clicking on a "Published Port" does not guarantee that your br

> All things considered, you will get more consistent behaviour if you simply bookmark the URLs you want to use for your IOTstack services.
## If you forget your password

If you forget the password you created for *Portainer CE*, you can recover by doing the following:

@@ -92,5 +92,5 @@ $ docker-compose start portainer-ce

Then, follow the steps in:

1. [First run of *Portainer CE*](#first-run-of-portainer-ce); and
2. [Setting the Public IP address for your end-point](#setting-the-public-ip-address-for-your-end-point).
54 changes: 27 additions & 27 deletions docs/Containers/Prometheus.md
@@ -1,6 +1,6 @@
# Prometheus

## References

* [*Prometheus* home](https://prometheus.io)
* *GitHub*:
@@ -15,19 +15,19 @@
- [*CAdvisor*](https://hub.docker.com/r/zcube/cadvisor)
- [*Node Exporter*](https://hub.docker.com/r/prom/node-exporter)

## Overview

Prometheus is a collection of three containers:

* *Prometheus*
* *CAdvisor*
* *Node Exporter*

The [default configuration](#active-configuration-file) for *Prometheus* supplied with IOTstack scrapes information from all three containers.

## Installing Prometheus

### *if you are running New Menu …*

When you select *Prometheus* in the IOTstack menu, you must also select:

@@ -36,15 +36,15 @@ When you select *Prometheus* in the IOTstack menu, you must also select:

If you do not select all three containers, Prometheus will not start.

### *if you are running Old Menu …*

When you select *Prometheus* in the IOTstack menu, the service definition includes the three containers:

* *prometheus*
* *prometheus-cadvisor;* and
* *prometheus-nodeexporter*.

## Significant directories and files

```
~/IOTstack
@@ -75,25 +75,25 @@ When you select *Prometheus* in the IOTstack menu, the service definition includ
5. The *working service definition* (only relevant to old-menu, copied from ❶).
6. The *Compose* file (includes ❶).
7. The *persistent storage area*.
8. The [configuration directory](#configuration-directory).

## <a name="howPrometheusIOTstackGetsBuilt"> How *Prometheus* gets built for IOTstack </a>
## How *Prometheus* gets built for IOTstack

### <a name="githubSourceCode"> *Prometheus* source code ([*GitHub*](https://github.com)) </a>
### *Prometheus* source code ([*GitHub*](https://github.com))

The source code for *Prometheus* lives at [*GitHub* prometheus/prometheus](https://github.com/prometheus/prometheus).

### <a name="dockerHubImages"> *Prometheus* images ([*DockerHub*](https://hub.docker.com)) </a>
### *Prometheus* images ([*DockerHub*](https://hub.docker.com))

Periodically, the source code is recompiled and the resulting image is pushed to [prom/prometheus](https://hub.docker.com/r/prom/prometheus) on *DockerHub*.

### <a name="iotstackMenu"> IOTstack menu </a>
### IOTstack menu

When you select *Prometheus* in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under old menu, it is also copied to the *working service definition* and then not really used.
### <a name="iotstackFirstRun"> IOTstack first run </a>
### IOTstack first run

On a first install of IOTstack, you run the menu, choose *Prometheus* as one of your containers, and are told to do this:

@@ -124,7 +124,7 @@ The *Dockerfile* begins with:
FROM prom/prometheus:latest
```

> If you need to pin to a particular version of *Prometheus*, the *Dockerfile* is the place to do it. See [*Prometheus* version pinning](#versionPinning).
> If you need to pin to a particular version of *Prometheus*, the *Dockerfile* is the place to do it. See [*Prometheus* version pinning](#prometheus-version-pinning).
The `FROM` statement tells the build process to pull down the ***base image*** from [*DockerHub*](https://hub.docker.com).

@@ -158,15 +158,15 @@ You *may* see the same pattern in Portainer, which reports the *base image* as "

> Whether you see one or two rows depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.
### <a name="dependencies"> Dependencies: *CAdvisor* and *Node Exporter* </a>
### Dependencies: *CAdvisor* and *Node Exporter*

The *CAdvisor* and *Node Exporter* are included in the *Prometheus* service definition as dependent containers. What that means is that each time you start *Prometheus*, `docker-compose` ensures that *CAdvisor* and *Node Exporter* are already running, and keeps them running.

The [default configuration](#activeConfig) for *Prometheus* assumes *CAdvisor* and *Node Exporter* are running and starts scraping information from those targets as soon as it launches.
The [default configuration](#active-configuration-file) for *Prometheus* assumes *CAdvisor* and *Node Exporter* are running and starts scraping information from those targets as soon as it launches.

## <a name="configuringPrometheus"> Configuring **Prometheus** </a>
## Configuring **Prometheus**

### <a name="configDir"> Configuration directory </a>
### Configuration directory

The configuration directory for the IOTstack implementation of *Prometheus* is at the path:

@@ -181,9 +181,9 @@ That directory contains two files:

If you delete either file, *Prometheus* will replace it with a default the next time the container starts. This "self-repair" function is intended to provide reasonable assurance that *Prometheus* will at least **start** instead of going into a restart loop.

Unless you [decide to change it](#environmentVars), the `config` folder and its contents are owned by "pi:pi". This means you can edit the files in the configuration directory without needing the `sudo` command. Ownership is enforced each time the container restarts.
Unless you [decide to change it](#environment-variables), the `config` folder and its contents are owned by "pi:pi". This means you can edit the files in the configuration directory without needing the `sudo` command. Ownership is enforced each time the container restarts.

#### <a name="activeConfig"> Active configuration file </a>
#### Active configuration file

The file named `config.yml` is the active configuration. This is the file you should edit if you want to make changes. The default structure of the file is:

@@ -213,7 +213,7 @@ Note:

* The YAML parser used by *Prometheus* seems to be ***exceptionally*** sensitive to syntax errors (far less tolerant than `docker-compose`). For this reason, you should **always** check the *Prometheus* log after any configuration change.

#### <a name="referenceConfig"> Reference configuration file </a>
#### Reference configuration file

The file named `prometheus.yml` is a reference configuration. It is a **copy** of the original configuration file that ships inside the *Prometheus* container at the path:

@@ -231,7 +231,7 @@ $ docker-compose restart prometheus
$ docker logs prometheus
```

### <a name="environmentVars"> Environment variables </a>
### Environment variables

The IOTstack implementation of *Prometheus* supports two environment variables:

@@ -241,11 +241,11 @@ environment:
- IOTSTACK_GID=1000
```
Those variables control ownership of the [Configuration directory](#configDir) and its contents. Those environment variables are present in the standard IOTstack service definition for *Prometheus* and have the effect of assigning ownership to "pi:pi".
Those variables control ownership of the [Configuration directory](#configuration-directory) and its contents. Those environment variables are present in the standard IOTstack service definition for *Prometheus* and have the effect of assigning ownership to "pi:pi".
If you delete those environment variables from your *Compose* file, the [Configuration directory](#configDir) will be owned by "nobody:nobody"; otherwise the directory and its contents will be owned by whatever values you pass for those variables.
If you delete those environment variables from your *Compose* file, the [Configuration directory](#configuration-directory) will be owned by "nobody:nobody"; otherwise the directory and its contents will be owned by whatever values you pass for those variables.
### <a name="migration"> Migration considerations </a>
### Migration considerations
Under the original IOTstack implementation of *Prometheus* (just "as it comes" from *DockerHub*), the service definition expected the configuration file to be at:
@@ -276,7 +276,7 @@ Note:

* The YAML parser used by *Prometheus* is very sensitive to syntax errors. Always check the *Prometheus* log after any configuration change.

## <a name="upgradingPrometheus"> Upgrading *Prometheus* </a>
## Upgrading *Prometheus*

You can update `cadvisor` and `nodeexporter` like this:

@@ -320,7 +320,7 @@ The `prune` is the simplest way of cleaning up. The first call removes the old *

> Whether an old *base image* exists depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.
### <a name="versionPinning"> *Prometheus* version pinning </a>
### *Prometheus* version pinning

If you need to pin *Prometheus* to a particular version:

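For example (the tag shown here is purely illustrative – choose the version you actually need from DockerHub), the first line of the *Dockerfile* would change from `FROM prom/prometheus:latest` to something like:

```
FROM prom/prometheus:v2.33.4
```
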
50 changes: 21 additions & 29 deletions docs/Containers/Python.md
@@ -1,12 +1,12 @@
# Python

## <a name="references"> references </a>
## references

* [Python.org](https://www.python.org)
* [Dockerhub image library](https://hub.docker.com/_/python)
* [GitHub docker-library/python](https://github.com/docker-library/python)

## <a name="menuPython"> selecting Python in the IOTstack menu </a>
## selecting Python in the IOTstack menu

When you select Python in the menu:

@@ -40,15 +40,9 @@ When you select Python in the menu:
# - "external:internal"
volumes:
- ./volumes/python/app:/usr/src/app
networks:
- iotstack_nw
```

Note:

* This service definition is for "new menu" (master branch). The only difference with "old menu" (old-menu branch) is the omission of the last two lines.

### <a name="customisingPython"> customising your Python service definition </a>
### customising your Python service definition

The service definition contains a number of customisation points:

@@ -76,7 +70,7 @@ $ cd ~/IOTstack
$ docker-compose up -d python
```

## <a name="firstLaunchPython"> Python - first launch </a>
## Python - first launch

After running the menu, you are told to run the commands:

@@ -145,7 +139,7 @@ This is what happens:

Pressing <kbd>control</kbd>+<kbd>c</kbd> terminates the log display but does not terminate the running container.

## <a name="stopPython"> stopping the Python service </a>
## stopping the Python service

To stop the container from running, either:

@@ -163,7 +157,7 @@ To stop the container from running, either:
$ docker-compose rm --force --stop -v python
```

## <a name="startPython"> starting the Python service </a>
## starting the Python service

To bring up the container again after you have stopped it, either:

@@ -181,23 +175,23 @@ To bring up the container again after you have stopped it, either:
$ docker-compose up -d python
```

## <a name="reLaunchPython"> Python - second-and-subsequent launch </a>
## Python - second-and-subsequent launch

Each time you launch the Python container *after* the first launch:

1. The existing local image (`iotstack_python`) is instantiated to become the running container.
2. The `docker-entrypoint.sh` script runs and performs "self-repair" by replacing any files that have gone missing from the persistent storage area. Self-repair does **not** overwrite existing files!
3. The `app.py` Python script is run.

## <a name="debugging"> when things go wrong - check the log </a>
## when things go wrong - check the log

If the container misbehaves, the log is your friend:

```
$ docker logs python
```

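To follow the log in real time rather than taking a one-off snapshot, add the `-f` flag (pressing <kbd>control</kbd>+<kbd>c</kbd> stops the display; the container keeps running):

```
docker logs -f python
```
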
## <a name="yourPythonScript"> project development life-cycle </a>
## project development life-cycle

It is **critical** that you understand that **all** of your project development should occur within the folder:

@@ -207,7 +201,7 @@ It is **critical** that you understand that **all** of your project development

So long as you are performing some sort of routine backup (either with a supplied script or a third party solution like [Paraphraser/IOTstackBackup](https://github.com/Paraphraser/IOTstackBackup)), your work will be protected.

### <a name="gettingStarted"> getting started </a>
### getting started

Start by editing the file:

@@ -228,7 +222,7 @@ $ cd ~/IOTstack
$ docker-compose restart python
```

### <a name="persistentStorage"> reading and writing to disk </a>
### reading and writing to disk

Consider this line in the service definition:

@@ -255,7 +249,7 @@ What it means is that:

If your script writes into any other directory inside the container, the data will be lost when the container re-launches.

### <a name="cleanSlate"> getting a clean slate </a>
### getting a clean slate

If you make a mess of things and need to start from a clean slate, erase the persistent storage area:

@@ -268,7 +262,7 @@ $ docker-compose up -d python

The container will re-initialise the persistent storage area from its defaults.

### <a name="addingPackages"> adding packages </a>
### adding packages

As you develop your project, you may find that you need to add supporting packages. For this example, we will assume you want to add "[Flask](https://pypi.org/project/Flask/)" and "[beautifulsoup4](https://pypi.org/project/beautifulsoup4/)".

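As a sketch of the quick, *non-permanent* approach (assuming the container is running; packages installed this way disappear whenever the local image is rebuilt):

```
docker exec python pip3 install Flask beautifulsoup4
```
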
@@ -322,7 +316,7 @@ To make *Flask* and *beautifulsoup4* a permanent part of your container:
Flask==2.0.1
```

5. Continue your development work by returning to [getting started](#gettingStarted).
5. Continue your development work by returning to [getting started](#getting-started).

Note:

@@ -346,11 +340,11 @@ Note:

The `requirements.txt` file will be recreated and it will be a copy of the version in the *services* directory as of the last image rebuild.

### <a name="scriptBaking"> making your own Python script the default </a>
### making your own Python script the default

Suppose the Python script you have been developing reaches a major milestone and you decide to "freeze dry" your work up to that point so that it becomes the default when you ask for a [clean slate](#cleanSlate). Proceed like this:
Suppose the Python script you have been developing reaches a major milestone and you decide to "freeze dry" your work up to that point so that it becomes the default when you ask for a [clean slate](#getting-a-clean-slate). Proceed like this:

1. If you have added any packages by following the steps in [adding packages](#addingPackages), run the following command:
1. If you have added any packages by following the steps in [adding packages](#adding-packages), run the following command:

```bash
$ docker exec python bash -c 'pip3 freeze >requirements.txt'
@@ -412,11 +406,11 @@ Suppose the Python script you have been developing reaches a major milestone and
$ docker system prune -f
```

### <a name="scriptCanning"> canning your project </a>
### canning your project

Suppose your project has reached the stage where you wish to put it into production as a service under its own name. Make two further assumptions:

1. You have gone through the steps in [making your own Python script the default](#scriptBaking) and you are **certain** that the content of `./services/python/app` correctly captures your project.
1. You have gone through the steps in [making your own Python script the default](#making-your-own-python-script-the-default) and you are **certain** that the content of `./services/python/app` correctly captures your project.
2. You want to give your project the name "wishbone".

Proceed like this:
@@ -456,8 +450,6 @@ Proceed like this:
# - "external:internal" # - "external:internal"
volumes: volumes:
- ./volumes/python/app:/usr/src/app | - ./volumes/wishbone/app:/usr/src/app
networks: networks:
- iotstack_nw - iotstack_nw
```

Note:
@@ -479,7 +471,7 @@ Remember:
~/IOTstack/volumes/wishbone/app
```

## <a name="routineMaintenance"> routine maintenance </a>
## routine maintenance

To make sure you are running from the most-recent **base** image of Python from Dockerhub:

@@ -503,4 +495,4 @@ The old base image can't be removed until the old local image has been removed,

Note:

* If you have followed the steps in [canning your project](#scriptCanning) and your service has a name other than `python`, just substitute the new name where you see `python` in the two `docker-compose` commands.
* If you have followed the steps in [canning your project](#canning-your-project) and your service has a name other than `python`, just substitute the new name where you see `python` in the two `docker-compose` commands.
52 changes: 26 additions & 26 deletions docs/Containers/Telegraf.md
@@ -7,13 +7,13 @@ The purpose of the Dockerfile is to:
* tailor the default configuration to be IOTstack-ready; and
* enable the container to perform self-repair if essential elements of the persistent storage area disappear.

## <a name="references"> References </a>
## References

- [*influxdata Telegraf* home](https://www.influxdata.com/time-series-platform/telegraf/)
- [*GitHub*: influxdata/influxdata-docker/telegraf](https://github.com/influxdata/influxdata-docker/tree/master/telegraf)
- [*DockerHub*: influxdata Telegraf](https://hub.docker.com/_/telegraf)

## <a name="significantFiles"> Significant directories and files </a>
## Significant directories and files

```
~/IOTstack
@@ -38,34 +38,34 @@ The purpose of the Dockerfile is to:

1. The *Dockerfile* used to customise Telegraf for IOTstack.
2. A replacement for the `telegraf` container script of the same name, extended to handle container self-repair.
3. The *additions folder*. See [Applying optional additions](#optionalAdditions).
3. The *additions folder*. See [Applying optional additions](#applying-optional-additions).
4. The *auto_include folder*. Additions automatically applied to
`telegraf.conf`. See [Automatic includes to telegraf.conf](#autoInclude).
`telegraf.conf`. See [Automatic includes to telegraf.conf](#automatic-includes-to-telegrafconf).
5. The *template service definition*.
6. The *working service definition* (only relevant to old-menu, copied from ❹).
7. The *persistent storage area* for the `telegraf` container.
8. A working copy of the *additions folder* (copied from ❸). See [Applying optional additions](#optionalAdditions).
9. The *reference configuration file*. See [Changing Telegraf's configuration](#editConfiguration).
8. A working copy of the *additions folder* (copied from ❸). See [Applying optional additions](#applying-optional-additions).
9. The *reference configuration file*. See [Changing Telegraf's configuration](#changing-telegrafs-configuration).
10. The *active configuration file*. A subset of ➒ altered to support communication with InfluxDB running in a container in the same IOTstack instance.

Everything in the persistent storage area ❼:

* will be replaced if it is not present when the container starts; but
* will never be overwritten if altered by you.

## <a name="howTelegrafIOTstackGetsBuilt"> How Telegraf gets built for IOTstack </a>
## How Telegraf gets built for IOTstack

### <a name="dockerHubImages"> Telegraf images ([*DockerHub*](https://hub.docker.com)) </a>
### Telegraf images ([*DockerHub*](https://hub.docker.com))

Periodically, the source code is recompiled and the resulting image is pushed to [influxdata Telegraf](https://hub.docker.com/_/telegraf?tab=tags&page=1&ordering=last_updated) on *DockerHub*.

### <a name="iotstackMenu"> IOTstack menu </a>
### IOTstack menu

When you select Telegraf in the IOTstack menu, the *template service definition* is copied into the *Compose* file.

> Under old menu, it is also copied to the *working service definition* and then not really used.
### <a name="iotstackFirstRun"> IOTstack first run </a>
### IOTstack first run

On a first install of IOTstack, you run the menu, choose your containers, and are told to do this:

@@ -74,7 +74,7 @@ $ cd ~/IOTstack
$ docker-compose up -d
```

> See also the [Migration considerations](#migration) (below).
> See also the [Migration considerations](#migration-considerations) (below).
`docker-compose` reads the *Compose* file. When it arrives at the `telegraf` fragment, it finds:

@@ -99,7 +99,7 @@ The *Dockerfile* begins with:
FROM telegraf:latest
```

> If you need to pin to a particular version of Telegraf, the *Dockerfile* is the place to do it. See [Telegraf version pinning](#versionPinning).
> If you need to pin to a particular version of Telegraf, the *Dockerfile* is the place to do it. See [Telegraf version pinning](#telegraf-version-pinning).
The `FROM` statement tells the build process to pull down the ***base image*** from [*DockerHub*](https://hub.docker.com).

@@ -134,7 +134,7 @@ You *may* see the same pattern in *Portainer*, which reports the ***base image**

> Whether you see one or two rows depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.
### <a name="migration"> Migration considerations </a>
### Migration considerations

Under the original IOTstack implementation of Telegraf (just "as it comes" from *DockerHub*), the service definition expected `telegraf.conf` to be at:

@@ -154,9 +154,9 @@ With one exception, all prior and current versions of the default configuration

> In other words, once you strip away comments and blank lines, and remove any "active" configuration options that simply repeat their default setting, you get the same subset of "active" configuration options. The default configuration file supplied with gcgarner/IOTstack is available [here](https://github.com/gcgarner/IOTstack/blob/master/.templates/telegraf/telegraf.conf) if you wish to refer to it.
The exception is `[[inputs.mqtt_consumer]]` which is now provided as an optional addition. If your existing Telegraf configuration depends on that input, you will need to apply it. See [applying optional additions](#optionalAdditions).
The exception is `[[inputs.mqtt_consumer]]` which is now provided as an optional addition. If your existing Telegraf configuration depends on that input, you will need to apply it. See [applying optional additions](#applying-optional-additions).

## <a name="logging"> Logging </a>
## Logging

You can inspect Telegraf's log by:

@@ -166,7 +166,7 @@ $ docker logs telegraf

These logs are ephemeral and will disappear when your Telegraf container is rebuilt.

### <a name="logTelegrafDB"> log message: *database "telegraf" creation failed* </a>
### log message: *database "telegraf" creation failed*

The following log message can be misleading:

@@ -178,7 +178,7 @@ If InfluxDB is not running when Telegraf starts, the `depends_on:` clause in Tel

What this error message *usually* means is that Telegraf has tried to communicate with InfluxDB before the latter is ready to accept connections. Telegraf typically retries after a short delay and is then able to communicate with InfluxDB.

## <a name="editConfiguration"> Changing Telegraf's configuration </a>
## Changing Telegraf's configuration

The first time you launch the Telegraf container, the following structure will be created in the persistent storage area:

@@ -204,7 +204,7 @@ The file:
- is created by removing all comment lines and blank lines from `telegraf-reference.conf`, leaving only the "active" configuration options, and then adding options necessary for IOTstack.
- is less than 30 lines and is significantly easier to understand than `telegraf-reference.conf`.

* `inputs.docker.conf` – see [Applying optional additions](#optionalAdditions) below.
* `inputs.docker.conf` – see [Applying optional additions](#applying-optional-additions) below.

The intention of this structure is that you:

@@ -219,7 +219,7 @@ $ cd ~/IOTstack
$ docker-compose restart telegraf
```

### <a name="autoInclude"> Automatic includes to telegraf.conf </a>
### Automatic includes to telegraf.conf

* `inputs.docker.conf` instructs Telegraf to collect metrics from Docker. Requires kernel control
groups to be enabled to collect memory usage data. If not done during initial installation,
@@ -229,9 +229,9 @@ $ docker-compose restart telegraf
```
* `inputs.cpu_temp.conf` collects CPU temperature.

### <a name="optionalAdditions"> Applying optional additions </a>
### Applying optional additions

The *additions folder* (see [Significant directories and files](#significantFiles)) is a mechanism for additional *IOTstack-ready* configuration options to be provided for Telegraf.
The *additions folder* (see [Significant directories and files](#significant-directories-and-files)) is a mechanism for additional *IOTstack-ready* configuration options to be provided for Telegraf.

Currently there is one addition:

@@ -249,9 +249,9 @@ $ docker-compose restart telegraf

The `grep` strips comment lines and the `sudo tee` is a safe way of appending the result to `telegraf.conf`. The `restart` causes Telegraf to notice the change.

## <a name="cleanSlate"> Getting a clean slate </a>
## Getting a clean slate

### <a name="resetDB"> Erasing the persistent storage area </a>
### Erasing the persistent storage area

Erasing Telegraf's persistent storage area triggers self-healing and restores known defaults:

@@ -272,7 +272,7 @@ Note:
$ docker-compose restart telegraf
```

### <a name="resetDB"> Resetting the InfluxDB database </a>
### Resetting the InfluxDB database

To reset the InfluxDB database that Telegraf writes into, proceed like this:

@@ -293,7 +293,7 @@ In words:
* Delete the `telegraf` database, and then exit the CLI.
* Start the Telegraf container. This re-creates the database automatically.

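As a sketch, and assuming the stock IOTstack container name `influxdb` plus the InfluxDB 1.x `influx` CLI, those steps might look like:

```
cd ~/IOTstack
docker-compose stop telegraf
docker exec -it influxdb influx
# inside the InfluxDB CLI:
#   drop database telegraf
#   exit
docker-compose up -d telegraf
```
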
## <a name="upgradingTelegraf"> Upgrading Telegraf </a>
## Upgrading Telegraf

You can update most containers like this:

@@ -335,7 +335,7 @@ Your existing Telegraf container continues to run while the rebuild proceeds. On

The `prune` is the simplest way of cleaning up. The first call removes the old ***local image***. The second call cleans up the old ***base image***. Whether an old ***base image*** exists depends on the version of `docker-compose` you are using and how your version of `docker-compose` builds local images.

### <a name="versionPinning"> Telegraf version pinning </a>
### Telegraf version pinning

If you need to pin Telegraf to a particular version:

91 changes: 54 additions & 37 deletions docs/Containers/WireGuard.md

Large diffs are not rendered by default.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
Original file line number Diff line number Diff line change
@@ -2,7 +2,7 @@
The postbuild bash script allows arbitrary bash commands to be executed after the stack has been built.

## How to use
Place a file in the main directory called `postbuild.sh`. When the buildstack [build logic](https://sensorsiot.github.io/IOTstack/Menu-System) finishes, it'll execute the `postbuild.sh` script, passing in each service selected from the buildstack menu as a parameter. This script is run each time the buildstack logic runs.
Place a file in the main directory called `postbuild.sh`. When the buildstack [build logic](../Developers/Menu-System.md) finishes, it'll execute the `postbuild.sh` script, passing in each service selected from the buildstack menu as a parameter. This script is run each time the buildstack logic runs.

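A minimal `postbuild.sh` might simply record which services were selected (the script receives one argument per selected service):

```
#!/bin/bash
# log each service chosen in the buildstack menu
for service in "$@"; do
  echo "selected: $service"
done
```
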
## Updates
The `postbuild.sh` file has been added to gitignore, so it won't be updated by IOTstack when IOTstack is updated. It has also been added to the backup script so that it will be backed up with your personal IOTstack backups.
36 changes: 27 additions & 9 deletions docs/Contributing-Services.md → docs/Developers/index.md
@@ -1,6 +1,23 @@
# Contributing a service to IOTstack
# Contributing

On this page you can find information on how to contribute a service to IOTstack. We are generally very accepting of new services where they are useful. Keep in mind that if it is not IOTstack, selfhosted, or automation related we may not approve the PR.
## Writing documentation

Documentation is written as markdown, processed using mkdocs ([docs](https://www.mkdocs.org/user-guide/writing-your-docs/#writing-your-docs)) and the Material theme ([docs](https://squidfunk.github.io/mkdocs-material/reference/)). The Material theme is not just styling, but provides additional syntax extensions.

Set up your system for mkdocs and Material:
```
pip3 install -r requirements-mkdocs.txt
```

To test your local changes while writing them and before making a pull-request:
```
cd ~/IOTstack
mkdocs serve
```

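To render the site once into a static `site/` directory (for example, to inspect the final HTML) instead of serving it:

```
cd ~/IOTstack
mkdocs build
```
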
## Creating a new service

In this section you can find information on how to contribute a service to IOTstack. We are generally very accepting of new services where they are useful. Keep in mind that if it is not IOTstack, selfhosted, or automation related we may not approve the PR.

Services will grow over time; we may split up the buildstack menu into subsections or create filters to make all the services we provide easier to find.

@@ -9,8 +26,8 @@ Services will grow over time, we may split up the buildstack menu into subsectio
* `build.py` file is correct
* Service allows changing the external WUI port from the Build Stack options menu if the service uses an HTTP/S port
* Use a default password, or allow the user to generate a random password for the service for initial installation. If the service asks to set up an account, this can be ignored.
* Ensure [Default Configs](https://sensorsiot.github.io/IOTstack/Default-Configs) is updated with WUI port and username/password.
* Must detect port conflicts with other services on [BuildStack](https://sensorsiot.github.io/IOTstack/Menu-System) Menu.
* Ensure [Default Configs](../Basic_setup/Default-Configs.md) is updated with WUI port and username/password.
* Must detect port conflicts with other services on [BuildStack](Menu-System.md) Menu.
* `Pre` and `Post` hooks work with no errors.
* Does not require user to edit config files in order to get the service running.
* Ensure that your service can be backed up and restored without errors or data loss.
@@ -21,8 +38,9 @@ Services will grow over time, we may split up the buildstack menu into subsectio
If your new service is approved and merged then congratulations! Please watch the Issues page on github over the next few days and weeks to see if any users have questions or issues with your new service.

Links:
* [Default configs](https://sensorsiot.github.io/IOTstack/Default-Configs)
* [Password configuration for Services](https://sensorsiot.github.io/IOTstack/BuildStack-RandomPassword)
* [Build Stack Menu System](https://sensorsiot.github.io/IOTstack/Menu-System)
* [Coding a new service](https://sensorsiot.github.io/IOTstack/BuildStack-Services)
* [IOTstack issues](https://github.com/SensorsIot/IOTstack/issues)

* [Default configs](../Basic_setup/Default-Configs.md)
* [Password configuration for Services](BuildStack-RandomPassword.md)
* [Build Stack Menu System](Menu-System.md)
* [Coding a new service](BuildStack-Services.md)
* [IOTstack issues](https://github.com/SensorsIot/IOTstack/issues)
75 changes: 0 additions & 75 deletions docs/Home.md

This file was deleted.

@@ -8,7 +8,8 @@ There are many features that are needing to be introduced into the new menu syst

## Breaking changes
There are a few changes that you need to be aware of:
* Docker Environmental `*.env` files are no longer a thing by default. Everything needed is specified in the service.yml file, you can still optionally use them though either with [Custom Overrides](https://sensorsiot.github.io/IOTstack/Custom) or with the [PostBuild](https://sensorsiot.github.io/IOTstack/PostBuild-Script) script. Specific config files for certain services still work as they once did.

* Docker environment `*.env` files are no longer a thing by default. Everything needed is specified in the service.yml file, though you can still optionally use them either with [Custom Overrides](../Basic_setup/Custom.md) or with the [PostBuild](../Developers/PostBuild-Script.md) script. Specific config files for certain services still work as they once did.
* Python 3, pip3, PyYAML and Blessed are all required to be installed.
* Not backwards compatible with old menu system. You will be able to switch back to the old menu system for a period of time by changing to the `old-menu` branch. It will be unmaintained except for critical updates. It will eventually be removed - but not before everyone is ready to leave it.

@@ -26,4 +27,4 @@ There are a few changes that you need to be aware of:
* Removed env files
* Backup and restoring more streamlined
* Documentation updated for all services
* No longer needs to be installed in the home directory `~`.
* No longer needs to be installed in the home directory `~`.
42 changes: 21 additions & 21 deletions docs/gcgarner-migration.md → docs/Updates/gcgarner-migration.md
@@ -6,9 +6,9 @@ Migrating to SensorsIot/IOTstack was fairly easy when this repository was first

The probability of conflicts developing increases as a function of time since the fork. Conflicts were and are pretty much inevitable so a more involved procedure is needed.

## <a name="migrationSteps"> Migration Steps </a>
## Migration Steps

### <a name="checkAssumptions"> Step 1 – Check your assumptions </a>
### Step 1 – Check your assumptions

Make sure that you are, *actually*, on gcgarner. Don't assume!

@@ -20,7 +20,7 @@ origin https://github.com/gcgarner/IOTstack.git (push)

Do not proceed if you don't see those URLs!

### <a name="downStack"> Step 2 – Take IOTstack down </a>
### Step 2 – Take IOTstack down

Take your stack down. This is not *strictly* necessary but we'll be moving the goalposts a bit so it's better to be on the safe side.

@@ -29,22 +29,22 @@ $ cd ~/IOTstack
$ docker-compose down
```

### <a name="chooseMigrationMethod"> Step 3 – Choose your migration method </a>
### Step 3 – Choose your migration method

There are two basic approaches to switching from gcgarner/IOTstack to SensorsIot/IOTstack:

- [Migration by changing upstream repository](#migrateChangeUpstream)
- [Migration by clone and merge](#migrateCloneMerge)
- [Migration by changing upstream repository](#migration-option-1-change-upstream-repository)
- [Migration by clone and merge](#migration-option-2-clone-and-merge)

You can think of the first as "working *with* git" while the second is "using brute force".

The first approach will work if you haven't tried any other migration steps and/or have not made too many changes to items in your gcgarner/IOTstack that are under git control.

If you are already stuck or you try the first approach and get a mess, or it all looks far too hard to sort out, then try the [Migration by clone and merge](#migrateCloneMerge) approach.
If you are already stuck or you try the first approach and get a mess, or it all looks far too hard to sort out, then try the [Migration by clone and merge](#migration-option-2-clone-and-merge) approach.

#### <a name="migrateChangeUpstream"> Migration Option 1 – change upstream repository </a>
#### Migration Option 1 – change upstream repository

##### <a name="checkLocalChanges"> Check for local changes </a>
##### Check for local changes

Make sure you are on the master branch (you probably are so this is just a precaution), and then see if Git thinks you have made any local changes:

@@ -93,15 +93,15 @@ The simplest way to deal with modified files is to rename them to move them out
menu.sh.jqh
```

##### <a name="synchroniseGcgarner"> Synchronise with gcgarner on GitHub </a>
##### Synchronise with gcgarner on GitHub

Make sure your local copy of gcgarner is in sync with GitHub.

```
$ git pull
```

##### <a name="removeUpstream"> Get rid of any upstream reference </a>
##### Get rid of any upstream reference

There may or may not be any "upstream" set. The most likely reason for this to happen is if you used your local copy as the basis of a Pull Request.

@@ -111,15 +111,15 @@ The next command will probably return an error, which you should ignore. It's ju
$ git remote remove upstream
```

##### <a name="pointToSensorsIoT"> Point to SensorsIot </a>
##### Point to SensorsIot

Change your local repository to point to SensorsIot.

```
$ git remote set-url origin https://github.com/SensorsIot/IOTstack.git
```

##### <a name="syncSensorsIoT"> Synchronise with SensorsIot on GitHub </a>
##### Synchronise with SensorsIot on GitHub

This is where things can get a bit tricky so please read these instructions carefully **before** you proceed.

@@ -174,19 +174,19 @@ Auto-merging .templates/someRandomService/service.yml

If you don't use `someRandomService` then you could safely ignore this on the basis that it was "probably right". However, if you did use that service and it started to misbehave after migration, you would know that the `service.yml` file was a good place to start looking for explanations.

##### <a name="finishWithPull"> Finish with a pull </a>
##### Finish with a pull

At this point, only the migrated master branch is present on your local copy of the repository. The next command brings you fully in-sync with GitHub:

```
$ git pull
```

#### <a name="migrateCloneMerge"> Migration Option 2 – clone and merge </a>
#### Migration Option 2 – clone and merge

If you have been following the process correctly, your IOTstack will already be down.

##### <a name="renameOldIOTstack"> Rename your existing IOTstack folder </a>
##### Rename your existing IOTstack folder

Move your old IOTstack folder out of the way, like this:

@@ -199,7 +199,7 @@ Note:

* You should not need `sudo` for the `mv` command but it is OK to use it if necessary.

##### <a name="fetchCleanClone"> Fetch a clean clone of SensorsIot/IOTstack </a>
##### Fetch a clean clone of SensorsIot/IOTstack

```
$ git clone https://github.com/SensorsIot/IOTstack.git ~/IOTstack
@@ -240,7 +240,7 @@ Observe what is **not** there:

From this, it should be self-evident that a clean checkout from GitHub is the factory for *all* IOTstack installations, while the contents of `backups`, `services`, `volumes` and `docker-compose.yml` represent each user's individual choices, configuration options and data.

##### <a name="mergeOldWithNew"> Merge old into new </a>
##### Merge old into new

Execute the following commands:

@@ -272,7 +272,7 @@ There is no need to migrate the `backups` directory. You are better off creating
$ mkdir ~/IOTstack/backups
```

### <a name="chooseMenu"> Step 4 – Choose your menu </a>
### Step 4 – Choose your menu

If you have reached this point, you have migrated to SensorsIot/IOTstack where you are on the "master" branch. This implies "new menu".

@@ -353,15 +353,15 @@ Although you can freely change branches, it's probably not a good idea to try to

Even so, nothing will change **until** you run your chosen menu to completion and allow it to generate a new `docker-compose.yml`.

### <a name="upStack"> Step 5 – Bring up your stack </a>
### Step 5 – Bring up your stack

Unless you have gotten ahead of yourself and have already run the menu (old or new), nothing will have changed in the parts of your `~/IOTstack` folder that define your IOTstack implementation. You can safely:

```
$ docker-compose up -d
```

## <a name="seeAlso"> See also </a>
## See also

There is another gist [Installing Docker for IOTstack](https://gist.github.com/Paraphraser/d119ae81f9e60a94e1209986d8c9e42f) which explains how to overcome problems with outdated Docker and Docker-Compose installations.

33 changes: 30 additions & 3 deletions docs/Updating-the-Project.md → docs/Updates/index.md
@@ -1,9 +1,36 @@
# Updating the project
**If you ran the git checkout -- 'git ls-files -m' as suggested in the old wiki entry then please check your duck.sh because it removed your domain and token**


Periodically, updates are made to the project. These include new or modified container templates, changes to backups, and additional features. As they are released, your local copy of the project will become out of date. This section explains how to bring your project up to the latest published state.

## Quick instructions

1. Backup your current settings: `cp docker-compose.yml docker-compose.yml.bak`
2. Check `git status` for any local changes you may have made to project files. Preserve your changes by committing them: `git commit -a -m "local customization"`, or revert them using: `git checkout -- path/to/changed_file`
3. Update project files from GitHub: `git pull origin master -r`
4. Get the latest images from the web: `docker-compose pull`
5. Rebuild locally created images from new Dockerfiles: `docker-compose build --pull --no-cache`
6. Update running containers to the latest versions: `docker-compose up --build -d`
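
The steps above can be collected into one small helper. This is only a sketch: the function name `iotstack_update` is invented here, and it assumes it is run from inside your IOTstack folder (step 2, reviewing `git status`, is left to you):

```shell
# Sketch of the "Quick instructions" as a single shell function.
# The name iotstack_update is illustrative, not part of IOTstack itself.
iotstack_update() {
  if [ ! -f docker-compose.yml ]; then
    echo "docker-compose.yml not found - run this from your IOTstack folder" >&2
    return 1
  fi
  cp docker-compose.yml docker-compose.yml.bak &&  # 1. keep a backup
  git pull origin master -r &&                     # 3. update project files
  docker-compose pull &&                           # 4. fetch latest images
  docker-compose build --pull --no-cache &&        # 5. rebuild local builds
  docker-compose up --build -d                     # 6. recreate containers
}
```

Because the commands are chained with `&&`, the helper stops at the first failing step, leaving the backup of `docker-compose.yml` in place.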

### Troubleshooting: if a container fails to start after update

* Try restarting the whole stack: `docker-compose restart`
* Backup your stack settings: `cp docker-compose.yml docker-compose.yml.bak`
* Check the log output of the failing service: `docker-compose logs *service-name*`
* Try searching for the error message and fixing problems in `docker-compose.yml` manually.
* Try recreating the failing service definition using menu.sh:
    1. Run `./menu.sh`, select Build Stack, unselect the failing service, press
       enter to build, and then exit.
    2. Run `./menu.sh`, select Build Stack, select the service again, press
       enter to build, and then exit.
    3. Try starting it now: `docker-compose up -d`
* Go to the [IOTStack Discord](https://discord.gg/ZpKHnks) and describe your
  problem. We're happy to help.

## Details, partly outdated

!!! warning
If you ran `git checkout -- 'git ls-files -m'` as suggested in the old wiki entry then please check your duck.sh because it removed your domain and token

Git offers built-in functionality to fetch the latest changes.

`git pull origin master` will fetch the latest changes from GitHub without overwriting files that you have modified yourself. If you have made a local commit, you may need to handle a merge conflict.
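
If `git status` shows uncommitted local modifications, one common way to carry them across an update is `git stash`. The demo below is self-contained (it builds throwaway repositories under `mktemp` purely for illustration); in real use you would run only the final stash/pull/pop lines inside `~/IOTstack`:

```shell
set -e
demo=$(mktemp -d)

# Simulate the upstream project with one tracked file.
git init -q "$demo/upstream"
cd "$demo/upstream"
git config user.email demo@example.com
git config user.name demo
echo "version 1" > menu.sh
git add menu.sh
git commit -qm "initial"

# "Your" clone, carrying an uncommitted local edit.
git clone -q "$demo/upstream" "$demo/local"
cd "$demo/local"
git config user.email demo@example.com
git config user.name demo
echo "# my local tweak" >> menu.sh

# Upstream publishes an update in the meantime.
cd "$demo/upstream"
echo "new feature" > feature.txt
git add feature.txt
git commit -qm "update"

# The workflow itself: shelve local edits, pull, re-apply them.
cd "$demo/local"
git stash -q
git pull -q
git stash pop -q
```

After the pop, the clone contains both the upstream update and the local edit. If the same lines changed on both sides, `git stash pop` reports a conflict for you to resolve by hand.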
@@ -18,4 +45,4 @@ With the new latest version of the project you can now use the menu to build you

![image](https://user-images.githubusercontent.com/46672225/68646024-8fee2f80-0522-11ea-8b6e-f1d439a5be7f.png)

After your stack has been rebuilt, you can run `docker-compose up -d` to pull in the latest changes. If you have not updated your images in a while, consider running `./scripts/update.sh` to get the latest versions of the images from Docker Hub as well.
23 changes: 19 additions & 4 deletions docs/index.md
@@ -1,11 +1,26 @@
---
title: Home
hide:
- navigation
---
# IOTStack Wiki

!!! abstract inline end "What is IOTstack"
IOTstack is a builder for docker-compose to easily make and maintain IoT
stacks on the Raspberry Pi

Welcome to the IOTstack Wiki:

* Use the list of contents at the left of this page to explore this Wiki.
* <span class="show-when-wide-layout">
Use the header tabs and content list at the left to explore this Wiki.
</span>
<label class="show-when-narrow-layout">
Click the "≡" icon to navigate this Wiki.
</label>

- If you are viewing this on a device that does not show the list by default, click the "≡" icon.
* If you are just getting started with IOTstack, see [Getting Started](Basic_setup/).
* If you're running gcgarner/IOTstack see [Migrating to SensorsIot](Updates/gcgarner-migration.md).

* If you are looking for information on a specific container, click on the "Containers" folder at the bottom of the list.
* You're always welcome to ask questions on the [IOTStack Discord](https://discord.gg/ZpKHnks).

* If you are just getting started with IOTstack, see [Getting Started](./Getting-Started.md).
* Fixes and improvements are welcome; see [Contributing](Developers/).
1 change: 1 addition & 0 deletions docs/stack-24.svg
17 changes: 17 additions & 0 deletions docs/style.css
@@ -0,0 +1,17 @@
/* vim: set sw=2: */

/* hide "Made with Material" footer */
.md-footer-meta {
display: none;
}

@media screen and (max-width:76.25em) {
.show-when-wide-layout {
display:none
}
}
@media screen and (min-width:76.25em) {
.show-when-narrow-layout {
display:none
}
}
54 changes: 52 additions & 2 deletions mkdocs.yml
@@ -1,10 +1,60 @@
site_name: IOTstack
site_description: 'Docker stack for getting started on IOT on the Raspberry PI'

# Repository
repo_url: https://github.com/SensorsIot/IOTstack
repo_name: SensorsIot/IOTstack
edit_uri: "https://github.com/SensorsIot/IOTstack/edit/master/docs"

theme:
name: material
icon:
logo: octicons/stack-24
favicon: stack-24.svg
palette:
- scheme: default
toggle:
icon: material/weather-sunny
name: Switch to dark mode
- scheme: slate
toggle:
icon: material/weather-night
name: Switch to light mode
features:
- tabs
- navigation.tabs
- navigation.tabs.sticky
- navigation.sections

plugins:
- search
# - awesome-pages
- redirects:
# Forward renamed pages to avoid breaking old links.
redirect_maps:
Getting-Started.md: Basic_setup/index.md
Accessing-your-Device-from-the-internet.md: Basic_setup/Accessing-your-Device-from-the-internet.md
Backup-and-Restore.md: Basic_setup/Backup-and-Restore.md
Custom.md: Basic_setup/Custom.md
Default-Configs.md: Basic_setup/Default-Configs.md
Docker-commands.md: Basic_setup/Docker-commands.md
How-the-script-works.md: Basic_setup/How-the-script-works.md
Misc.md: Basic_setup/Misc.md
Native-RTL_433.md: Basic_setup/Native-RTL_433.md
Networking.md: Basic_setup/Networking.md
RPIEasy_native.md: Basic_setup/RPIEasy_native.md
Understanding-Containers.md: Basic_setup/Understanding-Containers.md
Updates/Updating-the-Project.md: Updates/index.md
PostBuild-Script.md: Developers/PostBuild-Script.md
BuildStack-RandomPassword.md: Developers/BuildStack-RandomPassword.md
BuildStack-Services.md: Developers/BuildStack-Services.md
Menu-System.md: Developers/Menu-System.md
Contributing-Services.md: Developers/index.md

extra_css:
- style.css

markdown_extensions:
- admonition
- pymdownx.superfences
repo_url: https://github.com/SensorsIot/IOTstack
- toc:
permalink: true
3 changes: 3 additions & 0 deletions requirements-mkdocs.txt
@@ -0,0 +1,3 @@
mkdocs-material
mkdocs-material-extensions
mkdocs-redirects
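
With the dependencies pinned in `requirements-mkdocs.txt`, documentation changes can be previewed locally. The sketch below defines a hypothetical helper (the name `preview_docs` is made up here; it assumes Python 3 with pip is available):

```shell
# Hypothetical helper: install the doc toolchain and preview the site.
preview_docs() {
  if [ ! -f requirements-mkdocs.txt ]; then
    echo "requirements-mkdocs.txt not found - run from the IOTstack checkout" >&2
    return 1
  fi
  pip3 install -r requirements-mkdocs.txt
  mkdocs build --strict   # fail the build on warnings such as broken links
  mkdocs serve            # live-reloading preview on http://127.0.0.1:8000
}
```

Running `preview_docs` from the repository root builds the site (strictly, so broken internal links are caught early) and then serves it locally until interrupted.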