From 976eea836960d430f8ec2a83162fe53e347436fb Mon Sep 17 00:00:00 2001 From: Patrick Dwyer Date: Tue, 1 May 2018 16:49:12 -0400 Subject: [PATCH] Documentation updates after re-aligning prod and dev modes for API. Closes #222. Closes #194. --- README.md | 282 ++----------------------------------------- bin/clear_network.py | 6 +- bin/readme.md | 18 +-- dev.md | 159 ++++++++++++++++++++++++ swarm.md | 2 +- 5 files changed, 181 insertions(+), 286 deletions(-) create mode 100644 dev.md diff --git a/README.md b/README.md index 5736112..d9b55da 100644 --- a/README.md +++ b/README.md @@ -21,290 +21,26 @@ In the deployed Virtue environment these Virtues are stand-alone virtual machine ## Starting the API -We're going to start the API using a `docker-compose.yml` file located in the root of the project repository. This compose file defines, among other things, the PostGRES database that the API uses to store run time sensor meta data. - -The first time we run the API with `docker-compose up`, this database will get initialized. The PostGRES initialization will be successful, but the `docker-compose` log will quickly fill with error messages (heavily pruned output): - -```bash -> docker-compose up -Starting savior_kafka_1 ... -Starting savior_api_server_postgres_1 ... -Starting savior_dropper_callback_1 ... -Starting savior_cfssl_1 ... done -Starting savior_api_1 ... done -Starting savior_target_1_1 ... done -Attaching to savior_kafka_1, savior_dropper_callback_1, savior_api_server_postgres_1, savior_cfssl_1, savior_api_1, savior_target_1_1 -api_server_postgres_1 | The files belonging to this database system will be owned by user "postgres". -api_server_postgres_1 | This user must also own the server process. -api_server_postgres_1 | -api_server_postgres_1 | The database cluster will be initialized with locale "en_US.utf8". -... -api_server_postgres_1 | creating collations ... ok -api_server_postgres_1 | creating conversions ... 
ok -api_server_postgres_1 | creating dictionaries ... ok -... -api_1 | ** (DBConnection.ConnectionError) connection not available because of disconnection -api_1 | (db_connection) lib/db_connection.ex:934: DBConnection.checkout/2 -api_1 | (db_connection) lib/db_connection.ex:750: DBConnection.run/3 -api_1 | (db_connection) lib/db_connection.ex:1141: DBConnection.run_meter/3 -... -api_server_postgres_1 | LOG: database system was shut down at 2018-02-19 20:43:14 UTC -api_server_postgres_1 | LOG: MultiXact member wraparound protections are now enabled -api_server_postgres_1 | LOG: database system is ready to accept connections -api_server_postgres_1 | LOG: autovacuum launcher started -``` - -Stop the `docker-compose` with `ctrl-c`, and tear down the compose environment with `docker-compose down`. At this point most of the infrastructure has failed to start, but our PostGRES database has been primed, and a new directory (`./pgdata`) will be in the root directory of your checked out Savior repository. - -We'll restart the Sensing APi again with `docker-compose up`, and we'll get further, with our API and supporting infrastructure starting, but our sensors reporting errors trying to register with the API (logs again heavily pruned): - -```bash -> docker-compose up -Starting savior_kafka_1 ... -Starting savior_api_server_postgres_1 ... -Starting savior_dropper_callback_1 ... done -Starting savior_api_server_postgres_1 ... done -Starting savior_api_1 ... done -Starting savior_target_1_1 ... done -Attaching to savior_kafka_1, savior_dropper_callback_1, savior_cfssl_1, savior_api_server_postgres_1, savior_api_1, savior_target_1_1 -cfssl_1 | 2018/02/19 20:48:58 [INFO] Initializing signer -target_1_1 | Starting Sensors -... -target_1_1 | Starting lsof(version=1.20171117) -target_1_1 | Sensor Identification -target_1_1 | sensor_id == 0ea7e0c0-1933-46d3-aca9-5b10af4f221c -... -target_1_1 | @ Waiting for Sensing API -target_1_1 | ! 
Exception while waiting for the Sensing API (HTTPConnectionPool(host='api', port=17141): Max retries exceeded with url: /api/v1/ready (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)))
-target_1_1 | ~ retrying in 0.50 seconds
-...
-api_1 | [info] == Running ApiServer.Repo.Migrations.CreateConfigurations.change/0 forward
-api_1 | [info] create table configurations
-api_1 | [info] create index configurations_component_version_index
-api_1 | [info] create index configurations_component_version_level_index
-...
-api_1 | [info] Running ApiServer.Endpoint with Cowboy using http://0.0.0.0:17141
-api_1 | [info] Running ApiServer.Endpoint with Cowboy using https://:::17504
-...
-target_1_1 | Couldn't register sensor with Sensing API
-target_1_1 | status_code == 400
-target_1_1 | {"timestamp":"2018-02-19 20:49:21.122936Z","msg":"sensor(id=0ea7e0c0-1933-46d3-aca9-5b10af4f221c) failed registration with invalid field (default configuration = Cannot locate default configuration for lsof)","error":true}
-```
-
-Now we have a running API, albeit one that can't register sensors, as it's lacking any sensor configuration data. Keep the environment running, and move to a new terminal window, where we'll install the existing sensor configurations.
+Refer to either of the following guides for running the API:
+ - [Development Mode](dev.md) - run the API locally for development of API and sensor capabilities
+ - [Production Mode](swarm.md) - run the Sensing API on a multi-node Docker Swarm environment on AWS

## Installing Sensor Configurations

Every sensor defines a [set of configuration files](sensors/readme.md#configuring-sensors) that define sensor characteristics at different _observation levels_. The _observation levels_ define how intrusive a sensor is, and range from **off** to **adversarial**. 
-The configuration files for sensors are colocated with each sensor, and are defined, along with metadata, in `sensor_configurations.json` files. Using the `load_sensor_configurations.py` tool in the `./bin` directory, we'll find and install these configurations in our local running Sensing API instance (pruned output for brevity): - -```bash - ./bin/load_sensor_configurations.py install -Sensor Configuration Tool -% searching for configurations in [./] -Installing os(linux) context(virtue) name(ps) - = 5 configuration variants - % installing component name(ps) - Component name(ps) os(linux) context(virtue) installed successfully - = created - ... - % install configuration level(high) version(latest) - Configuration level(high) version(latest) installed successfully - = created - % install configuration level(adversarial) version(latest) - Configuration level(adversarial) version(latest) installed successfully - = created -``` - -We used the default command line options of `load_sensor_configurations.py` to target our local API instance, and scanned all of the repository for configurations. See the documentation for [loading sensor configurations](bin/readme.md#load_sensor_configurationspy) for more command line options. 
- -We can verify that our configurations loaded with the `list` option: - -```bash -> ./bin/load_sensor_configurations.py list -Sensor Configuration Tool -ps - % os(linux) context(virtue) - [ off / latest ] - format( json) - last updated_at(2018-02-19T21:01:13.702450) - [ low / latest ] - format( json) - last updated_at(2018-02-19T21:01:13.768332) - [ default / latest ] - format( json) - last updated_at(2018-02-19T21:01:13.609205) - [ high / latest ] - format( json) - last updated_at(2018-02-19T21:01:13.837523) - [ adversarial / latest ] - format( json) - last updated_at(2018-02-19T21:01:13.908156) -lsof - % os(linux) context(virtue) - [ off / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.132824) - [ low / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.224955) - [ default / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.049705) - [ high / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.293504) - [ adversarial / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.365097) -kernel-ps - % os(linux) context(virtue) - [ off / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.573557) - [ low / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.638121) - [ default / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.502159) - [ high / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.703862) - [ adversarial / latest ] - format( json) - last updated_at(2018-02-19T21:01:14.772859) -``` - -With the configurations loaded, we're ready to restart the Sensing API compose environment and see sensors in action. Go back to the terminal running the `docker-compose` environment, and `ctrl-c` to stop the compose, and then tear down the environment with `docker-compose down`. ## Interacting with the Sensor API -With our PostGRES database intialized and sensor configurations loaded, we can run a full Sensing API environment with a Virtue running sensors. 
Start up the environment. A ton of log messages will fly by, but will eventually slow down as the infrastructure stabalizes (pruned for brevity): - -```bash -> docker-compose up -Creating savior_dropper_callback_1 ... done -Creating savior_api_server_postgres_1 ... done -Creating savior_cfssl_1 ... done -Creating savior_kafka_1 ... done -Creating savior_api_server_postgres_1 ... -Creating savior_api_1 ... done -Creating savior_api_1 ... -Creating savior_target_1_1 ... done -Attaching to savior_api_server_postgres_1, savior_cfssl_1, savior_dropper_callback_1, savior_kafka_1, savior_api_1, savior_target_1_1 -... -api_1 | [info] Sent 200 in 17ms -target_1_1 | + pinned API certificate match -target_1_1 | Synced sensor with Sensing API -``` - -Until the entire environent is up and responding, different containers will report warnings and errors. So long as the sensors get to the point of logging a `Synced sensor with Sensing API`, everything is functioning normally. - -To interact with the API we'll use the `virtue-security` command line interface, which we can build with the [dockerized-build.sh](bin/readme.md#dockerized-buildsh) command from the `bin` directory: - -```bash -> ./bin/dockerized-build.sh -[building] virtue-security -Sending build context to Docker daemon 29.12MB -Step 1/10 : FROM python:3.6 -... -Successfully tagged virtue-savior/virtue-security:latest -[building] demo-target -Sending build context to Docker daemon 105.5kB -Step 1/27 : FROM python:3.6 -... -Successfully tagged virtue-savior/demo-target:latest -``` - -With that we've successfully built two containers - the `virtue-security` container, which provides all of the run time dependencies and libraries for the `virtue-security` command, and the container for the `demo-target` Virtue. - -Now our Virtue is running, and the API and infrastructure are running, so let's interact with the API. Switch to a new terminal, where we'll work from the root of the repository. 
Start by inspecting the running Virtues with the pre-built command [dockerized-inspect.sh](bin/readme.md#dockerized-inspectsh): - -```bash -> ./bin/dockerized-inspect.sh -[dockerized-run] -Getting Client Certificate -Running virtue-security -{ - "timestamp": "2018-02-19 21:13:58.529968Z", - "targeting_scope": "user", - "targeting": { - "username": "root" - }, - "sensors": [ - { - "virtue_id": "54ea2464-0cc3-4785-b85f-33009ea6ea7d", - "username": "root", - "updated_at": "2018-02-19T21:13:27.015944", - "sensor_id": "bf417f69-1cb9-46c9-a640-1c2eca28c949", - "public_key": "-----BEGIN CERTIFICATE-----\nMI...cs=\n-----END CERTIFICATE-----\n", - "port": 11020, - "last_sync_at": "2018-02-19T21:13:27.015350Z", - "kafka_topic": "582d9148-d0ff-4ce3-b9ff-a7ecda7b0025", - "inserted_at": "2018-02-19T21:09:25.779244", - "has_registered": true, - "has_certificates": true, - "configuration_id": 6, - "component_id": 2, - "address": "5fe46e57cdad" - } - ], - "error": false -} -``` - -You can also stream the sensor data from any sensor observing the **root** user: - -```bash -> ./bin/dockerized-stream.sh -{"timestamp":"2017-11-29T16:19:22.324018","sensor":"e56d0901-fa8e-4ad5-96c8-ee8581819d40","message":"COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME\n","level":"debug"} -{"timestamp":"2017-11-29T16:19:22.324136","sensor":"e56d0901-fa8e-4ad5-96c8-ee8581819d40","message":"python 1 root cwd DIR 0,72 4096 332393 /usr/src/app\n","level":"debug"} -... -``` - -The `virtue-security stream` command will continue receiving messages until you force quit the command (`ctrl-c` or the Windows equivalent). - -# Running the Sensor Architecture - Docker Swarm - -On our AWS dev, test, and production machines, we run most of the API and related services on top of Docker Swarm. See the [running on docker swarm](swarm.md) instructions for more details. 
- -# Running the Sensing Architecture - Native - -The instructions for running natively are likely buggy - unless absolutely -necessary, run the API locally via Docker. - -Running the Sensing architecture natively outside of Docker is cumbersome: - - - container startup scripts handle partial automation of the Certificate Authority - - Runtime options are codified on the command line controls of the `docker-compose` containers - - DNS and service naming are handled via Docker networking - -If you still want to try and run the components natively, the original and incomplete instructions -follow: - -## Start Kafka - -```bash -cd control/logging -./start.sh -``` - -See [Start Kafka](control/logging/README.md#start-kafka-docker) for more info. - - -## Start the Sensing API - -**TODO**: Update with the environment variable to use when running native, so Phoenix knows how to address Kafka. - -```bash -cd control/api_server -mix phoenix.server -``` - -See [Sensing API - Running](control/api_server/README.md) for more info. - - -## Start the Dummy sensor - -With this script we're using the `virtue-security` command line interface to the Sensing API to issue an **inspect** command, which is returning a list of sensor metadata. The real command hidden behind the `dockerized-inspect.sh` command looks like: - -```bash -virtue-security inspect --username root -``` - -With the `--username` flag we're asking the Sensing API to scope our action to any Virtue's used by the user with username `root`. 
- -Sensors within Virtues send their logs to a Kafka instance in the Sensing infrastructure - let's make sure that the sensors we have active are really logging by using the [dockerized-stream.sh](bin/readme.md#dockerized-streamsh) command (again, pruned for brevity): +See the various `dockerized-` commands in the `bin` directory for interacting with the running Sensing API via the [virtue-security](control/virtue-security) tool: -```bash -> ./bin/dockerized-stream.sh -[dockerized-run] -Getting Client Certificate -Running virtue-security -{"timestamp":"2018-02-19T21:35:28.728212","sensor_name":"lsof-1.20171117","sensor_id":"bf417f69-1cb9-46c9-a640-1c2eca28c949","message":"COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME\n","level":"debug"} -... -{"timestamp":"2018-02-19T21:35:28.728302","sensor_name":"lsof-1.20171117","sensor_id":"bf417f69-1cb9-46c9-a640-1c2eca28c949","message":"bash 1 root cwd DIR 0,149 4096 66171 /usr/src/app/kmods\n","level":"debug"} -{"timestamp":"2018-02-19T21:35:28.728335","sensor_name":"lsof-1.20171117","sensor_id":"bf417f69-1cb9-46c9-a640-1c2eca28c949","message":"bash 1 root rtd DIR 0,149 4096 66167 /\n","level":"debug"} -``` + - [dockerized-inspect.sh](bin/readme.md#dockerized-inspectsh) - View data about the running sensors + - [dockerized-observe.sh](bin/readme.md#dockerized-observesh) - Change the sensor observation level + - [dockerized-run.sh](bin/readme.md#dockerized-runsh) - Pass through for any `virtue-security` command + - [dockerized-stream.sh](bin/readme.md#dockerized-streamsh) - Stream log data from the API -It may take a few seconds for messages to start streaming, and you can stop the stream with `ctrl-c`. You'll notice that every log message is a properly formatted and encoded JSON object. All of the sensors stream log messages to Kafka in **jsonl** format, encoded as UTF-8. 
For the curious, the actual `virtue-security` command hidden behind `dockerized-stream.sh` is: -```bash -virtue-security stream --username root --filter-log-level debug --follow --since "100 minutes ago" -``` -Which selects all Virtues belonging to the user `root`, filters the logs to receive messages of level `debug` and higher, starts by replaying the last 100 minutes of messages, and the `--follow` flag means the `virtue-security` command will stay connected to the API and stream new messages as the sensors log them. \ No newline at end of file +`updated 2018-05-01T16:34:00EST` \ No newline at end of file diff --git a/bin/clear_network.py b/bin/clear_network.py index e92f641..7fe06e0 100755 --- a/bin/clear_network.py +++ b/bin/clear_network.py @@ -18,13 +18,13 @@ def find_containers(): :return: List of strings """ - network_raw = subprocess.check_output("docker network inspect savior_default", shell=True) + network_raw = subprocess.check_output("docker network inspect apinet", shell=True) network = json.loads(network_raw) containers = [] for id, config in network[0]["Containers"].items(): - if not config["Name"].startswith("savior_"): + if not config["Name"].startswith("savior-api"): containers.append(config["Name"]) return containers @@ -45,7 +45,7 @@ def stop_container(name): if __name__ == "__main__": - print "Looking for containers to remove from [savior_default]" + print "Looking for containers to remove from [apinet]" containers = find_containers() print " = found %d containers to remove" % (len(containers),) diff --git a/bin/readme.md b/bin/readme.md index cf688ba..b7588f9 100644 --- a/bin/readme.md +++ b/bin/readme.md @@ -15,15 +15,15 @@ While a handful of the scripts may act normally when called in the `bin` directo The following scripts from the `bin` directory are documented, and used during various phases of development and deployment of the Sensing and Response tools: - - [add_target.sh](#add_target.sh) - Add one or more new Virtues to a running 
Sensing environment. - - [clear_network.sh](#clear_network.sh) - Remove containers from a running Sensing environment that aren't part of the Sensing infrastructure. - - [dockerized-build.sh](#dockerized-build.sh) - Build the `virtue-security` CLI tool and stand-alone Virtue images. - - [dockerized-inspect.sh](#dockerized-inspect.sh) - Run the **inspect** command on a running Sensing environment. - - [dockerized-run.sh](#dockerized-run.sh) - Run any of the `virtue-security` commands on a running Sensing environment. - - [dockerized-stream.sh](#dockerized-stream.sh) - **stream** live log messages from a running Sensing environment. - - [install_sensors.py](#install_sensors.py) - Install sensors and associated files/data in defined Virtue targets. - - [load_sensor_configurations.py](#load_sensor_configurations.py) - Load the configuration files for one or more sensors into a running Sensing API. - - [update_tools.sh](#update_tools.sh) - Install various support tools into multiple directories. + - [add_target.sh](#add_targetsh) - Add one or more new Virtues to a running Sensing environment. + - [clear_network.sh](#clear_networksh) - Remove containers from a running Sensing environment that aren't part of the Sensing infrastructure. + - [dockerized-build.sh](#dockerized-buildsh) - Build the `virtue-security` CLI tool and stand-alone Virtue images. + - [dockerized-inspect.sh](#dockerized-inspectsh) - Run the **inspect** command on a running Sensing environment. + - [dockerized-run.sh](#dockerized-runsh) - Run any of the `virtue-security` commands on a running Sensing environment. + - [dockerized-stream.sh](#dockerized-streamsh) - **stream** live log messages from a running Sensing environment. + - [install_sensors.py](#install_sensorspy) - Install sensors and associated files/data in defined Virtue targets. + - [load_sensor_configurations.py](#load_sensor_configurationspy) - Load the configuration files for one or more sensors into a running Sensing API. 

+ - [update_tools.sh](#update_toolssh) - Install various support tools into multiple directories.


# add_target.sh
diff --git a/dev.md b/dev.md
new file mode 100644
index 0000000..f214177
--- /dev/null
+++ b/dev.md
@@ -0,0 +1,159 @@
+These instructions detail how to run the Sensing API, Dockerized target machines, and stand-alone sensors locally in development mode. They closely follow the instructions (and architecture) for [running the API in production swarm mode](swarm.md). The primary deviations from production mode are:
+
+ - Instead of Virtues running in VMs, they run as Docker containers, either attached to the APINET network of the Sensing API, or connecting to the Swarm externally
+ - Route 53 is unavailable in local mode, so the DNS routes required by the API are emulated by container naming and modifications to the Docker host `/etc/hosts` file
+
+Otherwise, the API itself is identical to that running in production mode. Our development mode Sensing API is defined in `docker-compose.yml`.
+
+
+# Setup
+
+These instructions assume that you already have Docker and `docker-compose` installed locally. Ideally, disable host firewalls to open all ports on the host machine. If this is undesirable (for instance, when running on an untrusted network), you can get the list of required open ports from the [deployment guide](deployment.md).
+
+## Environment
+
+The [dockerized](bin/) tools are equipped to work in both production and development modes. To trigger development mode, export an `API_MODE` environment variable:
+
+```bash
+> export API_MODE=dev
+```
+
+## Swarm Init
+
+For local development, we'll be running on a single-node Docker Swarm. Initialize the local Docker environment to be a Swarm manager:
+
+```bash
+> docker swarm init
+```
+
+# Running
+
+From this point on, the network and deployed stack can be started, stopped, and removed as needed during development. 
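Before the first deploy, it can help to sanity-check the two setup steps above. Here is a minimal sketch of such a check - this helper is not part of the repo's `bin` tools, though the `docker info` format string is standard Docker:

```shell
#!/bin/sh
# Sketch only: verify the local prerequisites before deploying the stack.
check_dev_ready() {
  # API_MODE must be set so the bin/dockerized-* tools target the local API
  if [ "${API_MODE:-}" != "dev" ]; then
    echo "API_MODE is not 'dev'; run: export API_MODE=dev" >&2
    return 1
  fi
  # the local Docker engine must be an active swarm manager
  if [ "$(docker info --format '{{.Swarm.LocalNodeState}}' 2>/dev/null)" != "active" ]; then
    echo "not a swarm node; run: docker swarm init" >&2
    return 1
  fi
  echo "ready for stack deploys"
}
```

If either check fails, revisit the corresponding setup step above before continuing.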

+
+
+## The APINET network
+
+We directly configure the network that will be used by the swarm using a `docker network` command. The subnet used doesn't matter, so long as it doesn't cause IP and subnet conflicts with the local LAN:
+
+```bash
+> docker network create --driver overlay --attachable --subnet 192.168.1.0/24 apinet
+```
+
+Our network is named `apinet`, which is important - this is the identifier we use to later attach new networks or containers to the same network running the API. Because we're using an `overlay` network, any ports exposed by containers within the `apinet` network (explicitly declared in the `docker-compose.yml` file) can be accessed at the IP and hostname of the Docker host (your local machine).
+
+## Setup a Docker Registry
+
+Moving containers built with `docker-compose` into a docker swarm requires a registry. Rather than using the global Docker Hub registry, we spin up our own registry as part of our deploy step. Start the registry with:
+
+```bash
+> sudo docker service create --name registry --publish 5000:5000 registry:2
+```
+
+You can confirm that the registry is running with:
+
+```bash
+> curl http://localhost:5000/v2/
+{}
+```
+
+Your `registry` can stay active throughout development, and only needs to be restarted if your Docker Swarm node or the Docker daemon has restarted.
+
+# Development Cycle
+
+## Sensing API
+
+Running containers and compose files on Docker Swarm is slightly different from the normal Docker cycle. For development that impacts any of the Sensing API services, you'll likely follow this cycle:
+
+ - build
+ - push
+ - deploy
+ - tear down
+
+
+### Building the API
+
+You need to build the API using `docker-compose`:
+
+```bash
+> docker-compose build
+```
+
+This will rebuild any containers explicitly defined in the `docker-compose.yml` file. Notably, this will not rebuild target (sensor-instrumented) containers. 
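Once you've been through the build / push / deploy / tear down cycle listed above manually, it can be wrapped in a single helper. A sketch only, assuming the `savior-api` stack name used in this guide - no such script ships in `bin`:

```shell
#!/bin/sh
# Sketch of the dev cycle from this guide; not a script that ships in bin/.
# Rebuild the API images, push them to the local registry, then redeploy.
redeploy_api() {
  docker-compose build || return 1   # rebuild services defined in docker-compose.yml
  docker-compose push || return 1    # push images to the registry service on :5000
  docker stack rm savior-api         # tear down the old stack (harmless if absent)
  docker stack deploy --compose-file docker-compose.yml savior-api
}
```

Run it from the repository root after changing any API service.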

+
+### Pushing the API Containers
+
+Before you can deploy the API, the built containers must be pushed to the `registry` we started on our swarm:
+
+```bash
+> docker-compose push
+```
+
+### Deploying the API
+
+Deploy the API with the `docker stack` command:
+
+```bash
+> docker stack deploy --compose-file docker-compose.yml savior-api
+```
+
+If this is a first deployment, you may need to load configuration data into the API:
+
+```bash
+./bin/load_sensor_configurations.py install
+```
+
+### Tearing down the API
+
+You need to tear down the API before re-deploying after making changes and rebuilding:
+
+```bash
+> docker stack rm savior-api
+```
+
+This tear down may take a few seconds. **NOTE**: you _do not_ need to tear down the `apinet` network between deploys.
+
+## Target Containers
+
+The primary mode of testing new sensing capabilities with the Sensing API in development mode is to add the sensors to the `demo-target`. This target can be built with a `bin` command:
+
+```bash
+> ./bin/dockerized-build.sh
+```
+
+New targets can be attached to the network with the `add_target.sh` script:
+
+```bash
+> ./bin/add_target.sh target_2
+```
+
+Multiple targets can be added at once, so long as each target attached to the network has a unique name:
+
+```bash
+> ./bin/add_target.sh target_3 target_4 my_target
+```
+
+You can clear out the added targets with another `bin` command:
+
+```bash
+> ./bin/clear_network.py
+```
+
+## Other Containers
+
+Containers other than the `demo-target` can be added to the network; just be sure to use the `--network=apinet` flag:
+
+```bash
+docker run -d --rm --name my-container-name --network=apinet ubuntu:trusty bash
+```
+
+## External Capabilities/Sensors
+
+Developing components that need to talk to the API but don't need to attach to the `apinet` network is also possible. 
The only additional change required to support this is adding the following entries to your *host machine* `/etc/hosts` file:
+
+```
+127.0.0.1 sensing-api.savior.internal
+127.0.0.1 sensing-ca.savior.internal
+127.0.0.1 sensing-kafka.savior.internal
+```
+
+Then your component can use the normal DNS names of the services, with the traffic routed to the local Docker Swarm hosting the Sensing API.
\ No newline at end of file
diff --git a/swarm.md b/swarm.md
index ca78aa6..76f4e25 100644
--- a/swarm.md
+++ b/swarm.md
@@ -136,7 +136,7 @@ Make sure you're on the branch you intend to run from.

## Setup a Docker Registry

-Moving containers build with `docker-compose` between the different nodes of a docker swarm requires a registry. Rather than using the global Docker Hub registry, we spin up our own registry as part of our deploy step. Start the registry with:
+Moving containers built with `docker-compose` between the different nodes of a docker swarm requires a registry. Rather than using the global Docker Hub registry, we spin up our own registry as part of our deploy step. Start the registry with:

```bash
> sudo docker service create --name registry --publish 5000:5000 registry:2