diff --git a/.gitignore b/.gitignore index 1ab2a9a..b7750de 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1,2 @@ -_env +_env* +examples/triton-multi-dc/docker-compose-*.yml diff --git a/README.md b/README.md index 00b2407..0f0b5c7 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ When run locally for testing, we don't have access to Triton CNS. The `local-com 1. [Get a Joyent account](https://my.joyent.com/landing/signup/) and [add your SSH key](https://docs.joyent.com/public-cloud/getting-started). 1. Install the [Docker Toolbox](https://docs.docker.com/installation/mac/) (including `docker` and `docker-compose`) on your laptop or other environment, as well as the [Joyent Triton CLI](https://www.joyent.com/blog/introducing-the-triton-command-line-tool) (`triton` replaces our old `sdc-*` CLI tools). -Check that everything is configured correctly by running `./setup.sh`. This will check that your environment is setup correctly and will create an `_env` file that includes injecting an environment variable for a service name for Consul in Triton CNS. We'll use this CNS name to bootstrap the cluster. +Check that everything is configured correctly by changing to the `examples/triton` directory and executing `./setup.sh`. This will check that your environment is set up correctly and will create an `_env` file that injects an environment variable with the Triton CNS service name for Consul. We'll use this CNS name to bootstrap the cluster. ```bash $ docker-compose up -d @@ -52,6 +52,60 @@ $ docker exec -it consul_consul_3 consul info | grep num_peers ``` +### Run it with more than one datacenter! + +Within the `examples/triton-multi-dc` directory, execute `./setup-multi-dc.sh`, providing as arguments Triton profiles which belong to the desired data centers. + +Since interacting with multiple data centers requires switching between Triton profiles, it's easier to perform the following steps in separate terminals.
It is possible to perform all the steps for a single data center and then change profiles. Additionally, setting `COMPOSE_PROJECT_NAME` to match the profile or data center will help distinguish nodes in Triton Portal and the `triton instance ls` listing. + +One `_env` and one `docker-compose-<profile>.yml` should be generated for each profile. Execute the following commands, once for each profile/datacenter, within `examples/triton-multi-dc`: + +``` +$ eval "$(TRITON_PROFILE=<profile> triton env -d)" + +# The following helps when executing docker-compose multiple times. Alternatively, pass the -f flag to each invocation of docker-compose. +$ export COMPOSE_FILE=docker-compose-<profile>.yml + +# The following is not strictly necessary but helps to discern between clusters. Alternatively, pass the -p flag to each invocation of docker-compose. +$ export COMPOSE_PROJECT_NAME=<profile> + +$ docker-compose up -d +Creating <profile>_consul_1 ... done + +$ docker-compose scale consul=3 +``` + +Note: the `cns.joyent.com` hostnames cannot be resolved from outside the datacenters. Change `cns.joyent.com` to `triton.zone` to access the web UI. + +## Environment Variables + +- `CONSUL_DEV`: Enable development mode, allowing a node to self-elect as a cluster leader. Consul flag: [`-dev`](https://www.consul.io/docs/agent/options.html#_dev). + - The following errors will occur if `CONSUL_DEV` is omitted and not enough Consul instances are deployed: + ``` + [ERR] agent: failed to sync remote state: No cluster leader + [ERR] agent: failed to sync changes: No cluster leader + [ERR] agent: Coordinate update error: No cluster leader + ``` +- `CONSUL_DATACENTER_NAME`: Explicitly set the name of the data center in which Consul is running. Consul flag: [`-datacenter`](https://www.consul.io/docs/agent/options.html#datacenter). + - If this variable is specified it will be used as-is. + - If not specified, automatic detection of the datacenter will be attempted.
See [issue #23](https://github.com/autopilotpattern/consul/issues/23) for more details. + - Consul's default of "dc1" will be used if none of the above apply. + +- `CONSUL_BIND_ADDR`: Explicitly set the corresponding Consul configuration. This value will be set to `0.0.0.0` if `CONSUL_BIND_ADDR` is not specified and `CONSUL_RETRY_JOIN_WAN` is provided. Be aware of the security implications of binding the server to a public address and consider setting up encryption or using a VPN to isolate WAN traffic from the public internet. +- `CONSUL_SERF_LAN_BIND`: Explicitly set the corresponding Consul configuration. This value will be set to the server's private address automatically if not specified. Consul flag: [`-serf-lan-bind`](https://www.consul.io/docs/agent/options.html#serf_lan_bind). +- `CONSUL_SERF_WAN_BIND`: Explicitly set the corresponding Consul configuration. This value will be set to the server's public address automatically if not specified. Consul flag: [`-serf-wan-bind`](https://www.consul.io/docs/agent/options.html#serf_wan_bind). +- `CONSUL_ADVERTISE_ADDR`: Explicitly set the corresponding Consul configuration. This value will be set to the server's private address automatically if not specified. Consul flag: [`-advertise-addr`](https://www.consul.io/docs/agent/options.html#advertise_addr). +- `CONSUL_ADVERTISE_ADDR_WAN`: Explicitly set the corresponding Consul configuration. This value will be set to the server's public address automatically if not specified. Consul flag: [`-advertise-addr-wan`](https://www.consul.io/docs/agent/options.html#advertise_addr_wan). + +- `CONSUL_RETRY_JOIN_WAN`: sets the remote datacenter addresses to join. Must be a valid HCL list (i.e. comma-separated quoted addresses). Consul flag: [`-retry-join-wan`](https://www.consul.io/docs/agent/options.html#retry_join_wan). + - The following error will occur if `CONSUL_RETRY_JOIN_WAN` is provided but improperly formatted: + ``` + ==> Error parsing /etc/consul/consul.hcl: ... 
unexpected token while parsing list: IDENT + ``` + - Gossip over the WAN requires the following ports to be accessible between data centers; make sure that adequate firewall rules have been established for them (this should happen automatically when using docker-compose with Triton): + - `8300`: Server RPC port (TCP) + - `8302`: Serf WAN gossip port (TCP + UDP) + ## Using this in your own composition There are two ways to run Consul and both come into play when deploying ContainerPilot, a cluster of Consul servers and individual Consul client agents. @@ -82,20 +136,6 @@ services: In our experience, including a Consul cluster within a project's `docker-compose.yml` can help developers understand and test how a service should be discovered and registered within a wider infrastructure context. -#### Environment Variables - -- `CONSUL_DEV`: Enable development mode, allowing a node to self-elect as a cluster leader. Consul flag: [`-dev`](https://www.consul.io/docs/agent/options.html#_dev). - - The following errors will occur if `CONSUL_DEV` is omitted and not enough Consul instances are deployed: - ``` - [ERR] agent: failed to sync remote state: No cluster leader - [ERR] agent: failed to sync changes: No cluster leader - [ERR] agent: Coordinate update error: No cluster leader - ``` -- `CONSUL_DATACENTER_NAME`: Explicitly set the name of the data center in which Consul is running. Consul flag: [`-datacenter`](https://www.consul.io/docs/agent/options.html#datacenter). - - If this variable is specified it will be used as-is. - - If not specified, automatic detection of the datacenter will be attempted. See [issue #23](https://github.com/autopilotpattern/consul/issues/23) for more details. - - Consul's default of "dc1" will be used if none of the above apply.
- ### Clients ContainerPilot utilizes Consul's [HTTP Agent API](https://www.consul.io/api/agent.html) for a handful of endpoints, such as `UpdateTTL`, `CheckRegister`, `ServiceRegister` and `ServiceDeregister`. Connecting ContainerPilot to Consul can be achieved by running Consul as a client to a cluster (mentioned above). It's easy to run this Consul client agent from ContainerPilot itself. diff --git a/bin/consul-manage b/bin/consul-manage index 8a47749..320f83d 100755 --- a/bin/consul-manage +++ b/bin/consul-manage @@ -6,8 +6,6 @@ set -eo pipefail # been told to listen on. # preStart() { - _log "Updating consul advertise address" - sed -i "s/CONTAINERPILOT_CONSUL_IP/${CONTAINERPILOT_CONSUL_IP}/" /etc/consul/consul.hcl if [ -n "$CONSUL_DATACENTER_NAME" ]; then _log "Updating consul datacenter name (specified: '${CONSUL_DATACENTER_NAME}' )" @@ -20,6 +18,46 @@ preStart() { _log "Updating consul datacenter name (default: 'dc1')" sed -i "s/CONSUL_DATACENTER_NAME/dc1/" /etc/consul/consul.hcl fi + + if [ -n "$CONSUL_RETRY_JOIN_WAN" ]; then + _log "Updating consul retry_join_wan field" + sed -i '/^retry_join_wan/d' /etc/consul/consul.hcl + echo "retry_join_wan = [${CONSUL_RETRY_JOIN_WAN}]" >> /etc/consul/consul.hcl + + # translate_wan_addrs allows us to reach remote nodes through their advertise_addr_wan + sed -i '/^translate_wan_addrs/d' /etc/consul/consul.hcl + _log "Updating consul translate_wan_addrs field" + echo "translate_wan_addrs = true" >> /etc/consul/consul.hcl + + # only set bind_addr = 0.0.0.0 if none was specified explicitly with CONSUL_BIND_ADDR + if [ -n "$CONSUL_BIND_ADDR" ]; then + updateConfigFromEnvOrDefault 'bind_addr' 'CONSUL_BIND_ADDR' "$CONTAINERPILOT_CONSUL_IP" + else + sed -i '/^bind_addr/d' /etc/consul/consul.hcl + _log "Updating consul field bind_addr to 0.0.0.0 because CONSUL_BIND_ADDR was empty and CONSUL_RETRY_JOIN_WAN was not empty" + echo "bind_addr = \"0.0.0.0\"" >> /etc/consul/consul.hcl + fi + else + # if no WAN addresses were provided, set
the bind_addr to the private address + updateConfigFromEnvOrDefault 'bind_addr' 'CONSUL_BIND_ADDR' "$CONTAINERPILOT_CONSUL_IP" + fi + + IP_ADDRESS=$(hostname -i) + + # the serf_lan_bind field was recently renamed to serf_lan + # serf_lan tells nodes their address within the LAN + updateConfigFromEnvOrDefault 'serf_lan' 'CONSUL_SERF_LAN_BIND' "$CONTAINERPILOT_CONSUL_IP" + + # the serf_wan_bind field was recently renamed to serf_wan + # if this field is not set WAN joins will be refused since the bind address will differ + # from the address used to reach the node + updateConfigFromEnvOrDefault 'serf_wan' 'CONSUL_SERF_WAN_BIND' "$IP_ADDRESS" + + # advertise_addr tells nodes their private, routable address + updateConfigFromEnvOrDefault 'advertise_addr' 'CONSUL_ADVERTISE_ADDR' "$CONTAINERPILOT_CONSUL_IP" + + # advertise_addr_wan tells nodes their public address for WAN communication + updateConfigFromEnvOrDefault 'advertise_addr_wan' 'CONSUL_ADVERTISE_ADDR_WAN' "$IP_ADDRESS" } # @@ -44,6 +82,28 @@ _log() { echo " $(date -u '+%Y-%m-%d %H:%M:%S') containerpilot: $@" } + +# +# Defines $1 in the consul configuration as either an env or a default. +# This basically behaves like ${!name_of_var} and ${var:-default} together +# but separates the indirect reference from the default so it's more obvious +# +# Check if $2 is the name of a defined environment variable and use ${!2} to +# reference it indirectly.
+# +# If it is not defined, use $3 as the value +# +updateConfigFromEnvOrDefault() { + _log "Updating consul field $1" + sed -i "/^$1/d" /etc/consul/consul.hcl + + if [ -n "${!2}" ]; then + echo "$1 = \"${!2}\"" >> /etc/consul/consul.hcl + else + echo "$1 = \"$3\"" >> /etc/consul/consul.hcl + fi +} + # --------------------------------------------------- # parse arguments diff --git a/etc/consul.hcl b/etc/consul.hcl index c8a0330..1c79748 100644 --- a/etc/consul.hcl +++ b/etc/consul.hcl @@ -1,4 +1,4 @@ -bind_addr = "CONTAINERPILOT_CONSUL_IP" +bind_addr = "0.0.0.0" datacenter = "CONSUL_DATACENTER_NAME" data_dir = "/data" client_addr = "0.0.0.0" diff --git a/local-compose.yml b/examples/compose/docker-compose.yml similarity index 80% rename from local-compose.yml rename to examples/compose/docker-compose.yml index f3c21ba..60133f0 100644 --- a/local-compose.yml +++ b/examples/compose/docker-compose.yml @@ -7,8 +7,9 @@ services: # created user-defined network and internal DNS for the name "consul". # Nodes will use Docker DNS for the service (passed in via the CONSUL # env var) to find each other and bootstrap the cluster. + # Note: Unless CONSUL_DEV is set, at least three instances are required for quorum. consul: - build: . + image: autopilotpattern/consul:${TAG:-latest} restart: always mem_limit: 128m ports: diff --git a/examples/triton-multi-dc/docker-compose-multi-dc.yml.template b/examples/triton-multi-dc/docker-compose-multi-dc.yml.template new file mode 100644 index 0000000..a45af29 --- /dev/null +++ b/examples/triton-multi-dc/docker-compose-multi-dc.yml.template @@ -0,0 +1,23 @@ +version: '2.1' + +services: + + # Service definition for Consul cluster running in us-east-1. 
+ # Cloned by ./setup-multi-dc.sh once per profile + consul: + image: autopilotpattern/consul:${TAG:-latest} + labels: + - triton.cns.services=consul + - com.docker.swarm.affinities=["container!=~*consul*"] + restart: always + mem_limit: 128m + ports: + - 8300 # Server RPC port + - "8302/tcp" # Serf WAN port + - "8302/udp" # Serf WAN port + - 8500 + env_file: + - ENV_FILE_NAME + network_mode: bridge + command: > + /usr/local/bin/containerpilot \ No newline at end of file diff --git a/examples/triton-multi-dc/setup-multi-dc.sh b/examples/triton-multi-dc/setup-multi-dc.sh new file mode 100755 index 0000000..97e9919 --- /dev/null +++ b/examples/triton-multi-dc/setup-multi-dc.sh @@ -0,0 +1,160 @@ +#!/bin/bash +set -e -o pipefail + +help() { + echo + echo 'Usage: ./setup-multi-dc.sh <profile> [<profile> ...]' + echo + echo 'Generates one _env file and docker-compose.yml file per Triton profile, each of which' + echo 'is presumably associated with a different datacenter.' +} + +if [ "$#" -lt 1 ]; then + help + exit 1 +fi + +# --------------------------------------------------- +# Top-level commands + +# +# Check for triton profile $1 and output _env file named $2 +# +generate_env() { + local triton_profile=$1 + local output_file=$2 + + command -v docker >/dev/null 2>&1 || { + echo + tput rev # reverse + tput bold # bold + echo 'Error! Docker is required, but does not appear to be installed.' + tput sgr0 # clear + echo 'See https://docs.joyent.com/public-cloud/api-access/docker' + exit 1 + } + command -v triton >/dev/null 2>&1 || { + echo + tput rev # reverse + tput bold # bold + echo 'Error! Joyent Triton CLI is required, but does not appear to be installed.'
+ tput sgr0 # clear + echo 'See https://www.joyent.com/blog/introducing-the-triton-command-line-tool' + exit 1 + } + + # make sure Docker client is pointed to the same place as the Triton client + local docker_user=$(docker info 2>&1 | awk -F": " '/SDCAccount:/{print $2}') + local docker_dc=$(echo $DOCKER_HOST | awk -F"/" '{print $3}' | awk -F'.' '{print $1}') + + local triton_user=$(triton profile get $triton_profile | awk -F": " '/account:/{print $2}') + local triton_dc=$(triton profile get $triton_profile | awk -F"/" '/url:/{print $3}' | awk -F'.' '{print $1}') + local triton_account=$(TRITON_PROFILE=$triton_profile triton account get | awk -F": " '/id:/{print $2}') + + if [ ! "$docker_user" = "$triton_user" ] || [ ! "$docker_dc" = "$triton_dc" ]; then + echo + tput rev # reverse + tput bold # bold + echo 'Error! The Triton CLI configuration does not match the Docker CLI configuration.' + tput sgr0 # clear + echo + echo "Docker user: ${docker_user}" + echo "Triton user: ${triton_user}" + echo "Docker data center: ${docker_dc}" + echo "Triton data center: ${triton_dc}" + exit 1 + fi + + local triton_cns_enabled=$(triton account get | awk -F": " '/cns/{print $2}') + if [ ! "true" == "$triton_cns_enabled" ]; then + echo + tput rev # reverse + tput bold # bold + echo 'Error! Triton CNS is required and not enabled.' + tput sgr0 # clear + echo + exit 1 + fi + + # setup environment file + if [ ! 
-f "$output_file" ]; then + echo '# Consul bootstrap via Triton CNS' >> $output_file + echo CONSUL=consul.svc.${triton_account}.${triton_dc}.cns.joyent.com >> $output_file + echo >> $output_file + else + echo "Existing _env file found at $output_file, exiting" + exit + fi +} + + +declare -a written +declare -a consul_hostnames + +# check that we won't overwrite any _env files first +if [ -f "_env" ]; then + echo "Existing env file found, exiting: _env" + exit 1 +fi + +# check the names of _env files we expect to generate +for profile in "$@" +do + if [ -f "_env-$profile" ]; then + echo "Existing env file found, exiting: _env-$profile" + exit 2 + fi + + if [ -f "docker-compose-$profile.yml" ]; then + echo "Existing docker-compose file found, exiting: docker-compose-$profile.yml" + exit 4 + fi +done + +# check that the docker-compose.yml template is in the right place +if [ ! -f "docker-compose-multi-dc.yml.template" ]; then + echo "Multi-datacenter docker-compose.yml template is missing!"
+ exit 5 +fi + +echo "profiles: $@" + +# generate an _env file and a docker-compose file for each profile +for profile in "$@" +do + echo "Temporarily switching profile: $profile" + eval "$(TRITON_PROFILE=$profile triton env -d)" + generate_env $profile "_env-$profile" + + unset CONSUL + source "_env-$profile" + + consul_hostnames+=("\"${CONSUL//cns.joyent.com/triton.zone}\"") + + cp docker-compose-multi-dc.yml.template \ + "docker-compose-$profile.yml" + + sed -i '' "s/ENV_FILE_NAME/_env-$profile/" "docker-compose-$profile.yml" + + written+=("_env-$profile") +done + + +# finalize each _env with the full set of WAN join addresses +for profile in "$@" +do + # add the CONSUL_RETRY_JOIN_WAN addresses to each _env + echo '# Consul multi-DC bootstrap via Triton CNS' >> _env-$profile + echo "CONSUL_RETRY_JOIN_WAN=$(IFS=,; echo "${consul_hostnames[*]}")" >> _env-$profile +done + +echo "Wrote: ${written[@]}" diff --git a/docker-compose.yml b/examples/triton/docker-compose.yml similarity index 89% rename from docker-compose.yml rename to examples/triton/docker-compose.yml index d2ab4ec..7cdcd1f 100644 --- a/docker-compose.yml +++ b/examples/triton/docker-compose.yml @@ -9,6 +9,7 @@ services: image: autopilotpattern/consul:${TAG:-latest} labels: - triton.cns.services=consul + - com.docker.swarm.affinities=["container!=~*consul*"] restart: always mem_limit: 128m ports: diff --git a/setup.sh b/examples/triton/setup.sh similarity index 99% rename from setup.sh rename to examples/triton/setup.sh index be7f2db..b7e7687 100755 --- a/setup.sh +++ b/examples/triton/setup.sh @@ -42,6 +42,7 @@ check() { # make sure Docker client is pointed to the same place as the Triton client local docker_user=$(docker info 2>&1 | awk -F": " '/SDCAccount:/{print $2}') local docker_dc=$(echo $DOCKER_HOST | awk -F"/" '{print $3}' | awk -F'.' 
'{print $1}') + TRITON_USER=$(triton profile get | awk -F": " '/account:/{print $2}') TRITON_DC=$(triton profile get | awk -F"/" '/url:/{print $3}' | awk -F'.' '{print $1}') TRITON_ACCOUNT=$(triton account get | awk -F": " '/id:/{print $2}') diff --git a/makefile b/makefile index 822b929..6260f92 100644 --- a/makefile +++ b/makefile @@ -81,6 +81,13 @@ test/triton: test/triton/dev: ./test/triton.sh + +# ------------------------------------------------ +# Multi-datacenter usage +clean/triton-multi-dc: + rm -rf examples/triton-multi-dc/_env* examples/triton-multi-dc/docker-compose-*.yml + + ## Print environment for build debugging debug: @echo GIT_COMMIT=$(GIT_COMMIT) diff --git a/test/Dockerfile b/test/Dockerfile index db9cd52..5740ac9 100644 --- a/test/Dockerfile +++ b/test/Dockerfile @@ -22,10 +22,10 @@ RUN sed -i 's/1.9.0/1.10.0/' /usr/local/bin/triton-docker \ # install test targets -COPY local-compose.yml /src/local-compose.yml -COPY docker-compose.yml /src/docker-compose.yml +COPY examples/compose/docker-compose.yml /src/local-compose.yml +COPY examples/triton/docker-compose.yml /src/docker-compose.yml # install test code COPY test/triton.sh /src/triton.sh COPY test/compose.sh /src/compose.sh -COPY setup.sh /src/setup.sh +COPY examples/triton/setup.sh /src/setup.sh
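Reviewer note: the `CONSUL_RETRY_JOIN_WAN` value must form a valid HCL list body once `bin/consul-manage` wraps it in brackets (otherwise Consul fails with the `unexpected token while parsing list: IDENT` error documented above). Here is a standalone sketch of that rewrite against a throwaway config file; the account name and datacenter hostnames are hypothetical:

```shell
#!/bin/bash
# Sketch of the retry_join_wan rewrite performed by bin/consul-manage,
# run against a temporary file instead of /etc/consul/consul.hcl.
set -eo pipefail

CONF=$(mktemp)
printf 'bind_addr = "0.0.0.0"\nretry_join_wan = []\n' > "$CONF"

# CONSUL_RETRY_JOIN_WAN must be the body of an HCL list:
# comma-separated, double-quoted addresses (hostnames are hypothetical).
CONSUL_RETRY_JOIN_WAN='"consul.svc.my-account.us-east-1.triton.zone", "consul.svc.my-account.us-sw-1.triton.zone"'

# Same pattern as consul-manage: delete any existing line, then append.
sed -i "/^retry_join_wan/d" "$CONF"
echo "retry_join_wan = [${CONSUL_RETRY_JOIN_WAN}]" >> "$CONF"

RESULT=$(grep '^retry_join_wan' "$CONF")
echo "$RESULT"
rm -f "$CONF"
```

An unquoted value such as `host1,host2` would produce the IDENT parse error, since HCL list elements must be quoted strings.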