Add CONSUL_RETRY_JOIN_WAN env #48

Merged
Changes from all commits
34 commits
ea04c0f
Resolve #23 by adding CONSUL_DATACENTER_NAME env
tjcelaya Dec 7, 2017
cc21275
Mention the environment variable in the README
tjcelaya Dec 8, 2017
189414c
Resolves #47 by adding CONSUL_RETRY_JOIN_WAN
tjcelaya Dec 8, 2017
23c8c60
Remove debugging output
tjcelaya Dec 8, 2017
abad90e
Fix the environment value now that we've collected the error message
tjcelaya Dec 8, 2017
b15de52
Environment variable documentation and moving examples.
tjcelaya Dec 11, 2017
ee400d6
Triton datacenter autodetection
tjcelaya Dec 11, 2017
5c4c2d4
README update for envs
tjcelaya Dec 11, 2017
33ceafc
Merge updates to #46
tjcelaya Dec 11, 2017
32ed8fc
more things
tjcelaya Dec 13, 2017
5bec8ff
it was bind_addr all along
tjcelaya Dec 13, 2017
25de3dd
Always set the alternate addresses for now, might move all of these b…
tjcelaya Dec 13, 2017
0c3db58
Fix the CONSUL_BIND_ADDR check
tjcelaya Dec 14, 2017
b44294b
Add self-anti-affinity
tjcelaya Dec 14, 2017
3ebb138
Fix silly config error
tjcelaya Dec 15, 2017
24ae977
Move env vars section
tjcelaya Dec 15, 2017
2f2893d
Resolve merge conflicts
tjcelaya Dec 18, 2017
ac0d6e2
Split up triton from multi-dc triton examples
tjcelaya Dec 19, 2017
a84a860
Remove fake multi-dc docker-compose example
tjcelaya Dec 19, 2017
667f0e2
Fix test Dockerfile paths
tjcelaya Dec 19, 2017
5ab7825
Fix multi-dc clean target
tjcelaya Dec 19, 2017
efb1e9b
Fix template name
tjcelaya Dec 19, 2017
ff95e50
Try a new multi-dc layout
tjcelaya Dec 19, 2017
db96dd7
Separate the triton and triton-multi-dc setup scripts
tjcelaya Dec 19, 2017
eb360fc
make the check function easier to use
tjcelaya Dec 19, 2017
3c6f336
Fix account id query command
tjcelaya Dec 19, 2017
8b27a61
Path and script name change in README
tjcelaya Dec 19, 2017
06bc9ea
Clarify README
tjcelaya Dec 19, 2017
6759213
Fix duplicate env vars from merge conflict resolution
tjcelaya Dec 20, 2017
9ea5646
Fix usage output description
tjcelaya Dec 20, 2017
ecc75ed
Fix description again
tjcelaya Dec 21, 2017
658d4e7
Remove the build key from examples/compose/docker-compose.yml
tjcelaya Dec 21, 2017
4cf52d9
Revert ./setup.sh change
tjcelaya Dec 21, 2017
dc5018f
Typo
tjcelaya Dec 21, 2017
3 changes: 2 additions & 1 deletion .gitignore
@@ -1 +1,2 @@
_env
_env*
examples/triton-multi-dc/docker-compose-*.yml
70 changes: 55 additions & 15 deletions README.md
@@ -18,7 +18,7 @@ When run locally for testing, we don't have access to Triton CNS. The `local-com
1. [Get a Joyent account](https://my.joyent.com/landing/signup/) and [add your SSH key](https://docs.joyent.com/public-cloud/getting-started).
1. Install the [Docker Toolbox](https://docs.docker.com/installation/mac/) (including `docker` and `docker-compose`) on your laptop or other environment, as well as the [Joyent Triton CLI](https://www.joyent.com/blog/introducing-the-triton-command-line-tool) (`triton` replaces our old `sdc-*` CLI tools).

Check that everything is configured correctly by running `./setup.sh`. This will check that your environment is setup correctly and will create an `_env` file that includes injecting an environment variable for a service name for Consul in Triton CNS. We'll use this CNS name to bootstrap the cluster.
Check that everything is configured correctly by changing to the `examples/triton` directory and executing `./setup.sh`. This will verify that your environment is set up correctly and create an `_env` file that injects an environment variable containing the Triton CNS service name for Consul. We'll use this CNS name to bootstrap the cluster.

```bash
$ docker-compose up -d
@@ -52,6 +52,60 @@ $ docker exec -it consul_consul_3 consul info | grep num_peers

```

### Run it with more than one datacenter!

Within the `examples/triton-multi-dc` directory, execute `./setup-multi-dc.sh`, providing as arguments the Triton profiles that belong to the desired data centers.

Since interacting with multiple data centers requires switching between Triton profiles, it's easiest to perform the following steps in separate terminals, though it is also possible to perform all the steps for a single data center and then change profiles. Additionally, setting `COMPOSE_PROJECT_NAME` to match the profile or data center helps distinguish nodes in the Triton Portal and in the `triton instance ls` listing.

The script generates one `_env-<PROFILE>` file and one `docker-compose-<PROFILE>.yml` for each profile. Execute the following commands, once per profile/datacenter, within `examples/triton-multi-dc`:

```
$ eval "$(TRITON_PROFILE=<PROFILE> triton env -d)"

# The following helps when executing docker-compose multiple times. Alternatively, pass the -f flag to each invocation of docker-compose.
$ export COMPOSE_FILE=docker-compose-<PROFILE>.yml

# The following is not strictly necessary but helps to discern between clusters. Alternatively, pass the -p flag to each invocation of docker-compose.
$ export COMPOSE_PROJECT_NAME=<PROFILE>

$ docker-compose up -d
Creating <PROFILE>_consul_1 ... done

$ docker-compose scale consul=3
```

Note: the `cns.joyent.com` hostnames cannot be resolved from outside the datacenters. Change `cns.joyent.com` to `triton.zone` to access the web UI.
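
For example, converting the bootstrap hostname written by `setup.sh` into a publicly resolvable one is a simple string substitution (the account UUID and data center below are placeholders):

```bash
$ CONSUL='consul.svc.<account-uuid>.us-east-1.cns.joyent.com'
$ echo "${CONSUL/cns.joyent.com/triton.zone}"
consul.svc.<account-uuid>.us-east-1.triton.zone
```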

## Environment Variables

- `CONSUL_DEV`: Enable development mode, allowing a node to self-elect as a cluster leader. Consul flag: [`-dev`](https://www.consul.io/docs/agent/options.html#_dev).
- The following errors will occur if `CONSUL_DEV` is omitted and not enough Consul instances are deployed:
```
[ERR] agent: failed to sync remote state: No cluster leader
[ERR] agent: failed to sync changes: No cluster leader
[ERR] agent: Coordinate update error: No cluster leader
```
- `CONSUL_DATACENTER_NAME`: Explicitly set the name of the data center in which Consul is running. Consul flag: [`-datacenter`](https://www.consul.io/docs/agent/options.html#datacenter).
- If this variable is specified it will be used as-is.
- If not specified, automatic detection of the datacenter will be attempted. See [issue #23](https://github.com/autopilotpattern/consul/issues/23) for more details.
- Consul's default of "dc1" will be used if none of the above apply.

- `CONSUL_BIND_ADDR`: Explicitly set the corresponding Consul configuration (the address Consul binds to). If it is not specified and `CONSUL_RETRY_JOIN_WAN` is provided, this value is set to `0.0.0.0`. Be aware of the security implications of binding the server to a public address, and consider setting up encryption or using a VPN to isolate WAN traffic from the public internet.
- `CONSUL_SERF_LAN_BIND`: Explicitly set the corresponding Consul configuration. This value will be set to the server's private address automatically if not specified. Consul flag: [`-serf-lan-bind`](https://www.consul.io/docs/agent/options.html#serf_lan_bind).
- `CONSUL_SERF_WAN_BIND`: Explicitly set the corresponding Consul configuration. This value will be set to the server's public address automatically if not specified. Consul flag: [`-serf-wan-bind`](https://www.consul.io/docs/agent/options.html#serf_wan_bind).
- `CONSUL_ADVERTISE_ADDR`: Explicitly set the corresponding Consul configuration. This value will be set to the server's private address automatically if not specified. Consul flag: [`-advertise-addr`](https://www.consul.io/docs/agent/options.html#advertise_addr).
- `CONSUL_ADVERTISE_ADDR_WAN`: Explicitly set the corresponding Consul configuration. This value will be set to the server's public address automatically if not specified. Consul flag: [`-advertise-addr-wan`](https://www.consul.io/docs/agent/options.html#advertise_addr_wan).

- `CONSUL_RETRY_JOIN_WAN`: Set the addresses of remote datacenters to join. Must be a valid HCL list, i.e. comma-separated quoted addresses (see the sample `_env` file below). Consul flag: [`-retry-join-wan`](https://www.consul.io/docs/agent/options.html#retry_join_wan).
- The following error will occur if `CONSUL_RETRY_JOIN_WAN` is provided but improperly formatted:
```
==> Error parsing /etc/consul/consul.hcl: ... unexpected token while parsing list: IDENT
```
- Gossip over the WAN requires the following ports to be accessible between data centers; make sure adequate firewall rules are in place for them (this should happen automatically when using docker-compose with Triton):
- `8300`: Server RPC port (TCP)
- `8302`: Serf WAN gossip port (TCP + UDP)
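
For reference, a hand-written `_env` file that exercises these variables might look like the following (the account UUID, data center names, and hostnames are placeholders; `setup.sh` and `setup-multi-dc.sh` generate equivalent files for you):

```
# Consul bootstrap via Triton CNS
CONSUL=consul.svc.<account-uuid>.us-east-1.cns.joyent.com

# Explicit datacenter name; omit it to fall back to autodetection, then Consul's default "dc1"
CONSUL_DATACENTER_NAME=us-east-1

# Remote datacenters to join over the WAN: a comma-separated list of quoted addresses (a valid HCL list)
CONSUL_RETRY_JOIN_WAN="consul.svc.<account-uuid>.us-sw-1.triton.zone","consul.svc.<account-uuid>.eu-ams-1.triton.zone"
```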

## Using this in your own composition

There are two ways to run Consul, and both come into play when deploying ContainerPilot: a cluster of Consul servers and individual Consul client agents.
@@ -82,20 +136,6 @@ services:

In our experience, including a Consul cluster within a project's `docker-compose.yml` can help developers understand and test how a service should be discovered and registered within a wider infrastructure context.
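
As a rough sketch (the service name, the `CONSUL_DEV` setting, and the rest of your compose file are up to your project), embedding the cluster can be as small as adding a service like this under `services:` in your `docker-compose.yml`:

```yaml
consul:
  image: autopilotpattern/consul:${TAG:-latest}
  restart: always
  mem_limit: 128m
  ports:
    - 8500          # HTTP API and web UI
  environment:
    - CONSUL_DEV=1  # single-node development mode; drop this and scale to 3+ instances for a real cluster
  command: >
    /usr/local/bin/containerpilot
```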

#### Environment Variables

- `CONSUL_DEV`: Enable development mode, allowing a node to self-elect as a cluster leader. Consul flag: [`-dev`](https://www.consul.io/docs/agent/options.html#_dev).
- The following errors will occur if `CONSUL_DEV` is omitted and not enough Consul instances are deployed:
```
[ERR] agent: failed to sync remote state: No cluster leader
[ERR] agent: failed to sync changes: No cluster leader
[ERR] agent: Coordinate update error: No cluster leader
```
- `CONSUL_DATACENTER_NAME`: Explicitly set the name of the data center in which Consul is running. Consul flag: [`-datacenter`](https://www.consul.io/docs/agent/options.html#datacenter).
- If this variable is specified it will be used as-is.
- If not specified, automatic detection of the datacenter will be attempted. See [issue #23](https://github.com/autopilotpattern/consul/issues/23) for more details.
- Consul's default of "dc1" will be used if none of the above apply.

### Clients

ContainerPilot utilizes Consul's [HTTP Agent API](https://www.consul.io/api/agent.html) for a handful of endpoints, such as `UpdateTTL`, `CheckRegister`, `ServiceRegister`, and `ServiceDeregister`. You can connect ContainerPilot to Consul by running Consul as a client agent that joins the cluster described above, and it's easy to run that client agent from ContainerPilot itself.
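
A minimal sketch of the client-agent side (the flags shown are illustrative of the approach, not necessarily the image's exact invocation; `$CONSUL` is the address of the server cluster):

```bash
# Run Consul in client mode (no -server flag) and keep retrying
# until the agent can join the server cluster at $CONSUL.
consul agent -data-dir=/data -retry-join "$CONSUL"
```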
64 changes: 62 additions & 2 deletions bin/consul-manage
@@ -6,8 +6,6 @@ set -eo pipefail
# been told to listen on.
#
preStart() {
_log "Updating consul advertise address"
sed -i "s/CONTAINERPILOT_CONSUL_IP/${CONTAINERPILOT_CONSUL_IP}/" /etc/consul/consul.hcl

if [ -n "$CONSUL_DATACENTER_NAME" ]; then
_log "Updating consul datacenter name (specified: '${CONSUL_DATACENTER_NAME}' )"
@@ -20,6 +18,46 @@ preStart() {
_log "Updating consul datacenter name (default: 'dc1')"
sed -i "s/CONSUL_DATACENTER_NAME/dc1/" /etc/consul/consul.hcl
fi

if [ -n "$CONSUL_RETRY_JOIN_WAN" ]; then
_log "Updating consul retry_join_wan field"
sed -i '/^retry_join_wan/d' /etc/consul/consul.hcl
echo "retry_join_wan = [${CONSUL_RETRY_JOIN_WAN}]" >> /etc/consul/consul.hcl

# translate_wan_addrs allows us to reach remote nodes through their advertise_addr_wan
sed -i '/^translate_wan_addrs/d' /etc/consul/consul.hcl
_log "Updating consul translate_wan_addrs field"
echo "translate_wan_addrs = true" >> /etc/consul/consul.hcl

# only set bind_addr = 0.0.0.0 if none was specified explicitly with CONSUL_BIND_ADDR
if [ -n "$CONSUL_BIND_ADDR" ]; then
updateConfigFromEnvOrDefault 'bind_addr' 'CONSUL_BIND_ADDR' "$CONTAINERPILOT_CONSUL_IP"
else
sed -i '/^bind_addr/d' /etc/consul/consul.hcl
_log "Updating consul field bind_addr to 0.0.0.0 CONSUL_BIND_ADDR was empty and CONSUL_RETRY_JOIN_WAN was not empty"
echo "bind_addr = \"0.0.0.0\"" >> /etc/consul/consul.hcl
fi
else
# if no WAN addresses were provided, set the bind_addr to the private address
updateConfigFromEnvOrDefault 'bind_addr' 'CONSUL_BIND_ADDR' "$CONTAINERPILOT_CONSUL_IP"
fi

IP_ADDRESS=$(hostname -i)

# the serf_lan_bind field was recently renamed to serf_lan
# serf_lan tells nodes their address within the LAN
updateConfigFromEnvOrDefault 'serf_lan' 'CONSUL_SERF_LAN_BIND' "$CONTAINERPILOT_CONSUL_IP"

# the serf_wan_bind field was recently renamed to serf_wan
# if this field is not set WAN joins will be refused since the bind address will differ
# from the address used to reach the node
updateConfigFromEnvOrDefault 'serf_wan' 'CONSUL_SERF_WAN_BIND' "$IP_ADDRESS"

# advertise_addr tells nodes their private, routable address
updateConfigFromEnvOrDefault 'advertise_addr' 'CONSUL_ADVERTISE_ADDR' "$CONTAINERPILOT_CONSUL_IP"

# advertise_addr_wan tells nodes their public address for WAN communication
updateConfigFromEnvOrDefault 'advertise_addr_wan' 'CONSUL_ADVERTISE_ADDR_WAN' "$IP_ADDRESS"
}
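
# Illustration only (the values are placeholders): with CONSUL_RETRY_JOIN_WAN set
# and the other variables left unset, preStart leaves /etc/consul/consul.hcl with
# lines roughly like:
#
#   retry_join_wan = ["consul.svc.<account>.<remote-dc>.triton.zone"]
#   translate_wan_addrs = true
#   bind_addr = "0.0.0.0"
#   serf_lan = "<private address>"
#   serf_wan = "<public address>"
#   advertise_addr = "<private address>"
#   advertise_addr_wan = "<public address>"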

#
@@ -44,6 +82,28 @@ _log() {
echo " $(date -u '+%Y-%m-%d %H:%M:%S') containerpilot: $@"
}


#
# Set configuration field $1 in the Consul configuration to either an
# environment variable's value or a default. This behaves like combining
# ${!name_of_var} and ${var:-default}, but separates the indirect reference
# from the default so the intent is more obvious.
#
# If the environment variable named by $2 is defined, its value is used
# (referenced indirectly via ${!2}); otherwise $3 is used as the value.
#
updateConfigFromEnvOrDefault() {
_log "Updating consul field $1"
sed -i "/^$1/d" /etc/consul/consul.hcl

if [ -n "${!2}" ]; then
echo "$1 = \"${!2}\"" >> /etc/consul/consul.hcl
else
echo "$1 = \"$3\"" >> /etc/consul/consul.hcl
fi
}
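
# For example (hypothetical call, mirroring the ones in preStart above):
#
#   updateConfigFromEnvOrDefault 'advertise_addr' 'CONSUL_ADVERTISE_ADDR' "$CONTAINERPILOT_CONSUL_IP"
#
# removes any existing advertise_addr line from /etc/consul/consul.hcl and appends
# either the value of CONSUL_ADVERTISE_ADDR (if set) or the container's private IP.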

# ---------------------------------------------------
# parse arguments

2 changes: 1 addition & 1 deletion etc/consul.hcl
@@ -1,4 +1,4 @@
bind_addr = "CONTAINERPILOT_CONSUL_IP"
bind_addr = "0.0.0.0"
datacenter = "CONSUL_DATACENTER_NAME"
data_dir = "/data"
client_addr = "0.0.0.0"
3 changes: 2 additions & 1 deletion local-compose.yml → examples/compose/docker-compose.yml
@@ -7,8 +7,9 @@ services:
# created user-defined network and internal DNS for the name "consul".
# Nodes will use Docker DNS for the service (passed in via the CONSUL
# env var) to find each other and bootstrap the cluster.
# Note: Unless CONSUL_DEV is set, at least three instances are required for quorum.
consul:
build: .
image: autopilotpattern/consul:${TAG:-latest}
restart: always
mem_limit: 128m
ports:
23 changes: 23 additions & 0 deletions examples/triton-multi-dc/docker-compose-multi-dc.yml.template
@@ -0,0 +1,23 @@
version: '2.1'

services:

# Service definition for the Consul cluster in each data center.
# Cloned by ./setup-multi-dc.sh once per profile.
consul:
image: autopilotpattern/consul:${TAG:-latest}
labels:
- triton.cns.services=consul
- com.docker.swarm.affinities=["container!=~*consul*"]
restart: always
mem_limit: 128m
ports:
- 8300 # Server RPC port
- "8302/tcp" # Serf WAN port
- "8302/udp" # Serf WAN port
- 8500
env_file:
- ENV_FILE_NAME
network_mode: bridge
command: >
/usr/local/bin/containerpilot
160 changes: 160 additions & 0 deletions examples/triton-multi-dc/setup-multi-dc.sh
@@ -0,0 +1,160 @@
#!/bin/bash
set -e -o pipefail

help() {
echo
echo 'Usage: ./setup-multi-dc.sh <triton-profile1> [<triton-profile2> [...]]'
echo
echo 'Generates one _env file and one docker-compose.yml file per Triton profile, each of which'
echo 'is presumably associated with a different datacenter.'
}

if [ "$#" -lt 1 ]; then
help
exit 1
fi

# ---------------------------------------------------
# Top-level commands

#
# Check for triton profile $1 and output _env file named $2
#
generate_env() {
local triton_profile=$1
local output_file=$2

command -v docker >/dev/null 2>&1 || {
echo
tput rev # reverse
tput bold # bold
echo 'Docker is required, but does not appear to be installed.'
tput sgr0 # clear
echo 'See https://docs.joyent.com/public-cloud/api-access/docker'
exit 1
}
command -v triton >/dev/null 2>&1 || {
echo
tput rev # reverse
tput bold # bold
echo 'Error! Joyent Triton CLI is required, but does not appear to be installed.'
tput sgr0 # clear
echo 'See https://www.joyent.com/blog/introducing-the-triton-command-line-tool'
exit 1
}

# make sure Docker client is pointed to the same place as the Triton client
local docker_user=$(docker info 2>&1 | awk -F": " '/SDCAccount:/{print $2}')
local docker_dc=$(echo $DOCKER_HOST | awk -F"/" '{print $3}' | awk -F'.' '{print $1}')

local triton_user=$(triton profile get $triton_profile | awk -F": " '/account:/{print $2}')
local triton_dc=$(triton profile get $triton_profile | awk -F"/" '/url:/{print $3}' | awk -F'.' '{print $1}')
local triton_account=$(TRITON_PROFILE=$triton_profile triton account get | awk -F": " '/id:/{print $2}')

if [ ! "$docker_user" = "$triton_user" ] || [ ! "$docker_dc" = "$triton_dc" ]; then
echo
tput rev # reverse
tput bold # bold
echo 'Error! The Triton CLI configuration does not match the Docker CLI configuration.'
tput sgr0 # clear
echo
echo "Docker user: ${docker_user}"
echo "Triton user: ${triton_user}"
echo "Docker data center: ${docker_dc}"
echo "Triton data center: ${triton_dc}"
exit 1
fi

local triton_cns_enabled=$(triton account get | awk -F": " '/cns/{print $2}')
if [ ! "true" == "$triton_cns_enabled" ]; then
echo
tput rev # reverse
tput bold # bold
echo 'Error! Triton CNS is required and not enabled.'
tput sgr0 # clear
echo
exit 1
fi

# setup environment file
if [ ! -f "$output_file" ]; then
echo '# Consul bootstrap via Triton CNS' >> $output_file
echo CONSUL=consul.svc.${triton_account}.${triton_dc}.cns.joyent.com >> $output_file
echo >> $output_file
else
echo "Existing _env file found at $1, exiting"
exit
fi
}


declare -a written
declare -a consul_hostnames

# check that we won't overwrite any _env files first
if [ -f "_env" ]; then
echo "Existing env file found, exiting: _env"
fi

# check the names of _env files we expect to generate
for profile in "$@"
do
if [ -f "_env-$profile" ]; then
echo "Existing env file found, exiting: _env-$profile"
exit 2
fi

if [ -f "docker-compose-$profile.yml" ]; then
echo "Existing docker-compose file found, exiting: docker-compose-$profile.yml"
exit 4
fi
done

# check that the docker-compose.yml template is in the right place
if [ ! -f "docker-compose-multi-dc.yml.template" ]; then
echo "Multi-datacenter docker-compose.yml template is missing!"
exit 5
fi

echo "profiles: $@"

# invoke ./setup.sh once per profile
for profile in "$@"
do
echo "Temporarily switching profile: $profile"
eval "$(TRITON_PROFILE=$profile triton env -d)"
generate_env $profile "_env-$profile"

unset CONSUL
source "_env-$profile"

consul_hostnames+=("\"${CONSUL//cns.joyent.com/triton.zone}\"")

cp docker-compose-multi-dc.yml.template \
"docker-compose-$profile.yml"

sed -i '' "s/ENV_FILE_NAME/_env-$profile/" "docker-compose-$profile.yml"

written+=("_env-$profile")
done


# finalize _env and prepare docker-compose.yml files
for profile in "$@"
do
# add the CONSUL_RETRY_JOIN_WAN addresses to each _env
echo '# Consul multi-DC bootstrap via Triton CNS' >> _env-$profile
echo "CONSUL_RETRY_JOIN_WAN=$(IFS=,; echo "${consul_hostnames[*]}")" >> _env-$profile

cp docker-compose-multi-dc.yml.template \
"docker-compose-$profile.yml"

sed -i '' "s/ENV_FILE_NAME/_env-$profile/" "docker-compose-$profile.yml"
done

echo "Wrote: ${written[@]}"
1 change: 1 addition & 0 deletions docker-compose.yml → examples/triton/docker-compose.yml
@@ -9,6 +9,7 @@ services:
image: autopilotpattern/consul:${TAG:-latest}
labels:
- triton.cns.services=consul
- com.docker.swarm.affinities=["container!=~*consul*"]
restart: always
mem_limit: 128m
ports:
1 change: 1 addition & 0 deletions setup.sh → examples/triton/setup.sh
@@ -42,6 +42,7 @@ check() {
# make sure Docker client is pointed to the same place as the Triton client
local docker_user=$(docker info 2>&1 | awk -F": " '/SDCAccount:/{print $2}')
local docker_dc=$(echo $DOCKER_HOST | awk -F"/" '{print $3}' | awk -F'.' '{print $1}')

TRITON_USER=$(triton profile get | awk -F": " '/account:/{print $2}')
TRITON_DC=$(triton profile get | awk -F"/" '/url:/{print $3}' | awk -F'.' '{print $1}')
TRITON_ACCOUNT=$(triton account get | awk -F": " '/id:/{print $2}')