Commit 0188a10

Various local dev improvements (#973)

niallthomson authored Jun 18, 2024
1 parent e706119 commit 0188a10
Showing 15 changed files with 111 additions and 165 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/pr.yaml
@@ -42,7 +42,7 @@ jobs:
         uses: actions/checkout@v4
       - name: Make shell
         run: |
-          make shell shell_simple_command='ls'
+          bash hack/exec.sh '' 'ls -la'
   pre-commit:
     name: "Pre-commit hooks"
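For context, the CI step now goes through the new `hack/exec.sh` entrypoint introduced later in this commit. Based on the script body shown below under `hack/exec.sh`, the first argument selects the environment (an empty string falls back to the defaults from `hack/lib/common-env.sh`) and the remaining arguments form the command executed inside the lab container. A hedged sketch:

```bash
# What the CI step runs: build the lab image, then execute a simple
# command inside it as a smoke test ('' = default environment):
bash hack/exec.sh '' 'ls -la'

# The same entrypoint accepts any shell command (illustrative example):
bash hack/exec.sh '' 'aws --version'
```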
15 changes: 3 additions & 12 deletions Makefile
@@ -5,7 +5,6 @@ shell_command=''
-shell_simple_command=''
 glob='-'
 
 
 .PHONY: install
 install:
 	cd website; npm install
@@ -24,7 +23,7 @@ test:
 
 .PHONY: shell
 shell:
-	bash hack/shell.sh $(environment) $(shell_command) $(shell_simple_command)
+	bash hack/shell.sh $(environment)
 
 .PHONY: reset-environment
 reset-environment:
@@ -34,18 +33,10 @@ reset-environment:
 delete-environment:
 	bash hack/shell.sh $(environment) delete-environment
 
-.PHONY: update-helm-versions
-update-helm-versions:
-	bash hack/update-helm-versions.sh
-
-.PHONY: verify-helm-metadata
-verify-helm-metadata:
-	bash hack/verify-helm-metadata.sh
-
 .PHONY: create-infrastructure
 create-infrastructure:
-	bash hack/create-infrastructure.sh $(environment)
+	bash hack/exec.sh $(environment) 'cat /cluster/eksctl/cluster.yaml | envsubst | eksctl create cluster -f -'
 
 .PHONY: destroy-infrastructure
 destroy-infrastructure:
-	bash hack/destroy-infrastructure.sh $(environment)
+	bash hack/exec.sh $(environment) 'cat /cluster/eksctl/cluster.yaml | envsubst | eksctl delete cluster --wait --force --disable-nodegroup-eviction --timeout 45m -f -'
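A usage sketch of the reworked targets. The `environment` variable is forwarded to the scripts; the value `test` here is purely illustrative, and per the variable defaults at the top of the Makefile it can be omitted:

```bash
# Create the cluster defined in ./cluster/eksctl/cluster.yaml, with
# environment variables expanded by envsubst inside the lab container:
make create-infrastructure environment=test

# Destroy the same cluster (eksctl delete with a 45m timeout, per the recipe):
make destroy-infrastructure environment=test
```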
32 changes: 17 additions & 15 deletions docs/authoring_content.md
@@ -23,7 +23,6 @@ The following pre-requisites are necessary to work on the content:
 - Installed locally:
   - Docker
   - `make`
-  - `terraform`
   - `jq`
   - `npm`
   - `kubectl`
@@ -65,9 +64,16 @@ There are some additional things to set up which are not required but will make
 
 ### Creating the infrastructure
 
-When creating your content you will want to test the commands you specify against infrastructure that mirrors what will be used in the actual workshop by learners. This can easily by done locally and will use the cluster configuration in `./cluster/eksctl/cluster.yaml`.
+When creating your content you will want to test the commands you specify against infrastructure that mirrors what will be used in the actual workshop by learners. This can easily be done locally with some convenience scripts that have been included.
 
-Ensure that your AWS credentials are set so eksctl is able to authenticate against your IAM account. It will source credentials following the standard mechanism used by the likes of the AWS CLI, which you can find documented [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-authentication.html).
+> [!TIP]
+> Why should you use the `make` commands and the associated convenience scripts instead of "doing it yourself"? The various scripts provided are intended to provide an environment consistent with what the end-user of the workshop will use. This is important because the workshop has a number of 3rd party dependencies that are carefully managed with regards to versioning.
+
+Many of the convenience scripts we'll use make calls to AWS APIs, so they need to be able to authenticate. Getting AWS credentials into a container in a portable way can be a challenge, and there are several options available:
+
+1. Set the `ASSUME_ROLE` environment variable in the terminal where you run the `make` commands to the ARN of an IAM role that you can assume with your current credentials. This will use the STS service to generate temporary credentials that will be injected into the container. Example: `export ASSUME_ROLE='arn:aws:iam::123456789012:role/my-role'`
+1. Set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables in the terminal where you run the `make` commands. It is recommended that these credentials be temporary. These variables will be injected into the container.
+1. If you are developing on an EC2 instance which has an instance profile that provides the necessary IAM permissions then no action is needed, as the container will automatically assume the role of the EC2 instance on which you're authoring your content.
 
 You can then use the following convenience command to create the infrastructure:
 
@@ -85,20 +91,16 @@ make destroy-infrastructure
 
 When in the process of creating the content it's likely you'll need to be fairly interactive in testing commands etc. During a real workshop users would do this on the Cloud9 IDE, but for our purposes for developing content quickly this is a poor experience because it is designed to refresh content automatically from GitHub. As a result it is recommended to _NOT use the Cloud9 IDE_ created by the Cloud Formation in this repository and instead use the flow below.
 
-The repository provides a mechanism to easily create an interactive shell with access to the EKS cluster created by `make create-infrastructure`. This shell will automatically pick up changes to the content on your local machine and mirrors the Cloud9 used in a real workshop in terms of tools and setup.
-
-To use this utility you must:
-
-- Already run `make create-infrastructure`
-- Have some AWS credentials available in your current shell session (ie. you `aws` CLI must work)
+The repository provides a mechanism to easily create an interactive shell with access to the EKS cluster created by `make create-infrastructure`. This shell will automatically pick up changes to the content on your local machine and mirrors the Cloud9 used in a real workshop in terms of tools and setup. As such, to use this utility you must have already run `make create-infrastructure`.
 
-The shell session created will have AWS credentials injected, so you will immediately be able to use the `aws` CLI and `kubectl` commands with no further configuration:
+The shell session created will have AWS credentials injected, so you will immediately be able to use the `aws` CLI and `kubectl` commands with no further configuration.
 
-If using [finch CLI](https://github.com/runfinch/finch) instead of `docker` CLI you need to set two environment variable `CONTAINER_CLI` or run `make` with the variable set like `CONTAINER_CLI=finch make shell` here how to set the variable in the terminal session for every command.
-
-```bash
-export CONTAINER_CLI=finch
-```
+> [!NOTE]
+> If using the [finch CLI](https://github.com/runfinch/finch) instead of the `docker` CLI you need to set the environment variable `CONTAINER_CLI`, or run `make` with the variable set, like `CONTAINER_CLI=finch make shell`. Here is how to set the variable in the terminal session for every command:
+>
+> ```bash
+> export CONTAINER_CLI=finch
+> ```
 
 Run `make shell`:
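To make the credential options above concrete, a sketch of the authoring flow using option 1 (the role ARN is a placeholder, as in the docs):

```bash
# Option 1: have the tooling mint temporary credentials via STS
# (placeholder role ARN, substitute one you can assume):
export ASSUME_ROLE='arn:aws:iam::123456789012:role/my-role'

# Stand up the cluster, then open the interactive authoring shell:
make create-infrastructure
make shell
```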
25 changes: 0 additions & 25 deletions hack/create-infrastructure.sh

This file was deleted.

13 changes: 0 additions & 13 deletions hack/destroy-infrastructure.sh

This file was deleted.

31 changes: 31 additions & 0 deletions hack/exec.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+
+environment=$1
+shift 1
+shell_command=$@
+
+set -Eeuo pipefail
+
+# You can run script with finch like CONTAINER_CLI=finch ./exec.sh <terraform_context> <shell_command>
+CONTAINER_CLI=${CONTAINER_CLI:-docker}
+
+SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+source $SCRIPT_DIR/lib/common-env.sh
+
+echo "Building container images..."
+
+container_image='eks-workshop-environment'
+
+(cd $SCRIPT_DIR/../lab && $CONTAINER_CLI build -q -t $container_image .)
+
+source $SCRIPT_DIR/lib/generate-aws-creds.sh
+
+echo "Executing command in container..."
+
+$CONTAINER_CLI run --rm \
+  -v $SCRIPT_DIR/../manifests:/manifests \
+  -v $SCRIPT_DIR/../cluster:/cluster \
+  --entrypoint /bin/bash \
+  -e 'EKS_CLUSTER_NAME' -e 'AWS_REGION' \
+  $aws_credential_args $container_image -c "$shell_command"
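A hedged usage sketch for the new script: `$1` is the environment (apparently consumed via `lib/common-env.sh`), and everything after it becomes the command run inside the lab container. The environment name below is illustrative.

```bash
# One-off command against a named environment (name is illustrative):
bash hack/exec.sh my-env 'kubectl get nodes'

# An empty environment argument falls back to the defaults:
bash hack/exec.sh '' 'aws sts get-caller-identity'
```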
23 changes: 19 additions & 4 deletions hack/lib/generate-aws-creds.sh
@@ -1,6 +1,21 @@
-echo "Generating temporary AWS credentials..."
+aws_credential_args=""
 
-ACCESS_VARS=$(aws sts assume-role --role-arn $ASSUME_ROLE --role-session-name ${EKS_CLUSTER_NAME}-shell --output json | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId) AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) AWS_SESSION_TOKEN=\(.SessionToken)"')
+ASSUME_ROLE=${ASSUME_ROLE:-""}
+AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID:-""}
 
-# TODO: This should probably not use eval
-eval "$ACCESS_VARS"
+if [ ! -z "$AWS_ACCESS_KEY_ID" ]; then
+  echo "Using environment AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY"
+
+  aws_credential_args="-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN"
+elif [ ! -z "$ASSUME_ROLE" ]; then
+  echo "Generating temporary AWS credentials..."
+
+  ACCESS_VARS=$(aws sts assume-role --role-arn $ASSUME_ROLE --role-session-name ${EKS_CLUSTER_NAME}-shell --output json | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId) AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) AWS_SESSION_TOKEN=\(.SessionToken)"')
+
+  # TODO: This should probably not use eval
+  eval "$ACCESS_VARS"
+
+  aws_credential_args="-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN"
+else
+  echo "Inheriting credentials from instance profile"
+fi
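The precedence this introduces is: explicit keys first, then `ASSUME_ROLE`, then the EC2 instance profile. A sketch of exercising each branch (all credential values are placeholders; temporary credentials also carry a session token, which the `-e` arguments expect):

```bash
# Branch 1: explicit keys are passed straight into the container.
export AWS_ACCESS_KEY_ID='AKIAEXAMPLE'          # placeholder
export AWS_SECRET_ACCESS_KEY='secretEXAMPLE'    # placeholder
export AWS_SESSION_TOKEN='tokenEXAMPLE'         # placeholder
make shell

# Branch 2: no keys set, so temporary credentials are minted via STS.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
export ASSUME_ROLE='arn:aws:iam::123456789012:role/my-role'
make shell

# Branch 3: neither set; on EC2 the container inherits the instance profile.
unset ASSUME_ROLE
make shell
```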
13 changes: 0 additions & 13 deletions hack/refresh-terraform-lock.sh

This file was deleted.

13 changes: 1 addition & 12 deletions hack/run-tests.sh
@@ -10,9 +10,6 @@ set -u
 # You can run script with finch like CONTAINER_CLI=finch ./run-tests.sh <terraform_context> <module>
 CONTAINER_CLI=${CONTAINER_CLI:-docker}
 
-# Right now the container images are only designed for amd64
-export DOCKER_DEFAULT_PLATFORM=linux/amd64
-
 AWS_EKS_WORKSHOP_TEST_FLAGS=${AWS_EKS_WORKSHOP_TEST_FLAGS:-""}
 
 if [[ "$module" == '-' && "$glob" == '-' ]]; then
@@ -40,15 +37,7 @@ container_image='eks-workshop-test'
 
 (cd $SCRIPT_DIR/../test && $CONTAINER_CLI build -q -t $container_image .)
 
-aws_credential_args=""
-
-ASSUME_ROLE=${ASSUME_ROLE:-""}
-
-if [ ! -z "$ASSUME_ROLE" ]; then
-  source $SCRIPT_DIR/lib/generate-aws-creds.sh
-
-  aws_credential_args="-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN"
-fi
+source $SCRIPT_DIR/lib/generate-aws-creds.sh
 
 BACKGROUND=${BACKGROUND:-""}
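Per the usage comment at the top of the script, invocation looks roughly like this (the environment and module names are illustrative):

```bash
# Run one module's tests against a named environment:
bash hack/run-tests.sh my-env my-module

# Same, using finch instead of docker:
CONTAINER_CLI=finch bash hack/run-tests.sh my-env my-module
```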
28 changes: 1 addition & 27 deletions hack/shell.sh
@@ -2,23 +2,12 @@
 
 environment=$1
 shell_command=$2
-shell_simple_command=$3
 
 set -Eeuo pipefail
 
 # You can run script with finch like CONTAINER_CLI=finch ./shell.sh <terraform_context> <shell_command>
 CONTAINER_CLI=${CONTAINER_CLI:-docker}
 
-# Right now the container images are only designed for amd64
-export DOCKER_DEFAULT_PLATFORM=linux/amd64
-
-AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-""}
-
-if [ ! -z "$AWS_DEFAULT_REGION" ]; then
-  echo "Error: AWS_DEFAULT_REGION must be set"
-  exit 1
-fi
-
 SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 
 source $SCRIPT_DIR/lib/common-env.sh
@@ -29,25 +18,10 @@ container_image='eks-workshop-environment'
 
 (cd $SCRIPT_DIR/../lab && $CONTAINER_CLI build -q -t $container_image .)
 
-aws_credential_args=""
-
-ASSUME_ROLE=${ASSUME_ROLE:-""}
-
-if [ ! -z "$ASSUME_ROLE" ]; then
-  source $SCRIPT_DIR/lib/generate-aws-creds.sh
-
-  aws_credential_args="-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN"
-fi
-
-command_args=""
+source $SCRIPT_DIR/lib/generate-aws-creds.sh
 
 interactive_args=""
 
-if [ ! -z "$shell_simple_command" ]; then
-  export EKS_CLUSTER_NAME=''
-  shell_command="$shell_simple_command"
-fi
-
 if [ -z "$shell_command" ]; then
   echo "Starting shell in container..."
   interactive_args="-it"
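With `shell_simple_command` removed, a sketch of the two remaining ways to use this script (environment name illustrative): interactive via `make shell`, or one-shot by passing the second positional argument directly.

```bash
# Interactive shell; the Makefile now forwards only the environment:
make shell environment=my-env

# One-shot command via the script's second argument:
bash hack/shell.sh my-env 'kubectl get pods -A'
```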
11 changes: 0 additions & 11 deletions hack/update-helm-versions.sh

This file was deleted.

11 changes: 0 additions & 11 deletions hack/verify-helm-metadata.sh

This file was deleted.

2 changes: 1 addition & 1 deletion lab/bin/use-cluster
@@ -35,7 +35,7 @@ EKS_IP_FAMILY=ipv4
 set +a
 EOT
 
-aws eks update-kubeconfig --name $cluster_name > /dev/null 2>&1
+aws eks update-kubeconfig --name $cluster_name
 
 if [[ -v C9_USER ]]; then
   echo "Granting C9_USER access to the cluster via the AWS Console ${C9_USER}"
1 change: 1 addition & 0 deletions lab/scripts/entrypoint.sh
@@ -12,5 +12,6 @@ if [ $# -eq 0 ]
 then
   bash -l
 else
+  source /home/ec2-user/.bashrc.d/env.bash
   bash -c "$@"
 fi
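The added `source` matters because `bash -c` starts a non-login shell that does not read the profile scripts a login shell (`bash -l`) does, so one-shot commands previously ran without the lab environment variables. A sketch of the difference (the variable name is taken from elsewhere in this commit; the behavior is assumed):

```bash
# Non-login 'bash -c' skips the login profile chain, so env.bash never loads:
bash -c 'echo "$EKS_CLUSTER_NAME"'    # likely empty before this change

# What the entrypoint now effectively does for one-shot commands:
bash -c 'source /home/ec2-user/.bashrc.d/env.bash && echo "$EKS_CLUSTER_NAME"'
```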