
# kraken-ci

Bring up a configured Jenkins server.

## Install terraform and terraform-provider-coreosbox

```shell
brew install terraform
brew tap 'samsung-cnct/terraform-provider-coreosbox'
brew install terraform-provider-coreosbox
```

On a non-macOS platform, follow the installation directions for terraform, then unzip the appropriate release of terraform-provider-coreosbox into the terraform plugin path.
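A common third-party plugin location for older Terraform releases is `~/.terraform.d/plugins`; a Linux sketch might look like the following (the download URL and version are placeholders, so those lines are shown commented out — take the real artifact name from the project's releases page):

```shell
# Create the plugin directory terraform searches for third-party providers
PLUGIN_DIR="$HOME/.terraform.d/plugins"
mkdir -p "$PLUGIN_DIR"

# Download and unzip the release there (URL and version are placeholders):
# curl -L -o provider.zip "https://github.com/samsung-cnct/terraform-provider-coreosbox/releases/download/<version>/<release>.zip"
# unzip provider.zip -d "$PLUGIN_DIR"
```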

## Install ansible

```shell
pip install -r requirements.txt
```

NOTE: If you are running in a virtualenv, you'll need to add `ansible_python_interpreter=${VIRTUAL_ENV}/bin/python` to the `localhost` entry in your inventory.
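For example, a minimal local inventory might look like this (the `inventory` filename and the fallback path are assumptions for illustration; `VIRTUAL_ENV` is set automatically while a virtualenv is active):

```shell
# Fallback only so the snippet works outside a virtualenv; normally
# VIRTUAL_ENV is already set by `source venv/bin/activate`.
VIRTUAL_ENV="${VIRTUAL_ENV:-$HOME/.venvs/kraken-ci}"

cat > inventory <<EOF
localhost ansible_connection=local ansible_python_interpreter=${VIRTUAL_ENV}/bin/python
EOF
```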

You'll need to pre-create some credentials for each instance of kraken-ci:

- kraken-ci
  - Choose a name for this instance; we'll call it "example-kraken-ci".
  - Generate ssh keys, e.g.: `mkdir -p keys && ssh-keygen -q -t rsa -N '' -C example-kraken-ci -f ./keys/id_rsa`
  - Generate secrets:
    - TODO: how to generate jenkins secrets
    - TODO: how to generate docker/config.json
    - Alternatively, re-use secrets from a previous kraken-ci installation (we'll assume example-kraken-ci-prime):
      `aws s3 cp --recursive s3://sundry-automata/secrets/example-kraken-ci-prime.kubeme.io ./secrets`
  - Create a pull request adding your own github id to `ansible/roles/ci-properties/defaults/main.yaml`.
- kraken-ci-jobs
  - Create a pull request adding your own github id to `jobs/samsung-cnct-project-pr.yaml`.
- AWS
  - Choose a region; we'll assume "us-west-2".
  - Make an s3 bucket: `s3://example-kraken-ci-backup`
  - Choose an s3 bucket; we'll assume "sundry-automata".
  - Upload the generated ssh keys:
    `aws s3 cp --recursive ./keys/ s3://sundry-automata/keys/example-kraken-ci.kubeme.io/`
  - Upload the jenkins secrets:
    `aws s3 cp --recursive ./secrets/ s3://sundry-automata/secrets/example-kraken-ci.kubeme.io/`
  - `AWS_DEFAULT_REGION`: this would be "us-west-2"
  - `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`
- GKE/GCE
  - Pick a "dev project" (we'll assume this to be k8s-work).
  - Pick a "prod project" (we'll assume this to be cnct-productioncluster).
  - Generate JSON-formatted keys for the 'Compute Engine default service account' (or another account with at least Editor access) for both projects:
    - `GCE_SERVICE_ACCOUNT_ID`: the id of the dev project service account
    - `GCE_PROD_SERVICE_ACCOUNT_ID`: the id of the prod project service account
  - Upload the dev project key to `s3://sundry-automata/secrets/example-kraken-ci.kubeme.io/gcloud/service-account.json`.
  - Upload the prod project key to `s3://sundry-automata/secrets/example-kraken-ci.kubeme.io/gcloud/prod-service-account.json`.
- Slack
  - `SLACK_API_TOKEN`
    - Choose/create a slack team; we'll assume "example-team".
    - Manage apps for that team at https://example-team.slack.com/apps
    - Add a jenkins-ci app.
    - Choose a channel; we'll assume "#pipeline".
    - Look for the "Token" setting on the next page.
- Github
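Under the assumptions above (instance name "example-kraken-ci", bucket "sundry-automata"), the key-generation and upload steps can be sketched as one script; the `aws` commands are echoed rather than run, since they need valid credentials and an existing `./secrets` directory:

```shell
#!/bin/sh
set -eu

KRAKEN_CI_NAME="example-kraken-ci"   # instance name chosen above
BUCKET="sundry-automata"             # s3 bucket chosen above
DOMAIN="${KRAKEN_CI_NAME}.kubeme.io"

# Generate an ssh keypair for this instance
mkdir -p keys
ssh-keygen -q -t rsa -N '' -C "$KRAKEN_CI_NAME" -f ./keys/id_rsa

# Upload keys and secrets (drop the echo to run for real)
echo aws s3 cp --recursive ./keys/ "s3://${BUCKET}/keys/${DOMAIN}/"
echo aws s3 cp --recursive ./secrets/ "s3://${BUCKET}/secrets/${DOMAIN}/"
```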

Create an env file or otherwise populate your environment with the required secrets and settings.

```shell
$ cat > .env-example-kraken-ci <<EOS
export AWS_ACCESS_KEY_ID="<aws access key>"
export AWS_SECRET_ACCESS_KEY="<aws secret key>"
export AWS_DEFAULT_REGION="<aws region>"
export SLACK_API_TOKEN="<slack api token>"
export GITHUB_CLIENT_ID="<github app id>"
export GITHUB_CLIENT_KEY="<github app key>"
export GITHUB_ACCESS_TOKEN="<github token>"
export GITHUB_USERNAME="<github user>"
export GCE_SERVICE_ACCOUNT_ID="<dev project SA id>"
export GCE_PROD_SERVICE_ACCOUNT_ID="<prod project SA id>"

export KRAKEN_CI_NAME="example-kraken-ci"
EOS
```
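A small sanity check before running setup can catch a missing variable early (a hypothetical helper sketch, not part of this repo):

```shell
# Verify that every required variable is set and non-empty
missing=""
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION \
         SLACK_API_TOKEN GITHUB_CLIENT_ID GITHUB_CLIENT_KEY \
         GITHUB_ACCESS_TOKEN GITHUB_USERNAME GCE_SERVICE_ACCOUNT_ID \
         GCE_PROD_SERVICE_ACCOUNT_ID KRAKEN_CI_NAME; do
  eval "val=\${$v:-}"
  [ -n "$val" ] || missing="$missing $v"
done
[ -z "$missing" ] || echo "missing:$missing"
```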

Run:

```shell
$ . .env-example-kraken-ci && ./setup.sh --dump-data yes
```

## Try it out

Point your browser to:

https://example-kraken-ci.kubeme.io

You should see the Jenkins dashboard.

## To update in place

```shell
$ . .env-example-kraken-ci && ./setup.sh --dump-data no
```

No graceful termination / draining is in place, so coordinate with your team members accordingly.

## To destroy

```shell
$ . .env-example-kraken-ci && ./destroy.sh
```

## To use test certificates

To test / verify Let's Encrypt connectivity using their staging server, use the `--test-instance yes` flag or export `TEST_INSTANCE=yes`. This will produce invalid certificates that may be rejected by your browser.
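One way to check which CA issued the served certificate is with `openssl` (the hostname is the example instance from above; the command needs network access to the running instance, so it is shown commented out):

```shell
HOST=example-kraken-ci.kubeme.io
# Staging-issued certificates typically show a "Fake LE" / "(STAGING)" issuer
# instead of a production Let's Encrypt one:
# echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
#   | openssl x509 -noout -issuer
```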

## Use environment variables

Instead of specifying all of the command-line switches, you can export the environment variables used in the `utils.sh` file.
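For example, `TEST_INSTANCE` (from the section above) can be exported instead of passed as a flag; any other variable names should be checked against `utils.sh` rather than guessed. The actual setup invocation is commented out here since it needs the env file and credentials:

```shell
# Same effect as passing --test-instance yes on the command line
export TEST_INSTANCE=yes
# . .env-example-kraken-ci && ./setup.sh
```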

## Known Issues

- Currently no locking is implemented for the S3 state backend. Coordinate with your team members accordingly.
- Jenkins secrets are manually generated.
- docker/config.json generation is undocumented.