
Deploying Scylla on EKS

This guide focuses on deploying Scylla on EKS with improved performance. The performance tuning applied by the setup script is specific to the instance types used here and won't carry over to other machine tiers. It configures the kubelets on the EKS nodes with the static CPU manager policy and assembles the local NVMe SSDs into RAID0 for maximum performance.

Most of the commands used to set up the Scylla cluster are the same across environments, so we keep them in the separate general guide.

TL;DR

If you don't want to run the commands step-by-step, you can just run a script that will set everything up for you:

# Edit according to your preference
EKS_REGION=us-east-1
EKS_ZONES=us-east-1a,us-east-1b,us-east-1c

# From inside the examples/eks folder
cd examples/eks
./eks.sh -z "$EKS_ZONES" -r "$EKS_REGION"

After you deploy, see how you can benchmark your cluster with cassandra-stress.

Walkthrough

EKS Setup

Configure environment variables

First of all, we export all the configuration options as environment variables. Edit according to your own environment.

EKS_REGION=us-east-1
EKS_ZONES=us-east-1a,us-east-1b,us-east-1c
CLUSTER_NAME=scylla-demo

Creating an EKS cluster

For this guide, we'll create an EKS cluster with the following:

  • A NodeGroup of 3 i3.2xlarge Nodes, where the Scylla Pods will be deployed. These nodes only accept pods that tolerate the scylla-clusters taint.
  - name: scylla-pool
    instanceType: i3.2xlarge
    desiredCapacity: 3
    labels:
      pool: "scylla-pool"
    taints:
      role: "scylla-clusters:NoSchedule"
    ssh:
      allow: true
    kubeletExtraConfig:
      cpuManagerPolicy: static
  • A NodeGroup of 4 c4.2xlarge Nodes, to deploy cassandra-stress later on. These nodes only accept pods that tolerate the cassandra-stress taint.
  - name: cassandra-stress-pool
    instanceType: c4.2xlarge
    desiredCapacity: 4
    labels:
      pool: "cassandra-stress-pool"
    taints:
      role: "cassandra-stress:NoSchedule"
    ssh:
      allow: true
  • A NodeGroup of 1 i3.large Node, where the monitoring stack and operator will be deployed.
  - name: monitoring-pool
    instanceType: i3.large
    desiredCapacity: 1
    labels:
      pool: "monitoring-pool"
    ssh:
      allow: true
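The node group fragments above can be combined into a single eksctl ClusterConfig and created in one command. A minimal sketch, assuming the eksctl `v1alpha5` schema (the exact taint/kubelet syntax may differ across eksctl versions, and the file name `cluster.yaml` is arbitrary):

```shell
# Assemble the three node groups into one eksctl ClusterConfig.
# Name, region, and zones mirror the variables exported earlier.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: scylla-demo
  region: us-east-1
availabilityZones: [us-east-1a, us-east-1b, us-east-1c]
nodeGroups:
  - name: scylla-pool
    instanceType: i3.2xlarge
    desiredCapacity: 3
    labels:
      pool: "scylla-pool"
    taints:
      role: "scylla-clusters:NoSchedule"
    ssh:
      allow: true
    kubeletExtraConfig:
      cpuManagerPolicy: static
  - name: cassandra-stress-pool
    instanceType: c4.2xlarge
    desiredCapacity: 4
    labels:
      pool: "cassandra-stress-pool"
    taints:
      role: "cassandra-stress:NoSchedule"
    ssh:
      allow: true
  - name: monitoring-pool
    instanceType: i3.large
    desiredCapacity: 1
    labels:
      pool: "monitoring-pool"
    ssh:
      allow: true
EOF

# Create the cluster (requires AWS credentials; typically takes ~15 minutes):
# eksctl create cluster -f cluster.yaml
```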

Installing Required Tools

Installing script third party dependencies

The script requires several third-party dependencies, at minimum eksctl, kubectl, and helm, all of which are also used later in this guide.

Install the local provisioner

We deploy the local volume provisioner, which discovers the mount points of the local disks and makes them available as PersistentVolumes.

helm install local-provisioner examples/common/provisioner
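Once the provisioner discovers a mounted disk, it generates a PersistentVolume similar to the following. The object names, storage class, path, and hostname below are illustrative; the real objects are created by the provisioner, not by hand:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-a1b2c3          # generated name (illustrative)
spec:
  capacity:
    storage: 1900Gi              # size of the discovered RAID0 array
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-raid-disks   # assumed storage class name
  local:
    path: /mnt/raid-disks/disk0        # assumed discovery mount point
  nodeAffinity:                        # pins the PV to the node owning the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ip-10-0-0-1.ec2.internal
```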

Deploy tuning DaemonSet

Deploy the tuning DaemonSet; it configures your disks and applies several performance optimizations.

kubectl apply -f node-setup-daemonset.yaml
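For reference, the disk-related part of the DaemonSet boils down to something like the script below. This is a hedged sketch only: the device names, RAID device, and mount path are assumptions, and the actual logic lives in node-setup-daemonset.yaml. The commands need root on the node, so here we only syntax-check the script:

```shell
# Illustrative sketch of the RAID0 setup the tuning DaemonSet performs.
# Device names (/dev/nvme1n1, /dev/nvme2n1) and the mount path are assumptions.
cat > raid0-setup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Assemble the local NVMe instance-store disks into a single RAID0 array
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
mkfs.xfs /dev/md0
mkdir -p /mnt/raid-disks/disk0
mount /dev/md0 /mnt/raid-disks/disk0
EOF

# Syntax-check only; actually running it requires root on the EKS node.
bash -n raid0-setup.sh
```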

Installing the Scylla Operator and Scylla

Now you can follow the generic guide to launch your Scylla cluster in a highly performant environment.

Accessing the database

Instructions on how to access the database can also be found in the generic guide.

Deleting an EKS cluster

Once you are done with your experiments, delete your cluster with the following command:

eksctl delete cluster --region="${EKS_REGION}" "${CLUSTER_NAME}"