
Kubernetes Container Storage Interface (CSI) plugin for Equinix Metal


csi-packet was the Kubernetes CSI implementation for Equinix Metal Block Storage provided by Datera. Read more about the CSI standard at https://github.com/container-storage-interface/spec.

This repository is End-of-Life, meaning that this software is no longer supported nor maintained by Equinix Metal or its community.

The following information is obsolete. Please see https://metal.equinix.com/developers/docs/kubernetes/kubernetes-on-equinix-metal/#storage for alternatives.


If you have any queries about CSI, or would like to raise any bug reports or feature requests, please contact support.

Please Note: "Elastic Block Storage is only available in Core Legacy Sites: AMS1, DFW2, EWR1, NRT1, SJC1. If you do not have access to these sites, you may reach out to our support team to request it."

Requirements

Given the current state of Kubernetes, running the CSI plugin requires a few things. Please read through the requirements carefully, as they are critical to running the CSI on a Kubernetes cluster.

Version

Recommended versions of Equinix Metal CSI based on your Kubernetes version:

  • Equinix Metal CSI version v0.0.2 supports Kubernetes version >=v1.10

Privilege

In order for the CSI to work, your Kubernetes cluster must allow privileged pods. Both the kube-apiserver and the kubelet must start with the flag --allow-privileged=true.
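For example, both components would include the flag among their other startup arguments (a sketch; the ellipses stand in for the rest of each component's normal configuration):

kube-apiserver --allow-privileged=true ...
kubelet --allow-privileged=true ...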

Deploying in a Kubernetes cluster

Token

To run csi-packet, you need an Equinix Metal API key and the ID of the project in which your cluster is running. If you are already logged in, you can create an API key by clicking on your profile in the upper right, then "API keys". To get the project ID, click into the project that your cluster is under and select "project settings" from the header; under General you will see "Project ID". Once you have this information, you will be able to fill in the config needed for the CSI driver.

Create config

Copy deploy/template/secret.yaml to a local file:

cp deploy/template/secret.yaml packet-cloud-config.yaml

Replace the placeholders in the copy with your API key and project ID. When you're done, packet-cloud-config.yaml should look something like this:

apiVersion: v1
kind: Secret
metadata:
  name: packet-cloud-config
  namespace: kube-system
stringData:
  cloud-sa.json: |
    {
      "apiKey": "abc123abc123abc123",
      "projectID": "abc123abc123abc123"
    }

Then run:

kubectl apply -f ./packet-cloud-config.yaml

You can confirm that the secret was created in the kube-system namespace with the following:

$ kubectl -n kube-system get secrets packet-cloud-config
NAME                  TYPE                                  DATA      AGE
packet-cloud-config   Opaque                                1         2m

Note: This is the exact same config as used for Equinix Metal CCM, allowing you to create a single set of credentials in a single secret to support both.

Set up Driver

$ kubectl -n kube-system apply -f deploy/kubernetes/setup.yaml
$ kubectl -n kube-system apply -f deploy/kubernetes/node.yaml
$ kubectl -n kube-system apply -f deploy/kubernetes/controller.yaml
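Once applied, you can confirm that the driver pods came up (filtering on "csi" is a convenience, assuming the manifest pod names contain it):

$ kubectl -n kube-system get pods | grep csi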

Run demo (optional):

$ kubectl apply -f deploy/demo/demo-deployment.yaml
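If you run the demo, you can watch the provisioned volume and its consumer come up (this assumes the demo manifest creates a PersistentVolumeClaim, which is what exercises the driver):

$ kubectl get pvc,pods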

Command-Line Options

You can run the binary with --help to get command-line options. Important options are listed below, with an example invocation after the list:

  • --endpoint=<path> : (required) path to the kubelet registration socket. According to the spec, this should be /var/lib/kubelet/plugins/<unique_provider_name>/csi.sock. Thus we strongly recommend you mount it at /var/lib/kubelet/plugins/csi.packet.net/csi.sock. The deployment files in this repository assume that path.
  • --v=<level> : (optional) verbosity level per logrus
  • --config=<path> : (optional) path to config file, in json format, that contains the Equinix Metal configuration information as set below.
  • --nodeid=<id> : (optional) override the unique ID of this node as understood by the Equinix Metal API. If not provided, will retrieve the node ID from the Equinix Metal Metadata service.
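Putting these together, an invocation on a node might look like the following (a sketch: the binary name and config path are assumptions for illustration, and the endpoint follows the recommended path above):

./csi-packet --endpoint=/var/lib/kubelet/plugins/csi.packet.net/csi.sock --config=/etc/packet/cloud-sa.json --v=5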

Config File Format

The configuration file passed to --config must be a JSON file, and should contain the following keys (an example follows the list):

  • apiKey : Equinix Metal API key to use
  • projectID : Equinix Metal project ID
  • facilityID : Equinix Metal facility ID
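For example, a complete config file would look like this (placeholder values):

{
  "apiKey": "abc123abc123abc123",
  "projectID": "abc123abc123abc123",
  "facilityID": "abc123abc123abc123"
}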

Environment Variables

In addition to passing information via the config file, you can set it in environment variables. Environment variables always override any setting in the config file. The variables are listed below, with an example after the list:

  • PACKET_API_KEY
  • PACKET_PROJECT_ID
  • PACKET_FACILITY_ID
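For example, to supply all three via the environment instead of a config file (placeholder values):

$ export PACKET_API_KEY="abc123abc123abc123"
$ export PACKET_PROJECT_ID="abc123abc123abc123"
$ export PACKET_FACILITY_ID="abc123abc123abc123"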

Running the csi-sanity tests

csi-sanity is a set of integration tests that can be run on a host where a CSI plugin is running. In a Kubernetes cluster, csi-sanity can be run on a node and communicate with the daemonset node controller running there.

The steps are as follows:

  1. Install the csi-packet plugin as above into a Kubernetes cluster, but use node_controller_sanity_test.yaml instead of node.yaml. The crucial difference is that the driver is started with the Equinix Metal credentials, so that the CSI controller is running.
  2. SSH to a node, install a Go environment, and build the csi-sanity binaries (a build sketch follows this list).
  3. Run ./csi-sanity --ginkgo.v --csi.endpoint=/var/lib/kubelet/plugins/csi.packet.net/csi.sock
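For step 2, a minimal build sketch, assuming the csi-sanity sources come from the kubernetes-csi/csi-test repository (exact steps may vary by release):

$ git clone https://github.com/kubernetes-csi/csi-test.git
$ cd csi-test/cmd/csi-sanity
$ go build -o csi-sanity .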

Please report any failures to this repository.

Build and Design

To build the Equinix Metal CSI and understand its design, please see BUILD.md.