ceph-devstack

A tool for testing Ceph locally using nested rootless podman containers

Overview

ceph-devstack is a tool that can deploy and manage containerized versions of teuthology and its associated services, to test Ceph (or just teuthology) on your local machine. It lets you avoid:

  • Accessing Ceph's Sepia lab
  • Needing dedicated storage devices to test Ceph OSDs

Basically, the goal is that you can test your Ceph branch locally using containers as storage test nodes.

It is currently under active development and has not yet had a formal release.

Supported Operating Systems

☑︎ CentOS Stream 9 should work out of the box

☑︎ CentOS Stream 8 mostly works, but has not yet passed a Ceph test

☐ A recent Fedora should work but has not been tested

☒ Ubuntu does not currently ship a new enough podman

☒ macOS will require special effort to support, since podman operations are done inside a VM

Requirements

  • A supported operating system
  • podman 4.0+ using the crun runtime
    • On CentOS 8, modify /etc/containers/containers.conf to set the runtime
  • Linux kernel 5.12+, or 4.15+ and fuse-overlayfs
  • cgroup v2
  • With podman <5.0, podman's DNS plugin, from the podman-plugins package
  • A user account with sudo access that is also a member of the disk group
  • The following sysctl settings:
    • fs.aio-max-nr=1048576
    • kernel.pid_max=4194304
  • If using SELinux in enforcing mode:
    • setsebool -P container_manage_cgroup=true
    • setsebool -P container_use_devices=true
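
If you prefer to apply the runtime and sysctl requirements by hand, here is a minimal sketch (the sysctl.d file name is arbitrary; ceph-devstack doctor --fix, described below, automates the same fixes):

# In /etc/containers/containers.conf, select the crun runtime (CentOS 8):
#   [engine]
#   runtime = "crun"
# Persist the required sysctl settings:
sudo tee /etc/sysctl.d/90-ceph-devstack.conf <<'EOF'
fs.aio-max-nr = 1048576
kernel.pid_max = 4194304
EOF
sudo sysctl --system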

ceph-devstack doctor will check the above and report any issues along with suggested remedies; its --fix flag will apply them for you.

Setup

sudo usermod -a -G disk $(whoami)  # and re-login afterward
git clone https://github.com/ceph/teuthology/
cd teuthology && ./bootstrap       # sets up teuthology's own dependencies
python3 -m venv venv               # separate venv for ceph-devstack itself
source ./venv/bin/activate
python3 -m pip install git+https://github.com/zmc/ceph-devstack.git
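
After installation, a quick sanity check (assuming the usual --help flag; doctor is described under Requirements above):

ceph-devstack --help    # confirm the CLI is available inside the venv
ceph-devstack doctor    # verify host prerequisites before first use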

Configuration

ceph-devstack ships with a default configuration. It can be extended by placing a file at ~/.config/ceph-devstack/config.toml or by using the --config-file flag.

ceph-devstack config dump will output the current configuration.

As an example, the following configuration will use a local image for paddles with the tag TEST; it will also create ten testnode containers; and will build its teuthology container from the git repo at ~/src/teuthology:

containers:
  paddles:
    image: localhost/paddles:TEST
  testnode:
    count: 10
  teuthology:
    repo: ~/src/teuthology
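
To confirm the overrides are picked up, drop the snippet above into the default path and dump the merged result ($EDITOR is a stand-in for your editor of choice):

mkdir -p ~/.config/ceph-devstack
$EDITOR ~/.config/ceph-devstack/config.toml   # paste the snippet above
ceph-devstack config dump                     # prints the merged configuration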

Usage

By default, pre-built container images are pulled from quay.io/ceph-infra. The images can be overridden via the config file. It's also possible to build images from on-disk git repositories.

First, you'll want to pull all the images:

ceph-devstack pull

Optional: If building any images from repos:

ceph-devstack build

Next, you can start the containers with:

ceph-devstack start

Once everything is started, a message similar to this will be logged:

View test results at http://smithi065.front.sepia.ceph.com:8081/

This link points to the running Pulpito instance. Test archives are also stored in the --data-dir (default: ~/.local/share/ceph-devstack).
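
To inspect the raw archives without going through Pulpito, list the data directory (only the default path comes from this README; the layout underneath is whatever teuthology writes):

ls -R ~/.local/share/ceph-devstack/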

To watch teuthology's output, you can:

podman logs -f teuthology

If you want testnode containers to be replaced as they are stopped and destroyed, you can:

ceph-devstack watch

When finished, this command removes all the resources that were created:

ceph-devstack remove

Specifying a Test Suite

By default, we run the teuthology:no-ceph suite to self-test teuthology. If we wanted to test Ceph itself, we could use the orch:cephadm:smoke-small suite:

export TEUTHOLOGY_SUITE=orch:cephadm:smoke-small

It's possible to skip the automatic suite-scheduling behavior:

export TEUTHOLOGY_SUITE=none
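
For example, a complete run against the cephadm smoke suite, assuming ceph-devstack start reads TEUTHOLOGY_SUITE from the environment as the examples above imply:

export TEUTHOLOGY_SUITE=orch:cephadm:smoke-small
ceph-devstack start
podman logs -f teuthology   # follow the scheduled run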

Using testnodes from an existing lab

If you need to use "real" testnodes and have access to a lab, there are a few additional steps to take. We will use the Sepia lab as an example below:

To give the teuthology container access to your SSH private key (via podman secret):

export SSH_PRIVKEY_PATH=$HOME/.ssh/id_rsa

To lock machines from the lab:

ssh teuthology.front.sepia.ceph.com
~/teuthology/virtualenv/bin/teuthology-lock \
  --lock-many 1 \
  --machine-type smithi \
  --desc "teuthology dev testing"

Once you have your machines locked, you need to provide a list of their hostnames and their machine type:

export TEUTHOLOGY_TESTNODES="smithiXXX.front.sepia.ceph.com,smithiYYY.front.sepia.ceph.com"
export TEUTHOLOGY_MACHINE_TYPE="smithi"
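
Putting it together, a sketch of the full flow with lab nodes, assuming the variables are exported in the same shell before the containers start (hostnames are the placeholders from above):

export SSH_PRIVKEY_PATH=$HOME/.ssh/id_rsa
export TEUTHOLOGY_TESTNODES="smithiXXX.front.sepia.ceph.com,smithiYYY.front.sepia.ceph.com"
export TEUTHOLOGY_MACHINE_TYPE="smithi"
ceph-devstack start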

Setup for development

  1. Fork the repo if you have not already done so.
  2. Clone your fork:
git clone https://github.com/<user-name>/ceph-devstack
  3. Add the original repo as the upstream remote (so you can pull in upstream changes without extra branches):
git remote add upstream https://github.com/zmc/ceph-devstack
  4. Create a virtual environment in the root of the ceph-devstack checkout and install the Python dependencies:
python3 -m venv venv
./venv/bin/pip3 install -e .
  5. Activate the venv:
source venv/bin/activate
  6. Run the doctor command to check and fix the dependencies that ceph-devstack needs:
ceph-devstack -v doctor --fix
  7. Build, create, and start all the containers:
ceph-devstack -v build
ceph-devstack -v create
ceph-devstack -v start
  8. Wait for teuthology to finish, then follow its logs:
ceph-devstack wait teuthology
podman logs -f teuthology
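
When the run finishes, tear everything down the same way as a normal run (see Usage above):

ceph-devstack remove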
