feat: add run scripts for SCS HM and Prometheus #137

Open · wants to merge 1 commit into base: main
3 changes: 3 additions & 0 deletions .gitignore
@@ -1,3 +1,6 @@
# Generated kubeconfig files
kubeconfig_*

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
29 changes: 29 additions & 0 deletions Containerfile.runner
@@ -0,0 +1,29 @@
FROM docker.io/library/python:3.12-alpine3.20

# --- root --- #

RUN apk update && \
    apk upgrade && \
    apk add --update git gcc bash htop tmux vim g++ libffi-dev musl-dev py3-pip

RUN addgroup -g 1000 runner && \
    adduser -D -G runner -u 1000 runner



# --- user --- #

USER runner

# You can optionally copy the working directory into the container. Then no outside mount is needed.
#COPY --chown=1000:1000 ./ /data/

COPY ./requirements.txt /data/requirements.txt

WORKDIR /data
ENV KUBECONFIG=/kubeconfig
ENV PATH=/home/runner/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
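# Note: pip below runs as the unprivileged "runner" user, so packages should end up in a
# user install under /home/runner/.local, which is why ~/.local/bin is added to PATH above.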

RUN python3 -m pip install -r requirements.txt

CMD ["bash"]
39 changes: 24 additions & 15 deletions README.md
@@ -8,13 +8,19 @@ The SCS Health Monitor project aims to ensure the robustness and reliability of

## Getting Started

If you have Podman (or Docker) available on your system, check out the `runner-podman.sh` and `run-prometheus-in-kind.sh` scripts.
The first script builds a container image with all the tools needed to run the test framework and then starts a shell in the container.
The second script sets up Prometheus in a local KinD cluster that can then be used by the framework's exporter.

In case you wish to run the tests without a container-based setup, read on.
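
As a rough sketch (assuming the scripts are executable and Podman, kind, kubectl and helm are installed), the container-based workflow boils down to:

```bash
# Build the runner image and open an interactive shell inside the container
./runner-podman.sh

# In a separate terminal: create a local KinD cluster and install the Prometheus stack
./run-prometheus-in-kind.sh
```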

To get started with the SCS Health Monitor project, follow these steps:

1. Clone this repository to your local machine.
2. Install the required dependencies listed in the `requirements.txt` file.
2. Create a local Python virtual environment and install the required dependencies listed in the `requirements.txt` file.
3. Review the existing Gherkin scenarios in the `features` directory to understand the testing coverage.
4. Create a *clouds.yaml* file int the root of the repository to be able to perform API calls to OpenStack.
5. Create a *env.yaml* file containing configuration needed for performing the tests
4. Create a `clouds.yaml` file in the root of the repository to be able to perform API calls to OpenStack. See `clouds.example.yaml`.
5. Create an `env.yaml` file containing the configuration needed for performing the tests. See `env.example.yaml`.
6. Execute the tests using the Behave library to validate the functionality and performance of your OpenStack environment (see the command sketch below).
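
Put together, a minimal non-container setup could look like the following sketch (paths and the chosen feature directory are illustrative):

```bash
# Assumes clouds.yaml and env.yaml already exist in the repository root (see the example files)
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
behave features/   # or: behavex --parallel-scheme feature
```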

## Usage
@@ -42,7 +48,7 @@ There is a possibility to run it on the [behavex](https://github.com/hrcorval/be
Here are some basic commands to run the tests:

```bash
behavex # Run all scenarios parallel - not recomended
behavex # Run all scenarios in parallel - not recommended
behavex --parallel-scheme feature # Run all of the scenarios, but parallel only the features
behavex features/ # Run scenarios in a specific feature file
behavex -t @tag # Run scenarios with a specific tag
@@ -61,21 +67,23 @@ For the purposes of gathering information from the test cases being performed ag

[Here](./docs/ObservabilityStack/SetupObservabilityStack.md) you can find a useful quickstart guide on setting up the Prometheus Stack and Prometheus push gateway locally.

The provided script `run-prometheus-in-kind.sh` runs the Prometheus stack in a local KinD cluster. The address `localhost:30001` is then the correct push gateway endpoint for the test runs (see below).
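
As a quick sanity check (assuming the NodePort mapping from the provided kind config), the push gateway should answer on that address:

```bash
# An HTTP 200 here means the push gateway endpoint is reachable from the host
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:30001/metrics
```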

## Exporting metrics to Prometheus Push Gateway
To be able to push the metrics gathered during test executions, you must first configure the prometheus push gateway endpoint. You achieve this by adding these lines to a *env.yaml*:
To be able to push the metrics gathered during test executions, you must first configure the Prometheus Push Gateway endpoint. You achieve this by adding these lines to an `env.yaml`:

``` bash
# Required
# If not present the metrics won't
# Required
# If not present the metrics won't
# be pushed by the test scenarios
PROMETHEUS_ENDPOINT: "localhost:30001"

# Optional (default: "SCS-Health-Monitor")
# Specify the job label value that
# Optional (default: "SCS-Health-Monitor")
# Specify the job label value that
# gets added to the metrics
PROMETHEUS_BATCH_NAME: "SCS-Health-Monitor"

# Required
# Required
# The name of the cloud from clouds.yaml
# that the test scenarios will be run on
CLOUD_NAME: "gx"
@@ -85,18 +93,19 @@ CLOUD_NAME: "gx"
APPEND_TIMESTAMP_TO_BATCH_NAME: true
```

This *env.yaml* file must be placed in the root of the repository. This is where you should be also issuing all the *behave <...>* commands to execute the test scenarios.
This `env.yaml` file must be placed in the root of the repository. This is also where you should issue all `behave` commands to execute the test scenarios.
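
To illustrate what the endpoint and batch name map to, a metric can also be pushed by hand via the Push Gateway's HTTP API; the metric name below is made up purely for demonstration:

```bash
# Push a throwaway gauge under the job label "SCS-Health-Monitor"
echo "scs_hm_example_metric 1.23" | \
  curl --data-binary @- http://localhost:30001/metrics/job/SCS-Health-Monitor
```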

## Collaborators
- Piotr Bigos [@piobig2871](https://github.com/piobig2871)
- Erik Kostelanský [@Erik-Kostelansky-dNation](https://github.com/Erik-Kostelansky-dNation)
- Katharina Trentau [@fraugabel](https://github.com/fraugabel)
- Ľubomír Dobrovodský [@dobrovodskydnation](https://github.com/dobrovodskydnation)
- Tomáš Smädo [@tsmado](https://github.com/tsmado)
- Dominik Pataky [@bitkeks](https://github.com/bitkeks)

## Useful links

### [Openstack python SDK documentation](https://docs.openstack.org/openstacksdk/latest/user/)
### [Openstack CLI tool documentation](https://docs.openstack.org/python-openstackclient/latest/)
### [Parameterisation of tests using scenario outlines](https://jenisys.github.io/behave.example/tutorials/tutorial04.html)
### [Short but concise tutorial on how to setup behave test scenarios](https://behave.readthedocs.io/en/stable/tutorial.html)
* [OpenStack Python SDK documentation](https://docs.openstack.org/openstacksdk/latest/user/)
* [OpenStack CLI tool documentation](https://docs.openstack.org/python-openstackclient/latest/)
* [Parameterisation of tests using scenario outlines](https://jenisys.github.io/behave.example/tutorials/tutorial04.html)
* [Short but concise tutorial on how to set up behave test scenarios](https://behave.readthedocs.io/en/stable/tutorial.html)
19 changes: 19 additions & 0 deletions clouds.example.yaml
@@ -0,0 +1,19 @@
# This is a clouds.yaml file, which can be used by OpenStack tools as a source
# of configuration on how to connect to a cloud. If this is your only cloud,
# just put this file in ~/.config/openstack/clouds.yaml and tools like
# python-openstackclient will just work with no further config. (You will need
# to add your password to the auth section)
# If you have more than one cloud account, add the cloud entry to the clouds
# section of your existing file and you can refer to them by name with
# OS_CLOUD=openstack or --os-cloud=openstack
clouds:
  openstack:
    auth:
      auth_url: https://api.myopenstack.cloud:5000
      application_credential_id: "some-random-credential-id"
      application_credential_secret: "some-random-string-from-clouds-yaml"
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"
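
# Once real credentials are filled in, the entry can be selected by name, e.g. (requires
# python-openstackclient; shown here only as a hypothetical connectivity check):
#   export OS_CLOUD=openstack
#   openstack server list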

6 changes: 3 additions & 3 deletions docs/ObservabilityStack/k8s/nodePorts.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
  type: NodePort
  selector:
    app.kubernetes.io/instance: my-kube-prometheus-stack
    app.kubernetes.io/instance: kube-prometheus-stack
    app.kubernetes.io/name: grafana
  ports:
  - port: 3000
@@ -24,7 +24,7 @@ metadata:
spec:
  type: NodePort
  selector:
    app.kubernetes.io/instance: my-prometheus-pushgateway
    app.kubernetes.io/instance: prometheus-pushgateway
    app.kubernetes.io/name: prometheus-pushgateway
  ports:
  - port: 9091
@@ -44,7 +44,7 @@ spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: prometheus
    operator.prometheus.io/name: my-kube-prometheus-stack-prometheus
    operator.prometheus.io/name: kube-prometheus-stack-prometheus
  ports:
  - port: 9090
    # By default and for convenience, the `targetPort` is set to
11 changes: 11 additions & 0 deletions docs/ObservabilityStack/kind/kind-config-prometheus.yaml
@@ -0,0 +1,11 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
25 changes: 25 additions & 0 deletions env.example.yaml
@@ -0,0 +1,25 @@
OS_AUTH_TYPE: v3applicationcredential
OS_AUTH_URL: https://api.myopenstack.cloud:5000
OS_IDENTITY_API_VERSION: 3
OS_REGION_NAME: "RegionOne"
OS_INTERFACE: public
OS_APPLICATION_CREDENTIAL_ID: "some-random-credential-id"
OS_APPLICATION_CREDENTIAL_SECRET: "some-random-string-from-clouds-yaml"

# Name of the cloud section in your clouds.yaml
CLOUD_NAME: "openstack"

# Prefix/suffix that is used in the tests
# TODO: currently some features use hardcoded values, so use "scs-hm"
TESTS_NAME_IDENTIFICATION: "scs-hm"

# A VM image that must exist on your OpenStack platform
VM_IMAGE: "Ubuntu 24.04"

# Flavor name for VMs in your OpenStack platform
FLAVOR_NAME: "SCS-4V-16-100s"

# Prometheus related configuration
PROMETHEUS_ENDPOINT: "localhost:30001"
PROMETHEUS_BATCH_NAME: "SCS-Health-Monitor"
APPEND_TIMESTAMP_TO_BATCH_NAME: true
35 changes: 35 additions & 0 deletions run-prometheus-in-kind.sh
@@ -0,0 +1,35 @@
#!/usr/bin/env bash

set -e

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
KUBECONFIG_PATH="$SCRIPT_DIR/kubeconfig_kind_scs-hm-prometheus"
KIND_CLUSTER_CONFIG="$SCRIPT_DIR/docs/ObservabilityStack/kind/kind-config-prometheus.yaml"

if kind get clusters | grep -q scs-hm-prometheus; then
    echo "KinD cluster scs-hm-prometheus already exists, skipping creation"
else
    # create a KinD cluster with a config that maps the NodePort ports to the host machine
    kind create cluster --config "$KIND_CLUSTER_CONFIG" --kubeconfig "$KUBECONFIG_PATH" --name "scs-hm-prometheus"
fi

if [[ ! -f "$KUBECONFIG_PATH" ]]; then
    echo "$KUBECONFIG_PATH is not a file, aborting"
    exit 1
fi

export KUBECONFIG="$KUBECONFIG_PATH"

kubectl cluster-info --context kind-scs-hm-prometheus --kubeconfig "$KUBECONFIG_PATH"

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# default Grafana login: admin/prom-operator
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 57.0.3 -f "$SCRIPT_DIR/docs/ObservabilityStack/Values/PrometheusStackValues.yaml"
helm install prometheus-pushgateway prometheus-community/prometheus-pushgateway --version 2.8.0 -f "$SCRIPT_DIR/docs/ObservabilityStack/Values/PrometheusPushGateway.yaml"

kubectl apply -f "$SCRIPT_DIR/docs/ObservabilityStack/k8s/nodePorts.yaml"

#kubectl port-forward svc/kube-prometheus-stack-prometheus 9090:9090
#kubectl port-forward svc/prometheus-pushgateway 9091:9091
69 changes: 69 additions & 0 deletions runner-podman.sh
@@ -0,0 +1,69 @@
#!/usr/bin/env bash

set -e

trap "exit" SIGINT SIGTERM

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")

### Build

podman build -t scshm-runner -f "$SCRIPT_DIR/Containerfile.runner" "$SCRIPT_DIR/"



### Run

cd "$SCRIPT_DIR"

if [[ -z "$KUBECONFIG" ]]; then
    echo "There is no KUBECONFIG env var set, so skipping the Kubernetes kubeconfig mount."
    echo "No Kubernetes tests will be able to run if there's no kubeconfig in the image."
    echo
    KUBECONFIG_MOUNT=""
else
    if [[ ! -f "$KUBECONFIG" ]]; then
        echo "$KUBECONFIG is not a file, aborting"
        exit 1
    fi

    # Make path absolute
    KUBECONFIG=$(readlink -f "$KUBECONFIG")

    echo "Using $KUBECONFIG as kubeconfig in the container"
    echo
    KUBECONFIG_MOUNT="-v ${KUBECONFIG}:/kubeconfig:ro,Z"
fi

if [[ -z "$CLOUDS_YAML" ]]; then
    echo "CLOUDS_YAML env var is not set, so no OpenStack clouds.yaml is mounted into the container."

    if [[ -f "./clouds.yaml" ]]; then
        echo "There's a clouds.yaml file in the root directory of the project, so this one will be used."
    fi

    echo
    CLOUDSYAML_MOUNT=""
else
    if [[ ! -f "$CLOUDS_YAML" ]]; then
        echo "$CLOUDS_YAML is not a file, aborting"
        exit 1
    fi

    # Make path absolute
    CLOUDS_YAML=$(readlink -f "$CLOUDS_YAML")

    echo "Using $CLOUDS_YAML as clouds.yaml in the container"
    echo
    CLOUDSYAML_MOUNT="-v ${CLOUDS_YAML}:/data/clouds.yaml:ro,Z"
fi

if [[ -z "$KUBECONFIG_MOUNT" && ( -z "$CLOUDSYAML_MOUNT" && ! -f "clouds.yaml" ) ]]; then
    echo "You have neither a kubeconfig nor a clouds.yaml for the runner container - no action possible"
    exit 1
fi
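
# Note: KUBECONFIG_MOUNT and CLOUDSYAML_MOUNT are intentionally left unquoted below so that
# an empty value disappears and a set value expands into separate "-v src:dst" arguments.
# --userns=keep-id:uid=1000,gid=1000 maps the invoking host user onto the container's
# "runner" user (uid/gid 1000), keeping files in the mounted working directory writable.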

podman run -ti ${KUBECONFIG_MOUNT} ${CLOUDSYAML_MOUNT} -v ${SCRIPT_DIR}:/data/:rw,Z \
    --userns=keep-id:uid=1000,gid=1000 \
    --network=host \
    scshm-runner:latest