Setting up an OpenvCloud cluster is done in the following steps:
- Meet the prerequisites
- Create the configuration file
- Validate the configuration file
- Configure the switches
- Install the operating system on the controller nodes
- Set up the Kubernetes cluster and deploy the OpenvCloud system containers
- Access the management container
- Install the operating systems on the nodes
- Set up the storage nodes
- Install the JumpScale services on the nodes
- Deploy virtual machine images
- Currently supported G8 size:
- 3 dedicated controller nodes
- 10 dedicated CPU nodes
- 4 dedicated storage nodes
- SSH access credentials for all nodes
- Swap needs to be off on each node
- Each node needs to be able to access each other node in the cluster
- Certificates for SSL verification have to be included in the YAML config in the `certificates` section. Each certificate object should include the fields `key` and `crt`, containing a private key and a certificate respectively. The certificates for the different use cases should be referenced by name in the `ssl` section (see the sketch after this list)
- `jsonschema` Python library
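A minimal sketch of how these two sections might fit together, purely illustrative (the certificate name and the `ssl` entry are hypothetical; the authoritative structure is the example `system-config.yaml` referenced below):

```yaml
certificates:
  - name: portal              # hypothetical certificate name
    key: |                    # private key in PEM format
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
    crt: |                    # matching certificate in PEM format
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
ssl:
  portal: portal              # reference a certificate object by its name
```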
Installing an OpenvCloud cluster is done based on a single `system-config.yaml` configuration file.
This configuration file describes:
- Switch configuration
- Operating system installations
- Kubernetes cluster running on the controllers, which hosts the OpenvCloud master and all other controller components
- CPU and storage nodes installation
- ...
The `system-config.yaml` file needs to be stored and maintained in the root of a Git repository on https://docs.greenitglobe.com; for each G8 installation there is a distinct Git repository on https://docs.greenitglobe.com.
The following rules apply:
- The name of the repository should be formatted as `env_<<descriptive environment specification>>`, e.g. `env_be-g8-4`
- The repository needs to be put in an organization that represents the partner or customer, e.g. the `gigtech` organization for G8s owned by GIG itself, or `digitalenergy` for our Russian partner
Some example repositories:
- https://docs.greenitglobe.com/gigtech/env_be-g8-4
- https://docs.greenitglobe.com/gigtech/env_se-sto-en01-001
- https://docs.greenitglobe.com/digitalenergy/env_mr4
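Since the `system-config.yaml` lives in the root of such a repository, a typical workflow is to clone the environment repository, edit the configuration, and push the change back. A minimal sketch (the clone URL format is an assumption based on the repository names above):

```bash
# Hypothetical example: clone an environment repository and edit its configuration
git clone https://docs.greenitglobe.com/gigtech/env_be-g8-4.git
cd env_be-g8-4
# edit system-config.yaml, then commit and push
git commit -am "Update system-config.yaml"
git push
```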
An example of a `system-config.yaml` can be found here: https://github.com/0-complexity/openvcloud_installer/blob/master/scripts/kubernetes/config/system-config.yaml
**Important:** A common technique to create a `system-config.yaml` is to make a copy from another environment and start editing. Please make sure to alter the `ssh.private-key` setting, and not just leave the copy from the other environment.
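As a purely illustrative sketch, the setting to watch for looks roughly like this (the surrounding structure is defined by the example `system-config.yaml` linked above):

```yaml
ssh:
  private-key: |             # must be replaced, never copied from another environment
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```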
Having a valid configuration is of course very important. Validation is done with the OpenvCloud environment manager, a.k.a. Meneja (Swahili for 'manager'), available on https://meneja.gig.tech.
In Meneja you can select the environment you are setting up, and click the Validate configuration button. When your configuration is valid, you'll see the following text appear next to the button: "The configuration is valid!"
On Meneja a custom installer image can be downloaded that already has the configuration for a specific environment. As shown on the screenshot below, there is a link called "Download usb installer image", which results in a bootable ISO file that can be used to boot from, either via IPMI or by burning it onto a USB stick.
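If you choose the USB route, the ISO can be written to a stick with a standard tool such as `dd`; the example below is generic Linux usage, not something provided by Meneja, and the file name and device are placeholders (double-check the device, as it will be overwritten):

```bash
# Write the downloaded installer ISO to a USB stick; replace /dev/sdX with your USB device
sudo dd if=installer.iso of=/dev/sdX bs=4M status=progress conv=fsync
```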
After booting up the controller node with the boot image, the user gets a screen with the following options:
The rest is extremely simple: just select the right option for the controller node that needs to be installed, and the remainder is completely automatic.
Once you see the following screen, the installation of the controller node has finished. Just remove the installer media and reboot the machine.
Repeat this procedure for all three controllers.
@TODO
This will create a Kubernetes cluster, and deploy all OpenvCloud system containers needed to manage an OpenvCloud cluster.
One of these containers is the management container, through which you will be able to check the status of all other containers; this is discussed next.
To connect to the controller, use Teleport. First you need to install Docker on the node on which you are beginning the installation process:
apt-get update
apt-get install curl software-properties-common   # provides curl and add-apt-repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get install apt-transport-https
apt-get update
apt-get install libltdl7 aufs-tools
apt-get install docker-ce=18.03.0~ce-0~ubuntu
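As an optional sanity check before continuing, you can verify that Docker is installed and the daemon is running:

```bash
docker --version           # should report 18.03.0-ce, matching the pinned package above
systemctl status docker    # the service should be active (running)
```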
For more details about the `installer` script see Installer Script Details.
Run the following command to start the cluster installation:
docker run -it --rm -e ENV_OVC_VERSION={version} openvcloud/ninstaller
This will result in the following prompt:
Follow the instructions in the prompt to get the keycode:
Copy the keycode, paste it into the prompt, and enter the password for that key. From the next menu, select the environment to be installed.
Then choose `cluster deploy` to install the cluster.
The management container is used to perform various admin operations on the environment. It is based on the management image and has the `kubectl` tool installed that is needed to perform various Kubernetes related operations.
Accessing the management container can be done using 0-access.
From a web browser open the OpenvCloud portal and go to the 0-access page at https://{env name}.demo.greenitglobe.com/cbgrid/0-access.
Choose `management` from the list. You will be directed to a page that allows you to request access to the pod, which will then redirect you to a page with instructions on how to access the management container and the remaining time for this session.
In the management container you can check the status of all pods using the following command:
kubectl get pods
If all pods are running, continue to the next step.
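If some pods are still starting up, you can watch their status instead of re-running the command; this uses a standard `kubectl` flag:

```bash
# Watch pod status changes; press Ctrl+C once all pods report Running
kubectl get pods -w
```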
You need to be in the management container to perform this operation.
To prepare the CPU and storage nodes with the necessary OS, first start a tmux session and then run the following command:
installer --config {config file} node action --name all install_os
If a node fails during the installation, use the following command to install the node again:
installer --config {config file} node action --name {node name} install_os
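Both commands can run for a long time; the tmux session mentioned above keeps them alive if the connection to the management container drops. A minimal sketch, with a hypothetical session name:

```bash
tmux new -s node-install      # start a named session and run the installer inside it
tmux attach -t node-install   # re-attach after a dropped connection
```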
From the management container execute:
ssh -A ovs # this will get you on the Open vStorage pod (specially prepared to have systemd)
export ENVNAME="be-g8-3"  # set to the name of your environment
# let's generate the config
cd /opt/code/github/0-complexity/openvcloud_installer/scripts/ovs/
python3 ovs_configurator.py --config_path=/opt/cfg/system/system-config.yaml
# clone Open vStorage autoinstaller
mkdir /opt/code/github/openvstorage/
cd /opt/code/github/openvstorage
git clone git@github.com:openvstorage/dev_ops.git -b 4.1.4
mkdir -p dev_ops/Ansible/openvstorage/playbooks/inventories/$ENVNAME/group_vars
# copy our generated files
cp /opt/code/github/0-complexity/openvcloud_installer/scripts/ovs/output/{inventory,setup.json} /opt/code/github/openvstorage/dev_ops/Ansible/openvstorage/playbooks/inventories/$ENVNAME/
cp /opt/code/github/0-complexity/openvcloud_installer/scripts/ovs/output/all /opt/code/github/openvstorage/dev_ops/Ansible/openvstorage/playbooks/inventories/$ENVNAME/group_vars
# preinstall script which installs Ansible
cd /opt/code/github/openvstorage/dev_ops/Ansible/openvstorage/bin
bash pre-install.sh
cd /opt/code/github/openvstorage/dev_ops/Ansible/openvstorage/playbooks/
ansible-playbook -i inventories/$ENVNAME/inventory preInstall.yml
# this last step is not very bulletproof and might need to be repeated
ansible-playbook -i inventories/$ENVNAME/inventory full_setup.yml
You need to be in the management container to perform this operation.
First start a tmux session. The following command will install the JumpScale services on all OpenvCloud cluster nodes (CPU and storage):
installer --config {config file} node jsaction --name all install
If a node fails during the installation, use the following command to install the node again:
installer --config {config file} node jsaction --name {node name} install
For more details about the `installer` script see Installer Script Details.
When done, the environment should be ready to use.
To add images see