Getting started
This page is obsolete.
The project OpenVIM, as well as OpenMANO, has been contributed to the open source community project Open Source MANO (OSM), hosted by ETSI.
Go to the URL osm.etsi.org to know more about OSM.
#Table of Contents#
- Introduction
- Requirements
- Installation for end users
- Installation for developers
- Additional components for a full NFV experience
- Configure and run
The next figure shows a general diagram of a datacenter controlled by openvim. Two components of the openvim SW can be distinguished:
- openvimd (openvim server)
- openvim (openvim CLI client)
All external components are managed by openvimd: compute nodes and image storage are managed directly, while switches are managed indirectly, through an Openflow Controller (OFC) that openvimd interacts with.
For a minimal NFV setup with openvim, two physical servers in the same LAN are required:
- One server would be the controller node, where openvim will run. In this node, an Openflow controller will also run. No special requirements are needed to run it. It could run in a VM instead of consuming a whole server.
- The other server would be a compute node where the VMs will be deployed. This server must be a 64-bit Linux system (e.g. RedHat 7.0, CentOS 7.0, Ubuntu Server 14.04) with KVM, qemu and libvirt. In order to get the maximum performance, compute nodes must be configured following these guidelines. Openvim has been tested with servers based on Xeon E5 Intel processors with Ivy Bridge architecture. Although it might work with the Intel Core i3, i5 and i7 families, no tests have been carried out with servers based on those processors.
Note: You can test the entire API and CLI without a compute node or an Openflow controller by running openvim in 'mode: test'. Take into account that in this mode no VMs will actually be deployed.
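Before registering a real compute node, it is worth checking from a shell that the server actually exposes hardware virtualization and has the KVM/libvirt tools installed. This is a generic Linux sketch, not part of openvim itself:

```shell
#!/bin/bash
# Sketch: check that a would-be compute node can run KVM guests.
# Generic Linux checks; not part of openvim.

hw_virt() {
    # vmx = Intel VT-x, svm = AMD-V
    if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
        echo "HW virtualization: yes"
    else
        echo "HW virtualization: no"
    fi
}

check_tools() {
    # tools pulled in by the qemu/libvirt packages
    local t
    for t in qemu-system-x86_64 virsh; do
        if command -v "$t" >/dev/null 2>&1; then
            echo "$t: found"
        else
            echo "$t: missing"
        fi
    done
}

hw_virt
check_tools
```

If `hw_virt` reports "no", check that VT-x/AMD-V is enabled in the BIOS before installing the hypervisor packages.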
For a full NFV setup, ready for data plane VNFs, extra requirements are needed:
- Compute nodes must have 10GE interfaces with SR-IOV capabilities. Suitable NICs are, for instance, Intel X520 NICs.
- An Openflow switch with 10GE interfaces is required to interconnect the compute nodes, allowing VM interconnection at switch level while using passthrough or SR-IOV.
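SR-IOV capability of a candidate NIC can be checked through standard Linux sysfs before committing to the hardware. The sketch below reads `sriov_totalvfs`; the `SYSFS` variable is overridable so the helper can be demonstrated against a mock tree (the interface names are made up):

```shell
#!/bin/bash
# Sketch: report SR-IOV capability of a network interface via sysfs.
# SYSFS is overridable so the function can be exercised without real hardware.
SYSFS="${SYSFS:-/sys/class/net}"

check_sriov() {
    local iface="$1"
    local vf_file="$SYSFS/$iface/device/sriov_totalvfs"
    if [ -r "$vf_file" ]; then
        echo "$iface supports SR-IOV: $(cat "$vf_file") VFs"
    else
        echo "$iface: no SR-IOV capability exposed"
    fi
}

# demo against a mock sysfs tree (hypothetical interface names)
mock=$(mktemp -d)
mkdir -p "$mock/eth2/device"
echo 63 > "$mock/eth2/device/sriov_totalvfs"   # an Intel X520 exposes up to 63 VFs per port
SYSFS="$mock" check_sriov eth2
SYSFS="$mock" check_sriov eth3
```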
You can download and run an Ubuntu Server LAMP image and then run the installation script, providing the database root user and password. For example:
- Download an Ubuntu Server 14.04 LTS image (ubuntu/reverse). Run the VM and, from inside, execute:
wget https://github.com/nfvlabs/openvim/raw/v0.4/scripts/install-openvim.sh
chmod +x install-openvim.sh
sudo ./install-openvim.sh [<database-root-user> [<database-root-password>]]
#NOTE: you can optionally provide the DB root user and password.
- Other tested distributions are Ubuntu Desktop 64-bit 14.04.2 LTS (osboxes/osboxes.org) and CentOS 7 (osboxes/osboxes.org).
- There is also a script to install floodlight v0.9:
wget https://github.com/nfvlabs/openvim/raw/v0.4/scripts/install-floodlight.sh
chmod +x install-floodlight.sh
sudo ./install-floodlight.sh
- Install the following packages in the controller node: mysql, python-yaml, python-libvirt, python-bottle, python-mysqldb, python-jsonschema, python-paramiko, python-bs4, python-argcomplete, git, screen.
- In RedHat/Fedora/CentOS:
# perhaps the epel repository needs to be installed: 'yum install epel-release; yum repolist'
sudo yum install git screen wget PyYAML libvirt-python MySQL-python \
    python-jsonschema python-paramiko python-argcomplete python-requests python-devel
sudo easy_install bottle
- In Ubuntu/Debian:
sudo apt-get install git screen wget python-yaml python-libvirt python-bottle \
    python-mysqldb python-jsonschema python-paramiko python-argcomplete python-requests
- Clone the git repository and check out the latest release (v0.4):
git clone https://github.com/nfvlabs/openvim.git openvim
cd openvim
git checkout v0.4
- Database creation and initialization
- Create the database:
mysqladmin -u root -p create vim_db
- Grant access privileges from localhost. In the mysql console, create the user vim and grant it privileges on the database:
mysql> CREATE USER 'vim'@'localhost' IDENTIFIED BY 'vimpw';
mysql> GRANT ALL PRIVILEGES ON vim_db.* TO 'vim'@'localhost';
- Edit the SQL database VIM files host_ranking.sql and of_ports_correspondence.sql.
- Initialize the database:
openvim/database_utils/init_vim_db.sh -uvim -pvimpw
- For the normal or OF-only openvim modes you will need an openflow controller. You can install e.g. floodlight-0.90: download it and compile it with ant. Then generate a shell variable FLOODLIGHT_PATH with the floodlight installation path:
echo 'export FLOODLIGHT_PATH="/home/n2/floodlight-0.90"' >> ~/.bashrc
The script openvim/scripts/install-floodlight.sh performs these steps for you, and the script service-floodlight can be used to start/stop it in a screen with logs.
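Before starting the controller it is worth checking that FLOODLIGHT_PATH really points at a built tree. The helper below is a hypothetical sketch, assuming the usual ant build layout where the jar ends up in target/floodlight.jar:

```shell
#!/bin/bash
# Sketch: validate FLOODLIGHT_PATH before trying to start floodlight.
# Assumes the standard ant build layout (target/floodlight.jar).

check_floodlight_path() {
    if [ -z "${FLOODLIGHT_PATH:-}" ]; then
        echo "FLOODLIGHT_PATH is not set" >&2
        return 1
    fi
    if [ ! -f "$FLOODLIGHT_PATH/target/floodlight.jar" ]; then
        echo "no floodlight.jar under $FLOODLIGHT_PATH/target" >&2
        return 1
    fi
    echo "floodlight found at $FLOODLIGHT_PATH"
}

# demo against a mock installation directory
fl_mock=$(mktemp -d)
mkdir -p "$fl_mock/target"
touch "$fl_mock/target/floodlight.jar"
FLOODLIGHT_PATH="$fl_mock" check_floodlight_path
```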
- It is useful to configure argcomplete and to put the openvim client in the PATH:
# create the /home/${USER}/bin folder and link the openvim executable inside it
mkdir -p ~/bin    # this folder is in the PATH in most Linux distributions
ln -s ${PWD}/openvim/openvim /home/${USER}/bin/openvim
# configure argcomplete for this user
mkdir -p ~/.bash_completion.d
activate-global-python-argcomplete --user
# execute . ~/.bash_completion.d/python-argcomplete.sh at login; add it to .bashrc:
echo ". /home/${USER}/.bash_completion.d/python-argcomplete.sh" >> ~/.bashrc
You need to follow the same procedure as for the end users. The only change is that you download the installation script from the master branch, which becomes the working branch:
wget https://github.com/nfvlabs/openvim/raw/master/scripts/install-openvim.sh
chmod +x install-openvim.sh
sudo ./install-openvim.sh [<database-root-user> [<database-root-password>]]
#NOTE: you can provide optionally the DB root user and password.
##Manual installation##
You need to follow the same procedure as for the end users. The only change is that you don't need to check out the branch of the last release. When cloning the git repository, just do the following:
git clone https://github.com/nfvlabs/openvim.git openvim
#Additional components for a full NFV experience#
##Installation of an Openflow Controller##
###Installation of Floodlight###
###Installation of Opendaylight###
##Installation of DHCP server##
- Install the package dhcp3-server. This package will install the actual DHCP server based on the isc-dhcp-server package.
sudo apt-get install dhcp3-server
##Floodlight OpenFlow controller configuration and run##
- Go to the scripts folder and edit the file flow.properties, setting the appropriate port values.
- Start FloodLight:
service-floodlight start    # creates a screen named "flow" and starts the openflow controller in it
screen -x flow              # goes into the floodlight screen
[Ctrl+a , d]                # goes out of the screen (detaches the screen)
less openvim/logs/openflow.log
##Opendaylight controller configuration and run##
- Start Opendaylight:
service-opendaylight start  # creates a screen named "flow" and starts the openflow controller in it
screen -x flow              # goes into the opendaylight screen
[Ctrl+a , d]                # goes out of the screen (detaches the screen)
less openvim/logs/openflow.log
##Configuration of DHCP server##
- Edit the file /etc/default/isc-dhcp-server to enable the DHCP server on the appropriate interface, the one attached to the Telco/VNF management network (e.g. eth1):
$ sudo vi /etc/default/isc-dhcp-server
INTERFACES="eth1"
- Edit the file /etc/dhcp/dhcpd.conf to specify the subnet, netmask and range of IP addresses to be offered by the server:
$ sudo vi /etc/dhcp/dhcpd.conf
ddns-update-style none;
default-lease-time 86400;
max-lease-time 86400;
log-facility local7;
option subnet-mask 255.255.0.0;
option broadcast-address 10.210.255.255;
subnet 10.210.0.0 netmask 255.255.0.0 {
  range 10.210.1.2 10.210.1.254;
}
- Restart the service:
sudo service isc-dhcp-server restart
- In case of error messages (e.g. "Job failed to start"), check the configuration, since it is easy to forget ";" characters. Check the file /var/log/syslog for logs with the label 'dhcpd'.
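The range declared in the example dhcpd.conf above (10.210.1.2 to 10.210.1.254) can be sanity-checked with plain shell arithmetic. The helper below converts dotted-quad addresses to integers and tests membership; the range boundaries are hard-coded from the example configuration:

```shell
#!/bin/bash
# Sketch: check whether an IP falls inside the DHCP range from the example
# dhcpd.conf (10.210.1.2 - 10.210.1.254).

ip_to_int() {
    # split a dotted quad into octets and pack them into one integer
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_dhcp_range() {
    local ip lo hi
    ip=$(ip_to_int "$1")
    lo=$(ip_to_int 10.210.1.2)
    hi=$(ip_to_int 10.210.1.254)
    if [ "$ip" -ge "$lo" ] && [ "$ip" -le "$hi" ]; then
        echo "$1: in range"
    else
        echo "$1: out of range"
    fi
}

in_dhcp_range 10.210.1.100   # in range
in_dhcp_range 10.210.2.5     # out of range (inside the /16 but outside the pool)
```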
##Openvim server configuration##
- Go to the openvim folder and edit openvimd.cfg. Note: by default it runs in 'mode: test', where neither real hosts nor an openflow controller are needed. You can change to 'mode: normal', or to 'mode: host only' (no openflow controller is needed).
- Start the openvim server:
service-openvim vim start   # creates a screen named "vim" and starts the "./openvim/openvimd.py" program in it
screen -x vim               # goes into the openvim screen
[Ctrl+a , d]                # goes out of the screen (detaches the screen)
less openvim/logs/openvim.log
##Openvim client configuration##
All actions need to be conveyed through the openvim client.
- Let's configure the openvim CLI client. This is needed if you have changed the openvimd.cfg file:
openvim config    # show openvim related variables
# To change the variables run:
export OPENVIM_HOST=<http_host of openvimd.cfg>
export OPENVIM_PORT=<http_port of openvimd.cfg>
export OPENVIM_ADMIN_PORT=<http_admin_port of openvimd.cfg>
# You can insert them at .bashrc for automatic loading at login:
echo "export OPENVIM_HOST=<...>" >> /home/${USER}/.bashrc
...
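The variables above can be summarised in a small helper that shows which endpoint the client would hit. The fallback values used here (localhost, port 9080, API base path /openvim) are assumptions taken from a stock openvimd.cfg and should be checked against your own file:

```shell
#!/bin/bash
# Sketch: print the effective openvim endpoint from the client environment.
# Fallback values (localhost, 9080, /openvim) are assumptions from a default
# openvimd.cfg; verify against your own configuration.

openvim_endpoint() {
    echo "http://${OPENVIM_HOST:-localhost}:${OPENVIM_PORT:-9080}/openvim"
}

openvim_endpoint                                   # with the assumed defaults
OPENVIM_HOST=10.0.0.5 OPENVIM_PORT=9085 openvim_endpoint
```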
##Adding compute nodes##
- Let's attach compute nodes.
In 'test' mode we need to provide fake compute nodes with all the necessary information:
openvim host-add test/hosts/host-example0.json
openvim host-add test/hosts/host-example1.json
openvim host-add test/hosts/host-example2.json
openvim host-add test/hosts/host-example3.json
openvim host-list    # -v, -vv, -vvv for verbosity levels
In 'normal' or 'host only' mode, the process is a bit more complex. First, you need to configure the host appropriately, following these guidelines. The current process is manual, although we are working on automating it. For the moment, follow these instructions:
# copy openvim/scripts/host-add.sh to the compute host and run it there to gather all the information
./host-add.sh <user> <ip_name> >> host.yaml
# NOTE: If the host contains interfaces connected to the openflow switch for dataplane,
#   the switch port where each interface is connected must be provided manually,
#   otherwise these interfaces cannot be used. Follow one of two methods:
#   1) Fill openvim/database_utils/of_ports_pci_correspondence.sql ...
#      ... and load it with: mysql -uvim -p vim_db < openvim/database_utils/of_ports_pci_correspondence.sql
#   2) or add this information manually to the generated host.yaml with a 'switch_port: <whatever>'
#      ... entry at 'host-data':'numas':'interfaces'
# copy the generated host.yaml file to the openvim server and add the compute host with:
openvim host-add host.yaml
# copy the openvim ssh key to the compute node. If the openvim user doesn't have an ssh key, generate it with ssh-keygen
ssh-copy-id <compute node user>@<IP address of the compute node>
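For method 2, the fragment below sketches where the 'switch_port' entry goes in the generated host.yaml. All values here (host name, interface name, PCI address, switch port) are hypothetical examples, not values host-add.sh will produce for you:

```shell
#!/bin/bash
# Sketch: hypothetical host.yaml excerpt showing where 'switch_port' goes
# ('host-data' -> 'numas' -> 'interfaces'). All values are made-up examples.
cat > host-example.yaml <<'EOF'
host-data:
  name: compute-node-1
  numas:
  - numa_socket: 0
    interfaces:
    - name: eth10
      pci: "0000:06:00.0"
      switch_port: port0/1    # filled in by hand (method 2)
EOF
grep switch_port host-example.yaml
```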
Note: Openvim has been tested with servers based on Xeon E5 Intel processors with Ivy Bridge architecture. No tests have been carried out with the Intel Core i3, i5 and i7 families, so seamless integration with those processors is not guaranteed.
##Adding external networks##
- Let's list the external networks:
openvim net-list
- Let's create some external networks in openvim. These networks are public and can be used by any VNF. Note that these networks must be pre-provisioned in the compute nodes in order to be effectively used by the VNFs. The pre-provision will be skipped since we are in test mode. Four networks will be created:
- default -> default NAT network provided by libvirt. By creating this network, VMs will be able to connect to default network in the same host where they are deployed.
- macvtap:em1 -> macvtap network associated to interface "em1". By creating this network, we allow VMs to connect to a macvtap interface of physical interface "em1" in the same host where they are deployed. If the interface naming scheme is different, use the appropriate name instead of "em1".
- bridge_net -> bridged network intended for VM-to-VM communication. The pre-provision of a Linux bridge in all compute nodes is described in Compute node configuration#pre-provision-of-linux-bridges. By creating this network, VMs will be able to connect to the Linux bridge "virbrMan1" in the same host where they are deployed. In that way, two VMs connected to "virbrMan1", no matter the host, will be able to talk to each other.
- data_net -> external data network intended for VM-to-VM communication. By creating this network, VMs will be able to connect to a network element connected behind a physical port in the external switch.
In order to create external networks, use 'openvim net-create', specifying a file with the network information. Now we will create the 4 networks:
openvim net-create test/networks/net-example0.yaml
openvim net-create test/networks/net-example1.yaml
openvim net-create test/networks/net-example2.yaml
openvim net-create test/networks/net-example3.yaml
- Let's list the external networks:
openvim net-list
2c386a58-e2b5-11e4-a3c9-52540032c4fa   data_net
35671f9e-e2b4-11e4-a3c9-52540032c4fa   default
79769aa2-e2b4-11e4-a3c9-52540032c4fa   macvtap:em1
8f597eb6-e2b4-11e4-a3c9-52540032c4fa   bridge_net
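When scripting against output like the listing above, the uuid for a given network name can be extracted with awk. The helper below is a hypothetical convenience, demonstrated against a saved copy of the exact listing shown:

```shell
#!/bin/bash
# Sketch: map a network name to its uuid from saved 'openvim net-list' output.

net_uuid() {    # usage: net_uuid <listing-file> <net-name>
    awk -v name="$2" '$2 == name { print $1 }' "$1"
}

# saved copy of the listing shown above
cat > net-list.txt <<'EOF'
2c386a58-e2b5-11e4-a3c9-52540032c4fa   data_net
35671f9e-e2b4-11e4-a3c9-52540032c4fa   default
79769aa2-e2b4-11e4-a3c9-52540032c4fa   macvtap:em1
8f597eb6-e2b4-11e4-a3c9-52540032c4fa   bridge_net
EOF

net_uuid net-list.txt bridge_net    # prints 8f597eb6-e2b4-11e4-a3c9-52540032c4fa
```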
You can build your own networks using the template 'templates/network.yaml'. Alternatively, you can use 'openvim net-create' without a file and answer the questions:
openvim net-create
You can delete a network, e.g. "macvtap:em1", using the command:
openvim net-delete macvtap:em1
##Creating a new tenant##
- Now let's create a new tenant "admin":
$ openvim tenant-create
tenant name? admin
tenant description (admin)?
<uuid>   admin Created
- Take the uuid of the tenant and update the environment variables used by the openvim client:
export OPENVIM_TENANT=<obtained uuid>
#echo "export OPENVIM_TENANT=<obtained uuid>" >> /home/${USER}/.bashrc
openvim config    # show openvim env variables