This code represents the Platform deliverable for the vf-OS project.
- Ubuntu 16.04 or newer; the latest LTS release (18.04.1 at the time of writing) is recommended.
- Node 8+, install through: https://github.com/nodesource/distributions
- Docker 18+, install through: https://docs.docker.com/install/linux/docker-ce/ubuntu/
There are two options: the binary distribution or building from the platform source (building from source is currently advised).
The binary distribution consists of a large zip file, vfosPlatform.zip, which includes a copy of the local quarantine Docker registry that stores the binary images of the platform assets.
To start with the binary distribution, first unzip the file. You can then skip the "From Source" section below and go directly to the common section. The zip file can be downloaded from the GitHub releases page: (https://github.com/almende/test-platform/releases/download/1.0.0/vfosPlatform.zip)
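Assuming a standard Linux environment with wget and unzip available, fetching and unpacking the binary distribution could look like this (the target directory name is just an example):

```shell
# Download the binary distribution from the GitHub releases page
wget https://github.com/almende/test-platform/releases/download/1.0.0/vfosPlatform.zip
# Unpack it into a working directory (name is illustrative)
unzip vfosPlatform.zip -d vfosPlatform
cd vfosPlatform
```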
The first step is to build the Docker images for the platform itself. This build process starts a reduced version of the platform to provide access to the vf-OS local quarantine Docker registry, in which assets are stored before installation. See the overview above.
user@host:~/platform$ ./build.sh
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.
audited 48 packages in 1.26s
found 0 vulnerabilities
67b1b27baf7389de1ed7b169dcaeda228508d5d6989a3f4434eef393cfc3a630
Creating network "vfos_default" with the default driver
Creating network "vfos_execution-manager-net" with driver "bridge"
Creating network "vfos_system-dashboard-net" with driver "bridge"
Creating network "vfos_asset-net-00" with driver "bridge"
Creating network "vfos_asset-net-01" with driver "bridge"
Creating network "vfos_asset-net-02" with driver "bridge"
Creating network "vfos_asset-net-03" with driver "bridge"
Creating network "vfos_asset-net-04" with driver "bridge"
Creating network "vfos_asset-net-05" with driver "bridge"
Creating network "vfos_asset-net-06" with driver "bridge"
Creating network "vfos_asset-net-07" with driver "bridge"
Creating network "vfos_asset-net-08" with driver "bridge"
Creating network "vfos_asset-net-09" with driver "bridge"
Creating network "vfos_asset-net-10" with driver "bridge"
Creating network "vfos_asset-net-11" with driver "bridge"
Creating vfos_registry_1 ... done
Started registry.
Sending build context to Docker daemon 146.4kB
Step 1/24 : FROM node:alpine
.... This will take a couple of minutes ....
Stopping vfos_registry_1 ... done
Removing vfos_registry_1 ... done
Removing network vfos_default
Removing network vfos_execution-manager-net
Removing network vfos_system-dashboard-net
Removing network vfos_asset-net-00
Removing network vfos_asset-net-01
Removing network vfos_asset-net-02
Removing network vfos_asset-net-03
Removing network vfos_asset-net-04
Removing network vfos_asset-net-05
Removing network vfos_asset-net-06
Removing network vfos_asset-net-07
Removing network vfos_asset-net-08
Removing network vfos_asset-net-09
Removing network vfos_asset-net-10
Removing network vfos_asset-net-11
vf_os_platform_exec_control
After this script has finished, you can start the platform through the main startup script, following the common steps below.
Before starting, you need to configure the DNS name and IP addresses of the platform. For this, edit the "vf-os.sh" script.
user@host:~/platform$ nano vf-os.sh
...
#SET TO TRUE and MODIFY DOMAIN/EMAIL for https
USE_HTTPS=/bin/false
#ACME_DOMAIN_NAME="35.181.109.46.nip.io"
ACME_DOMAIN_NAME="localhost"
ACME_EXTERNAL_IP=127.0.0.1
ACME_EMAIL="[email protected]"
...
In this block, check and update the ACME_DOMAIN_NAME variable to match the global DNS name pointing to this server. Similarly, update ACME_EXTERNAL_IP to the public IP address that your browser can reach. The default settings are fine for a localhost setup.
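For a publicly reachable server with HTTPS enabled, the edited block might look like the following. The domain, IP address, and email here are placeholders you must replace with your own values; setting USE_HTTPS to /bin/true is the assumed counterpart of the /bin/false default shown above.

```shell
# Enable HTTPS (placeholder values below must be replaced with your own)
USE_HTTPS=/bin/true
# Public DNS name pointing to this server
ACME_DOMAIN_NAME="platform.example.com"
# Public IP address reachable by your browser
ACME_EXTERNAL_IP=203.0.113.10
# Contact email for the ACME certificate
ACME_EMAIL="admin@example.com"
```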
To start the platform, run the start.sh script. The first time you run it, it installs runtime dependencies, including the platform assets themselves, from the local quarantine registry.
user@host:~/platform$ ./start.sh
48071984db7f5ea86ed09403d2cf0e3744494e7b34efd875a092b68d4b494b6c
Creating network "vfos_default" with the default driver
Creating network "vfos_execution-manager-net" with driver "bridge"
Creating network "vfos_system-dashboard-net" with driver "bridge"
Creating network "vfos_asset-net-00" with driver "bridge"
Creating network "vfos_asset-net-01" with driver "bridge"
Creating network "vfos_asset-net-02" with driver "bridge"
Creating network "vfos_asset-net-03" with driver "bridge"
Creating network "vfos_asset-net-04" with driver "bridge"
Creating network "vfos_asset-net-05" with driver "bridge"
Creating network "vfos_asset-net-06" with driver "bridge"
Creating network "vfos_asset-net-07" with driver "bridge"
Creating network "vfos_asset-net-08" with driver "bridge"
Creating network "vfos_asset-net-09" with driver "bridge"
Creating network "vfos_asset-net-10" with driver "bridge"
Creating network "vfos_asset-net-11" with driver "bridge"
Creating vfos_registry_1 ... done
Starting vfos_registry_1 ... done
Creating vfos_reverse-proxy_1 ... done
Creating vfos_dashboard_1 ... done
Creating vfos_deployment_1 ... done
Creating vfos_portal_1 ... done
Creating vfos_execution-manager_1 ... done
Creating vfos_aim_1 ... done
Creating vfos_testserver_1 ... done
Useful links:
- You can check if the platform started correctly through the Portal WebGUI: http://localhost
- The reverse proxy has a dashboard where you can check the URL mapping: http://localhost:8080/dashboard/
- For interacting with the various REST APIs, we advise using the Advanced REST Client Chrome extension
- The main documentation of the meta-information used for asset deployment (see below) is found at: vf-OS MetaData format
Now for the good part: how do you add your own assets to the running platform?
The platform provides a couple of tools to facilitate the development, distribution, and installation of assets. See the image below for an overview of these scripts and the function each provides. In the next few paragraphs, each script is described with some examples.
In the image the following scripts and commands are shown:
- Docker build
- label2manifest.js
- manifest2label.js
- installAsset.js
- reload.js
You can create your component(s) through any development process you like, including the vf-OS Studio. The only vf-OS-specific requirement is adding the correct meta-information to your component. This is documented in the vf-OS MetaData format.
Before creating the zipfile, you need to build your Docker image with the correct labels applied.
*note: You can install this Docker image directly as an asset into the platform, bypassing the whole quarantine registry. This is a good way to test the labels. Skip directly to Install asset locally for this.
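One way to attach the required labels is on the docker build command line. The sketch below uses the label values from the asset-c example output further down; adjust them for your own asset (the image name and label values here are illustrative).

```shell
# Build the asset image with vf-OS meta-information attached as labels
# (label keys/values taken from the asset-c example; adjust for your asset)
docker build -t asset-c \
  --label vf-OS=true \
  --label vf-OS.depends=asset-b \
  --label vf-OS.icon=img/3.png \
  .
```

Alternatively, the same labels can be baked into the image with LABEL instructions in the Dockerfile.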
To facilitate the creation of the zipfile for distributing assets, the platform provides a tool called label2manifest.js, which you can find in the root folder of the source distribution and/or in the tools folder of the binary distribution.
This script takes the image from your locally running Docker daemon, so you need to run it on the same machine where you built the asset image.
The script is written for node.js and has some external dependencies. There is a package.json file in the same folder as the script; install the dependencies from it with npm install.
user@host:~/platform$ ls package.json label2manifest.js
label2manifest.js package.json
user@host:~/platform$ npm install
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.
audited 48 packages in 1.212s
found 0 vulnerabilities
user@host:~/platform$
After installing these dependencies, you can run the script to create the zipfile. Below is an example for a Docker image called asset-c.
user@host:~/platform$ label2manifest.js asset-c true
Exported docker image: asset-c
Got metadata from docker image: asset-c
{ 'vf-OS': 'true',
'vf-OS.depends': 'asset-b',
'vf-OS.icon': 'img/3.png' } { binaryFile: 'asset-c',
'vf-OS': 'true',
depends: 'asset-b',
icon: 'img/3.png' } 'asset-c'
done, deleting artifacts
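The output above shows how the image labels map onto manifest fields: the vf-OS.-prefixed label keys become plain field names, and the image name becomes the binaryFile field. A minimal sketch of that mapping (simplified for illustration; this is not the actual label2manifest.js implementation):

```javascript
// Sketch: convert vf-OS image labels into manifest fields by stripping
// the "vf-OS." prefix (simplified; the real label2manifest.js does more).
function labelsToManifest(imageName, labels) {
  const manifest = { binaryFile: imageName };
  for (const [key, value] of Object.entries(labels)) {
    if (key === 'vf-OS') {
      manifest['vf-OS'] = value;
    } else if (key.startsWith('vf-OS.')) {
      manifest[key.slice('vf-OS.'.length)] = value;
    }
  }
  return manifest;
}

// Example labels from the asset-c image above
const labels = {
  'vf-OS': 'true',
  'vf-OS.depends': 'asset-b',
  'vf-OS.icon': 'img/3.png',
};
console.log(labelsToManifest('asset-c', labels));
// { binaryFile: 'asset-c', 'vf-OS': 'true', depends: 'asset-b', icon: 'img/3.png' }
```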
There are three parameters to this script:
label2manifest <imageid> [<deleteArtifacts>] [<additionalImages>]
- imageid: The name of the main Docker image itself, the one with the labels inside.
- deleteArtifacts: Should the script clean up the exported image and the new manifest.json file it created after putting them into the zipfile? Simple true or false parameter; defaults to false.
- additionalImages: A quoted string containing a space-separated list of other Docker images you would like to include in the asset zipfile, e.g. "asset-c-backend asset-c-config"
Through the additional images you can create a multi-image asset, but the meta-information should be placed as labels only in the first image of the set. In that case, those labels are expected to include a series of vf-OS.compose.1.* labels to configure the extra images (vf-OS.compose.2.* for the second extra image, etc.).
The script will create a zipfile, called imageid.zip (e.g. asset-c.zip), in the folder where you run it.
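Combining the parameters described above, packaging a multi-image asset could look like this (the extra image names are the illustrative ones from the parameter description):

```shell
# Package asset-c plus two extra images into asset-c.zip,
# cleaning up the intermediate artifacts afterwards
label2manifest.js asset-c true "asset-c-backend asset-c-config"
```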
In the final setup of the platform, all assets will be installed from the vf-OS Store by downloading them, checking them, storing them temporarily in the quarantine registry, and finally installing the asset into the local Docker environment. This complex, multi-step process is annoying and slow during development testing. To ease this for developers, a script is provided that bypasses many of these steps and deploys the asset from the zipfile directly into the quarantine registry. This script is called manifest2label.js, mirroring its counterpart. It is provided in the tools folder of the binary distribution and requires running npm install for its dependencies.
user@host:~/platform$ manifest2label.js $PWD/asset-c.zip true true
Read manifest: { binaryFile: 'asset-c',
'vf-OS': 'true',
depends: 'asset-b',
icon: 'img/3.png' }
Images: [ [ 'asset-c:latest' ] ]
Labels from:asset-c:latest { 'vf-OS': 'true',
'vf-OS.depends': 'asset-b',
'vf-OS.icon': 'img/3.png' }
Cleaning up behind me.
Done cleanup.
Also cleaned images.
There are four parameters to this script:
manifest2label <fullPath2zipfile> [<deleteArtifacts>] [<push2Repos>] [<registryHost>]
- fullPath2zipfile: The zipfile to unpack. NOTE: this must be a full path at this time (see the example above for using the current working directory).
- deleteArtifacts: Should the script clean up the unpacked image and manifest.json file it created after uploading? Simple true or false parameter; defaults to false.
- push2Repos: Should the script also push the image from the local Docker daemon to the quarantine registry? Simple true or false parameter; defaults to false.
- registryHost: The hostname of the quarantine registry; defaults to localhost, which normally never needs to change.
For deploying assets to the vf-OS Store, another script is provided: ./uploader.js. The basic syntax for that script is as follows:
./uploader.js '{"product_id":142,"zipfile":"opc_ua.zip","major":"1.0","version":20.5,"product_names_en-us":"opc_ua_driver","access_token":"abcabcabc"}'
The product_id field is optional; if none is given, a new product will be created. The version and price fields must be numeric; a string value will cause the upload to fail.
Through the System Dashboard you can install assets from the vf-OS marketplace. For this, log in to the System Dashboard, e.g. http://localhost/systemdashboard
On the right side of the screen is an overview of earlier installations, showing the progress of their installation.
Getting an asset from the local quarantine registry into the actual running platform requires generating a docker-compose file for the asset. This can be done through the REST API of the platform, but to simplify this, a script called installAsset.js is provided. It can be found in the tools folder of the binary distribution and requires running npm install for its dependencies.
*note: You can also install an asset directly from your local Docker daemon, bypassing the whole quarantine registry. For such images, leave out the 'localhost:5000/' part in the example below (just use the local imageid).
The platform must be running before you execute the script, to provide access to the registry. Run the script like this:
user@host:~/platform$ installAsset.js localhost:5000/asset-c true
Got metadata from docker image: localhost:5000/asset-c
Platform reloaded.
user@host:~/platform$ ls .compose
0_platform_compose.yml 1_networks_compose.yml 3_asset-c_compose.yml docker-compose
user@host:~/platform$ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ccb18f1062d8 localhost:5000/vfos/deploy "/usr/src/app/entryp…" 23 seconds ago Up 19 seconds 9000/tcp vfos_deployment_1
7cda48ac56a4 localhost:5000/vfos/system-dashboard "npm start" 23 seconds ago Up 18 seconds 9000/tcp vfos_dashboard_1
067dd4ec6b0a localhost:5000/vfos/test-server "npm start" 23 seconds ago Up 15 seconds 9000/tcp vfos_testserver_1
0afa60d16778 localhost:5000/vfos/exec-manager "npm start" 23 seconds ago Up 17 seconds 9000/tcp vfos_execution-manager_1
f16f1a26a87c localhost:5000/asset-c "npm start" 23 seconds ago Up 15 seconds 9001/tcp vfos_asset-c_1
789b38d393a8 localhost:5000/vfos/aim "/opt/jboss/tools/do…" 23 seconds ago Up 20 seconds 8080/tcp vfos_aim_1
91187164867b localhost:5000/vfos/portal "npm start" 23 seconds ago Up 21 seconds 9000/tcp vfos_portal_1
c06837648ca6 traefik:latest "/traefik --api --do…" 23 seconds ago Up 12 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:8080->8080/tcp vfos_reverse-proxy_1
f53130f4f819 registry:2 "/entrypoint.sh /etc…" About an hour ago Up About an hour 0.0.0.0:5000->5000/tcp vfos_registry_1
2bfa81621399 docker/compose:1.22.0 "/bin/sh -c 'cat /de…" About an hour ago Up About an hour vf_os_platform_exec_control
As you can see in the example, the script creates a docker-compose file, called 3_assetId_compose.yml, which is included when the platform reloads.
There are five parameters to this script:
installAsset.js <imageUrl> [<assetName>] [<reload>] [<targetFolder>] [<volumesFolder>]
- imageUrl: The imageId, or the URL of the image in the registry.
- assetName: Optionally overrides the asset name during installation.
- reload: Should the platform reload its config files? (Basically running docker-compose up.) Simple true or false parameter; defaults to false.
- targetFolder: Path to the folder where the compose file needs to be generated. Defaults to $PWD/.compose.
- volumesFolder: Absolute path in the host to the folder where the host side of volume mounts needs to be placed. Defaults to $PWD/.persist.
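Putting the parameters together, a full invocation overriding the defaults might look like this (the asset name and folder paths are illustrative):

```shell
# Install asset-c from the quarantine registry under a custom name,
# reload the platform, and use explicit compose/persist folders
installAsset.js localhost:5000/asset-c my-asset true $PWD/.compose $PWD/.persist
```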