Note: This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website.
Explore how to deploy microservices to a local OpenShift cluster running with OpenShift Local (formerly known as CodeReady Containers)
You’ll learn how to deploy two microservices in Open Liberty containers to an OpenShift 4 cluster that is running locally on your computer by using OpenShift Local. To learn how to deploy to an OpenShift 4 cluster by using operators, see the Deploying microservices to OpenShift by using Kubernetes Operators guide.
Different cloud-based solutions are available for running your Kubernetes workloads. With a cloud-based infrastructure, you can focus on developing your microservices without worrying about low-level infrastructure details for deployment. By using the cloud, you can easily scale and manage your microservices in a high-availability setup.
Kubernetes is an open source container orchestrator that automates many tasks that are involved in deploying, managing, and scaling containerized applications. To learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.
Red Hat OpenShift Local is a tool that you can use to quickly build and run a minimal OpenShift 4 cluster on your local computer. OpenShift Local simplifies setup and testing while providing all of the tools that are needed to develop container-based applications.
The two microservices that you’ll deploy are called system and inventory. The system microservice returns the JVM system properties of the running container. It also returns the pod name in the HTTP header, which makes pod replicas more distinguishable from each other. The inventory microservice adds the properties from the system microservice to the inventory. This process demonstrates how communication can be established between pods inside a cluster.
Before you begin, you must install OpenShift Local to run OpenShift 4 locally on your computer. For installation instructions, refer to the official OpenShift Local documentation.
Run the following command to set up your host machine for OpenShift Local:
crc setup
Next, run the following command to start the OpenShift Local virtual machine and OpenShift cluster:
crc start
Supply your user pull secret at the prompt. You can find the pull secret on the page where you downloaded the latest release of OpenShift Local.
If the cluster starts successfully, you can see output similar to the following example:
Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: jPvDv-jgRZB-qhYP4-Hmkmj

Log in as user:
  Username: developer
  Password: developer
Save this output as it might be required later in this guide.
To interact with the OpenShift cluster, you need to use oc commands. To build containers, you need containerization software such as Podman. OpenShift Local already includes the oc and podman binaries. To use the oc and podman commands, run the following command to see instructions on how to configure your PATH:
crc podman-env
The resulting output differs based on your OS and environment, but the output is similar to the following example:
export PATH="/Users/developer/.bin/oc:$PATH"
export CONTAINER_SSHKEY="/Users/developer/.crc/machines/crc/id_ecdsa"
export CONTAINER_HOST="ssh://[email protected]:2222/run/user/1000/podman/podman.sock"
export DOCKER_HOST="unix:///Users/developer/.crc/machines/crc/docker.sock"
# Run this command to configure your shell session:
# eval $(crc podman-env)
Run the command that is specified in the output to configure your shell session:
eval $(crc podman-env)
alias podman=podman-remote
On Windows, the output is similar to the following example:

SET PATH=C:\Users\developer\.crc\bin\oc;%PATH%
SET CONTAINER_SSHKEY=C:\Users\developer\.crc\machines\crc\id_ecdsa
SET CONTAINER_HOST=ssh://[email protected]:2222/run/user/1000/podman/podman.sock
SET DOCKER_HOST=npipe:////./pipe/crc-podman
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('crc podman-env') DO @call %i
Run the appropriate command specified in the output to configure your environment:
@FOR /f "tokens=*" %i IN ('crc podman-env') DO @call %i
Run the following command to log in and gain access to your OpenShift cluster:
oc login -u developer https://api.crc.testing:6443
If prompted, enter the developer password from the output of the crc start command that you ran in the Starting the virtual machine section. Next, create a new OpenShift project with the name my-project by running the following command:
oc new-project my-project
You’re now ready to build and deploy microservices.
In this section, you’ll learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on OpenShift. You’ll build and containerize the system and inventory microservices, push them to a container registry, and then deploy them to your Kubernetes cluster.
The first step of deploying to Kubernetes is to build your microservices and containerize them.
The starting Java project, which is located in the start directory, is a multi-module Maven project. It’s made up of the system and inventory microservices. Each microservice is located in its own directory: start/system for the system microservice and start/inventory for the inventory microservice. Each of these directories contains a Dockerfile, which is necessary for building the Docker images. See the Containerizing microservices guide if you’re unfamiliar with Dockerfiles.
If you’re familiar with Maven and Docker, you might be tempted to run a Maven build first and then use the .war file to build a Docker image. However, these projects are set up so that this process is automated as a part of a single Maven build.
Go to the start directory and build these microservices by running the following commands:
cd start
mvn package
Next, run the podman build commands to build container images for your application:
podman build -t system:1.0-SNAPSHOT system/.
podman build -t inventory:1.0-SNAPSHOT inventory/.
During the build, you see various Docker messages that describe what images are being downloaded and built. When the build finishes, run the following command to list all local Docker images:
podman images
Verify that the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images are listed among them, for example:
REPOSITORY                     TAG
localhost/system               1.0-SNAPSHOT
localhost/inventory            1.0-SNAPSHOT
icr.io/appcafe/open-liberty    kernel-slim-java11-openj9-ubi
If you don’t see the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images, check the Maven build log for any potential errors.
In order to run the microservices on the cluster, you need to push the microservice images to a container image registry. You’ll use OpenShift Container Registry (OCR), which is the OpenShift integrated container image registry. After your images are pushed to the registry, you can use them in the pods that you create later in the guide.
Run the following command to log in to your OCR:
oc registry login --insecure=true
You can view the registry address by running the following command:
oc registry info
The output is similar to the following:
default-route-openshift-image-registry.apps-crc.testing
Ensure that you’re logged in to OpenShift and the registry, and run the following commands to tag your applications:
podman tag system:1.0-SNAPSHOT $(oc registry info)/$(oc project -q)/system:1.0-SNAPSHOT
podman tag inventory:1.0-SNAPSHOT $(oc registry info)/$(oc project -q)/inventory:1.0-SNAPSHOT
On Windows, run the following commands:
oc registry info
oc project -q
Replace the square brackets in the following podman tag commands with the results from the previous commands:
podman tag system:1.0-SNAPSHOT [oc registry info]/[oc project -q]/system:1.0-SNAPSHOT
podman tag inventory:1.0-SNAPSHOT [oc registry info]/[oc project -q]/inventory:1.0-SNAPSHOT
Finally, push your images to the registry:
podman push $(oc registry info)/$(oc project -q)/system:1.0-SNAPSHOT
podman push $(oc registry info)/$(oc project -q)/inventory:1.0-SNAPSHOT
On Windows, run the following commands:
oc registry info
oc project -q
Replace the square brackets in the following podman push commands with the results from the previous commands:
podman push [oc registry info]/[oc project -q]/system:1.0-SNAPSHOT
podman push [oc registry info]/[oc project -q]/inventory:1.0-SNAPSHOT
After you push the images, run the following command to list the images that you pushed to the internal OCR:
oc get imagestream
Verify that the system and inventory images are listed among them, for example:
NAME        IMAGE REPOSITORY                                                                                      TAGS           UPDATED
inventory   default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com/guide/inventory   1.0-SNAPSHOT   3 seconds ago
system      default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com/guide/system      1.0-SNAPSHOT   17 seconds ago
Now that your container images are built, deploy them by using a Kubernetes object configuration file.
Kubernetes objects can be configured in a YAML file that contains a description of all your deployments, services, or any other objects that you want to deploy. All objects can also be deleted from the cluster by using the same YAML file that you used to deploy them. If you’re interested in learning more about using and configuring Kubernetes clusters, check out the Deploying microservices to Kubernetes guide.
kubernetes.yaml
link:finish/kubernetes.yaml[role=include]
Create the kubernetes.yaml file in the start directory.
In this file, the image is the name and tag of the container image that is used. The image address is the same OCR address that you logged in to in the previous section.
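The kubernetes.yaml content itself is pulled in from finish/kubernetes.yaml when the guide is published, so it isn’t shown in this source. As a rough sketch only, the system objects in that file resemble the following; the names match the oc apply output later in this section, while the app labels, the container name, and the 9080 port (the default Open Liberty HTTP port) are illustrative assumptions. The inventory objects follow the same pattern:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment            # matches the name in the oc apply output
  labels:
    app: system                      # assumed label scheme
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container       # assumed container name
        # The image address combines the OCR address, your project, and the tag that you pushed:
        image: default-route-openshift-image-registry.apps-crc.testing/my-project/system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080        # assumed: default Open Liberty HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: system-service               # matches the name in the oc apply output
spec:
  selector:
    app: system
  ports:
  - port: 9080
    targetPort: 9080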
Run the following command to deploy the objects as defined in the kubernetes.yaml file:
oc apply -f kubernetes.yaml
You see output similar to the following example:

deployment.apps/system-deployment created
deployment.apps/inventory-deployment created
service/system-service created
service/inventory-service created
route.route.openshift.io/system-route created
route.route.openshift.io/inventory-route created
When the apps are deployed, run the following command to check the status of your pods:
oc get pods
If all the pods are healthy and running, you see output similar to the following example:
NAME READY STATUS RESTARTS AGE
system-deployment-6bd97d9bf6-4ccds 1/1 Running 0 15s
inventory-deployment-645767664f-nbtd9 1/1 Running 0 15s
Routes are used to access the services and the application. A route in OpenShift exposes a service at a hostname, such as www.your-web-app.com, so that external users can access the application.
kubernetes.yaml
link:finish/kubernetes.yaml[role=include]
Both the system and inventory routes are configured in the kubernetes.yaml file, and running the oc apply -f kubernetes.yaml command exposed both services.
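As with the deployments and services, the route definitions are part of the included finish/kubernetes.yaml file. A minimal sketch of the system route, with only the names grounded in the oc apply output and the rest an assumed minimal configuration, looks like the following:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: system-route               # matches the name in the oc apply output
spec:
  to:
    kind: Service
    name: system-service           # send traffic to the system service

When no host is specified, OpenShift generates one in the [route-name]-[project].[cluster-domain] format, which is where a hostname such as inventory-route-my-project.apps-crc.testing comes from.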
Your microservices can now be accessed through the hostnames that you can find by running the following command:
oc get routes
The routes can also be found in the web console. The web console URL is similar to console-openshift-console.apps-crc.testing and was given as part of the output of the crc start command at the start of the guide. When you access the web console, log in using the administrator account credentials. After you log in, you can view the routes by navigating to the Networking > Routes page. Hostnames are in the Location column, in the inventory-route-my-project.apps-crc.testing format. Ensure that you’re in your project, not the default project, which is shown in the Project: field in the console.
Enter the following URLs into your web browser to access your microservices. Substitute the hostnames that you obtained from the oc get routes command for the system and inventory services:
- http://[system-hostname]/system/properties
- http://[inventory-hostname]/inventory/systems
In the first URL, you see the system properties of the container JVM in JSON format. The second URL returns an empty list, which is expected because no system properties are stored in the inventory yet.
Point your browser to the http://[inventory-hostname]/inventory/systems/[system-hostname] URL. When you go to this URL, the system properties that are taken from the system service are automatically stored in the inventory. Revisit the http://[inventory-hostname]/inventory/systems URL and you see a new entry.
pom.xml
link:finish/inventory/pom.xml[role=include]
A few tests are included for you to test the basic functions of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before you proceed further. The default properties that are defined in the pom.xml file are:
| Property | Description |
| --- | --- |
| system.ip | IP or hostname of the system microservice |
| inventory.ip | IP or hostname of the inventory microservice |
Use the following command to run the integration tests against your cluster:
mvn verify \
-Dsystem.ip=system-route-my-project.apps-crc.testing \
-Dinventory.ip=inventory-route-my-project.apps-crc.testing
- Replace the system.ip parameter with the appropriate hostname to access your system microservice.
- Replace the inventory.ip parameter with the appropriate hostname to access your inventory microservice.
If the tests pass, you see output for each service similar to the following examples:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT
Results:
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT
Results:
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
When you no longer need your deployed microservices, you can delete the Kubernetes deployments, services, and routes by running the following command:
oc delete -f kubernetes.yaml
To delete the pushed images, run the following commands:
oc delete imagestream/inventory
oc delete imagestream/system
Next, you can delete the project by running the following command:
oc delete project my-project
Finally, you can stop and delete the OpenShift Local virtual machine by running the following commands:
crc stop
crc delete
You just deployed two microservices running in Open Liberty to an OpenShift cluster by using OpenShift Local. You also learned how to use oc to deploy your microservices on a Kubernetes cluster.