Camel K Kafka Basic Quickstart

This example demonstrates how to get started with Camel K and Apache Kafka. We will show how to quickly set up a Kafka topic via Red Hat OpenShift Streams for Apache Kafka and use it in a simple producer/consumer Integration.

The quickstart is based on the Apache Camel K upstream Kafka example.

Before you begin

Make sure you check out this repository from git and open it with VSCode.

Instructions are based on VSCode Didact, so make sure it's installed from the VSCode extensions marketplace.

From the VSCode UI, right-click on the readme.didact.md file and select "Didact: Start Didact tutorial from File". A new Didact tab will be opened in VS Code.

Make sure you've opened this readme file with Didact before jumping to the next section.

Preparing the Kafka instance

The quickstart can run with any Kafka instance. However, we want to focus on how to connect Red Hat OpenShift Streams for Apache Kafka to Camel K. In order to use it, you must log in to the Red Hat Cloud (Beta). Please consider the limitations of this offering at the time of writing this tutorial.

Create an instance and a topic

To set up an instance and your first topic, you can follow the UI quickstart or use the rhoas CLI.

Note: we assume in this tutorial that you are creating a topic named test. You can use any other name, just make sure the chosen name is reflected in the application.properties configuration file.
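
For example, a minimal sketch of creating the instance and topic with the rhoas CLI might look like the following (the instance name is only an example, and subcommands and flags may vary between rhoas versions, so check rhoas kafka --help):

```shell
# Sketch only: authenticate, provision an instance and create the topic used in this quickstart
rhoas login
rhoas kafka create --name my-kafka-instance
rhoas kafka topic create --name test
```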

Prepare Kafka credentials

Once you've set up your first topic, you must create a set of credentials (a service account) that you will use during this quickstart.
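
If you prefer the rhoas CLI over the UI, creating a service account might look roughly like this (the --file-format flag and its values are an assumption; check rhoas service-account create --help for your version):

```shell
# Sketch only: creates a service account and writes its client ID and secret to a local file
rhoas service-account create --file-format properties
```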

At this stage you should have the following credentials: a Kafka bootstrap URL, a service account ID and a service account secret. You may also want to take note of the Token endpoint URL if you choose the "SASL/OAUTHBEARER" authentication method instead of "SASL/Plain".

Topic Authorisation

To access RHOAS Kafka topics, you need to authorise the service account to access the topics it will use, for example:

rhoas kafka acl grant-access --producer --consumer --service-account $CLIENT_ID --topic $TOPIC_NAME --group all

For more details, please consult the RHOAS Documentation.

Preparing the cluster

This example can be run on any OpenShift 4.3+ cluster or a local development instance (such as CRC). Ensure that you have a cluster available and log in to it using the OpenShift oc command line tool.
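
For example, logging in with a token copied from the web console's "Copy login command" page might look like this (the server URL and token are placeholders):

```shell
# Placeholders: substitute your own API server URL and token
oc login --token=<your-token> --server=https://api.your-cluster.example.com:6443
```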

You need to create a new project named camel-k-kafka for running this example. This can be done directly from the OpenShift web console or by executing the command oc new-project camel-k-kafka on a terminal window.

oc new-project camel-k-kafka

(^ execute{.didact})

You need to install the Camel K operator in the camel-k-kafka project. To do so, go to the OpenShift 4.x web console, log in with a cluster admin account and use the OperatorHub menu item on the left to find and install "Red Hat Integration - Camel K". You will be given the option to install it globally on the cluster or in a specific namespace. If using a specific namespace, make sure you select the camel-k-kafka project from the dropdown list. This completes the installation of the Camel K operator (it may take a couple of minutes).

Once the installation has completed, verify that the Camel K operator is installed:

oc get csv

(^ execute{.didact})

When Camel K is installed, you should find an entry related to red-hat-camel-k-operator in phase Succeeded.
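
For example, you can narrow the output down to the Camel K entry and check its phase (the exact CSV name varies with the operator version):

```shell
oc get csv -n camel-k-kafka | grep camel-k
```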

When the operator is installed, you can access the "Command Line Tools" page from the OpenShift Help menu ("?") at the top of the web console and download the "kamel" CLI, which is required for running this example. The CLI must be installed in your system path.
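
On Linux or macOS, putting the downloaded binary on your path might look like the following (file name and destination directory are examples):

```shell
# Example only: make the downloaded binary executable and move it onto the PATH
chmod +x kamel
sudo mv kamel /usr/local/bin/
kamel version   # verify the CLI is reachable
```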

Refer to the "Red Hat Integration - Camel K" documentation for a more detailed explanation of the installation steps for the operator and the CLI.

You can use the following section to check if your environment is configured properly.

Requirements

Validate all Requirements at Once!

OpenShift CLI ("oc")

The OpenShift CLI tool ("oc") will be used to interact with the OpenShift cluster.

Check if the OpenShift CLI ("oc") is installed{.didact}

Status: unknown{#oc-requirements-status}

Connection to an OpenShift cluster

In order to execute this demo, you will need an OpenShift cluster with the correct access level (the ability to create projects and install operators), as well as the Apache Camel K CLI installed on your local system.

Check if you're connected to an OpenShift cluster{.didact}

Status: unknown{#cluster-requirements-status}

Apache Camel K CLI ("kamel")

Apart from the support provided by the VS Code extension, you also need the Apache Camel K CLI ("kamel") in order to access all Camel K features.

Check if the Apache Camel K CLI ("kamel") is installed{.didact}

Status: unknown{#kamel-requirements-status}

Optional Requirements

The following requirements are optional. They don't prevent the execution of the demo, but may make it easier to follow.

VS Code Extension Pack for Apache Camel

The VS Code Extension Pack for Apache Camel by Red Hat provides a collection of useful tools for Apache Camel K developers, such as code completion and integrated lifecycle management. It is recommended for this tutorial, but not required.

You can install it from the VS Code Extensions marketplace.

Check if the VS Code Extension Pack for Apache Camel by Red Hat is installed{.didact}

Status: unknown{#extension-requirement-status}

1. Preparing the project

We'll connect to the camel-k-kafka project and check the installation status.

To change project, open a terminal tab and type the following command:

oc project camel-k-kafka

(^ execute{.didact})

2. Secret preparation

Two different authentication methods are available in the next sections: SASL/Plain and SASL/OAUTHBEARER.

SASL/Plain authentication method

Take the credentials you collected earlier (Kafka bootstrap URL, service account ID and service account secret) and fill them into the application.properties file. Then create a secret containing the sensitive properties of application.properties; it will be passed later to the running Integrations:

oc create secret generic kafka-props --from-file application.properties

(^ execute{.didact})
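
For reference, a sketch of what the filled-in application.properties might look like is shown below. The property keys follow the conventions of the Camel Kafka component used by the upstream example; the application.properties file shipped with this quickstart is authoritative, so treat this as an illustration only:

```properties
# Illustration only: replace the placeholders with your own credentials
camel.component.kafka.brokers = <your-kafka-bootstrap-url>
camel.component.kafka.security-protocol = SASL_SSL
camel.component.kafka.sasl-mechanism = PLAIN
camel.component.kafka.sasl-jaas-config = org.apache.kafka.common.security.plain.PlainLoginModule required \
  username='<your-service-account-id>' \
  password='<your-service-account-secret>';
# Topic used by the producer and consumer integrations (see the note about the topic name above)
producer.topic = test
consumer.topic = test
```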

SASL/OAUTHBearer authentication method

Take the credentials you collected earlier (Kafka bootstrap URL, service account ID, service account secret and Token endpoint URL) and fill them into the application-oauth.properties file. Then create a secret containing the sensitive properties of application-oauth.properties; it will be passed later to the running Integrations:

oc create secret generic kafka-props --from-file application-oauth.properties

(^ execute{.didact})
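
Similarly, a sketch of a filled-in application-oauth.properties is shown below. The OAuth login module options and callback handler class come from the Strimzi kafka-oauth-client library, and the exact key syntax for additional component properties may differ across Camel versions, so the file shipped with this quickstart remains the reference:

```properties
# Illustration only: replace the placeholders with your own credentials
camel.component.kafka.brokers = <your-kafka-bootstrap-url>
camel.component.kafka.security-protocol = SASL_SSL
camel.component.kafka.sasl-mechanism = OAUTHBEARER
camel.component.kafka.sasl-jaas-config = org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id='<your-service-account-id>' \
  oauth.client.secret='<your-service-account-secret>' \
  oauth.token.endpoint.uri="<your-token-endpoint-url>";
camel.component.kafka.additional-properties[sasl.login.callback.handler.class] = io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
producer.topic = test
consumer.topic = test
```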

3. Running a Kafka Producer integration

At this stage, run a producer integration. It will push a message to the topic every second:

kamel run --config secret:kafka-props SaslSSLKafkaProducer.java --dev

(^ execute{.didact})

The producer will create a new message, push it to the topic and log some information:

...
[2] 2021-05-06 15:08:53,231 INFO  [FromTimer2Kafka] (Camel (camel-1) thread #7 - KafkaProducer[test]) Message correctly sent to the topic!
[2] 2021-05-06 15:08:54,155 INFO  [FromTimer2Kafka] (Camel (camel-1) thread #9 - KafkaProducer[test]) Message correctly sent to the topic!
...
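
The log lines above come from a simple timer-to-Kafka route. As a rough sketch of what SaslSSLKafkaProducer.java contains (the file in this repository is the reference; details such as the modeline carrying the Maven dependencies may differ):

```java
// Rough sketch of a timer-to-Kafka Camel K integration; the SaslSSLKafkaProducer.java
// shipped in this repository is the reference.
import org.apache.camel.builder.RouteBuilder;

public class SaslSSLKafkaProducer extends RouteBuilder {
  @Override
  public void configure() throws Exception {
      from("timer:foo?period=1000")                                          // fire once per second
        .routeId("FromTimer2Kafka")
        .setBody(simple("Message #${exchangeProperty.CamelTimerCounter}"))
        .to("kafka:{{producer.topic}}")                                      // topic name comes from the properties file
        .log("Message correctly sent to the topic!");
  }
}
```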

Note: Both the SaslSSLKafkaProducer.java and SaslSSLKafkaConsumer.java files specify a runtime kafka-clients Maven dependency, which is needed in version 1.3.x of Camel K. Specifying it may not be necessary in future Camel K releases.

Note: The integration files also specify a runtime kafka-oauth-client Maven dependency provided by Strimzi. This is only needed if you use the SASL/OAUTHBEARER authentication method. You may use a different service or provide your own implementation.

4. Running a Kafka Consumer integration

Now, open another shell and run the consumer integration using the command:

kamel run --config secret:kafka-props SaslSSLKafkaConsumer.java --dev

(^ execute{.didact})

The consumer will start logging the events found in the topic:

[1] 2021-05-06 15:09:47,466 INFO  [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #133
[1] 2021-05-06 15:09:48,280 INFO  [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #134
[1] 2021-05-06 15:09:49,264 INFO  [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #135
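
Again as a rough sketch, the consumer side in SaslSSLKafkaConsumer.java boils down to a Kafka-to-log route (the file in this repository is the reference):

```java
// Rough sketch of a Kafka-to-log Camel K integration; the SaslSSLKafkaConsumer.java
// shipped in this repository is the reference.
import org.apache.camel.builder.RouteBuilder;

public class SaslSSLKafkaConsumer extends RouteBuilder {
  @Override
  public void configure() throws Exception {
      from("kafka:{{consumer.topic}}")   // topic name comes from the properties file
        .routeId("FromKafka2Log")
        .log("${body}");                 // log every record that arrives
  }
}
```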

Note: When you terminate a "dev mode" execution, the remote integration is deleted as well. This gives the experience of a local program execution, but the integration actually runs in the remote cluster. To keep the integration running independently of the terminal, run it without the "dev mode" (--dev) flag.
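
For example, the consumer could be left running in the cluster and inspected later (the integration name is typically derived from the file name, so adjust it if yours differs):

```shell
# Run without --dev so the integration keeps running after the terminal is closed
kamel run --config secret:kafka-props SaslSSLKafkaConsumer.java
# Follow the logs of the remote integration at any time
kamel logs sasl-ssl-kafka-consumer
```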

5. Uninstall

To cleanup everything, execute the following command:

oc delete project camel-k-kafka

(^ execute{.didact})