The following examples show how to use the connectors in various ways:
- ping-pong: A very simple example that showcases the interaction between Zeebe and Kafka using Kafka Connect and the Zeebe source and sink connectors.
- microservices-orchestration: This example showcases how Zeebe can orchestrate a payment microservice from within an order fulfillment microservice when Kafka is used as the transport.
All examples require you to build the project and run Kafka Connect via Docker. While Docker is not the only way to run the examples, it provides the quickest getting-started experience and is therefore the only option described here. Refer to Kafka Connect Installation for more options on running Kafka Connect.
The examples use Camunda Platform 8 - SaaS for a managed Zeebe instance. For Kafka, you can use Confluent Cloud for a managed installation (recommended), or start Kafka via Docker as described below.
You need the following tools on your system:
- docker-compose to run Kafka
- Java and Maven to build the connector
- Camunda Modeler to visually inspect the process models
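As a quick sanity check, you can verify that the command-line tools are available:

```sh
docker-compose --version   # needed to run Kafka / Kafka Connect locally
java -version              # needed to build and run the connector
mvn --version              # needed to build the uber JAR
```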
To prepare a Zeebe cluster in Camunda Platform 8 - SaaS:
- Log in to https://camunda.io/
- Create a new Zeebe cluster
- When the new cluster appears in the console, create a new set of client credentials to be used in the connector properties.
To prepare a Kafka cluster in Confluent Cloud:
- Log in to https://login.confluent.io/login
- Create a new Kafka cluster
- When the new cluster appears in the console, create a new set of client credentials
- Enter these credentials in docker/docker-compose-confluent-cloud.yml; they appear three times, in entries that look like:
CONNECT_SASL_JAAS_CONFIG: "org.apache.kafka.common.security.plain.PlainLoginModule required \
username=\"7XLMBPRBYSG4PTMS\" password=\"hmS0wgnCc2gzQcPVOtcH78kIhJjbR4qzuilzLbmBgQeDJwR/YURUeZl3ocgZrgLS\";"
To build the connector, simply run the following from the root project directory:
mvn clean install -DskipTests
The resulting artifact is an uber JAR, target/kafka-connect-zeebe-*-uber.jar, where the asterisk is replaced by the current project version. For example, for version 1.0.0-SNAPSHOT, the artifact is located at target/kafka-connect-zeebe-1.0.0-SNAPSHOT-uber.jar.
Copy this JAR to docker/connectors/, e.g. docker/connectors/kafka-connect-zeebe-1.0.0-SNAPSHOT-uber.jar.
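For example, assuming version 1.0.0-SNAPSHOT and that you run the command from the project root, the copy step could look like this:

```sh
# copy the built uber JAR into docker/connectors/ so the Docker Compose
# setup can pick it up as a Kafka Connect plugin
cp target/kafka-connect-zeebe-1.0.0-SNAPSHOT-uber.jar docker/connectors/
```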
Then start Kafka Connect against your Confluent Cloud cluster:
cd docker
docker-compose -f docker-compose-confluent-cloud.yml up
You need at least 6.5 GB of RAM dedicated to Docker, otherwise Kafka might not come up. If you experience problems, try increasing the memory first, as Docker is assigned relatively little memory by default.
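To see how much memory Docker currently has available, you can inspect the Docker daemon info, for example:

```sh
# look for the "Total Memory" line in the output
docker info | grep -i "total memory"
```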
If you don't use Confluent Cloud, you can start Kafka Connect alongside a whole Kafka cluster locally by running:
cd docker
docker-compose -f docker-compose-local-kafka.yml up
This will start:
- Kafka, on port 9092
- Zookeeper, on port 2181
- Kafka Schema Registry, on port 8081
- Kafka Connect, on port 8083: http://localhost:8083/
- Confluent Control Center, on port 9021: http://localhost:9021/. This will be our tool to monitor the Kafka cluster, create connectors, visualize Kafka topics, etc.
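Once the containers are up, you can verify via the Kafka Connect REST API that the connector plugins were loaded from docker/connectors/, for example:

```sh
# list the connector plugins known to this Kafka Connect worker;
# the Zeebe source and sink connectors should appear in the output
curl http://localhost:8083/connector-plugins
```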
Of course, you can customize the Docker Compose file to your needs; it is based on the examples provided by Confluent.
You can also run without Docker. For development purposes, or just to try things out, you can simply grab the uber JAR after the Maven build and place it in your Kafka Connect plugin path.
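As a sketch, assuming a local Kafka installation and an illustrative plugin directory of /opt/connectors, this could look like:

```sh
# copy the uber JAR into a directory on the Kafka Connect plugin path
# (the directory /opt/connectors is just an example)
mkdir -p /opt/connectors
cp target/kafka-connect-zeebe-*-uber.jar /opt/connectors/

# make sure plugin.path=/opt/connectors is set in your Connect worker
# configuration, then start Kafka Connect, e.g. in distributed mode:
# bin/connect-distributed.sh config/connect-distributed.properties
```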