This project provides a simplified way to evaluate performance differences between Apache Kafka and RabbitMQ streams. The current version compares only the publishing throughput of RabbitMQ and Kafka using Spring Batch.
For years, RabbitMQ was not considered for very high throughput requirements. RabbitMQ streams (introduced in RabbitMQ 3.9) now allow RabbitMQ to compete in high-throughput use cases. The goal of this project is for developers and architects to explore whether RabbitMQ streams offer throughput comparable to Apache Kafka. Also see RabbitMQ vs Kafka: How to Choose an Event-Streaming Broker.
The following is an example report of Transactions Per Second (TPS) using the example Spring Batch application to publish 2 million records. The experiments were executed on a macOS laptop with 32 GB of memory, an SSD drive, and 10 CPU cores (Apple M1 Max), running RabbitMQ 3.12.2 and Kafka 2.13-3.5.1.
Note: totalCount is the total number of Spring Batch job executions.
- Java Version 17
- RabbitMQ version 3.11 and higher
- Apache Kafka version 3.5 and higher
- Postgres version 14 and higher (used for the Spring Batch job repository)
Use the Maven wrapper (./mvnw) to build the solution:
./mvnw package
Example Kafka Home directory
export KAFKA_HOME=/Users/devtools/integration/messaging/apacheKafka/kafka_2.13-3.5.1
Step | Activity | Examples/Script |
---|---|---|
1 | RabbitMQ - Setup Download/Install | brew install rabbitmq |
2 | RabbitMQ - Enable Stream Plugin | rabbitmq-plugins enable rabbitmq_stream |
3 | Kafka - Download Apache Kafka | See https://kafka.apache.org/quickstart |
4 | Kafka - Start Zookeeper | cd $KAFKA_HOME && bin/zookeeper-server-start.sh config/zookeeper.properties & |
5 | Kafka - Start Kafka Broker | cd $KAFKA_HOME && bin/kafka-server-start.sh config/server.properties & |
6 | Postgres - Download/Install Postgres | brew install postgresql@14 |
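After completing the setup steps above, a quick port probe confirms that all three services are listening. A minimal sketch, assuming the default ports (5552 for the RabbitMQ stream plugin, 9092 for Kafka, 5432 for Postgres):

```shell
# Probe the default ports used by the benchmark and report status for each service.
for entry in "RabbitMQ-stream:5552" "Kafka:9092" "Postgres:5432"; do
  name=${entry%%:*}
  port=${entry##*:}
  if nc -z localhost "$port" 2>/dev/null; then
    echo "$name (port $port): up"
  else
    echo "$name (port $port): down"
  fi
done
```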
Generate the input file with 2 million records
cd scripts/generate_batch_file
python generate_transaction_file.py
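Before publishing, it is worth a sanity check that the generator wrote the expected number of records. The filename transactions.csv below is an assumption; substitute whatever file generate_transaction_file.py actually writes:

```shell
# Count records in the generated file (FILE is an assumed name; adjust as needed).
FILE=transactions.csv
if [ -f "$FILE" ]; then
  wc -l < "$FILE"   # expect about 2000000 lines
else
  echo "$FILE not found - run generate_transaction_file.py first"
fi
```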
Publish 2 million records
Example (RabbitMQ stream)
java -Xms1g -Xmx1g -jar applications/rabbit-vs-kafka-batch/target/rabbit-vs-kafka-batch-0.0.1-SNAPSHOT.jar --spring.profiles.active=rabbit --spring.rabbitmq.stream.uri=rabbitmq-stream://localhost:5552 --spring.rabbitmq.stream.name=transactions --spring.rabbitmq.stream.username=guest --spring.rabbitmq.stream.password=guest --spring.datasource.url=jdbc:postgresql://localhost:5432/postgres --spring.datasource.username=postgres --spring.datasource.password=
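After the RabbitMQ run finishes, the stream depth can be checked with rabbitmqctl; streams are listed alongside queues, and the name transactions matches the --spring.rabbitmq.stream.name argument above. This sketch falls back to a message when the broker is not reachable:

```shell
# Show the transactions stream and its message count (needs a running broker).
OUT=$(rabbitmqctl list_queues name messages 2>/dev/null | grep transactions || echo "broker not reachable")
echo "$OUT"
```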
Example (Kafka)
java -Xms1g -Xmx1g -jar applications/rabbit-vs-kafka-batch/target/rabbit-vs-kafka-batch-0.0.1-SNAPSHOT.jar --spring.profiles.active=kafka --bootstrap.servers=localhost:9092 --kafka.producer.topic=transaction --spring.datasource.url=jdbc:postgresql://localhost:5432/postgres --spring.datasource.username=postgres --spring.datasource.password=
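Similarly for Kafka, kafka-topics.sh --describe confirms the topic exists and shows its partitions; the topic name transaction matches the --kafka.producer.topic argument above. A sketch that prints a hint if KAFKA_HOME is not set:

```shell
# Describe the benchmark topic, or hint that KAFKA_HOME needs to be set first.
if [ -x "$KAFKA_HOME/bin/kafka-topics.sh" ]; then
  "$KAFKA_HOME/bin/kafka-topics.sh" --bootstrap-server localhost:9092 --describe --topic transaction
  STATUS="described"
else
  echo "set KAFKA_HOME to your Kafka install directory first"
  STATUS="skipped"
fi
```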
You can use the rabbit-vs-kafka-report-app to view the results.
Example
java -jar applications/rabbit-vs-kafka-report-app/target/rabbit-vs-kafka-report-app-0.0.1-SNAPSHOT.jar --spring.datasource.url=jdbc:postgresql://localhost:5432/postgres --spring.datasource.username=postgres --spring.datasource.password=
Open Browser
open http://localhost:8080
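The report's TPS figures can also be derived straight from the Spring Batch metadata in Postgres. A minimal sketch: batch_job_execution is the standard Spring Batch metadata table, while the fixed 2000000 record count is an assumption tied to the generated input file. It prints a fallback message when Postgres is not reachable:

```shell
# Approximate TPS per job execution from Spring Batch metadata (2M records assumed).
SQL="SELECT job_execution_id, status,
            EXTRACT(EPOCH FROM (end_time - start_time)) AS elapsed_seconds,
            ROUND(2000000 / NULLIF(EXTRACT(EPOCH FROM (end_time - start_time)), 0)) AS approx_tps
     FROM batch_job_execution
     ORDER BY job_execution_id"
psql -U postgres -d postgres -c "$SQL" 2>/dev/null || echo "postgres not reachable"
```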
Cleanup
$KAFKA_HOME/bin/kafka-topics.sh --bootstrap-server=localhost:9092 --delete --topic transaction
rabbitmqctl --node rabbit delete_queue transactions
psql -U postgres -d postgres -c 'DROP SCHEMA evt_stream CASCADE'