diff --git a/docs/advanced-usage/before-callbacks.md b/docs/advanced-usage/before-callbacks.md
index eed278f..36e9153 100644
--- a/docs/advanced-usage/before-callbacks.md
+++ b/docs/advanced-usage/before-callbacks.md
@@ -21,3 +21,7 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer()
These callbacks are not middlewares, so you cannot interact with the consumed message.
You can add as many callbacks as you need, so you can divide different tasks into
different callbacks.
+
+```+parse
+
+```
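+
+For instance, two small callbacks can each handle one concern (a minimal sketch; the `beforeConsuming` method name is assumed here, so adapt it to the registration method shown in the example above):
+
+```php
+$consumer = \Junges\Kafka\Facades\Kafka::consumer()
+    ->beforeConsuming(fn () => logger()->info('Warming up caches')) // first task
+    ->beforeConsuming(fn () => logger()->info('Checking connections')) // second task
+    ->build();
+```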
\ No newline at end of file
diff --git a/docs/advanced-usage/custom-committers.md b/docs/advanced-usage/custom-committers.md
index 960f014..6f3fa60 100644
--- a/docs/advanced-usage/custom-committers.md
+++ b/docs/advanced-usage/custom-committers.md
@@ -5,6 +5,10 @@ weight: 4
By default, the committers are provided by the `DefaultCommitterFactory`.
+```+parse
+
+```
+
To set a custom committer on your consumer, add the committer via a factory that implements the `CommitterFactory` interface:
```php
diff --git a/docs/advanced-usage/custom-loggers.md b/docs/advanced-usage/custom-loggers.md
index 694df35..d460d02 100644
--- a/docs/advanced-usage/custom-loggers.md
+++ b/docs/advanced-usage/custom-loggers.md
@@ -5,6 +5,10 @@ weight: 7
Sometimes you need more control over your logging setup. From `v1.10.1` of this package, you can define your own `Logger` implementation. This means that you have the flexibility to log to different types of storage, such as a file or a cloud-based logging service.
+```+parse
+
+```
+
This can be useful for organizations that need to comply with data privacy regulations, such as the General Data Protection Regulation (GDPR). For example, if an exception occurs and gets logged, it might contain sensitive information such as personally identifiable information (PII). By implementing a custom logger, you can configure it to automatically redact this information before it gets written to the log.
A `Logger` is any class that implements the `\Junges\Kafka\Contracts\Logger` interface, and it only requires that you define an `error` method.
diff --git a/docs/advanced-usage/graceful-shutdown.md b/docs/advanced-usage/graceful-shutdown.md
index b5f6b81..9374b98 100644
--- a/docs/advanced-usage/graceful-shutdown.md
+++ b/docs/advanced-usage/graceful-shutdown.md
@@ -7,6 +7,10 @@ Stopping consumers is very useful if you want to ensure you don't kill a process
Consumers automatically listen to the `SIGTERM`, `SIGINT` and `SIGQUIT` signals, which means you can easily stop your consumers using those signals.
+```+parse
+
+```
+
### Running callbacks when the consumer stops
If your app requires that you run some sort of processing when the consumers stop processing messages, you can use the `onStopConsuming` method, available on the `\Junges\Kafka\Contracts\CanConsumeMessages` interface. This method accepts a `Closure` that will run once your consumer stops consuming.
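+
+A minimal sketch of this hook (the handler body is illustrative only):
+
+```php
+$consumer = \Junges\Kafka\Facades\Kafka::consumer(['topic'])
+    ->withHandler(function ($message) {
+        // Handle the consumed message.
+    })
+    ->onStopConsuming(function () {
+        // Flush buffers and release resources before the process exits.
+    })
+    ->build();
+
+$consumer->consume();
+```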
diff --git a/docs/advanced-usage/middlewares.md b/docs/advanced-usage/middlewares.md
index 49c370a..9308761 100644
--- a/docs/advanced-usage/middlewares.md
+++ b/docs/advanced-usage/middlewares.md
@@ -13,4 +13,8 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer()
});
```
-You can add as many middlewares as you need, so you can divide different tasks into different middlewares.
\ No newline at end of file
+You can add as many middlewares as you need, so you can divide different tasks into different middlewares.
+
+```+parse
+
+```
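+
+For instance, two middlewares can each handle one concern (a minimal sketch, assuming the `withMiddleware` registration used in the example above):
+
+```php
+$consumer = \Junges\Kafka\Facades\Kafka::consumer()
+    ->withMiddleware(function ($message, callable $next) {
+        logger()->info('Message received'); // first concern: logging
+        $next($message);
+    })
+    ->withMiddleware(function ($message, callable $next) {
+        // Second concern: validation, enrichment, etc.
+        $next($message);
+    })
+    ->build();
+```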
\ No newline at end of file
diff --git a/docs/advanced-usage/replacing-default-serializer.md b/docs/advanced-usage/replacing-default-serializer.md
index ad11d1c..ffb52dd 100644
--- a/docs/advanced-usage/replacing-default-serializer.md
+++ b/docs/advanced-usage/replacing-default-serializer.md
@@ -5,6 +5,10 @@ weight: 1
The default Serializer is resolved using the `MessageSerializer` and `MessageDeserializer` contracts. Out of the box, the `Json` serializers are used.
+```+parse
+
+```
+
To set the default serializer you can bind the `MessageSerializer` and `MessageDeserializer` contracts to any class which implements these interfaces.
Open your `AppServiceProvider` class and add these lines to the `register` method:
diff --git a/docs/advanced-usage/sasl-authentication.md b/docs/advanced-usage/sasl-authentication.md
index 5cd21cc..8ec2a9d 100644
--- a/docs/advanced-usage/sasl-authentication.md
+++ b/docs/advanced-usage/sasl-authentication.md
@@ -3,6 +3,10 @@ title: SASL Authentication
weight: 3
---
+```+parse
+
+```
+
SASL allows your producers and your consumers to authenticate to your Kafka cluster, which verifies their identity.
It's also a secure way for your clients to establish their identity. To provide SASL configuration, you can use the `withSasl` method,
passing a `Junges\Kafka\Config\Sasl` instance as the argument:
diff --git a/docs/advanced-usage/sending-multiple-messages-with-the-same-producer.md b/docs/advanced-usage/sending-multiple-messages-with-the-same-producer.md
index 2cafc43..dd786ac 100644
--- a/docs/advanced-usage/sending-multiple-messages-with-the-same-producer.md
+++ b/docs/advanced-usage/sending-multiple-messages-with-the-same-producer.md
@@ -17,3 +17,8 @@ Sometimes you may want to send multiple messages without having to create the co
```
Now, you can call `\Junges\Kafka\Facades\Kafka::myProducer()`, which will always apply the configs you defined in your service provider.
+
+
+```+parse
+
+```
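+
+A short usage sketch (assuming the `myProducer` macro was registered as shown above; `withBodyKey` is taken from the message-payload docs):
+
+```php
+use Junges\Kafka\Facades\Kafka;
+
+Kafka::myProducer()
+    ->withBodyKey('event', 'user.created')
+    ->send();
+```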
\ No newline at end of file
diff --git a/docs/advanced-usage/setting-global-configuration.md b/docs/advanced-usage/setting-global-configuration.md
index 75d3909..5e6e3de 100644
--- a/docs/advanced-usage/setting-global-configuration.md
+++ b/docs/advanced-usage/setting-global-configuration.md
@@ -18,3 +18,8 @@ to achieve that. Here's an example:
```
Now, you can call `\Junges\Kafka\Facades\Kafka::myProducer()`, which will always apply the configs you defined in your service provider.
+
+
+```+parse
+
+```
\ No newline at end of file
diff --git a/docs/advanced-usage/stop-consumer-after-last-message.md b/docs/advanced-usage/stop-consumer-after-last-message.md
index 3b77327..b5c4262 100644
--- a/docs/advanced-usage/stop-consumer-after-last-message.md
+++ b/docs/advanced-usage/stop-consumer-after-last-message.md
@@ -18,4 +18,8 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer(['topic'])
->build();
$consumer->consume();
+```
+
+```+parse
+
```
\ No newline at end of file
diff --git a/docs/advanced-usage/stopping-a-consumer.md b/docs/advanced-usage/stopping-a-consumer.md
index 5e0a23a..5976f4b 100644
--- a/docs/advanced-usage/stopping-a-consumer.md
+++ b/docs/advanced-usage/stopping-a-consumer.md
@@ -22,4 +22,8 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer(['topic'])
$consumer->consume();
```
-The `onStopConsuming` callback will be executed before stopping your consumer.
\ No newline at end of file
+The `onStopConsuming` callback will be executed before stopping your consumer.
+
+```+parse
+
+```
\ No newline at end of file
diff --git a/docs/consuming-messages/assigning-partitions.md b/docs/consuming-messages/assigning-partitions.md
index 855c7c8..19a52ee 100644
--- a/docs/consuming-messages/assigning-partitions.md
+++ b/docs/consuming-messages/assigning-partitions.md
@@ -5,6 +5,10 @@ weight: 3
Kafka clients allow you to implement your own partition assignment strategies for consumers.
+```+parse
+
+```
+
If you have a topic with multiple consumers and want to assign a consumer to a specific topic partition, you can
use the `assignPartitions` method, available on the `ConsumerBuilder` instance:
diff --git a/docs/consuming-messages/class-structure.md b/docs/consuming-messages/class-structure.md
index 10d0129..193a839 100644
--- a/docs/consuming-messages/class-structure.md
+++ b/docs/consuming-messages/class-structure.md
@@ -37,6 +37,10 @@ class MyTopicConsumer extends Command
Now, to keep this consumer process running permanently in the background, you should use a process monitor such as [supervisor](http://supervisord.org/) to ensure that the consumer does not stop running.
+```+parse
+
+```
+
## Supervisor configuration
In production, you need a way to keep your consumer processes running. For this reason, you need to configure a process monitor that can detect when your consumer processes exit and automatically restart them. In addition, process monitors can allow you to specify how many consumer processes you would like to run concurrently. Supervisor is a process monitor commonly used in Linux environments and we will discuss how to configure it in the following documentation.
diff --git a/docs/consuming-messages/configuring-consumer-options.md b/docs/consuming-messages/configuring-consumer-options.md
index da8acb3..d6e01f5 100644
--- a/docs/consuming-messages/configuring-consumer-options.md
+++ b/docs/consuming-messages/configuring-consumer-options.md
@@ -3,7 +3,11 @@ title: Configuring consumer options
weight: 6
---
-The `ConsumerBuilder` offers you some few configuration options:
+The `ConsumerBuilder` offers you a few configuration options.
+
+```+parse
+
+```
### Configuring a dead letter queue
In kafka, a Dead Letter Queue (or DLQ) is a simple kafka topic in the kafka cluster which acts as the destination for messages that were not
diff --git a/docs/consuming-messages/consumer-groups.md b/docs/consuming-messages/consumer-groups.md
index 1dc662f..6dda1b4 100644
--- a/docs/consuming-messages/consumer-groups.md
+++ b/docs/consuming-messages/consumer-groups.md
@@ -5,6 +5,10 @@ weight: 4
Kafka consumers belonging to the same consumer group share a group id. The consumers in a group divide the topic partitions as fairly amongst themselves as possible by establishing that each partition is only consumed by a single consumer from the group.
+```+parse
+
+```
+
To attach your consumer to a consumer group, you can use the method `withConsumerGroupId` to specify the consumer group id:
```php
diff --git a/docs/consuming-messages/consuming-messages.md b/docs/consuming-messages/consuming-messages.md
index 4388ab2..25b6aee 100644
--- a/docs/consuming-messages/consuming-messages.md
+++ b/docs/consuming-messages/consuming-messages.md
@@ -15,4 +15,8 @@ After building the consumer, you must call the `consume` method to consume the m
```php
$consumer->consume();
+```
+
+```+parse
+
```
\ No newline at end of file
diff --git a/docs/consuming-messages/creating-consumer.md b/docs/consuming-messages/creating-consumer.md
index d6f8e5a..28d5baa 100644
--- a/docs/consuming-messages/creating-consumer.md
+++ b/docs/consuming-messages/creating-consumer.md
@@ -21,4 +21,8 @@ use Junges\Kafka\Facades\Kafka;
$consumer = Kafka::consumer(['topic-1', 'topic-2'], 'group-id', 'broker');
```
-This method returns a `Junges\Kafka\Consumers\ConsumerBuilder::class` instance, and you can use it to configure your consumer.
\ No newline at end of file
+This method returns a `Junges\Kafka\Consumers\ConsumerBuilder::class` instance, and you can use it to configure your consumer.
+
+```+parse
+
+```
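+
+A minimal end-to-end sketch (the builder methods are covered in the following sections):
+
+```php
+use Junges\Kafka\Facades\Kafka;
+
+$consumer = Kafka::consumer(['topic-1'], 'group-id')
+    ->withHandler(function ($message) {
+        // Handle the consumed message.
+    })
+    ->build();
+
+$consumer->consume();
+```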
\ No newline at end of file
diff --git a/docs/consuming-messages/custom-deserializers.md b/docs/consuming-messages/custom-deserializers.md
index f9a985a..4138cf0 100644
--- a/docs/consuming-messages/custom-deserializers.md
+++ b/docs/consuming-messages/custom-deserializers.md
@@ -6,6 +6,10 @@ weight: 7
To create a custom deserializer, you need to create a class that implements the `\Junges\Kafka\Contracts\MessageDeserializer` contract.
This interface forces you to declare the `deserialize` method.
+```+parse
+
+```
+
To set the deserializer you want to use, use the `usingDeserializer` method:
```php
diff --git a/docs/consuming-messages/handling-message-batch.md b/docs/consuming-messages/handling-message-batch.md
index ac8bbd3..6550931 100644
--- a/docs/consuming-messages/handling-message-batch.md
+++ b/docs/consuming-messages/handling-message-batch.md
@@ -7,6 +7,10 @@ If you want to handle multiple messages at once, you can build your consumer ena
The `enableBatching` method enables the batching feature, and you can use `withBatchSizeLimit` to set the maximum size of a batch.
The `withBatchReleaseInterval` method sets the interval after which the batch of messages will be released, even if the size limit has not been reached.
+```+parse
+
+```
+
The example below shows a batch being handled when the batch size reaches 1000, or every 1500 milliseconds, whichever comes first.
The batching feature can be helpful when you work with databases like ClickHouse, where you insert data in large batches.
diff --git a/docs/consuming-messages/message-handlers.md b/docs/consuming-messages/message-handlers.md
index 291635a..71c7a2c 100644
--- a/docs/consuming-messages/message-handlers.md
+++ b/docs/consuming-messages/message-handlers.md
@@ -37,3 +37,8 @@ The `ConsumerMessage` contract gives you some handy methods to get the message p
- `getBody()`: Returns the body of the message
- `getOffset()`: Returns the offset where the message was published
+
+```+parse
+
+```
+
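+A minimal class-based handler sketch using the `ConsumerMessage` accessors listed above:
+
+```php
+use Junges\Kafka\Contracts\ConsumerMessage;
+
+class MyMessageHandler
+{
+    public function __invoke(ConsumerMessage $message): void
+    {
+        $body = $message->getBody();     // the message payload
+        $offset = $message->getOffset(); // the offset where the message was published
+
+        // Process the payload...
+    }
+}
+```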
diff --git a/docs/consuming-messages/queueable-handlers.md b/docs/consuming-messages/queueable-handlers.md
index 213b649..5de1487 100644
--- a/docs/consuming-messages/queueable-handlers.md
+++ b/docs/consuming-messages/queueable-handlers.md
@@ -5,6 +5,10 @@ weight: 11
Queueable handlers allow you to handle your kafka messages in a queue. This will put a job into the Laravel queue system for each message received by your Kafka consumer.
+```+parse
+
+```
+
This only requires you to implement the `Illuminate\Contracts\Queue\ShouldQueue` interface in your Handler.
This is what a queueable handler looks like:
diff --git a/docs/consuming-messages/subscribing-to-kafka-topics.md b/docs/consuming-messages/subscribing-to-kafka-topics.md
index 4ab1407..7e7ccbc 100644
--- a/docs/consuming-messages/subscribing-to-kafka-topics.md
+++ b/docs/consuming-messages/subscribing-to-kafka-topics.md
@@ -3,6 +3,10 @@ title: Subscribing to kafka topics
weight: 2
---
+```+parse
+
+```
+
With a consumer created, you can subscribe to a kafka topic using the `subscribe` method:
```php
diff --git a/docs/consuming-messages/using-regex-to-subscribe-to-kafka-topics.md b/docs/consuming-messages/using-regex-to-subscribe-to-kafka-topics.md
index 24a7e08..83d6d60 100644
--- a/docs/consuming-messages/using-regex-to-subscribe-to-kafka-topics.md
+++ b/docs/consuming-messages/using-regex-to-subscribe-to-kafka-topics.md
@@ -4,6 +4,10 @@ weight: 2
---
Kafka allows you to subscribe to topics using regex, and regex pattern matching is automatically performed for topics prefixed with `^` (e.g. `^myPfx[0-9]_.*`).
+
+```+parse
+
+```
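+
+A one-line subscription sketch (the leading `^` is what triggers regex matching):
+
+```php
+$consumer = \Junges\Kafka\Facades\Kafka::consumer()
+    ->subscribe('^myPfx[0-9]_.*')
+    ->build();
+```
+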
The consumer will see the new topics on its next periodic metadata refresh, which is controlled by the `topic.metadata.refresh.interval.ms`
diff --git a/docs/installation-and-setup.md b/docs/installation-and-setup.md
index d5100ee..3c44660 100644
--- a/docs/installation-and-setup.md
+++ b/docs/installation-and-setup.md
@@ -15,6 +15,10 @@ You need to publish the configuration file using
php artisan vendor:publish --tag=laravel-kafka-config
```
+```+parse
+
+```
+
This is the default content of the configuration file:
```php
diff --git a/docs/producing-messages/configuring-message-payload.md b/docs/producing-messages/configuring-message-payload.md
index 5b89bf7..59665e6 100644
--- a/docs/producing-messages/configuring-message-payload.md
+++ b/docs/producing-messages/configuring-message-payload.md
@@ -5,6 +5,10 @@ weight: 3
In kafka, you can configure your payload with a message, message headers and a message key. All these configurations are available within the `ProducerBuilder` class.
+```+parse
+
+```
+
### Configuring message headers
To configure the message headers, use the `withHeaders` method:
diff --git a/docs/producing-messages/configuring-producers.md b/docs/producing-messages/configuring-producers.md
index 3f27288..de40803 100644
--- a/docs/producing-messages/configuring-producers.md
+++ b/docs/producing-messages/configuring-producers.md
@@ -5,6 +5,10 @@ weight: 2
The producer builder, returned by the `publish` call, gives you a series of methods which you can use to configure your kafka producer options.
+```+parse
+
+```
+
### Defining configuration options
The `withConfigOption` method sets a `\RdKafka\Conf::class` option. You can check all available options [here][rdkafka_config].
diff --git a/docs/producing-messages/custom-serializers.md b/docs/producing-messages/custom-serializers.md
index fda61d7..2264a77 100644
--- a/docs/producing-messages/custom-serializers.md
+++ b/docs/producing-messages/custom-serializers.md
@@ -5,6 +5,10 @@ weight: 4
Serialization is the process of converting messages to bytes. Deserialization is the inverse process - converting a stream of bytes into an object. In a nutshell, it transforms the content into readable and interpretable information.
+```+parse
+
+```
+
Basically, in order to prepare the message for transmission from the producer we use serializers. This package supports three serializers out of the box:
- NullSerializer / NullDeserializer
diff --git a/docs/producing-messages/producing-message-batch-to-kafka.md b/docs/producing-messages/producing-message-batch-to-kafka.md
index 84de8b2..fdfe0e5 100644
--- a/docs/producing-messages/producing-message-batch-to-kafka.md
+++ b/docs/producing-messages/producing-message-batch-to-kafka.md
@@ -19,6 +19,10 @@ Then create as many messages as you want and push them to the `MesageBatch` inst
Finally, create your producer and call the `sendBatch`, passing the `MessageBatch` instance as a parameter.
This is helpful when you persist messages in storage before publishing (e.g. TransactionalOutbox Pattern).
+```+parse
+
+```
+
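+A hedged sketch (the class namespaces and the `Message` constructor arguments are assumptions; check them against your version of the package):
+
+```php
+use Junges\Kafka\Facades\Kafka;
+use Junges\Kafka\Message\Message;
+use Junges\Kafka\Producers\MessageBatch;
+
+$batch = new MessageBatch();
+$batch->push(new Message(body: ['event' => 'order.created']));
+$batch->push(new Message(body: ['event' => 'order.paid']));
+
+// One producer instance flushes the whole batch at once.
+Kafka::publish()->onTopic('orders')->sendBatch($batch);
+```
+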
By using a message batch, you can send multiple messages using the same producer instance, which is much faster than the default `send` method, as `send` flushes the producer after each produced message.
Messages are queued for asynchronous sending, and there is no guarantee that they will be sent immediately. The `sendBatch` method is recommended for systems with high throughput.
diff --git a/docs/producing-messages/producing-messages.md b/docs/producing-messages/producing-messages.md
index 0a1096c..4ee1c55 100644
--- a/docs/producing-messages/producing-messages.md
+++ b/docs/producing-messages/producing-messages.md
@@ -23,7 +23,11 @@ Kafka::asyncPublish('broker')->onTopic('topic-name')
```
The main difference is that the Async Producer is a singleton and will only flush the producer when the application is shutting down, instead of after each send or batch send.
-This reduces the overhead when you want to send a lot of messages in your request handlers.
+This reduces the overhead when you want to send a lot of messages in your request handlers.
+
+```+parse
+
+```
When doing async publishing, the builder is stored in memory during the entire request. If you need to use a fresh producer, you may use the `fresh` method
available on the `Kafka` facade (added in v2.2.0). This method will return a fresh Kafka Manager, which you can use to produce messages with a newly created producer builder.
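+
+A short sketch of the `fresh` method described above (`withBodyKey` is assumed from the message-payload docs):
+
+```php
+use Junges\Kafka\Facades\Kafka;
+
+// A newly created producer builder, independent of the in-memory async producer:
+Kafka::fresh()
+    ->asyncPublish('broker')
+    ->onTopic('topic-name')
+    ->withBodyKey('foo', 'bar')
+    ->send();
+```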
diff --git a/docs/producing-messages/publishing-to-kafka.md b/docs/producing-messages/publishing-to-kafka.md
index 85801ff..6ac5345 100644
--- a/docs/producing-messages/publishing-to-kafka.md
+++ b/docs/producing-messages/publishing-to-kafka.md
@@ -19,4 +19,8 @@ $producer->send();
```
If you want to send multiple messages, consider using the batch producer. The default `send` method is recommended for low-throughput systems only, as it
-flushes the producer after every message that is sent.
\ No newline at end of file
+flushes the producer after every message that is sent.
+
+```+parse
+
+```
\ No newline at end of file
diff --git a/docs/requirements.md b/docs/requirements.md
index 6688111..7395287 100644
--- a/docs/requirements.md
+++ b/docs/requirements.md
@@ -5,4 +5,8 @@ weight: 2
Laravel Kafka requires **PHP 8.1+** and **Laravel 9+**
-This package also requires the `rdkafka` php extension, which you can install by following [this documentation](https://github.com/edenhill/librdkafka#installation)
\ No newline at end of file
+This package also requires the `rdkafka` php extension, which you can install by following [this documentation](https://github.com/edenhill/librdkafka#installation)
+
+```+parse
+
+```
\ No newline at end of file
diff --git a/docs/testing/assert-nothing-published.md b/docs/testing/assert-nothing-published.md
index 8eb8f98..316037b 100644
--- a/docs/testing/assert-nothing-published.md
+++ b/docs/testing/assert-nothing-published.md
@@ -29,3 +29,7 @@ class MyTest extends TestCase
}
}
```
+
+```+parse
+
+```
diff --git a/docs/testing/assert-published-on-times.md b/docs/testing/assert-published-on-times.md
index 6d570ea..931e422 100644
--- a/docs/testing/assert-published-on-times.md
+++ b/docs/testing/assert-published-on-times.md
@@ -28,4 +28,8 @@ class MyTest extends TestCase
Kafka::assertPublishedOnTimes('some-kafka-topic', 2);
}
}
+```
+
+```+parse
+
```
\ No newline at end of file
diff --git a/docs/testing/assert-published-on.md b/docs/testing/assert-published-on.md
index c59c0a5..5574dd1 100644
--- a/docs/testing/assert-published-on.md
+++ b/docs/testing/assert-published-on.md
@@ -3,6 +3,10 @@ title: Assert published On
weight: 3
---
+```+parse
+
+```
+
If you want to assert that a message was published in a specific kafka topic, you can use the `assertPublishedOn` method:
```php
diff --git a/docs/testing/assert-published-times.md b/docs/testing/assert-published-times.md
index 205e01b..f4ce7fa 100644
--- a/docs/testing/assert-published-times.md
+++ b/docs/testing/assert-published-times.md
@@ -29,4 +29,8 @@ class MyTest extends TestCase
Kafka::assertPublishedTimes(2);
}
}
+```
+
+```+parse
+
```
\ No newline at end of file
diff --git a/docs/testing/assert-published.md b/docs/testing/assert-published.md
index fea64c7..ae7ca8a 100644
--- a/docs/testing/assert-published.md
+++ b/docs/testing/assert-published.md
@@ -48,4 +48,8 @@ class MyTest extends TestCase
Kafka::assertPublished();
}
}
+```
+
+```+parse
+
```
\ No newline at end of file
diff --git a/docs/testing/fake.md b/docs/testing/fake.md
index 93931da..ea53866 100644
--- a/docs/testing/fake.md
+++ b/docs/testing/fake.md
@@ -7,5 +7,9 @@ When testing your application, you may wish to "mock" certain aspects of the app
This package provides convenient helpers for mocking the kafka producer out of the box. These helpers primarily provide a convenience layer over Mockery
so you don't have to manually make complicated Mockery method calls.
+```+parse
+
+```
+
The Kafka facade also provides methods to perform assertions over published messages, such as `assertPublished`, `assertPublishedOn` and `assertNothingPublished`.
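+
+A minimal test sketch combining `Kafka::fake()` with one of these assertions:
+
+```php
+use Junges\Kafka\Facades\Kafka;
+use Tests\TestCase;
+
+class MyTest extends TestCase
+{
+    public function test_a_message_is_published(): void
+    {
+        Kafka::fake();
+
+        // Run the code under test, which should publish to Kafka...
+
+        Kafka::assertPublished();
+    }
+}
+```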
diff --git a/docs/testing/mocking-your-kafka-consumer.md b/docs/testing/mocking-your-kafka-consumer.md
index 341939a..a4b76cf 100644
--- a/docs/testing/mocking-your-kafka-consumer.md
+++ b/docs/testing/mocking-your-kafka-consumer.md
@@ -6,6 +6,11 @@ weight: 7
If you want to test that your consumers are working correctly, you can mock and execute the consumer to
ensure that everything works as expected.
+```+parse
+
+```
+
+
You just need to tell kafka which messages the consumer should receive and then start your consumer. This package will
run all the specified messages through the consumer and stop after the last message, so you can perform whatever
assertions you want to.