Commit c4c522c

wip

mateusjunges committed Oct 17, 2024
1 parent a4d39de commit c4c522c
Showing 38 changed files with 163 additions and 7 deletions.
4 changes: 4 additions & 0 deletions docs/advanced-usage/before-callbacks.md
@@ -21,3 +21,7 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer()
These callbacks are not middlewares, so you cannot interact with the consumed message.
You can add as many callbacks as you need, so you can divide different tasks into
different callbacks.
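
A sketch of chaining several callbacks; the `beforeConsuming` method name below is an assumption — use the registration method shown in the example above:

```php
use Junges\Kafka\Facades\Kafka;

// Sketch only: beforeConsuming() is an assumed method name here;
// adjust it to match the example at the top of this page.
$consumer = Kafka::consumer()
    ->subscribe('topic')
    ->beforeConsuming(fn () => logger()->info('Starting to consume'))
    ->beforeConsuming(fn () => cache()->forget('consumer:last-error'))
    ->withHandler(function ($message) {
        // Handle the message.
    })
    ->build();
```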

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/advanced-usage/custom-committers.md
@@ -5,6 +5,10 @@ weight: 4

By default, the committers are provided by the `DefaultCommitterFactory`.

```+parse
<x-sponsors.request-sponsor/>
```

To set a custom committer on your consumer, add the committer via a factory that implements the `CommitterFactory` interface:

```php
// …
```
4 changes: 4 additions & 0 deletions docs/advanced-usage/custom-loggers.md
@@ -5,6 +5,10 @@ weight: 7

Sometimes you need more control over your logging setup. From `v1.10.1` of this package, you can define your own `Logger` implementation. This means you have the flexibility to log to different types of storage, such as files or a cloud-based logging service.

```+parse
<x-sponsors.request-sponsor/>
```

This can be useful for organizations that need to comply with data privacy regulations, such as the General Data Protection Regulation (GDPR). For example, if an exception occurs and gets logged, it might contain sensitive information such as personally identifiable information (PII). By implementing a custom logger, you can configure it to automatically redact this information before it gets written to the log.

A `Logger` is any class that implements the `\Junges\Kafka\Contracts\Logger` interface, and it only requires that you define an `error` method.
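
A minimal sketch, assuming an `error(\Throwable)` signature — match it to the contract shipped with your version:

```php
use Junges\Kafka\Contracts\Logger;

class RedactingLogger implements Logger
{
    // Assumed signature: the contract only demands an `error` method.
    public function error(\Throwable $throwable): void
    {
        // Redact sensitive data (e.g. PII) before it reaches the log.
        $message = preg_replace('/\d{3}-\d{2}-\d{4}/', '[redacted]', $throwable->getMessage());

        logger()->error($message);
    }
}
```
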
4 changes: 4 additions & 0 deletions docs/advanced-usage/graceful-shutdown.md
@@ -7,6 +7,10 @@ Stopping consumers is very useful if you want to ensure you don't kill a process

Consumers automatically listen to the `SIGTERM`, `SIGINT` and `SIGQUIT` signals, which means you can easily stop your consumers using those signals.

```+parse
<x-sponsors.request-sponsor/>
```

### Running callbacks when the consumer stops
If your app requires that you run some sort of processing when the consumers stop processing messages, you can use the `onStopConsuming` method, available on the `\Junges\Kafka\Contracts\CanConsumeMessages` interface. This method accepts a `Closure` that will run once your consumer stops consuming.

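A minimal sketch, assuming the method is called on the built consumer, as the interface suggests:

```php
use Junges\Kafka\Facades\Kafka;

$consumer = Kafka::consumer(['topic'])
    ->withHandler(function ($message) {
        // Handle the message.
    })
    ->build();

// Runs once the consumer stops consuming (e.g. after SIGTERM).
$consumer->onStopConsuming(function () {
    logger()->info('Kafka consumer stopped gracefully.');
});

$consumer->consume();
```
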
6 changes: 5 additions & 1 deletion docs/advanced-usage/middlewares.md
@@ -13,4 +13,8 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer()
});
```

You can add as many middlewares as you need, so you can divide different tasks into different middlewares.
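
For instance, a sketch with two middlewares — one for validation, one for timing (the middleware and handler bodies below are illustrative only):

```php
use Junges\Kafka\Facades\Kafka;

$consumer = Kafka::consumer()
    ->subscribe('topic')
    ->withMiddleware(function ($message, callable $next) {
        // First middleware: e.g. validate or decode the payload.
        $next($message);
    })
    ->withMiddleware(function ($message, callable $next) {
        // Second middleware: e.g. timing/metrics around the handler.
        $start = microtime(true);
        $next($message);
        logger()->debug('Handled in '.(microtime(true) - $start).'s');
    })
    ->withHandler(function ($message) {
        // Handle the message.
    })
    ->build();
```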

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/advanced-usage/replacing-default-serializer.md
@@ -5,6 +5,10 @@ weight: 1

The default Serializer is resolved using the `MessageSerializer` and `MessageDeserializer` contracts. Out of the box, the `Json` serializers are used.

```+parse
<x-sponsors.request-sponsor/>
```

To set the default serializer, you can bind the `MessageSerializer` and `MessageDeserializer` contracts to any class which implements these interfaces.

Open your `AppServiceProvider` class and add these lines to the `register` method:
4 changes: 4 additions & 0 deletions docs/advanced-usage/sasl-authentication.md
@@ -3,6 +3,10 @@ title: SASL Authentication
weight: 3
---

```+parse
<x-sponsors.request-sponsor/>
```

SASL allows your producers and your consumers to authenticate to your Kafka cluster, which verifies their identity.
It's also a secure way to enable your clients to establish an identity. To provide SASL configuration, you can use the `withSasl` method,
passing a `Junges\Kafka\Config\Sasl` instance as the argument:
@@ -17,3 +17,8 @@ Sometimes you may want to send multiple messages without having to create the co
```

Now, you can call `\Junges\Kafka\Facades\Kafka::myProducer()`, which will always apply the configs you defined in your service provider.


```+parse
<x-sponsors.request-sponsor/>
```
5 changes: 5 additions & 0 deletions docs/advanced-usage/setting-global-configuration.md
@@ -18,3 +18,8 @@ to achieve that. Here's an example:
```

Now, you can call `\Junges\Kafka\Facades\Kafka::myProducer()`, which will always apply the configs you defined in your service provider.
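
As a usage sketch (the payload below is hypothetical), the returned builder can be chained like any other producer:

```php
use Junges\Kafka\Facades\Kafka;

// The globally configured producer behaves like any other builder,
// so you can keep chaining per-message configuration before sending.
Kafka::myProducer()
    ->withBodyKey('user_id', 123) // hypothetical payload
    ->send();
```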


```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/advanced-usage/stop-consumer-after-last-message.md
@@ -18,4 +18,8 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer(['topic'])
->build();

$consumer->consume();
```

```+parse
<x-sponsors.request-sponsor/>
```
6 changes: 5 additions & 1 deletion docs/advanced-usage/stopping-a-consumer.md
@@ -22,4 +22,8 @@ $consumer = \Junges\Kafka\Facades\Kafka::consumer(['topic'])
$consumer->consume();
```

The `onStopConsuming` callback will be executed before stopping your consumer.

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/consuming-messages/assigning-partitions.md
@@ -5,6 +5,10 @@ weight: 3

Kafka clients allow you to implement your own partition assignment strategies for consumers.

```+parse
<x-sponsors.request-sponsor/>
```

If you have a topic with multiple consumers and want to assign a consumer to a specific topic partition, you can
use the `assignPartitions` method, available on the `ConsumerBuilder` instance:
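
A sketch (assuming `assignPartitions` accepts an array of `\RdKafka\TopicPartition` instances):

```php
use Junges\Kafka\Facades\Kafka;
use RdKafka\TopicPartition;

// Sketch: pin this consumer to partitions 0 and 1 of 'topic'.
$consumer = Kafka::consumer()
    ->assignPartitions([
        new TopicPartition('topic', 0),
        new TopicPartition('topic', 1),
    ])
    ->withHandler(function ($message) {
        // Handle the message.
    })
    ->build();
```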

4 changes: 4 additions & 0 deletions docs/consuming-messages/class-structure.md
@@ -37,6 +37,10 @@ class MyTopicConsumer extends Command

Now, to keep this consumer process running permanently in the background, you should use a process monitor such as [supervisor](http://supervisord.org/) to ensure that the consumer does not stop running.

```+parse
<x-sponsors.request-sponsor/>
```

## Supervisor configuration
In production, you need a way to keep your consumer processes running. For this reason, you need to configure a process monitor that can detect when your consumer processes exit and automatically restart them. In addition, process monitors can allow you to specify how many consumer processes you would like to run concurrently. Supervisor is a process monitor commonly used in Linux environments and we will discuss how to configure it in the following documentation.

6 changes: 5 additions & 1 deletion docs/consuming-messages/configuring-consumer-options.md
@@ -3,7 +3,11 @@ title: Configuring consumer options
weight: 6
---

The `ConsumerBuilder` offers you a few configuration options.

```+parse
<x-sponsors.request-sponsor/>
```

### Configuring a dead letter queue
In Kafka, a Dead Letter Queue (or DLQ) is a simple Kafka topic in the Kafka cluster which acts as the destination for messages that were not
4 changes: 4 additions & 0 deletions docs/consuming-messages/consumer-groups.md
@@ -5,6 +5,10 @@ weight: 4

Kafka consumers belonging to the same consumer group share a group id. The consumers in a group divide the topic partitions as fairly among themselves as possible by establishing that each partition is only consumed by a single consumer from the group.

```+parse
<x-sponsors.request-sponsor/>
```

To attach your consumer to a consumer group, you can use the method `withConsumerGroupId` to specify the consumer group id:

```php
// …
```
4 changes: 4 additions & 0 deletions docs/consuming-messages/consuming-messages.md
@@ -15,4 +15,8 @@ After building the consumer, you must call the `consume` method to consume the m

```php
$consumer->consume();
```

```+parse
<x-sponsors.request-sponsor/>
```
6 changes: 5 additions & 1 deletion docs/consuming-messages/creating-consumer.md
@@ -21,4 +21,8 @@ use Junges\Kafka\Facades\Kafka;
$consumer = Kafka::consumer(['topic-1', 'topic-2'], 'group-id', 'broker');
```

This method returns a `Junges\Kafka\Consumers\ConsumerBuilder::class` instance, and you can use it to configure your consumer.
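
A minimal end-to-end sketch using builder methods covered later in these docs:

```php
use Junges\Kafka\Contracts\ConsumerMessage;
use Junges\Kafka\Facades\Kafka;

$consumer = Kafka::consumer(['topic-1'], 'group-id', 'broker')
    ->withHandler(function (ConsumerMessage $message) {
        // Handle the message.
    })
    ->build();

$consumer->consume();
```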

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/consuming-messages/custom-deserializers.md
@@ -6,6 +6,10 @@ weight: 7
To create a custom deserializer, you need to create a class that implements the `\Junges\Kafka\Contracts\MessageDeserializer` contract.
This interface forces you to declare the `deserialize` method.

```+parse
<x-sponsors.request-sponsor/>
```
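
As a sketch, such a class could look like this (the exact `deserialize` signature is an assumption — match it to the contract in your installed version):

```php
use Junges\Kafka\Contracts\ConsumerMessage;
use Junges\Kafka\Contracts\MessageDeserializer;

// Sketch of a deserializer; the deserialize() signature is assumed.
class XmlDeserializer implements MessageDeserializer
{
    public function deserialize(ConsumerMessage $message): ConsumerMessage
    {
        // Turn the raw body into whatever structure your handlers expect.
        return $message;
    }
}
```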

To set the deserializer you want to use, use the `usingDeserializer` method:

```php
// …
```
4 changes: 4 additions & 0 deletions docs/consuming-messages/handling-message-batch.md
@@ -7,6 +7,10 @@ If you want to handle multiple messages at once, you can build your consumer ena
The `enableBatching` method enables the batching feature, and you can use `withBatchSizeLimit` to set the maximum size of a batch.
The `withBatchReleaseInterval` method sets the interval after which the batch of messages will be released once the timer exceeds it.

```+parse
<x-sponsors.request-sponsor/>
```

The example below shows a batch being handled when the batch size is greater than or equal to 1000, or every 1500 milliseconds.
The batching feature can be helpful when you work with databases like ClickHouse, where you insert data in large batches.
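
A sketch matching that description (assuming the batch is handed to the handler as a collection of messages):

```php
use Junges\Kafka\Facades\Kafka;

$consumer = Kafka::consumer(['topic'])
    ->enableBatching()
    ->withBatchSizeLimit(1000)          // release at 1000 messages...
    ->withBatchReleaseInterval(1500)    // ...or every 1500 ms, whichever comes first
    ->withHandler(function (\Illuminate\Support\Collection $messages) {
        // Assumed type: e.g. bulk-insert the whole batch into ClickHouse.
    })
    ->build();

$consumer->consume();
```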

5 changes: 5 additions & 0 deletions docs/consuming-messages/message-handlers.md
@@ -37,3 +37,8 @@ The `ConsumerMessage` contract gives you some handy methods to get the message p
- `getBody()`: Returns the body of the message
- `getOffset()`: Returns the offset where the message was published

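A short sketch using some of the getters listed above inside a handler:

```php
use Junges\Kafka\Contracts\ConsumerMessage;
use Junges\Kafka\Facades\Kafka;

$consumer = Kafka::consumer(['topic'])
    ->withHandler(function (ConsumerMessage $message) {
        logger()->info('Consumed message', [
            'body'   => $message->getBody(),
            'offset' => $message->getOffset(),
        ]);
    })
    ->build();
```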

```+parse
<x-sponsors.request-sponsor/>
```

4 changes: 4 additions & 0 deletions docs/consuming-messages/queueable-handlers.md
@@ -5,6 +5,10 @@ weight: 11

Queueable handlers allow you to handle your kafka messages in a queue. This will put a job into the Laravel queue system for each message received by your Kafka consumer.

```+parse
<x-sponsors.request-sponsor/>
```

This only requires you to implement the `Illuminate\Contracts\Queue\ShouldQueue` interface in your Handler.

This is what a queueable handler looks like:
4 changes: 4 additions & 0 deletions docs/consuming-messages/subscribing-to-kafka-topics.md
@@ -3,6 +3,10 @@ title: Subscribing to kafka topics
weight: 2
---

```+parse
<x-sponsors.request-sponsor/>
```

With a consumer created, you can subscribe to a kafka topic using the `subscribe` method:

```php
// …
```
@@ -4,6 +4,10 @@ weight: 2
---

Kafka allows you to subscribe to topics using regex, and regex pattern matching is automatically performed for topics prefixed with `^` (e.g. `^myPfx[0-9]_.*`).

```+parse
<x-sponsors.request-sponsor/>
```

The consumer will see the new topics on its next periodic metadata refresh, which is controlled by the `topic.metadata.refresh.interval.ms` option.
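
A short sketch of a regex subscription:

```php
use Junges\Kafka\Facades\Kafka;

// The '^' prefix makes this a regex subscription, so the consumer
// also picks up topics created later that match the pattern.
$consumer = Kafka::consumer()
    ->subscribe('^myPfx[0-9]_.*')
    ->withHandler(function ($message) {
        // Handle the message.
    })
    ->build();
```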

4 changes: 4 additions & 0 deletions docs/installation-and-setup.md
@@ -15,6 +15,10 @@ You need to publish the configuration file using
php artisan vendor:publish --tag=laravel-kafka-config
```

```+parse
<x-sponsors.request-sponsor/>
```

This is the default content of the configuration file:

```php
// …
```
4 changes: 4 additions & 0 deletions docs/producing-messages/configuring-message-payload.md
@@ -5,6 +5,10 @@ weight: 3

In Kafka, you can configure your payload with a message body, message headers and a message key. All these configurations are available within the `ProducerBuilder` class.

```+parse
<x-sponsors.request-sponsor/>
```
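
Putting the pieces together, a fully configured payload could look like this sketch (the topic, header and body values below are hypothetical):

```php
use Junges\Kafka\Facades\Kafka;

// Combined sketch of the three payload pieces covered below.
Kafka::publish('broker')
    ->onTopic('orders')
    ->withHeaders(['event-type' => 'order.created'])
    ->withBodyKey('order_id', 42)
    ->withKey('user-1')
    ->send();
```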

### Configuring message headers
To configure the message headers, use the `withHeaders` method:

4 changes: 4 additions & 0 deletions docs/producing-messages/configuring-producers.md
@@ -5,6 +5,10 @@ weight: 2

The producer builder, returned by the `publish` call, gives you a series of methods which you can use to configure your kafka producer options.

```+parse
<x-sponsors.request-sponsor/>
```

### Defining configuration options

The `withConfigOption` method sets a `\RdKafka\Conf::class` option. You can check all available options [here][rdkafka_config].
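
A sketch (assuming the plural `withConfigOptions` variant is also available):

```php
use Junges\Kafka\Facades\Kafka;

// Sketch: setting librdkafka options by name (see the rdkafka
// configuration reference linked above for the full list).
Kafka::publish('broker')
    ->onTopic('topic-name')
    ->withConfigOption('compression.codec', 'snappy')
    ->withConfigOptions([
        'message.timeout.ms' => 5000,
    ])
    ->withBodyKey('key', 'value')
    ->send();
```
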
4 changes: 4 additions & 0 deletions docs/producing-messages/custom-serializers.md
@@ -5,6 +5,10 @@ weight: 4

Serialization is the process of converting messages to bytes. Deserialization is the inverse process - converting a stream of bytes into an object. In a nutshell, it transforms the content into readable and interpretable information.

```+parse
<x-sponsors.request-sponsor/>
```

Basically, in order to prepare the message for transmission from the producer, we use serializers. This package supports three serializers out of the box:

- NullSerializer / NullDeserializer
4 changes: 4 additions & 0 deletions docs/producing-messages/producing-message-batch-to-kafka.md
@@ -19,6 +19,10 @@ Then create as many messages as you want and push them to the `MessageBatch` inst
Finally, create your producer and call the `sendBatch` method, passing the `MessageBatch` instance as a parameter.
This is helpful when you persist messages in storage before publishing (e.g. the Transactional Outbox pattern).

```+parse
<x-sponsors.request-sponsor/>
```

By using a message batch, you can send multiple messages using the same producer instance, which is way faster than the default `send` method, which flushes the producer after each produced message.
Messages are queued for asynchronous sending, and there is no guarantee that they will be sent immediately. The `sendBatch` method is recommended for systems with high throughput.
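
A sketch of the whole flow (assuming `Message` accepts a named `body` argument; check the class signatures in your installed version):

```php
use Junges\Kafka\Facades\Kafka;
use Junges\Kafka\Message\Message;
use Junges\Kafka\Producers\MessageBatch;

// Sketch: queue several messages and flush them together.
$batch = new MessageBatch();

foreach ([1, 2, 3] as $id) {
    $batch->push(new Message(body: ['order_id' => $id])); // assumed constructor
}

Kafka::publish('broker')
    ->onTopic('orders') // hypothetical topic
    ->sendBatch($batch);
```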

6 changes: 5 additions & 1 deletion docs/producing-messages/producing-messages.md
@@ -23,7 +23,11 @@ Kafka::asyncPublish('broker')->onTopic('topic-name')
```

The main difference is that the Async Producer is a singleton and will only flush the producer when the application is shutting down, instead of after each send or batch send.
This reduces the overhead when you want to send a lot of messages in your request handlers.

```+parse
<x-sponsors.request-sponsor/>
```

When doing async publishing, the builder is stored in memory during the entire request. If you need to use a fresh producer, you may use the `fresh` method
available on the `Kafka` facade (added in v2.2.0). This method will return a fresh Kafka Manager, which you can use to produce messages with a newly created producer builder.
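
A usage sketch:

```php
use Junges\Kafka\Facades\Kafka;

// Sketch (v2.2.0+): bypass the in-memory singleton builder by asking
// the fresh manager for a newly created producer.
Kafka::fresh()
    ->asyncPublish('broker')
    ->onTopic('topic-name')
    ->withBodyKey('foo', 'bar')
    ->send();
```
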
6 changes: 5 additions & 1 deletion docs/producing-messages/publishing-to-kafka.md
@@ -19,4 +19,8 @@ $producer->send();
```

If you want to send multiple messages, consider using the batch producer. The default `send` method is recommended for low-throughput systems only, as it
flushes the producer after every message that is sent.

```+parse
<x-sponsors.request-sponsor/>
```
6 changes: 5 additions & 1 deletion docs/requirements.md
@@ -5,4 +5,8 @@ weight: 2

Laravel Kafka requires **PHP 8.1+** and **Laravel 9+**.

This package also requires the `rdkafka` PHP extension, which you can install by following [this documentation](https://github.com/edenhill/librdkafka#installation).
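
For instance, once librdkafka itself is installed, the extension is commonly installed via PECL:

```
pecl install rdkafka
```

After installing, enable it by adding `extension=rdkafka` to your `php.ini`.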

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/testing/assert-nothing-published.md
@@ -29,3 +29,7 @@ class MyTest extends TestCase
}
}
```

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/testing/assert-published-on-times.md
@@ -28,4 +28,8 @@ class MyTest extends TestCase
Kafka::assertPublishedOnTimes('some-kafka-topic', 2);
}
}
```

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/testing/assert-published-on.md
@@ -3,6 +3,10 @@ title: Assert published On
weight: 3
---

```+parse
<x-sponsors.request-sponsor/>
```

If you want to assert that a message was published in a specific kafka topic, you can use the `assertPublishedOn` method:

```php
// …
```
4 changes: 4 additions & 0 deletions docs/testing/assert-published-times.md
@@ -29,4 +29,8 @@ class MyTest extends TestCase
Kafka::assertPublishedTimes(2);
}
}
```

```+parse
<x-sponsors.request-sponsor/>
```
4 changes: 4 additions & 0 deletions docs/testing/assert-published.md
@@ -48,4 +48,8 @@ class MyTest extends TestCase
Kafka::assertPublished();
}
}
```

```+parse
<x-sponsors.request-sponsor/>
```