Releases · fd4s/fs2-kafka
fs2-kafka v0.19.9
fs2-kafka v0.19.8
Additions
- Add initial debug logging using SLF4J. Thanks @backuitist! (#108, #113)
Changes
- Change to only pause/resume partitions when necessary. (#112)
- Change to only start polling after streaming has started. Thanks @Krever! (#110, #114)
- Change to revoke previous duplicate fetch and issue warning log. Thanks @backuitist! (#107)
- Fix race condition which could cause duplicate records. (#111)
Released on 2019-04-02.
fs2-kafka v0.19.7
Changes
- Fix to include state changes during poll when handling records. (#109)
Released on 2019-03-29.
fs2-kafka v0.19.6
Changes
- Fix a race condition which could result in duplicate records. Thanks @backuitist! (#105, #106)
Released on 2019-03-28.
fs2-kafka v0.19.5
Changes
- Fix `Acks#toString` and `AutoOffsetReset#toString`. (#103)
Miscellaneous
- There is now a Gitter room for the library.
Released on 2019-03-27.
fs2-kafka v0.19.4
Additions
- Add improved support for unkeyed records. Thanks @ranjanibrickx! (#96, #97)
- Add `Deserializer#option`, and `Deserializer.option` and `unit`.
- Add `HeaderDeserializer#option`, and `HeaderDeserializer.option` and `unit`.
- Add `Serializer#option`, and `Serializer.option`, `asNull`, `empty` and `unit`.
- Add `HeaderSerializer#option`, and `HeaderSerializer.option`, `asNull`, `empty` and `unit`.
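For illustration, a hedged sketch of how the new deserializer combinators might be used (the names are from the notes above; exact signatures may differ between versions):

```scala
import fs2.kafka.Deserializer

// Sketch only: `option` wraps an existing deserializer so that a null byte
// array (e.g. a Kafka tombstone) becomes None instead of an error, and
// `unit` discards the bytes entirely.
val optionalString: Deserializer[Option[String]] =
  Deserializer[String].option      // instance method from the notes

val alsoOptional: Deserializer[Option[String]] =
  Deserializer.option[String]      // companion variant, via an implicit instance

val ignored: Deserializer[Unit] =
  Deserializer.unit                // always deserializes to ()
```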
Released on 2019-03-01.
fs2-kafka v0.19.3
Additions
- Add functions for working with consumer offsets. Thanks @backuitist! (#92, #93)
  - Add `KafkaConsumer#assignment`.
  - Add `KafkaConsumer#position`.
  - Add `KafkaConsumer#seekToBeginning`.
  - Add `KafkaConsumer#seekToEnd`.
- Add `Attempt[A]` aliases for deserializers. (#95)
  - Add `Deserializer.Attempt[A] = Deserializer[Either[Throwable, A]]`.
  - Add `HeaderDeserializer.Attempt[A] = HeaderDeserializer[Either[Throwable, A]]`.
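A hedged sketch of the offset helpers named above (the return types and exact signatures are assumptions; consult the API docs for your version):

```scala
import cats.effect.IO
import fs2.kafka.KafkaConsumer

// Sketch: inspect the current assignment, then rewind those partitions.
def rewindAssigned(consumer: KafkaConsumer[IO, String, String]): IO[Unit] =
  for {
    assigned <- consumer.assignment      // currently assigned partitions
    _        <- consumer.seekToBeginning // seek them back to the start
  } yield ()

// The `Attempt` aliases surface deserialization failures as values: a
// Deserializer.Attempt[Int] is just a Deserializer[Either[Throwable, Int]].
```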
Released on 2019-02-27.
fs2-kafka v0.19.2
Additions
- Add `describeCluster` and `createTopics` to `KafkaAdminClient`. Thanks @danxmoran! (#88)
- Add `maxPrefetchBatches` to `ConsumerSettings`. (#83)
  - Controls prefetching behaviour before backpressure kicks in.
  - Use `withMaxPrefetchBatches` to change the default setting.
- Add several constructs for working with record headers. (#85)
  - Add `HeaderDeserializer` for deserialization of record header values.
  - Add `HeaderSerializer` for serializing values to use as header values.
  - Add `Header.serialize` for serializing a value and creating a `Header`.
  - Add `Header#headers` for creating a `Headers` with a single `Header`.
  - Add `Header#as` and `attemptAs` for deserializing header values.
  - Add `Headers#withKey` and alias `apply` for extracting a single `Header`.
  - Add `Headers#concat` for concatenating another `Headers` instance.
  - Add `Headers#asJava` for converting to Java Kafka-compatible headers.
  - Add `Headers.fromIterable` to create `Headers` from `Iterable[Header]`.
  - Add `Headers.fromSeq` to create `Headers` from `Seq[Header]`.
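A hedged sketch of the header constructs listed above (the key and value are placeholders, and `Header.serialize` is assumed to resolve an implicit `HeaderSerializer` for the value):

```scala
import fs2.kafka.{Header, Headers}

// Sketch: create a header from a value, wrap it in Headers, and read it back.
val header: Header   = Header.serialize("correlation-id", "abc-123")
val headers: Headers = header.headers    // a Headers with this single Header

val found  = headers("correlation-id")   // `apply`, alias for `withKey`
val asJava = headers.asJava              // Java Kafka-compatible headers
```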
- Add several constructs for working with record serialization. (#85)
  - Add a custom `Serializer` to make it easier to create and compose serializers.
  - Add a custom `Deserializer` to make it easier to create and compose deserializers.
  - Add `ProducerSettings.apply` for using implicit `Serializer`s for the key and value.
  - Add `ConsumerSettings.apply` for using implicit `Deserializer`s for the key and value.
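A hedged sketch of the new settings constructors: with implicit `Serializer` and `Deserializer` instances in scope, `apply` resolves them from the key and value types. The bootstrap servers and group id are placeholders, and the constructor parameters may differ in your version:

```scala
import fs2.kafka.{ConsumerSettings, ProducerSettings}

// Sketch: settings built from implicit (de)serializers for String keys/values.
val producerSettings: ProducerSettings[String, String] =
  ProducerSettings[String, String]
    .withBootstrapServers("localhost:9092")

val consumerSettings: ConsumerSettings[String, String] =
  ConsumerSettings[String, String]
    .withBootstrapServers("localhost:9092")
    .withGroupId("my-group")
    .withMaxPrefetchBatches(4)  // from #83: prefetching before backpressure
```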
Changes
- Change to make `fs2.kafka.Id` public. Thanks @chenharryhua! (#86, #87)
Updates
- Update Kafka to 2.1.1. Thanks @sebastianvoss! (#90, #91)
Documentation
- Add a technical details section explaining backpressure. Thanks @backuitist! (#82, #84)
Released on 2019-02-22.
fs2-kafka v0.19.1
fs2-kafka v0.19.0
Changes
- Add `KafkaProducer#producePassthrough` for only keeping the passthrough after producing. (#74)
- Change `KafkaConsumer#stream` to be an alias for `partitionedStream.parJoinUnbounded`. (#78)
  - This also removes `ConsumerSettings#fetchTimeout` as it is now unused.
- Change to improve type inference of `ProducerMessage`. (#74, #76)
  - To support better type inference, a custom `fs2.kafka.ProducerRecord` has been added.
  - If you were using the Java `ProducerRecord`, change to `fs2.kafka.ProducerRecord`.
- Change to replace `Sink`s with `Pipe`s, and usage of `Stream#to` with `Stream#through`. (#73)
- Remove `ProducerMessage#single`, `multiple`, and `passthrough`. (#74)
  - They have been replaced with `ProducerMessage#apply` and `ProducerMessage#one`.
  - If you were previously using `single` in isolation, then you can now use `one`.
  - For all other cases, you can now use `ProducerMessage#apply` instead.
- Rename `KafkaProducer#produceBatched` to `produce`. (#74)
- Remove the previous `KafkaProducer#produce`.
  - For previous behavior, `flatten` the result from `produce`. (#74)
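A hedged migration sketch for the producer changes above (the topic, key and value are placeholders; exact signatures may vary by version):

```scala
import cats.effect.IO
import fs2.kafka.{KafkaProducer, ProducerMessage, ProducerRecord}

// Sketch: produce a single record with the renamed API.
def send(producer: KafkaProducer[IO, String, String]): IO[Unit] = {
  val record  = ProducerRecord("topic", "key", "value") // fs2.kafka.ProducerRecord
  val message = ProducerMessage.one(record)             // previously `single`
  // `produce` (previously `produceBatched`) returns the result in two layers;
  // `flatten` to wait for delivery, as the removed `produce` did.
  producer.produce(message).flatten.void
}
```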
Miscellaneous
- Change to include current year in license notices. (#72)
Released on 2019-01-18.