The current implementation, which converts to and from the representation used by the Java `GenericDatumReader`/`GenericDatumWriter`, is unsatisfactory for reasons that have been discussed previously; one suggestion has been to reimplement Avro using scodec. Instead, I think the best way forward is to continue using the Java `Decoder` and `Encoder` (which handle the low-level mechanics of converting between byte streams and JVM primitives, while abstracting over the binary and JSON encodings) but to provide our own implementations of `DatumReader` and `DatumWriter`, which orchestrate the `Decoder`/`Encoder` operations to work with complete data structures. That gives us complete control over the representations we work with and avoids any extra indirection at runtime, while benefiting from encoders/decoders that are likely to be better tested and more performant than anything we write ourselves, and that interoperate smoothly with the Confluent Kafka serdes.
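To make the proposal concrete, here is a minimal sketch of what a hand-written `DatumReader`/`DatumWriter` pair driving the Java `Decoder`/`Encoder` directly could look like. The `Person` type and its field order are invented purely for illustration; a real implementation would derive the structure from the schema rather than hard-coding it.

```scala
import java.io.ByteArrayOutputStream
import org.apache.avro.Schema
import org.apache.avro.io.{DatumReader, DatumWriter, Decoder, DecoderFactory, Encoder, EncoderFactory}

// Hypothetical record type, used only for illustration.
final case class Person(name: String, age: Int)

// A DatumWriter that drives the Java Encoder directly, with no
// intermediate GenericRecord representation.
final class PersonDatumWriter extends DatumWriter[Person] {
  override def setSchema(schema: Schema): Unit = ()
  override def write(datum: Person, out: Encoder): Unit = {
    out.writeString(datum.name)
    out.writeInt(datum.age)
  }
}

// The matching DatumReader, reading fields in the same order.
final class PersonDatumReader extends DatumReader[Person] {
  override def setSchema(schema: Schema): Unit = ()
  override def read(reuse: Person, in: Decoder): Person =
    Person(in.readString(), in.readInt())
}

object RoundTrip {
  // Round-trips a Person through the Avro binary encoding.
  def run(person: Person): Person = {
    val out = new ByteArrayOutputStream()
    val encoder = EncoderFactory.get().binaryEncoder(out, null)
    new PersonDatumWriter().write(person, encoder)
    encoder.flush()
    val decoder = DecoderFactory.get().binaryDecoder(out.toByteArray, null)
    new PersonDatumReader().read(null, decoder)
  }
}
```

Note that the reader and writer are the only custom pieces; the byte-level work, and the choice between binary and JSON encodings, stays entirely with the Java `Decoder`/`Encoder`.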
This will be easiest to do after Codec has been converted to a data structure per #437: we can then add functionality, separate from the Codec itself, to compile it (together with an optional schema) to a DatumReader or DatumWriter.
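The "compile the Codec data structure" step could be a structural recursion over the Codec ADT that maps each case onto the corresponding `Decoder` operation. The `Codec` ADT below is a toy stand-in, not the shape #437 will actually produce, and only the reader direction is sketched:

```scala
import java.io.ByteArrayOutputStream
import org.apache.avro.Schema
import org.apache.avro.io.{DatumReader, Decoder, DecoderFactory, EncoderFactory}

// Toy stand-in for a data-structure Codec; the real ADT per #437 will differ.
sealed trait Codec[A]
object Codec {
  final case class IntCodec() extends Codec[Int]
  final case class StringCodec() extends Codec[String]
  final case class Product2[A, B, C](
    first: Codec[A],
    second: Codec[B],
    combine: (A, B) => C
  ) extends Codec[C]
}

object Compile {
  // Structural recursion: each Codec case maps onto a Decoder call.
  private def decode[A](codec: Codec[A], in: Decoder): A =
    codec match {
      case Codec.IntCodec()              => in.readInt()
      case Codec.StringCodec()           => in.readString()
      case Codec.Product2(f, s, combine) => combine(decode(f, in), decode(s, in))
    }

  // Compilation happens once, up front; reading reuses the compiled reader.
  def toDatumReader[A](codec: Codec[A]): DatumReader[A] =
    new DatumReader[A] {
      override def setSchema(schema: Schema): Unit = ()
      override def read(reuse: A, in: Decoder): A = decode(codec, in)
    }
}

object CompileDemo {
  final case class Person(name: String, age: Int)

  def run(): Person = {
    val personCodec: Codec[Person] =
      Codec.Product2(Codec.StringCodec(), Codec.IntCodec(), Person.apply)

    // Write the two fields by hand, then read them back via the compiled reader.
    val out = new ByteArrayOutputStream()
    val encoder = EncoderFactory.get().binaryEncoder(out, null)
    encoder.writeString("Ada")
    encoder.writeInt(36)
    encoder.flush()

    val decoder = DecoderFactory.get().binaryDecoder(out.toByteArray, null)
    Compile.toDatumReader(personCodec).read(null, decoder)
  }
}
```

The point of the separation is that pattern matching over the Codec happens once when compiling, while the resulting `DatumReader` can be handed to anything expecting the standard Avro interface (including the Confluent serdes).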