Is there something inherently different about the existing Protocol Buffer format such that you couldn't have adopted that as your standard format? Many companies have existing tooling to author or generate protobuf format already. Introducing a new format makes it harder to support.
Is the subset of Protobuf that Bebop supports the reason for the large performance benefits? Would adopting the functionality not supported in Bebop (nested defs, repeated properties) be a reason why a new format had to be invented?
(I didn't write this bebop implementation, but I am writing a second implementation)
There are a number of core differences that justify a different protocol:
bebop has structs, in which all fields are always present; protobuf has no equivalent. Records with optional fields inherently take more time to parse and more memory to store, because the encoding must somehow mark which fields are present and a reader must detect them. That implies extra bits on the wire and extra conditionals in the reader.
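To make the parsing-cost point concrete, here is a minimal sketch in Python. The two wire layouts are hypothetical, not the actual bebop or protobuf formats: a fixed-layout struct decodes with one unconditional read, while an optional-field record needs a presence bitmask and a branch per field.

```python
import struct

# Hypothetical fixed-layout "struct" record: two uint32 fields, always present.
# Decoding is one fixed-offset read -- no presence bits, no branching.
def decode_struct(buf: bytes) -> tuple[int, int]:
    return struct.unpack_from("<II", buf, 0)

# Hypothetical record with optional fields: a presence bitmask prefixes the
# payload, and the reader must branch on each bit to locate its field.
def decode_optional(buf: bytes) -> dict:
    mask = buf[0]
    offset, fields = 1, {}
    if mask & 0b01:  # is field "a" present?
        fields["a"] = struct.unpack_from("<I", buf, offset)[0]
        offset += 4
    if mask & 0b10:  # is field "b" present?
        fields["b"] = struct.unpack_from("<I", buf, offset)[0]
        offset += 4
    return fields

fixed = struct.pack("<II", 7, 42)
tagged = bytes([0b11]) + struct.pack("<II", 7, 42)
print(decode_struct(fixed))     # (7, 42)
print(decode_optional(tagged))  # {'a': 7, 'b': 42}
```

The bitmask byte and the two `if` checks are exactly the overhead the paragraph describes; the struct path has neither.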
bebop has baked-in support for timestamps and GUIDs/UUIDs, whereas protobuf must encode these as separate custom message types.
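A rough sketch of why the wrapper-message approach costs more. The inline layout below is illustrative (not bebop's actual GUID byte order), and the wrapped form mimics a protobuf `bytes` field: the same 16 bytes gain a field tag and a length prefix.

```python
import uuid

guid = uuid.uuid4()

# Built-in GUID support: the 16 raw bytes go straight on the wire
# (illustrative byte order, not necessarily bebop's actual layout).
inline = guid.bytes

# Wrapper-message encoding, protobuf-style: field number 1 with wire type 2
# (length-delimited) gives tag byte 0x0A, then a length byte, then the payload.
TAG_FIELD1_LEN_DELIMITED = 0x0A
wrapped = bytes([TAG_FIELD1_LEN_DELIMITED, len(inline)]) + inline

print(len(inline), len(wrapped))  # 16 18
```

Two bytes per value sounds small, but the wrapper also means an extra message to define, allocate, and validate on every read.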
bebop supports more integer types (int16, uint16) and encodes integers differently (fixed-size rather than varint). Fixed-size integers trade larger messages for faster read/write times (though that trade-off could be contested). See also: reddit discussion
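The size side of that trade-off is easy to show. A minimal protobuf-style varint encoder (7 payload bits per byte, high bit as a continuation flag) uses fewer bytes for small values but up to 5 bytes for a uint32, and every read or write pays for the bit-twiddling loop; a fixed-size uint32 is always 4 bytes and a plain copy.

```python
import struct

# Minimal protobuf-style varint encoder: 7 bits per byte, high bit = "more".
def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7F
        out.append(low | (0x80 if n else 0))
        if not n:
            return bytes(out)

# Small value: varint wins on size (1 byte vs a fixed 4).
print(len(encode_varint(5)), len(struct.pack("<I", 5)))                   # 1 4
# Large value: varint needs 5 bytes for the top of the uint32 range.
print(len(encode_varint(2**32 - 1)), len(struct.pack("<I", 2**32 - 1)))   # 5 4
```

So fixed-size encoding caps the per-field cost at a predictable width, while varint's savings depend entirely on the distribution of your values.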
As noted in that same discussion, bebop also has no built-in compression.
I would also imagine that if bebop could be just as fast while using the protobuf encoding, there would be no need for this project: protobuf would already be using whatever techniques were needed to make it that fast.