It would be nice if DBeam supported more flexible output formats. A few examples other than Avro could be Parquet, CSV, Proto, ...
DBeam is built on top of the Beam SDK and should support more formats and runners, but so far its main use case has been writing Avro files to GCS via the DataflowRunner. Only recently has there been better support for Parquet and other columnar formats on GCS.
I think it is worth looking into supporting Parquet in the coming months or years.
A few open questions when designing Parquet support (see the sketch after this list):
Should there be equivalent JdbcParquetJob, JdbcParquetIO, etc. classes?
Should it be part of dbeam-core? Or a separate package? Or a separate project/repository?
Can Parquet support be built without the need for parquet-mr and Hadoop? Those libraries bring many dependencies, some with vulnerabilities. It would be interesting to see if Parquet support could be built using Arrow, some Trino libraries, or something else.
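For reference, here is a minimal sketch of what reusing Beam's existing `ParquetIO` (from `beam-sdks-java-io-parquet`) might look like, assuming DBeam's pipeline already produces a `PCollection<GenericRecord>` with a known Avro schema. The `writeParquet` helper and `outputPrefix` parameter are hypothetical names, not existing DBeam code; note this path still pulls in parquet-mr and hadoop-client transitively, which is exactly the trade-off raised above.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.parquet.ParquetIO;
import org.apache.beam.sdk.values.PCollection;

public class JdbcParquetSketch {

  // Hypothetical helper: writes the Avro GenericRecords DBeam already
  // extracts from JDBC to Parquet files under the given output prefix.
  static void writeParquet(
      PCollection<GenericRecord> records, Schema avroSchema, String outputPrefix) {
    records.apply(
        "WriteParquet",
        FileIO.<GenericRecord>write()
            .via(ParquetIO.sink(avroSchema)) // ParquetIO.Sink implements FileIO.Sink<GenericRecord>
            .to(outputPrefix)
            .withSuffix(".parquet"));
  }
}
```

An Arrow-based writer would avoid the Hadoop dependency tree but would need a custom `FileIO.Sink` implementation instead of the off-the-shelf `ParquetIO.sink`.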
Is there any plan to add the Apache Parquet file format besides Avro?