runtime: hard code materialize-bigquery in ser_policy
BigQuery has a hard limit of 20 MiB on an individual row returned via its Query API. Although we can store a row with a `flow_document` that large, we then can't read it back out using the conventional query mechanism the materialization connector currently uses.

This adds `materialize-bigquery` to the list of connectors that use
`ser_policy`, which should generally reduce how often we store rows that are
excessively large.
williamhbaker committed Sep 5, 2024
1 parent d6289c6 commit a8c36f1
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions crates/runtime/src/materialize/task.rs
@@ -31,6 +31,7 @@ impl Task {
         // that don't handle large strings very well. This should be negotiated via connector protocol.
         // See go/runtime/materialize.go:135
         let ser_policy = if [
+            "ghcr.io/estuary/materialize-bigquery",
             "ghcr.io/estuary/materialize-snowflake",
             "ghcr.io/estuary/materialize-redshift",
             "ghcr.io/estuary/materialize-sqlite",
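The gating pattern in the diff can be sketched as a standalone function: pick a serialization policy based on a prefix match against a hard-coded list of connector images. The `SerPolicy` struct, its field, and the truncation limit below are illustrative assumptions, not the actual crate's API; only the image list and the prefix-match idea come from the diff.

```rust
/// Minimal sketch of per-connector serialization policy selection.
/// `SerPolicy` and its `str_truncate_after` field are hypothetical stand-ins.
#[derive(Debug, PartialEq)]
struct SerPolicy {
    // Truncate string values longer than this (hypothetical field).
    str_truncate_after: usize,
}

impl SerPolicy {
    // No truncation: pass values through unchanged.
    fn noop() -> Self {
        SerPolicy { str_truncate_after: usize::MAX }
    }
    // Truncating policy; the limit here is an arbitrary example value,
    // chosen to stay well under BigQuery's 20 MiB per-row response cap.
    fn truncating() -> Self {
        SerPolicy { str_truncate_after: 1 << 16 }
    }
}

/// Select a policy by prefix-matching the connector image, mirroring
/// the hard-coded list in the diff.
fn ser_policy_for(image: &str) -> SerPolicy {
    const TRUNCATING_IMAGES: &[&str] = &[
        "ghcr.io/estuary/materialize-bigquery",
        "ghcr.io/estuary/materialize-snowflake",
        "ghcr.io/estuary/materialize-redshift",
        "ghcr.io/estuary/materialize-sqlite",
    ];
    if TRUNCATING_IMAGES.iter().any(|&prefix| image.starts_with(prefix)) {
        SerPolicy::truncating()
    } else {
        SerPolicy::noop()
    }
}

fn main() {
    // Image tags after the prefix still match.
    assert_eq!(
        ser_policy_for("ghcr.io/estuary/materialize-bigquery:v1"),
        SerPolicy::truncating()
    );
    // Connectors not in the list keep the no-op policy.
    assert_eq!(
        ser_policy_for("ghcr.io/estuary/materialize-postgres:v1"),
        SerPolicy::noop()
    );
}
```

Matching on the image prefix rather than an exact string means every tagged version of a listed connector gets the truncating policy, which is why adding one line covers all `materialize-bigquery` releases.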
