Option to provide partition column and partition expiry time #313
Comments
I'm open to a pull request that adds this. Ideally, I'd like to see something like the following.

Example:

```python
pandas_gbq.to_gbq(
    df,
    configuration={
        "load": {
            "timePartitioning": {
                "type": "DAY",
                "expirationMs": str(1000 * 60 * 60 * 24 * 30),  # 30 days
                "field": "my_timestamp_col",
            }
        }
    },
)
```

One problem with the …
Yeah, I had something like the configuration approach in mind. Why does the expirationMs value need to be encoded as a string?
It's a historical artifact of the BigQuery REST endpoint using an older JSON parsing implementation that only had JavaScript Number (floating point) available. Encoding it as a string allows the BigQuery REST endpoint to interpret the value as a 64-bit integer without loss of precision. I believe an integer will be accepted, but you might lose precision for large values.
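(For context, here is a minimal sketch, not from the thread, of why 64-bit values are sent as strings: integers above 2**53 cannot be represented exactly by an IEEE-754 double, which is what a JavaScript Number is.)

```python
import json

# 2**53 + 1 is the first integer an IEEE-754 double (a JavaScript Number)
# cannot represent exactly, so it illustrates the problem for expirationMs.
big_ms = 2**53 + 1

# A parser that stores JSON numbers as doubles silently rounds the value.
assert int(float(big_ms)) == big_ms - 1

# Encoding the value as a string survives the JSON round trip unchanged.
payload = json.dumps({"expirationMs": str(big_ms)})
assert json.loads(payload)["expirationMs"] == str(big_ms)
```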
Are you still open to a PR that does the refactoring? If so, I'd be interested in working on it.
I referenced this issue from #425, but will keep this open in case we don't decide to rely on the load job for table creation.
Any news about this feature? It could be very useful! Thanks!
Is this feature implemented already? |
When creating a new table using pandas, it would be nice if it could partition the table and set a partition expiry time. The Python BigQuery library already supports this:
https://cloud.google.com/bigquery/docs/creating-column-partitions
I can create a pull request if people feel it's something they would find useful. At least in my work, we create a lot of monitoring tables on BigQuery using pandas and push data to them. These tables keep growing, and since we can't set up partitioning once a table has already been created, they just become too big and expensive.
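For reference, this is roughly what the request would have pandas-gbq do on the user's behalf. A minimal sketch using the google-cloud-bigquery client directly; the project, dataset, table, schema, and column names below are placeholders, not taken from the issue:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table ID and schema for a monitoring table.
table = bigquery.Table(
    "my-project.my_dataset.monitoring",
    schema=[
        bigquery.SchemaField("my_timestamp_col", "TIMESTAMP"),
        bigquery.SchemaField("value", "FLOAT"),
    ],
)

# Partition on the timestamp column and expire partitions after 30 days.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="my_timestamp_col",
    expiration_ms=1000 * 60 * 60 * 24 * 30,
)

table = client.create_table(table)
```

Once a partitioned table like this exists, one possible workaround is to load into it with to_gbq and if_exists="append", so the table's partitioning settings are preserved even though to_gbq itself can't set them.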