Specify default dataset when invoking read_gbq #474
Labels
api: bigquery
Issues related to the googleapis/python-bigquery-pandas API.
type: feature request
'Nice-to-have' improvement, new feature or different behavior or design.
Is your feature request related to a problem? Please describe.
We're currently using SqlAlchemy to read data from BQ. I've tested read_gbq, and it's roughly an order of magnitude faster, so we definitely want to switch, but we ran into one issue. With SqlAlchemy we can specify a default dataset when constructing an engine, and our SQL queries can then reference tables without qualifying them with a dataset. I can't find a way to specify a default dataset when using read_gbq, which means we'd need to rework how we construct our SQL queries. A sketch of our current setup is below.
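A minimal sketch of what we do today, assuming the sqlalchemy-bigquery dialect (project, dataset, and table names are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine

# The sqlalchemy-bigquery dialect accepts a dataset as part of the
# connection URL, so unqualified table names resolve against it.
engine = create_engine("bigquery://my-project/my_dataset")

# "orders" resolves to my-project.my_dataset.orders
df = pd.read_sql("SELECT * FROM orders LIMIT 10", engine)
```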
Describe the solution you'd like
I'm hoping that read_gbq could eventually support a "dataset" argument and use it in the same way that SqlAlchemy does when we construct an engine, roughly as sketched below.
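Purely hypothetical sketch; the `dataset` argument below does not exist today and is only meant to illustrate the desired behavior:

```python
import pandas_gbq

df = pandas_gbq.read_gbq(
    "SELECT * FROM orders LIMIT 10",
    project_id="my-project",
    dataset="my_dataset",  # hypothetical: default dataset for unqualified table names
)
```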
Describe alternatives you've considered
Our alternative is to rework how we generate SQL queries so that they do not assume a default dataset has been set, i.e. fully qualify every table reference (sketched below).
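In other words, something like this (names are placeholders):

```python
import pandas_gbq

# Fully qualify every table reference instead of relying on a default dataset.
df = pandas_gbq.read_gbq(
    "SELECT * FROM `my-project.my_dataset.orders` LIMIT 10",
    project_id="my-project",
)
```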
Additional context
Lack of a "dataset" argument is not a deal breaker; it just seems like something I'd expect to exist, given that BQ offers the concept of a default dataset.