Pull request made in error, sorry! #1134 (Closed)
LoudCloudCrowd wants to merge 56 commits into dbt-labs:main from CybercentreCanada:feature/custom-python-model-location
Conversation
Commit messages in this PR:
* Found a way to identify Iceberg tables, given that Spark returns an error when trying to execute "SHOW TABLE EXTENDED ..." (see https://issues.apache.org/jira/browse/SPARK-33393). Instead of SHOW TABLE EXTENDED, a "DESCRIBE EXTENDED" is performed to retrieve the provider information, which allows Iceberg tables to be identified through an is_iceberg member variable (see the sketch after this list).
* Allow multiple join conditions so that multiple columns can make a row distinct.
* Use is_iceberg everywhere the handling of Iceberg tables differs from other data sources.
* [CT-276] Apache Iceberg Support (dbt-labs#294): the _schema variable was used for non-Iceberg tables but was being overridden by work for Iceberg v2 tables. I've made it so the Iceberg condition sets _schema rather than blanket-changing the schema for all providers.
* On second look I wasn't happy with my name choices for the macro name and method; hopefully what I have now makes more sense. ([CT-276] Apache Iceberg Support, dbt-labs#294)
* Upon further investigation this check is not needed, since self.database will not be set.
* Feature/pipeline
* Added missing Iceberg check.
* _get_columns_for_catalog was not returning the right info.
* Fix for the datahub recipe.
* Cccs/1.3.0 change pipeline
* Port over Yuyu's changes from 1.6.0 to 1.3.0.
* Removes v1 table operations and allows throwing an exception when the catalog return is bad.
* Setuptools breaks on version 71+, force 70.1.0. Need to run dev requirements before main. Typo. Try setting the version directly in the pipeline file for now.
* Bump version to 1.4.9. Adds partitioning branch for Python dbt models (#9). Co-authored-by: LoudCloudCrowd <[email protected]>
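As a rough, hypothetical sketch of the detection approach described in the first commit above (not the adapter's actual code; the helper name and relation argument are illustrative): since SHOW TABLE EXTENDED raises an error for some v2 catalogs (SPARK-33393), the provider can instead be read from the output of DESCRIBE TABLE EXTENDED with PySpark:

```python
# Hypothetical sketch, not the adapter's real API: detect an Iceberg table
# by reading the "Provider" row of DESCRIBE TABLE EXTENDED, since
# SHOW TABLE EXTENDED raises an error for some v2 catalogs (SPARK-33393).
from pyspark.sql import SparkSession


def is_iceberg_table(spark: SparkSession, relation: str) -> bool:
    """Return True when the table's provider is reported as Iceberg."""
    rows = spark.sql(f"DESCRIBE TABLE EXTENDED {relation}").collect()
    for row in rows:
        # DESCRIBE output is a set of col_name/data_type pairs; the provider
        # appears as a row named "Provider" with the table format as its value.
        if row["col_name"].strip().lower() == "provider":
            return row["data_type"].strip().lower() == "iceberg"
    return False
```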
Thank you for your pull request and welcome to our community. We could not parse the GitHub identity of the following contributors: cccs-jc.
Sorry, I PR'd to the wrong place! I'll get this closed.
LoudCloudCrowd changed the title from "Feature/custom python model location" to "Please close this Pull Request :(" on Oct 31, 2024.
LoudCloudCrowd changed the title from "Please close this Pull Request :(" to "Pull request made in error, sorry!" on Oct 31, 2024.
This implements the ability for Python dbt models to use the location clause as specified in the project settings, so that a model can write to a different location on disk from the one registered in the catalog.
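As a minimal, hypothetical illustration of what this enables (assuming the fork wires dbt-spark's existing location_root config through to Python models; the file name, path, and query below are made up):

```python
# models/custom_location_model.py -- hypothetical dbt Python model.
# Assumes `location_root` (the source of the LOCATION clause) is honored
# for Python models, so data is written under this path while the table
# remains registered in the catalog under its usual schema.
def model(dbt, session):
    dbt.config(
        materialized="table",
        location_root="/mnt/warehouse/custom_python_models",  # illustrative path
    )
    # Any Spark DataFrame can be returned; this query is a stand-in.
    return session.sql("SELECT 1 AS id")
```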