feature/decouple adapters from core #972
Conversation
Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the dbt-spark contributing guide.
"retry_all": bool(os.getenv("DBT_DATABRICKS_RETRY_ALL", False)), | ||
"connect_retries": 3, | ||
"connect_timeout": 5, | ||
"retry_all": False, |
With the conftest fixture below, I think we don't need all these retries anymore.
If I understand this correctly, this will result in failure in more scenarios, not fewer. So if we're wrong, the tests simply won't pass. I'm fine with that, since we'd be alerted.
Yeah, most of the time when these fail it's because something is actually broken.
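For context, a minimal sketch of where these keys typically sit, assuming a standard `dbt_profile_target` fixture from dbt's adapter test framework; only the three retry-related keys come from the diff above, and the remaining keys are illustrative placeholders:

```python
import pytest


@pytest.fixture(scope="class")
def dbt_profile_target():
    # Illustrative profile target; only the three retry-related keys
    # below come from this PR's diff.
    return {
        "type": "spark",
        "method": "http",
        "connect_retries": 3,  # attempts to establish the initial connection
        "connect_timeout": 5,  # seconds to wait per connection attempt
        # Previously bool(os.getenv("DBT_DATABRICKS_RETRY_ALL", False)).
        # Hard-coding False means mid-test failures surface immediately
        # instead of being silently retried.
        "retry_all": False,
    }
```

Hard-coding `retry_all: False` trades flakiness tolerance for signal: as noted above, when these tests fail it is usually because something is actually broken.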
"retry_all": bool(os.getenv("DBT_DATABRICKS_RETRY_ALL", False)), | ||
"connect_retries": 3, | ||
"connect_timeout": 5, | ||
"retry_all": False, |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
If I understand this correctly, this will result in failure in more scenarios, not less. So if we're wrong, the tests simply won't pass. I'm fine with that since we'd be alerted.
```python
def start_databricks_cluster(project, request):
    global _DB_CLUSTER_STARTED
    profile_type = request.config.getoption("--profile")
    with _db_start_lock:
```
Since this fixture gets used for all tests, including non-Databricks tests, does this force tests to be run entirely in series?
It shouldn't, since the lock would be released almost instantly (if the profile isn't Databricks, or if `_DB_CLUSTER_STARTED` is already `True`).
It should only block the first Databricks test that runs, and even then only if the cluster isn't yet available.
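For reference, a hedged sketch of the full pattern being described: the fixture signature, the `global`, the profile lookup, and the lock line come from the diff above, while the autouse scoping, the body of the `with` block, and the `_wait_for_cluster` helper are assumptions.

```python
import threading

import pytest

_db_start_lock = threading.Lock()
_DB_CLUSTER_STARTED = False


def _wait_for_cluster(project):
    # Hypothetical helper: poke the cluster with a trivial query and
    # return once it responds.
    project.run_sql("select 1")


@pytest.fixture(scope="class", autouse=True)
def start_databricks_cluster(project, request):
    global _DB_CLUSTER_STARTED
    profile_type = request.config.getoption("--profile")
    with _db_start_lock:
        # Non-Databricks profiles, and every Databricks test after the
        # first, fall through immediately, so the lock is held only for
        # the duration of this check.
        if "databricks" in profile_type and not _DB_CLUSTER_STARTED:
            _wait_for_cluster(project)  # only the first Databricks test waits
            _DB_CLUSTER_STARTED = True
    yield
```

Because `_DB_CLUSTER_STARTED` flips to `True` inside the lock, concurrent tests in the same process serialize only on the cluster warm-up itself; everything afterward runs in parallel as before.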
resolves #
docs dbt-labs/docs.getdbt.com/#
Problem
Solution
Checklist