- Added serverless support to `spark` fixture (#91). This release introduces serverless support to the `spark` fixture, enabling the creation of a Databricks Connect Spark session with serverless capabilities by leveraging the `DATABRICKS_SERVERLESS_COMPUTE_ID` environment variable; when it is set to `auto`, the Spark session runs on serverless compute.
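As a rough sketch of the behaviour described above (the fixture's internals are not shown in this changelog, and `serverless_enabled` is a hypothetical helper name), the decision reduces to an environment-variable check:

```python
import os

def serverless_enabled() -> bool:
    # Hypothetical helper mirroring the described behaviour: the spark
    # fixture uses serverless compute when DATABRICKS_SERVERLESS_COMPUTE_ID
    # is set to "auto".
    return os.environ.get("DATABRICKS_SERVERLESS_COMPUTE_ID") == "auto"

os.environ["DATABRICKS_SERVERLESS_COMPUTE_ID"] = "auto"
print(serverless_enabled())  # True
```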
- Added `make_run_as` fixture (#82). A new pytest fixture, `make_run_as`, has been added to create an account service principal via the `acc` fixture and assign it to a workspace with default permissions, which is removed after the test is complete. The fixture creates a service principal with a random display name and assigns it to the workspace. Users can optionally assign the service principal to account groups for specific actions using the `account_groups` argument. The returned object contains properties for the workspace client, SQL backend, and SQL execution functions, as well as the display name and application ID of the ephemeral service principal. If desired, the `ws` fixture can be overridden to make all workspace fixtures provided by the plugin run as the ephemeral service principal, allowing for testing with lower-privilege ephemeral service principals and improving security and isolation. This feature is not currently supported with Databricks Metadata Service authentication on Azure Databricks.
- Bump codecov/codecov-action from 4 to 5 (#85). In this release, the Codecov GitHub Action has been updated from version 4 to 5, introducing several new features and changes. The new version uses the Codecov Wrapper to encapsulate the CLI, allowing for quicker updates to the Action. Additionally, version 5 includes an opt-out feature for tokens in public repositories, enabling contributors and other members to upload coverage reports without requiring access to the Codecov token. This can be accomplished by allowing Codecov to receive a coverage report from any source in the Global Upload Token section of the settings page on codecov.io. Furthermore, the updated version introduces several new arguments, including `binary`, `gcov_args`, `gcov_executable`, `gcov_ignore`, `gcov_include`, `report_type`, and `skip_validation`, and changes the `file` and `plugin` arguments to `files` and `plugins`, respectively.
- Bump databrickslabs/sandbox from acceptance/v0.3.1 to 0.4.2 (#80). In this release, the
`databrickslabs/sandbox` dependency has been updated from version `acceptance/v0.3.1` to `0.4.2`. This update includes the addition of install instructions, more go-git libraries, and modifications to the README to explain how to use the library with the `databricks labs sandbox` command. Dependency updates include golang.org/x/crypto from version 0.16.0 to 0.17.0. The `Run nightly tests` job in the workflow has also been updated to use the new version of the `databrickslabs/sandbox/acceptance` image. The commit history for this release shows several commits, including the creation of "[TODO] XXX" issues and a full diff comparison. The pull request includes instructions for triggering Dependabot actions through comments. Reviewers are encouraged to thoroughly examine the changelog and commit history for more detailed information on the changes in this release.
- Force keyword argument in `make_query` fixture (#81). In the latest update, the `make_query` fixture in the `redash.py` file has undergone changes to enhance code readability and maintainability. The `sql_query` parameter is now required to be passed as a keyword argument, preventing the possibility of it being mistakenly passed as a positional argument. Furthermore, the `create` function's signature has been revised to include an explicit `*` before the `sql_query` parameter. These changes have not affected the functionality of the method, but have instead altered the parameter passing style for improved clarity. This minor update is intended to elevate the overall quality of the codebase and promote best practices.
- Renamed internal `notebooks` module to `workspace` (#86). In this release, the internal `notebooks` module has been renamed to `workspace` to better reflect its current functionality of managing and interacting with notebooks and other resources in a Databricks workspace. This renaming applies to the import statements in `plugin.py` and `test_notebooks.py`, which has been renamed to `test_workspace.py`. Additionally, import statements for making cluster policy and instance pool permissions have been added. The functionality of the imported functions, such as `make_directory`, `make_workspace_file`, `make_notebook`, and `make_repo`, remains unchanged. This change is part of the fix for issue #59 and aims to improve the clarity and consistency of the codebase. Software engineers adopting this project should update any imports or references to the `notebooks` module to use the new `workspace` module instead.
Dependency updates:
- Bump databrickslabs/sandbox from acceptance/v0.3.1 to 0.4.2 (#80).
- Bump codecov/codecov-action from 4 to 5 (#85).
- Added Volume Fixture (#72). This commit introduces a Managed Volume fixture, `make_volume`, to the Unity Catalog in the test suite, facilitating the creation and use of a random volume for testing purposes. When called without arguments, the fixture generates a volume with a random name. Alternatively, specifying the `name` argument creates a volume with the given name, using the `MANAGED` volume type and associating it with a randomly generated catalog and schema. This promotes test isolation and prevents unintended interference. Additionally, this PR resolves issue #70 and includes unit and integration tests that have been manually verified to ensure the fixture's proper functioning. The commit also includes a bug fix related to table retrieval in the `test_remove_after_property_table` test case.
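The create-then-clean-up pattern shared by fixtures like `make_volume` can be sketched in plain Python; the names and dict shape below are illustrative only, not the library's actual API:

```python
from contextlib import contextmanager
from uuid import uuid4

@contextmanager
def dummy_volume(name=None):
    # Hypothetical sketch of the pattern behind make_volume: generate a
    # random name unless one is supplied, yield the resource to the test,
    # then clean it up afterwards.
    volume = {"name": name or f"dummy_v{uuid4().hex[:8]}", "volume_type": "MANAGED"}
    try:
        yield volume  # the test body runs here
    finally:
        volume.clear()  # stands in for the real API delete call

with dummy_volume(name="fixed_name") as vol:
    vol_info = dict(vol)
print(vol_info["name"])  # fixed_name
```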
- Updated databrickslabs/sandbox requirement to acceptance/v0.3.1 (#64). In this pull request, we are updating the requirement for the `databrickslabs/sandbox` package to version `acceptance/v0.3.1`. This update is necessary to resolve any conflicts and ensure compatibility with the latest version of the package. The update includes several bug fixes, dependency updates, and the addition of install instructions in the changelog. The package also includes new git-related libraries and modifications to the README to explain how to use the package with the `databricks labs sandbox` command. Additionally, there are dependency updates for `golang.org/x/crypto` in the `go-libs` and `runtime-packages` directories. Keeping dependencies up to date is important for taking advantage of new features and bug fixes, so it is recommended to merge this pull request and address any conflicts or issues that may arise. This update is generated by Dependabot, a tool used to keep dependencies up to date, and it will handle any conflicts as long as the pull request is not modified. The `databrickslabs/sandbox` package is used for building and testing the project.
- Updating `make_schema` fixture to include location to create managed schema (#66). In this update, the `make_schema` fixture has been enhanced to include an optional `location` parameter, enabling users to specify the location for creating a managed schema. This change addresses issue #6.
- [chore] update acceptance.yml and remove circular downstreams. In this release, the
`acceptance.yml` file in the `.github/workflows` directory has been updated to use the `acceptance/v0.3.1` Docker image for the `Run integration tests` job, replacing the previous `acceptance/v0.2.2` version. This change ensures that the latest version of the integration tests is run, which may include bug fixes, new features, and other improvements. The `timeout` parameter has been removed from the job, which may result in longer test execution times but ensures that the tests have enough time to complete. Additionally, the `ARM_CLIENT_ID` and `GITHUB_TOKEN` secrets are now passed as environment variables to the job. The `downstreams.yml` file has also been updated to remove circular dependencies and streamline the build process, while maintaining the use of the `ubuntu-latest` runtime and the `fail-fast` option. The `Run nightly tests` job has been updated to use the `acceptance/v0.3.1` Docker image, which may improve test execution during the nightly build due to potential bug fixes, new functionality, or better handling of timeout scenarios. The `create_issues` parameter remains set to true, ensuring that any test failures will result in the creation of GitHub issues to track them. These changes aim to improve the reliability and efficiency of the build and test processes.
Dependency updates:
- Updated databrickslabs/sandbox requirement to acceptance/v0.3.1 (#64).
- Documentation: fix `make_query()` parameter name (#61). The `make_query()` fixture's documentation has been updated to correct the name of the `query` parameter to `sql_query`. The `sql_query` parameter is used to specify the SQL query stored in the fixture, with the default value being "SELECT * FROM ". This change aims to enhance clarity and consistency in the naming of the argument, making it easier for users of the `make_query()` fixture to comprehend its purpose and usage. By correcting the parameter name, the documentation provides a clearer and more consistent user experience.
- Removed references to UCX (#56). This release includes changes to remove references to UCX in fixture names and descriptions within the testing process. The `create` function in `catalog.py` now appends a random string to `dummy_t`, `dummy_s`, or `dummy_c` for table, schema, and catalog names, respectively, instead of using `ucx_t`, `ucx_S`, and `ucx_C`. The `test_catalog_fixture` function has also been updated to replace `dummy` with `dummy_c` and `dummy_s` for catalogs and schemas. Additionally, the description of a test query in `redash.py` has been updated to remove the reference to UCX. Lastly, fixture names in the unit tests for a catalog have been updated to use `dummy` instead of `ucx`. These changes improve the independence of the testing process by removing technology-specific references, without affecting functionality.
- Store watchdog tags in storage credentials comment (#57). In this release, the watchdog's behavior has been modified to retain properly tagged credentials when deleting them; previously, all credentials were removed without discrimination. This change introduces tagging for preserving specific credentials, and the
`watchdog_remove_after` fixture has been added to the README file for documentation. The `make_storage_credential` fixture has been updated to include a new parameter, `watchdog_remove_after`, which specifies the time at which the storage credential should be removed by the watchdog. The `create` function has been updated to accept this parameter and adds it as a comment to the storage credential. The `remove` function remains unmodified. The related fixtures section has been updated to include the new `watchdog_remove_after` fixture. This change was co-authored by Eric Vergnaud, but please note that it has not been tested yet.
- [FEATURE] Extend `make_job` to run `SparkPythonTask` (#60). The `make_job` fixture has been extended to support running `SparkPythonTask` in addition to notebook tasks. A new `make_workspace_file` fixture has been added to create and manage Python files in the workspace. The `make_job` fixture now supports SQL notebooks and files and includes a `task_type` parameter to specify the type of task to run and an `instance_pool_id` parameter to reuse an instance pool for faster job execution during integration tests. Additionally, unit and integration tests have been added to ensure the proper functioning of the new and modified fixtures. These changes allow for more flexible and efficient testing of Databricks jobs with different task types and configurations. The `make_notebook` fixture has also been updated to accept a `content` parameter for creating notebooks with custom content. The `Language` enum from the `databricks.sdk.service.workspace` module is used to specify the language of a notebook or workspace file.
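The idea of dispatching on a task type, as described for the extended `make_job` fixture, can be illustrated with a simplified sketch; `build_task` and the dict shapes are hypothetical, since the real fixture builds `databricks.sdk.service.jobs` task objects:

```python
def build_task(task_type="notebook", path="/Workspace/dummy"):
    # Hypothetical sketch: pick a task payload based on a task_type
    # parameter, analogous to how the extended make_job fixture is
    # described as choosing between notebook and SparkPythonTask runs.
    if task_type == "spark_python":
        return {"spark_python_task": {"python_file": path}}
    if task_type == "notebook":
        return {"notebook_task": {"notebook_path": path}}
    raise ValueError(f"unsupported task type: {task_type}")

print(build_task("spark_python"))
```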
- Fixed PyPI metadata (#54). In this commit, the PyPI metadata for the pytester project has been updated with the new repository location at https://github.com/databrickslabs/pytester. The URLs for issues and source have been changed to point to the new repository, with the `issues` URL now directing to https://github.com/databrickslabs/pytester/issues and the `source` URL to https://github.com/databrickslabs/pytester. Furthermore, the versioning tool `hatch` has been configured to manage the version number in the `src/databricks/labs/pytester/about.py` file. This ensures accurate and consistent versioning for the pytester project moving forward.
- Improve `make_group`/`make_acc_group` fixture consistency (#50). This PR introduces improvements to the `make_group` and `make_acc_group` fixtures, designed for managing Databricks workspace groups. The enhancements include a double-check approach to ensure group visibility by requiring the group to be retrievable via both `.get()` and `.list()` calls. This mitigates, but does not entirely eliminate, consistency issues with the APIs used for managing groups. The `wait_for_provisioning` argument has been removed and replaced with an internal wait mechanism; the argument is still accepted but triggers a deprecation warning. Internal unit-test plumbing has been updated to use mock fixtures tailored for each test, ensuring the double-check implementation is testable. New and updated unit tests are included in the `test_iam.py` file, along with the introduction of the `_setup_groups_api` function, which mocks specific clients to ensure group visibility when created. These changes improve consistency and reliability when working with Databricks workspace groups, making it easier for users to adopt the project.
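The double-check approach described above can be sketched as a polling loop that only succeeds once both lookups see the group; `wait_until_visible` and the toy backend are hypothetical stand-ins for the real client calls:

```python
import time

def wait_until_visible(get_fn, list_fn, group_id, timeout=10.0, interval=0.1):
    # Hypothetical sketch of the double-check: a group only counts as
    # provisioned once it is returned by both a .get()-style and a
    # .list()-style call.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_fn(group_id) is not None and group_id in list_fn():
            return True
        time.sleep(interval)
    return False

# A toy backend where the group is immediately visible to both calls:
groups = {"g1": {"display_name": "sdk-test"}}
print(wait_until_visible(groups.get, lambda: list(groups), "g1"))  # True
```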
- Support providing name in `make_catalog` fixture (#52). The `make_catalog` fixture in our open-source library has been updated to allow users to specify a name for the catalog using a new `name` parameter. Previously, the catalog was given a random name, but now users can have more control and customization over catalog names in their tests. This change includes updates to the docstring and the addition of unit tests to ensure the fixture behaves as expected with the new parameter. Additionally, the underlying `call_stateful` function was updated to expect a callable that returns a generator of callables, enabling the support for providing a name. The `test_make_catalog_creates_catalog_with_name` and `test_make_catalog` tests have been added to verify the behavior of the fixture with the new `name` parameter.
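The optional-name behaviour reduces to a simple fallback; `resolve_catalog_name` is a hypothetical helper, and the `dummy_c` prefix follows the naming convention mentioned elsewhere in these notes, though the fixture's actual randomization scheme is not shown here:

```python
from uuid import uuid4

def resolve_catalog_name(name=None):
    # Hypothetical sketch: use the caller-supplied name when given,
    # otherwise fall back to a randomized dummy name.
    return name or f"dummy_c{uuid4().hex[:8]}"

print(resolve_catalog_name("my_catalog"))  # my_catalog
```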
- Use watchdog timeout to catalog properties (#48). This pull request introduces a new `RemoveAfter` property for catalogs, which allows for marking them for skipping by the watchdog. This change addresses the current implementation gap, which does not explicitly indicate when catalogs are being used. The new property will specify the time from which objects can be purged. A corresponding fixture, `watchdog_remove_after`, has been added to the list of available fixtures, and the `make_catalog` fixture has been updated to include this new property. Additionally, a timeout mechanism for catalogs has been implemented, which improves the system's efficiency and safety by marking catalogs as in use. A test for the `make_catalog` function has been included to ensure that the `RemoveAfter` entry is correctly added to the catalog properties. However, the specific call parameters for the `catalogs.create` method cannot be accurately determined in the test.
- Use tags instead of name suffix for queries (#47). This release introduces updates to the testing library for Databricks, enhancing the naming conventions for queries to improve readability and comprehension. The previous implementation used name suffixes, which have been replaced with watchdog query tags. The `watchdog_purge_suffix` fixture has been renamed to `watchdog_remove_after`, and the new `make_query` fixture has been added to the documentation. In addition, the `make_query` and `create` functions now accept an optional `tags` argument, and the query name is generated with a unique identifier. If `tags` are provided, the `RemoveAfter` tag is added. The `original_query_tag` is no longer hardcoded in the `create` function and has been removed. These changes improve the overall user experience and maintainability of the project.
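Tagging a resource with a purge time can be sketched as follows; the constant's value and the timestamp format are illustrative assumptions, as the changelog does not specify either:

```python
from datetime import datetime, timedelta, timezone

# Illustrative value; the real TEST_RESOURCE_PURGE_TIMEOUT lives in the
# watchdog fixture module and its actual value is not given here.
TEST_RESOURCE_PURGE_TIMEOUT = timedelta(hours=1)

def remove_after_tag(now=None):
    # Sketch of attaching a RemoveAfter tag marking the time after which
    # the watchdog may purge the resource. The "%Y%m%d%H" format is an
    # assumption for illustration.
    now = now or datetime.now(timezone.utc)
    purge_at = now + TEST_RESOURCE_PURGE_TIMEOUT
    return {"RemoveAfter": purge_at.strftime("%Y%m%d%H")}

print(remove_after_tag(datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)))
```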
- Moved remaining UCX integration tests and fixtures (#45). In this release, we have made significant changes to the UCX integration tests and fixtures, as indicated by multiple commit messages. Firstly, we have moved the remaining UCX integration tests and fixtures, introducing a new PyTest fixture called `Installation` in the README.md file and providing instructions on how to add `databricks-labs-pytester` as a test-time dependency when using `hatch` as the build system. Additionally, we have added the `make_feature_table` fixture, which creates a Databricks feature table and cleans it up after the test, taking optional parameters for customization. We have also modified the `mypy` configuration in the `pyproject.toml` file to allow untyped imports during the type-checking process. In the `compute.py` file, we have updated the `make_job` fixture to return a function that creates a `databricks.sdk.service.jobs.Job` instance, and modified the `create` function to return the `databricks.sdk.service.jobs.Job` instance directly. We have also added a new fixture called `make_feature_table` in the plugin file, which simulates the lifecycle of a feature table in the machine learning service, with functions to generate a unique name and create/remove the feature table. In the `test_catalog.py` file, we have made changes to clean up the file and ensure proper logging of test events and errors. Overall, these changes aim to refactor, expand functionality, and improve user-friendliness for the adopters of the project, ensuring proper logging and debugging capabilities.
- [internal] port over existing UCX integration tests (#44). Three new integration tests have been added to the UCX project to verify the functionality of the `RemoveAfter` property for tables and schemas. The `test_remove_after_property_table` and `test_remove_after_property_schema` tests create new tables and schemas, respectively, and check whether the `RemoveAfter` property is included in their properties. However, these tests are still marked as `TODO` due to existing issues with the `tables.get` and `schemas.get` functions. In addition, existing UCX integration tests have been ported over, which include new functions for testing the removal of resources based on the `RemoveAfter` tag. These tests are located in the `tests/integration/fixtures/test_compute.py` file and test the removal of various types of resources, including jobs, clusters, warehouses, and instance pools. The tests ensure that the time until purge is less than the `TEST_RESOURCE_PURGE_TIMEOUT` value plus one hour, and they import the `datetime` module and the `TEST_RESOURCE_PURGE_TIMEOUT` constant from the `watchdog` fixture, as well as the `logging` and `databricks.sdk.service.iam` modules.
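The purge-window check the ported tests are described as performing can be sketched in isolation; the constant's value is an illustrative assumption:

```python
from datetime import datetime, timedelta, timezone

TEST_RESOURCE_PURGE_TIMEOUT = timedelta(hours=1)  # illustrative value only

def purge_window_ok(remove_after, now):
    # Sketch of the described assertion: the time until purge must be
    # less than TEST_RESOURCE_PURGE_TIMEOUT plus one hour.
    return remove_after - now < TEST_RESOURCE_PURGE_TIMEOUT + timedelta(hours=1)

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(purge_window_ok(now + timedelta(minutes=90), now))  # True
```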
- Added `acc` and `make_acc_group` fixtures (#42). In this release, we have added two new fixtures, `acc` and `make_acc_group`, to the open-source library. The `acc` fixture provides a Databricks AccountClient object for use in tests, which can interact with the Databricks account API and automatically determines the account host from the `DATABRICKS_HOST` environment variable. The `make_acc_group` fixture is used for managing Databricks account groups, creating them with specified members and roles, and automatically deleting them after the test is complete. This fixture mirrors the behavior of the `make_group` fixture but interacts with the account client instead of the workspace client. These fixtures enable more comprehensive integration tests for the `acc` object and its various methods, enhancing the testing and management of Databricks account groups.
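Deriving a host from the `DATABRICKS_HOST` environment variable, as the `acc` fixture is described as doing, might look like the following; `account_host` and its normalization logic are hypothetical, since the fixture's real mapping is not shown here:

```python
import os

def account_host():
    # Hypothetical sketch: read the account host from DATABRICKS_HOST and
    # normalize it to an https URL. The real acc fixture's logic may differ.
    host = os.environ.get("DATABRICKS_HOST", "")
    if not host:
        raise ValueError("DATABRICKS_HOST is not set")
    if not host.startswith("https://"):
        host = f"https://{host}"
    return host

os.environ["DATABRICKS_HOST"] = "accounts.azuredatabricks.net"
print(account_host())  # https://accounts.azuredatabricks.net
```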
- Fixed nightly CI builds (#40). In this release, we have removed the `no-cheat` GitHub Actions workflow that checked for disabled pylint directives in new code. We have also updated the pytest requirement version to ~8.3.3 and added badges for Python version support and lines of code to the README file. The `permissions.py` file in the `databricks/labs/pytester/fixtures` directory has been updated to fix nightly CI builds by improving import statements and updating types. The `SqlPermissionLevel` class has been imported from the `databricks.sdk.service.sql` module, and an existing test case has been updated to use this new permission level for SQL-specific queries. Additionally, we have updated the version constraints for three dependencies in the `pyproject.toml` file to allow for more flexibility in selecting compatible library versions. These changes may simplify the project's GitHub Actions workflows, reduce maintenance overhead, and enhance the testing process and code quality.
- Added Databricks Connect fixture. A new fixture named `spark` has been added to the codebase, providing a Databricks Connect Spark session for testing purposes. The fixture requires the `databricks-connect` package to be installed and takes a `WorkspaceClient` object as an argument. It first checks whether a `cluster_id` is present in the environment and, if not, skips the test with a message. The fixture then ensures that the cluster is running and attempts to import the `DatabricksSession` class from the `databricks.connect` module. If the import fails, it skips the test with a message. This new fixture enables easier testing of Databricks Connect functionality, reducing the boilerplate code required to set up a Spark session within tests. Additionally, a new `is_in_debug` fixture has been added, although no further documentation or usage examples are provided for it.
- Added `make_*_permissions` fixtures. In this release, we have added new fixtures to the pytester plugin for managing permissions in Databricks. These fixtures include `make_alert_permissions`, `make_authorization_permissions`, `make_cluster_permissions`, `make_cluster_policy_permissions`, `make_dashboard_permissions`, `make_directory_permissions`, `make_instance_pool_permissions`, `make_job_permissions`, `make_notebook_permissions`, `make_pipeline_permissions`, `make_query_permissions`, `make_registered_model_permissions`, `make_repository_permissions`, `make_serving_endpoint_permissions`, `make_warehouse_permissions`, `make_workspace_file_permissions`, and `make_workspace_file_path_permissions`. These fixtures allow for easier testing of functionality that requires managing permissions for various Databricks resources such as alerts, authorization, clusters, cluster policies, dashboards, directories, instance pools, jobs, notebooks, pipelines, queries, registered models, repositories, serving endpoints, warehouses, and workspace files. Additionally, a new `make_notebook_permissions` fixture has been introduced in the `test_permissions.py` file for integration tests, which allows for more comprehensive testing of the IAM system's behavior when handling notebook permissions.
- Added
`make_catalog` fixture. A new fixture, `make_catalog`, has been added to the codebase to facilitate testing with specific catalogs, ensuring isolation and reproducibility. This fixture creates a catalog, returns its information, and removes the catalog after the test is complete. It can be used in conjunction with other fixtures such as `ws`, `sql_backend`, and `make_random`. The fixture is utilized in the updated `test_catalog_fixture` integration test function, which now includes new arguments `make_catalog`, `make_schema`, and `make_table`. These fixtures create catalog, schema, and table objects, enabling more comprehensive testing of the catalog, schema, and table creation functionality. Please note that catalogs created using this fixture are not currently protected from being deleted by the watchdog.
- Added
`make_catalog`, `make_schema`, and `make_table` fixtures (#33). In this release, we have updated the `databricks-labs-blueprint` package dependency to `databricks-labs-lsql~=0.10` and added several fixtures to the codebase to improve the reliability and maintainability of the test suite. We have introduced three new fixtures, `make_catalog`, `make_schema`, and `make_table`, that are used for creating and managing test catalogs, schemas, and tables, respectively. These fixtures enable the creation of arbitrary test data and simplify testing by allowing predictable and consistent setup and teardown of test data for integration tests. Additionally, we have added several debugging fixtures, including `debug_env_name`, `debug_env`, `env_or_skip`, and `sql_backend`, to aid in testing Databricks features related to SQL, environments, and more. The `make_udf` fixture has also been added for testing user-defined functions in Databricks. These new fixtures and methods will assist in testing the project's functionality and ensure that the code is working as intended, making the tests more maintainable and easier to understand.
- Added
`make_cluster` documentation. The `make_cluster` fixture has been updated with new functionality and improvements. It now creates a Databricks cluster with specified configurations, waits for it to start, and cleans it up after the test, returning a function to create clusters. The `cluster_id` attribute is accessible from the returned object. The fixture accepts several keyword arguments: `single_node` to create a single-node cluster, `cluster_name` to specify a cluster name, `spark_version` to set the Spark version, and `autotermination_minutes` to determine when the cluster should be automatically terminated. The `ws` and `make_random` parameters have been removed. The commit also introduces a new test function, `test_cluster`, that creates a single-node cluster and outputs a message indicating the creation. Documentation for the `make_cluster` function has been added, and the `make_cluster_policy` function remains unchanged.
- Added
`make_experiment` fixture. In this release, we introduce the `make_experiment` fixture in the `databricks.labs.pytester.fixtures.ml` module, facilitating the creation and cleanup of Databricks Experiments for testing purposes. This fixture accepts optional `path` and `experiment_name` parameters and returns a `databricks.sdk.service.ml.CreateExperimentResponse` object. Additionally, `make_experiment_permissions` has been added for managing experiment permissions. In the `permissions.py` file, the `_make_permissions_factory` function replaces the previous `_make_redash_permissions_factory`, enhancing the code's maintainability and extensibility. Furthermore, a `make_experiment` fixture has been added to the `plugin.py` file for creating experiments with custom names and descriptions. Lastly, a `test_experiments` function has been included in the `tests/integration/fixtures` directory, utilizing the `make_group`, `make_experiment`, and `make_experiment_permissions` fixtures to create experiments and assign group permissions.
- Added
`make_instance_pool` documentation. In this release, the `make_instance_pool` fixture has been updated with added documentation, and the usage example has been slightly modified. The fixture now accepts optional keyword arguments for the instance pool name and node type ID, with default values set for each. The `make_random` fixture is still required for generating unique names. Additionally, the `log_workspace_link` function has been updated to accept a new parameter, `anchor`, for controlling the inclusion of an anchor (`#`) in the generated URL. New test functions `test_instance_pool` and `test_cluster_policy` have been added to enhance the integration testing of the compute system, providing more comprehensive coverage for instance pools and cluster policies. Furthermore, documentation has been added for the `make_instance_pool` fixture. Lastly, three test functions, `test_cluster`, `test_instance_pool`, and `test_job`, have been removed, but the setup functions for these tests are retained, indicating a possible streamlining of the codebase.
- Added
`make_job` documentation. The `make_job` fixture has been updated with additional arguments and improved documentation. It now accepts `notebook_path`, `name`, `spark_conf`, and `libraries` as optional keyword arguments, and can accept any additional arguments to be passed to the `WorkspaceClient.jobs.create` method. If no `notebook_path` or `tasks` argument is provided, a random notebook is created and a single task with a notebook task is run using the latest Spark version and a single worker cluster. The fixture has been improved to manage Databricks jobs and clean them up after testing. Additionally, documentation has been added for the `make_job` function and the `test_job` function in the test fixtures file. The `test_job` function, which created a job and logged its creation, has been removed, and the `test_cluster` and `test_pipeline` functions remain unchanged. The `os` module is no longer imported in this file.
- Added
`make_model` fixture. A new pytest fixture, `make_model`, has been added to the codebase for the open-source library. This fixture facilitates the creation and automatic cleanup of Databricks Models during tests, returning a `GetModelResponse` object. The optional `model_name` parameter allows for customization, with a default value of `dummy-*`. The `make_model` fixture can be utilized in conjunction with other fixtures such as `ws`, `make_random`, and `make_registered_model_permissions`, streamlining the testing of model-related functionality. Additionally, a new test function, `test_models`, has been introduced, utilizing the `make_model`, `make_group`, and `make_registered_model_permissions` fixtures to test model management within the system. This new feature enhances the library's testing capabilities, making it easier to create, configure, and manage models and related resources during test execution.
- Added
`make_pipeline` fixture. A new fixture named `make_pipeline` has been added to the project, which facilitates the creation and cleanup of a Delta Live Tables pipeline after testing. This fixture is added to the `compute.py` file and takes optional keyword arguments such as `name`, `libraries`, and `clusters`. It generates a random name, creates a disposable notebook with random libraries, and creates a single-node cluster with 16GB memory and local disk if these arguments are not provided. The fixture returns a function to create pipelines, resulting in a `CreatePipelineResponse` instance. Additionally, a new integration test has been added to test the functionality of this fixture, and it logs information about the created pipeline for debugging and inspection purposes. This new fixture improves the testing capabilities of the project, allowing for more robust and flexible tests of pipeline creation and management.
- Added
`make_query` fixture. In this release, we have added a new fixture called `make_query` to the plugin module for the Redash integration. This fixture creates a `LegacyQuery` object for testing query-related functionality in a controlled environment. It can be used in conjunction with the `make_user` and `make_query_permissions` fixtures to test query permissions for a specific user. The `make_query` fixture generates a random query name, creates a table, and uses the `ws.queries_legacy.create` method to create the query. The query is then deleted using the `ws.queries_legacy.delete` method after the test is completed. This fixture is utilized in the `test_permissions_for_redash` function, which creates a user and a query, and then sets the permission level for the query for the created user using the `make_query_permissions` fixture. This enhancement improves the testing capabilities of the Pytester framework for projects that utilize Redash.
- Added
`make_schema` fixture. A new `make_schema` fixture has been added to the open-source library to improve schema management and testing. This fixture creates a schema with an optional catalog name and a schema name, which defaults to a random string. The fixture cleans up the schema after the test is complete and returns an instance of `SchemaInfo`. It can be used in conjunction with other fixtures such as `make_table` and `make_udf` for easier testing and setup of schemas. Additionally, the `make_schema` fixture includes a new keyword-only argument, `log_workspace_link`, to log a link to the created schema in the Databricks workspace. The `make_catalog` fixture has also been updated to include the `log_workspace_link` argument for logging links to created catalogs. These changes enhance the testability of the code and provide better catalog and schema management in the Databricks workspace.
- Added
`make_serving_endpoint` fixture. A new `make_serving_endpoint` fixture has been added to the codebase, located in the `baseline.py`, `ml.py`, and `plugin.py` files and in `tests/integration/fixtures/test_ml.py`. This fixture enables the creation and deletion of Databricks serving endpoints, handling any potential `DatabricksError` exceptions during teardown. It creates a model for a small workload size and returns a `ServingEndpointDetailed` object. The `make_serving_endpoint_permissions` fixture is introduced as well, creating serving endpoint permissions for a specified object ID, permission level, and group name. New tests demonstrate how to create serving endpoints, grant query permissions to a group, and test the endpoint. Additionally, the README.md file has been updated to document the new fixtures. - Added
`make_storage_credential` fixture. In this release, we have added a new fixture called `make_storage_credential` to our testing utilities. This fixture creates a storage credential with configurable parameters such as credential name, Azure service principal information, AWS IAM role ARN, and read-only status. Depending on the provided parameters, it creates either an Azure or an AWS storage credential, and it removes the created credential after the test. The fixture is implemented in `plugin.py` and added to the existing list of fixtures for a consistent, easy-to-use testing setup. Additionally, a new integration test, `test_storage_credential`, verifies creating a storage credential and the integration between the system and storage services. These additions make it easier to write tests that require access to storage resources. - Added
`make_table` fixture. In this release, we've added the `make_table` fixture to simplify testing operations on tables and catalogs. This fixture creates a table with a given catalog and schema name, CTAS statement, and properties. It can create the table as a non-Delta or Delta table, as an external table with a CSV or Delta location, or as a view, and it allows overriding the storage location. The fixture also gained new parameters and functionality, such as logging a workspace link for the created table and specifying the catalog and schema where the table will be created, along with new functions for creating and casting columns in the table. After the test, the fixture automatically removes the created table. This release aims to provide a more customizable and convenient way to test table operations. - Added
`make_udf` fixture. The `make_udf` fixture has been added to facilitate the creation and removal of user-defined functions (UDFs) for testing purposes. This fixture creates a UDF with optional parameters to specify the catalog, schema, name, and whether to create a Hive UDF. It returns an instance of `databricks.sdk.service.catalog.FunctionInfo`, and the UDF is removed after the test. The feature is exercised in the new `test_make_some_udfs` integration test, which creates two UDFs in a schema within the Hive metastore, one with and one without Hive support. Additionally, the `test_create_view` test is now skipped, and the `test_table_fixture` test remains unchanged. This change allows for more comprehensive testing by creating UDFs in the Hive metastore programmatically. - Added
`make_warehouse` fixture. A new `make_warehouse` fixture has been added to the test suite, allowing for the creation and customization of a Databricks warehouse for testing purposes. The fixture accepts optional keyword arguments such as `warehouse_name`, `warehouse_type`, `cluster_size`, `max_num_clusters`, and `enable_serverless_compute`, allowing users to configure the warehouse's properties. It returns a function that creates a warehouse using the provided parameters and handles cleanup after the test is complete. Additionally, a corresponding test function, `test_warehouse_has_remove_after_tag`, verifies that a newly created warehouse has the expected `RemoveAfter` tag, facilitating automated testing and resource management. - Added ability to specify custom SQL in
`make_query`. The `make_query` fixture has been updated to allow for greater customization in testing with the addition of a new `query` keyword argument. This parameter enables users to specify a custom SQL query to be stored and executed, with the default being `SELECT * FROM <newly created random table>`. The fixture continues to create and remove the `LegacyQuery` object. With this enhancement, users can tailor their tests to specific needs for more targeted and precise testing outcomes. - Added documentation for
`make_cluster_policy`. In this release, we introduce new features for testing and managing Databricks cluster policies and workspace-link logging. We've added the `make_cluster_policy` fixture, which simplifies the creation and deletion of cluster policies using a specified workspace. This fixture returns a `CreatePolicyResponse` instance and can be used within test functions. Additionally, we've developed the `log_workspace_link` fixture, which constructs and logs a workspace link for debugging and tracking purposes. The `make_cluster_policy` function is also introduced in the `plugin.py` file, enabling users to manage and test Databricks cluster policies using the pytester framework. To ensure proper functionality, the `test_compute.py` file includes a test function for `make_cluster_policy`. - Added documentation for
`make_group` and `make_user`. In this release, we have introduced the `make_group` and `make_user` fixtures to manage Databricks workspace groups and users, respectively. The `make_group` fixture allows you to create groups with specified members, roles, and entitlements, handling eventual-consistency issues and waiting for group provisioning if required. The `make_user` fixture creates a user and deletes it after the test, handling naming conflicts by retrying the creation process for 30 seconds. The fixtures return instances of `Group` and `User`, respectively, and are documented in the README.md with usage examples. Additionally, we have introduced a built-in logger that traces entity creation and deletion through links in the Databricks workspace UI, and documentation for `make_group` and `make_user` is generated with the `gen-readme.py` script. The release also updates the `conftest.py` file in the `tests/integration` directory, importing the `fixture` function from `pytest` and the `install_logger` and `logging` modules from `databricks.labs.blueprint.logger` to improve documentation and configure logging for the project. - Added documentation for
`make_notebook`, `make_directory`, and `make_repo`. The `make_notebook`, `make_directory`, and `make_repo` fixtures have been updated with new functionality and improved documentation in this release. These fixtures are used in tests to manage Databricks notebooks, directories, and repos, respectively, and they now return functions that create resources with specified parameters. The `make_notebook` fixture now accepts optional keyword arguments for `path`, `content`, `language`, `format`, and `overwrite`, and returns an `os.PathLike` object that is automatically deleted after the test is complete. The `make_directory` fixture now accepts an optional keyword argument for `path`, and the `make_repo` fixture now accepts optional keyword arguments for `url`, `provider`, and `path`. These fixtures simplify creating and managing Databricks resources in tests and help ensure that resources are properly cleaned up after each test. The commit also includes documentation for the new functionality and integration tests for these fixtures. - Added documentation for
`make_secret_scope` and `make_secret_scope_acl`. In this release, documentation has been added for two new functions, `make_secret_scope` and `make_secret_scope_acl`, which are used for creating and managing secret scopes and their associated access-control lists (ACLs) in a Databricks workspace. The `make_secret_scope` function creates a new secret scope with a unique, randomly generated name and automatically deletes the scope after the test is complete. The `make_secret_scope_acl` function manages ACLs for secret scopes, defining permissions for principals (users or groups) on specific secret scopes. Three new test functions cover creating secret scopes and managing their ACLs. Additionally, type hints have been added to the package to support PEP 561. Overall, these changes improve documentation and testing, making it easier for developers to manage secret scopes and their ACLs in a Databricks workspace. - Added documentation update on
`make fmt` (#34). In this release, the `make fmt` command in the documentation has been updated to include an additional step that runs the `gen-readme.py` script before executing `hatch run fmt`. This script generates or updates the README file with detailed documentation on the various pytest fixtures available in the Python Testing for Databricks project. A new `Fixture` dataclass has been introduced to represent a fixture's metadata, and the `databricks.labs.pytester.fixtures.plugin` module is used to discover all fixtures. The `FIXTURES` section in the README.md file has been updated with the new documentation, which includes the purpose, parameters, return values, and usage examples for each fixture. The `test` and `lint` targets in the Makefile remain unchanged. Please note that this project is not officially supported by Databricks.
- Added downstream testing. We have implemented downstream testing in our CI/CD pipeline through a new GitHub Actions workflow, `downstreams.yml`. This workflow runs tests when pull requests are opened, synchronized, or checked during a merge group, and on pushes to the main branch. The compatibility job runs on the latest version of Ubuntu and includes steps to check out the code with full fetch depth, install Python and the toolchain, and run the downstreams test suite using the `databrickslabs/sandbox/downstreams` action. The downstreams matrix includes the `blueprint`, `lsql`, `ucx`, and `remorph` repositories in the `databrickslabs` organization, and the `GITHUB_TOKEN` environment variable is used for authentication. This helps ensure that our codebase remains stable and functional as we continue to develop and release new features.
- Added note on UCX project. In the 2024 release, the open-source library has undergone significant updates, incorporating the UCX project into its ecosystem. UCX, an open-source project providing a unified communication layer for various high-performance computing (HPC) platforms, enhances the library's functionality, particularly in automated migrations and static code analysis. The library, developed as part of the Unity Catalog Automated Migrations project, has also added new authors and maintainers, including Vuong Nguyen, Lars George, Cor Zuurmond, Andrew Snare, Pritish Pai, and removed Liran Bareket and Vuong Nguyen, indicating potential new contributions and teams involved. The logging section has also been improved, based on years of debugging integration tests for Databricks and its ecosystem, simplifying integration testing with Databricks for other projects.
- Added support for `.env` files (#36). In this change, we have added support for `.env` files, allowing for local debugging and integration tests in IDEs. A new `debug_env_name` fixture has been introduced, which specifies the name of the debug environment, with a default value of `.env`. If there are security concerns about using `.env` files, a `~/.databricks/debug-env.json` file can be used instead. Additionally, we have updated the `gen-readme.py` script and the `Fixture` class to improve documentation and describe the relationships between fixtures and `.env` files. The `debug_env` fixture reads a `debug-env.json` file if the code is running in debug mode, and the `env_or_skip` fixture skips tests if required environment variables are not set. These changes make it easier to manage and integrate environment variables in tests.
- Added supporting documents. In this release, we introduce a new changelog file for the project, versioned at 0.0.0, to record notable changes over time. Additionally, we have added a CODEOWNERS file, designating @nfx as the default code owner for all files in the repository, and a CONTRIBUTING.md file that provides detailed guidelines for contributing to the project. The CONTRIBUTING.md file covers a wide range of topics, including first principles, change management, code organization, adding new fixtures, common mypy error fixes, integration-testing infrastructure, local setup, first contribution, and troubleshooting. These additions aim to improve code quality, maintainability, and collaboration for the project's developers and users.
- Added telemetry tracking. A new telemetry-tracking feature has been implemented with the addition of the `with_user_agent_extra` method in the `__init__.py` file. This method, sourced from the `databricks.sdk.core` package, attaches an extra user-agent string to HTTP requests that includes the version of the `pytester` project. The `__version__` variable from the `__about__.py` file is used to ensure the specific version of the `pytester` project is incorporated in the user-agent string. This enhancement allows project usage and statistics to be tracked through user agents, providing valuable insights for future development and improvements.
- Added unit testing for test fixtures. In this release, we have added comprehensive unit tests for various entities in our codebase, such as alerts, authorization permissions, catalogs, clusters, cluster policies, dashboard permissions, directories, experiments, feature table permissions, groups, instance pools, instance pool permissions, jobs, job permissions, Lakeview dashboard permissions, models, notebooks, notebook permissions, pipelines, pipeline permissions, queries, query permissions, registered model permissions, repos, repo permissions, secret scopes, secret scope ACLs, serving endpoints, serving endpoint permissions, storage credentials, UDFs, users, warehouses, warehouse permissions, workspace file path permissions, and workspace file permissions. Additionally, we have updated fixtures such as `sql_backend`, `workspace_library`, `debug_env`, and `product_info` with tests and provided examples of how to use these fixtures. We have also updated our configuration files to improve code quality, maintainability, and reliability, including bumping the version of mypy, adding the `unit` package to the known-first-party modules in the isort configuration, and updating the ignore list for pylint. Furthermore, we have added a new `unwrap.py` file to the `databricks/labs/pytester/fixtures` directory to support unit testing of pytest fixtures, ensuring that the fixtures behave as expected and improving the reliability and stability of the codebase. Lastly, we have added a new unit-test file for catalog functionality, specifically for the `make_table` function, which creates a new managed table with a specified schema and table type.
- Bump unit testing coverage. This commit enhances unit-testing coverage and improves the overall code quality of the library. Several changes have been introduced, including the addition of new fixtures
`sql_backend`, `sql_exec`, and `sql_fetch_all` for testing SQL-related functionality in the Databricks platform. These fixtures are demonstrated in the newly added `random_string` test case. The commit also introduces a new `exclude_also` section under `[tool.mypy]` in the pyproject.toml file, which provides more precise control over the lines checked during mypy type checking. Furthermore, the environment.py file has been removed, and several SQL backend and test-resource purge-time fixtures have been deleted, resulting in increased unit-testing coverage. Additionally, the `catalog.py` and `compute.py` files in the `databricks/labs/pytester/fixtures` directory have been updated to improve resource management and ensure proper cleanup after tests are executed. The `permissions.py` file has been modified to remove the `sql/` prefix from permission paths for dashboards, alerts, and queries, simplifying the permission hierarchy in the tests. The `plugin.py` file has been updated to reorganize SQL- and environment-related functions, making them more modular and maintainable. Finally, new utility fixtures `watchdog_remove_after` and `watchdog_purge_suffix` have been added in the `watchdog.py` file to manage and purge test objects as needed, and a new `.env` file has been added to the `tests/unit/fixtures/` directory to provide consistent testing conditions. These changes contribute to a better testing environment and improved overall project quality. - Prettify fixture documentation (#35). In this release, the documentation of the
`ws` fixture in the Databricks testing project has been significantly enhanced in the README file. The `ws` fixture now has more comprehensive documentation, including its purpose, a usage example, and the fact that it is built on top of other fixtures. Additionally, the `Fixture` class in the `gen-readme.py` script has been improved for better readability and clarity. The `make_random` function in the `baseline.py` file has been refactored for improved documentation and clarity, with updated usage examples and the removal of a deprecated `Returns` section. These changes provide clearer and more comprehensive documentation, making the features easier to understand and use. - Updated README.md. In this update, we have added several pytest fixtures to enhance testing capabilities in the Databricks workspace. These fixtures include
`make_warehouse_permissions`, `make_lakeview_dashboard_permissions`, `log_workspace_link`, `make_dashboard_permissions`, `make_alert_permissions`, `make_query_permissions`, `make_experiment_permissions`, `make_registered_model_permissions`, `make_serving_endpoint_permissions`, and `make_feature_table_permissions`. These additions enable easier testing of various functionalities and linking within the workspace. Furthermore, we have included the `make_authorization_permissions` fixture to facilitate testing of authorization functionalities. To aid in debugging, we have updated the `Logging` section with the `debug_env_name` and `debug_env` fixtures. Lastly, we have added the `workspace_library` fixture for testing library-related functionalities in the workspace. These changes improve the overall testing experience and enable more comprehensive testing within the Databricks workspace. - Updated pytest requirement from ~=8.1.0 to ~=8.3.3 (#31). In this pull request, we update the pytest requirement from version 8.1.0 to 8.3.3 in our pyproject.toml file. This update includes several bug fixes and improvements for our testing framework, such as avoiding calling properties during fixture discovery, fixing the issue of not displaying assertion-failure differences with the
`--import-mode=importlib` option in pytest 8.1 and above, and addressing a regression that caused mypy to fail. Additionally, we fix typing compatibility with Python 3.9 and earlier by replacing `typing.Self` with `typing_extensions.Self`. This update also ensures consistent path handling across environments by fixing an issue with backslashes being incorrectly converted in `nodeid` paths on Windows.
Dependency updates:
- Updated pytest requirement from ~=8.1.0 to ~=8.3.3 (#31).