From 17c91a679966fddec5fd08096dabf03c01eaccca Mon Sep 17 00:00:00 2001
From: Renaud Hartert
Date: Wed, 21 Aug 2024 10:18:01 +0200
Subject: [PATCH] [Release] Release v0.31.0

### Bug Fixes

 * Fixed regression introduced in v0.30.0 causing `ValueError: Invalid semantic version: 0.33.1+420240816190912` ([#729](https://github.com/databricks/databricks-sdk-py/pull/729)).


### Internal Changes

 * Escape single quotes in regex matchers ([#727](https://github.com/databricks/databricks-sdk-py/pull/727)).


### API Changes:

 * Added [w.policy_compliance_for_clusters](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/policy_compliance_for_clusters.html) workspace-level service.
 * Added [w.policy_compliance_for_jobs](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/policy_compliance_for_jobs.html) workspace-level service.
 * Added [w.resource_quotas](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/resource_quotas.html) workspace-level service.
 * Added `databricks.sdk.service.catalog.GetQuotaRequest`, `databricks.sdk.service.catalog.GetQuotaResponse`, `databricks.sdk.service.catalog.ListQuotasRequest`, `databricks.sdk.service.catalog.ListQuotasResponse` and `databricks.sdk.service.catalog.QuotaInfo` dataclasses.
 * Added `databricks.sdk.service.compute.ClusterCompliance`, `databricks.sdk.service.compute.ClusterSettingsChange`, `databricks.sdk.service.compute.EnforceClusterComplianceRequest`, `databricks.sdk.service.compute.EnforceClusterComplianceResponse`, `databricks.sdk.service.compute.GetClusterComplianceRequest`, `databricks.sdk.service.compute.GetClusterComplianceResponse`, `databricks.sdk.service.compute.ListClusterCompliancesRequest` and `databricks.sdk.service.compute.ListClusterCompliancesResponse` dataclasses.
 * Added `databricks.sdk.service.jobs.EnforcePolicyComplianceForJobResponseJobClusterSettingsChange`, `databricks.sdk.service.jobs.EnforcePolicyComplianceRequest`, `databricks.sdk.service.jobs.EnforcePolicyComplianceResponse`, `databricks.sdk.service.jobs.GetPolicyComplianceRequest`, `databricks.sdk.service.jobs.GetPolicyComplianceResponse`, `databricks.sdk.service.jobs.JobCompliance`, `databricks.sdk.service.jobs.ListJobComplianceForPolicyResponse` and `databricks.sdk.service.jobs.ListJobComplianceRequest` dataclasses.
 * Added `fallback` field for `databricks.sdk.service.catalog.CreateExternalLocation`.
 * Added `fallback` field for `databricks.sdk.service.catalog.ExternalLocationInfo`.
 * Added `fallback` field for `databricks.sdk.service.catalog.UpdateExternalLocation`.
 * Added `job_run_id` field for `databricks.sdk.service.jobs.BaseRun`.
 * Added `job_run_id` field for `databricks.sdk.service.jobs.Run`.
 * Added `include_metrics` field for `databricks.sdk.service.sql.ListQueryHistoryRequest`.
 * Added `statement_ids` field for `databricks.sdk.service.sql.QueryFilter`.
 * Removed `databricks.sdk.service.sql.ContextFilter` dataclass.
 * Removed `context_filter` field for `databricks.sdk.service.sql.QueryFilter`.
 * Removed `pipeline_id` and `pipeline_update_id` fields for `databricks.sdk.service.sql.QuerySource`.
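As a quick orientation, the three new workspace-level services hang off `WorkspaceClient` like any other service. The sketch below is illustrative only, based on the method signatures added in this patch; the cluster, job, and quota values are placeholders, not values from this release:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Monitor Unity Catalog resource quotas under the current metastore.
for quota in w.resource_quotas.list_quotas():
    print(quota.quota_name, quota.quota_count, quota.quota_limit)

# Check whether a cluster still complies with its policy
# ("0123-456789-abcdefgh" is a placeholder cluster ID).
status = w.policy_compliance_for_clusters.get_compliance(cluster_id="0123-456789-abcdefgh")
print(status.is_compliant, status.violations)

# Preview the changes needed to bring a job back into policy compliance
# (1234 is a placeholder job ID); omit validate_only to apply the changes.
preview = w.policy_compliance_for_jobs.enforce_compliance(job_id=1234, validate_only=True)
print(preview.has_changes)
```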
OpenAPI SHA: 3eae49b444cac5a0118a3503e5b7ecef7f96527a, Date: 2024-08-21 --- .codegen/_openapi_sha | 2 +- CHANGELOG.md | 33 ++ databricks/sdk/__init__.py | 26 +- databricks/sdk/errors/overrides.py | 8 + databricks/sdk/errors/platform.py | 5 + databricks/sdk/service/catalog.py | 193 +++++++++++ databricks/sdk/service/compute.py | 272 ++++++++++++++++ databricks/sdk/service/dashboards.py | 33 +- databricks/sdk/service/jobs.py | 306 +++++++++++++++++- databricks/sdk/service/sql.py | 95 +----- databricks/sdk/version.py | 2 +- docs/dbdataclasses/catalog.rst | 27 ++ docs/dbdataclasses/compute.rst | 24 ++ docs/dbdataclasses/jobs.rst | 24 ++ docs/dbdataclasses/sql.rst | 4 - docs/workspace/catalog/external_locations.rst | 12 +- docs/workspace/catalog/index.rst | 1 + docs/workspace/catalog/resource_quotas.rst | 45 +++ docs/workspace/compute/index.rst | 1 + .../policy_compliance_for_clusters.rst | 71 ++++ docs/workspace/dashboards/lakeview.rst | 10 +- docs/workspace/jobs/index.rst | 3 +- .../jobs/policy_compliance_for_jobs.rst | 66 ++++ docs/workspace/sql/query_history.rst | 11 +- 24 files changed, 1152 insertions(+), 122 deletions(-) create mode 100644 docs/workspace/catalog/resource_quotas.rst create mode 100644 docs/workspace/compute/policy_compliance_for_clusters.rst create mode 100644 docs/workspace/jobs/policy_compliance_for_jobs.rst diff --git a/.codegen/_openapi_sha b/.codegen/_openapi_sha index fef6f268b..8b01a2422 100644 --- a/.codegen/_openapi_sha +++ b/.codegen/_openapi_sha @@ -1 +1 @@ -f98c07f9c71f579de65d2587bb0292f83d10e55d \ No newline at end of file +3eae49b444cac5a0118a3503e5b7ecef7f96527a \ No newline at end of file diff --git a/CHANGELOG.md b/CHANGELOG.md index 278eec3e2..ee73d57f9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,38 @@ # Version changelog +## [Release] Release v0.31.0 + +### Bug Fixes + + * Fixed regression introduced in v0.30.0 causing `ValueError: Invalid semantic version: 0.33.1+420240816190912` ([#729](https://github.com/databricks/databricks-sdk-py/pull/729)). + + +### Internal Changes + + * Escape single quotes in regex matchers ([#727](https://github.com/databricks/databricks-sdk-py/pull/727)). + + +### API Changes: + + * Added [w.policy_compliance_for_clusters](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/policy_compliance_for_clusters.html) workspace-level service. + * Added [w.policy_compliance_for_jobs](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/policy_compliance_for_jobs.html) workspace-level service. + * Added [w.resource_quotas](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/resource_quotas.html) workspace-level service. + * Added `databricks.sdk.service.catalog.GetQuotaRequest`, `databricks.sdk.service.catalog.GetQuotaResponse`, `databricks.sdk.service.catalog.ListQuotasRequest`, `databricks.sdk.service.catalog.ListQuotasResponse` and `databricks.sdk.service.catalog.QuotaInfo` dataclasses. + * Added `databricks.sdk.service.compute.ClusterCompliance`, `databricks.sdk.service.compute.ClusterSettingsChange`, `databricks.sdk.service.compute.EnforceClusterComplianceRequest`, `databricks.sdk.service.compute.EnforceClusterComplianceResponse`, `databricks.sdk.service.compute.GetClusterComplianceRequest`, `databricks.sdk.service.compute.GetClusterComplianceResponse`, `databricks.sdk.service.compute.ListClusterCompliancesRequest` and `databricks.sdk.service.compute.ListClusterCompliancesResponse` dataclasses. 
+ * Added `databricks.sdk.service.jobs.EnforcePolicyComplianceForJobResponseJobClusterSettingsChange`, `databricks.sdk.service.jobs.EnforcePolicyComplianceRequest`, `databricks.sdk.service.jobs.EnforcePolicyComplianceResponse`, `databricks.sdk.service.jobs.GetPolicyComplianceRequest`, `databricks.sdk.service.jobs.GetPolicyComplianceResponse`, `databricks.sdk.service.jobs.JobCompliance`, `databricks.sdk.service.jobs.ListJobComplianceForPolicyResponse` and `databricks.sdk.service.jobs.ListJobComplianceRequest` dataclasses. + * Added `fallback` field for `databricks.sdk.service.catalog.CreateExternalLocation`. + * Added `fallback` field for `databricks.sdk.service.catalog.ExternalLocationInfo`. + * Added `fallback` field for `databricks.sdk.service.catalog.UpdateExternalLocation`. + * Added `job_run_id` field for `databricks.sdk.service.jobs.BaseRun`. + * Added `job_run_id` field for `databricks.sdk.service.jobs.Run`. + * Added `include_metrics` field for `databricks.sdk.service.sql.ListQueryHistoryRequest`. + * Added `statement_ids` field for `databricks.sdk.service.sql.QueryFilter`. + * Removed `databricks.sdk.service.sql.ContextFilter` dataclass. + * Removed `context_filter` field for `databricks.sdk.service.sql.QueryFilter`. + * Removed `pipeline_id` and `pipeline_update_id` fields for `databricks.sdk.service.sql.QuerySource`. + +OpenAPI SHA: 3eae49b444cac5a0118a3503e5b7ecef7f96527a, Date: 2024-08-21 + ## [Release] Release v0.30.0 ### New Features and Improvements diff --git a/databricks/sdk/__init__.py b/databricks/sdk/__init__.py index 48fe1beb6..50069c315 100755 --- a/databricks/sdk/__init__.py +++ b/databricks/sdk/__init__.py @@ -17,7 +17,8 @@ GrantsAPI, MetastoresAPI, ModelVersionsAPI, OnlineTablesAPI, QualityMonitorsAPI, - RegisteredModelsAPI, SchemasAPI, + RegisteredModelsAPI, + ResourceQuotasAPI, SchemasAPI, StorageCredentialsAPI, SystemSchemasAPI, TableConstraintsAPI, TablesAPI, @@ -27,6 +28,7 @@ GlobalInitScriptsAPI, InstancePoolsAPI, InstanceProfilesAPI, LibrariesAPI, + PolicyComplianceForClustersAPI, PolicyFamiliesAPI) from databricks.sdk.service.dashboards import GenieAPI, LakeviewAPI from databricks.sdk.service.files import DbfsAPI, FilesAPI @@ -38,7 +40,7 @@ GroupsAPI, PermissionMigrationAPI, PermissionsAPI, ServicePrincipalsAPI, UsersAPI, WorkspaceAssignmentAPI) -from databricks.sdk.service.jobs import JobsAPI +from databricks.sdk.service.jobs import JobsAPI, PolicyComplianceForJobsAPI from databricks.sdk.service.marketplace import ( ConsumerFulfillmentsAPI, ConsumerInstallationsAPI, ConsumerListingsAPI, ConsumerPersonalizationRequestsAPI, ConsumerProvidersAPI, @@ -214,6 +216,8 @@ def __init__(self, self._permission_migration = PermissionMigrationAPI(self._api_client) self._permissions = PermissionsAPI(self._api_client) self._pipelines = PipelinesAPI(self._api_client) + self._policy_compliance_for_clusters = PolicyComplianceForClustersAPI(self._api_client) + self._policy_compliance_for_jobs = PolicyComplianceForJobsAPI(self._api_client) self._policy_families = PolicyFamiliesAPI(self._api_client) self._provider_exchange_filters = ProviderExchangeFiltersAPI(self._api_client) self._provider_exchanges = ProviderExchangesAPI(self._api_client) @@ -234,6 +238,7 @@ def __init__(self, self._recipients = RecipientsAPI(self._api_client) self._registered_models = RegisteredModelsAPI(self._api_client) self._repos = ReposAPI(self._api_client) + self._resource_quotas = ResourceQuotasAPI(self._api_client) self._schemas = SchemasAPI(self._api_client) self._secrets = 
SecretsAPI(self._api_client) self._service_principals = ServicePrincipalsAPI(self._api_client) @@ -499,6 +504,16 @@ def pipelines(self) -> PipelinesAPI: """The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.""" return self._pipelines + @property + def policy_compliance_for_clusters(self) -> PolicyComplianceForClustersAPI: + """The policy compliance APIs allow you to view and manage the policy compliance status of clusters in your workspace.""" + return self._policy_compliance_for_clusters + + @property + def policy_compliance_for_jobs(self) -> PolicyComplianceForJobsAPI: + """The compliance APIs allow you to view and manage the policy compliance status of jobs in your workspace.""" + return self._policy_compliance_for_jobs + @property def policy_families(self) -> PolicyFamiliesAPI: """View available policy families.""" @@ -561,7 +576,7 @@ def queries_legacy(self) -> QueriesLegacyAPI: @property def query_history(self) -> QueryHistoryAPI: - """A service responsible for storing and retrieving the list of queries run against SQL endpoints, serverless compute, and DLT.""" + """A service responsible for storing and retrieving the list of queries run against SQL endpoints and serverless compute.""" return self._query_history @property @@ -594,6 +609,11 @@ def repos(self) -> ReposAPI: """The Repos API allows users to manage their git repos.""" return self._repos + @property + def resource_quotas(self) -> ResourceQuotasAPI: + """Unity Catalog enforces resource quotas on all securable objects, which limits the number of resources that can be created.""" + return self._resource_quotas + @property def schemas(self) -> SchemasAPI: """A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace.""" diff --git a/databricks/sdk/errors/overrides.py b/databricks/sdk/errors/overrides.py index 492b2caad..840bdcfcb 100644 --- a/databricks/sdk/errors/overrides.py +++ b/databricks/sdk/errors/overrides.py @@ -22,4 +22,12 @@ message_matcher=re.compile(r'Job .* does not exist'), custom_error=ResourceDoesNotExist, ), + _ErrorOverride(debug_name="Job Runs InvalidParameterValue=>ResourceDoesNotExist", + path_regex=re.compile(r'^/api/2\.\d/jobs/runs/get'), + verb="GET", + status_code_matcher=re.compile(r'^400$'), + error_code_matcher=re.compile(r'INVALID_PARAMETER_VALUE'), + message_matcher=re.compile(r'(Run .* does not exist|Run: .* in job: .* doesn\'t exist)'), + custom_error=ResourceDoesNotExist, + ), ] diff --git a/databricks/sdk/errors/platform.py b/databricks/sdk/errors/platform.py index df25fad4b..0d923a75c 100755 --- a/databricks/sdk/errors/platform.py +++ b/databricks/sdk/errors/platform.py @@ -47,6 +47,10 @@ class DeadlineExceeded(DatabricksError): """the deadline expired before the operation could complete""" +class InvalidState(BadRequest): + """unexpected state""" + + class InvalidParameterValue(BadRequest): """supplied value for a parameter was invalid""" @@ -99,6 +103,7 @@ class DataLoss(InternalError): } ERROR_CODE_MAPPING = { + 'INVALID_STATE': InvalidState, 'INVALID_PARAMETER_VALUE': InvalidParameterValue, 'RESOURCE_DOES_NOT_EXIST': ResourceDoesNotExist, 'ABORTED': Aborted, diff --git a/databricks/sdk/service/catalog.py b/databricks/sdk/service/catalog.py index 0e81d239f..5c3702daf 100755 --- a/databricks/sdk/service/catalog.py +++ b/databricks/sdk/service/catalog.py @@ -849,7 +849,10 @@ class ConnectionInfoSecurableKind(Enum): """Kind of connection securable.""" CONNECTION_BIGQUERY = 'CONNECTION_BIGQUERY' + 
CONNECTION_BUILTIN_HIVE_METASTORE = 'CONNECTION_BUILTIN_HIVE_METASTORE' CONNECTION_DATABRICKS = 'CONNECTION_DATABRICKS' + CONNECTION_EXTERNAL_HIVE_METASTORE = 'CONNECTION_EXTERNAL_HIVE_METASTORE' + CONNECTION_GLUE = 'CONNECTION_GLUE' CONNECTION_MYSQL = 'CONNECTION_MYSQL' CONNECTION_ONLINE_CATALOG = 'CONNECTION_ONLINE_CATALOG' CONNECTION_POSTGRESQL = 'CONNECTION_POSTGRESQL' @@ -864,6 +867,8 @@ class ConnectionType(Enum): BIGQUERY = 'BIGQUERY' DATABRICKS = 'DATABRICKS' + GLUE = 'GLUE' + HIVE_METASTORE = 'HIVE_METASTORE' MYSQL = 'MYSQL' POSTGRESQL = 'POSTGRESQL' REDSHIFT = 'REDSHIFT' @@ -1023,6 +1028,11 @@ class CreateExternalLocation: encryption_details: Optional[EncryptionDetails] = None """Encryption options that apply to clients connecting to cloud storage.""" + fallback: Optional[bool] = None + """Indicates whether fallback mode is enabled for this external location. When fallback mode is + enabled, the access to the location falls back to cluster credentials if UC credentials are not + sufficient.""" + read_only: Optional[bool] = None """Indicates whether the external location is read-only.""" @@ -1036,6 +1046,7 @@ def as_dict(self) -> dict: if self.comment is not None: body['comment'] = self.comment if self.credential_name is not None: body['credential_name'] = self.credential_name if self.encryption_details: body['encryption_details'] = self.encryption_details.as_dict() + if self.fallback is not None: body['fallback'] = self.fallback if self.name is not None: body['name'] = self.name if self.read_only is not None: body['read_only'] = self.read_only if self.skip_validation is not None: body['skip_validation'] = self.skip_validation @@ -1049,6 +1060,7 @@ def from_dict(cls, d: Dict[str, any]) -> CreateExternalLocation: comment=d.get('comment', None), credential_name=d.get('credential_name', None), encryption_details=_from_dict(d, 'encryption_details', EncryptionDetails), + fallback=d.get('fallback', None), name=d.get('name', None), read_only=d.get('read_only', None), skip_validation=d.get('skip_validation', None), @@ -1974,6 +1986,11 @@ class ExternalLocationInfo: encryption_details: Optional[EncryptionDetails] = None """Encryption options that apply to clients connecting to cloud storage.""" + fallback: Optional[bool] = None + """Indicates whether fallback mode is enabled for this external location. 
When fallback mode is + enabled, the access to the location falls back to cluster credentials if UC credentials are not + sufficient.""" + isolation_mode: Optional[IsolationMode] = None """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" @@ -2009,6 +2026,7 @@ def as_dict(self) -> dict: if self.credential_id is not None: body['credential_id'] = self.credential_id if self.credential_name is not None: body['credential_name'] = self.credential_name if self.encryption_details: body['encryption_details'] = self.encryption_details.as_dict() + if self.fallback is not None: body['fallback'] = self.fallback if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode.value if self.metastore_id is not None: body['metastore_id'] = self.metastore_id if self.name is not None: body['name'] = self.name @@ -2030,6 +2048,7 @@ def from_dict(cls, d: Dict[str, any]) -> ExternalLocationInfo: credential_id=d.get('credential_id', None), credential_name=d.get('credential_name', None), encryption_details=_from_dict(d, 'encryption_details', EncryptionDetails), + fallback=d.get('fallback', None), isolation_mode=_enum(d, 'isolation_mode', IsolationMode), metastore_id=d.get('metastore_id', None), name=d.get('name', None), @@ -2544,6 +2563,23 @@ class GetMetastoreSummaryResponseDeltaSharingScope(Enum): INTERNAL_AND_EXTERNAL = 'INTERNAL_AND_EXTERNAL' +@dataclass +class GetQuotaResponse: + quota_info: Optional[QuotaInfo] = None + """The returned QuotaInfo.""" + + def as_dict(self) -> dict: + """Serializes the GetQuotaResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.quota_info: body['quota_info'] = self.quota_info.as_dict() + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> GetQuotaResponse: + """Deserializes the GetQuotaResponse from a dictionary.""" + return cls(quota_info=_from_dict(d, 'quota_info', QuotaInfo)) + + class IsolationMode(Enum): """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" @@ -2719,6 +2755,29 @@ def from_dict(cls, d: Dict[str, any]) -> ListModelVersionsResponse: next_page_token=d.get('next_page_token', None)) +@dataclass +class ListQuotasResponse: + next_page_token: Optional[str] = None + """Opaque token to retrieve the next page of results. Absent if there are no more pages. 
+ __page_token__ should be set to this value for the next request.""" + + quotas: Optional[List[QuotaInfo]] = None + """An array of returned QuotaInfos.""" + + def as_dict(self) -> dict: + """Serializes the ListQuotasResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.next_page_token is not None: body['next_page_token'] = self.next_page_token + if self.quotas: body['quotas'] = [v.as_dict() for v in self.quotas] + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ListQuotasResponse: + """Deserializes the ListQuotasResponse from a dictionary.""" + return cls(next_page_token=d.get('next_page_token', None), + quotas=_repeated_dict(d, 'quotas', QuotaInfo)) + + @dataclass class ListRegisteredModelsResponse: next_page_token: Optional[str] = None @@ -4048,6 +4107,49 @@ def from_dict(cls, d: Dict[str, any]) -> ProvisioningStatus: initial_pipeline_sync_progress=_from_dict(d, 'initial_pipeline_sync_progress', PipelineProgress)) +@dataclass +class QuotaInfo: + last_refreshed_at: Optional[int] = None + """The timestamp that indicates when the quota count was last updated.""" + + parent_full_name: Optional[str] = None + """Name of the parent resource. Returns metastore ID if the parent is a metastore.""" + + parent_securable_type: Optional[SecurableType] = None + """The quota parent securable type.""" + + quota_count: Optional[int] = None + """The current usage of the resource quota.""" + + quota_limit: Optional[int] = None + """The current limit of the resource quota.""" + + quota_name: Optional[str] = None + """The name of the quota.""" + + def as_dict(self) -> dict: + """Serializes the QuotaInfo into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.last_refreshed_at is not None: body['last_refreshed_at'] = self.last_refreshed_at + if self.parent_full_name is not None: body['parent_full_name'] = self.parent_full_name + if self.parent_securable_type is not None: + body['parent_securable_type'] = self.parent_securable_type.value + if self.quota_count is not None: body['quota_count'] = self.quota_count + if self.quota_limit is not None: body['quota_limit'] = self.quota_limit + if self.quota_name is not None: body['quota_name'] = self.quota_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> QuotaInfo: + """Deserializes the QuotaInfo from a dictionary.""" + return cls(last_refreshed_at=d.get('last_refreshed_at', None), + parent_full_name=d.get('parent_full_name', None), + parent_securable_type=_enum(d, 'parent_securable_type', SecurableType), + quota_count=d.get('quota_count', None), + quota_limit=d.get('quota_limit', None), + quota_name=d.get('quota_name', None)) + + @dataclass class RegisteredModelAlias: """Registered model alias.""" @@ -4969,6 +5071,11 @@ class UpdateExternalLocation: encryption_details: Optional[EncryptionDetails] = None """Encryption options that apply to clients connecting to cloud storage.""" + fallback: Optional[bool] = None + """Indicates whether fallback mode is enabled for this external location. 
When fallback mode is + enabled, the access to the location falls back to cluster credentials if UC credentials are not + sufficient.""" + force: Optional[bool] = None """Force update even if changing url invalidates dependent external tables or mounts.""" @@ -5000,6 +5107,7 @@ def as_dict(self) -> dict: if self.comment is not None: body['comment'] = self.comment if self.credential_name is not None: body['credential_name'] = self.credential_name if self.encryption_details: body['encryption_details'] = self.encryption_details.as_dict() + if self.fallback is not None: body['fallback'] = self.fallback if self.force is not None: body['force'] = self.force if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode.value if self.name is not None: body['name'] = self.name @@ -5017,6 +5125,7 @@ def from_dict(cls, d: Dict[str, any]) -> UpdateExternalLocation: comment=d.get('comment', None), credential_name=d.get('credential_name', None), encryption_details=_from_dict(d, 'encryption_details', EncryptionDetails), + fallback=d.get('fallback', None), force=d.get('force', None), isolation_mode=_enum(d, 'isolation_mode', IsolationMode), name=d.get('name', None), @@ -6597,6 +6706,7 @@ def create(self, access_point: Optional[str] = None, comment: Optional[str] = None, encryption_details: Optional[EncryptionDetails] = None, + fallback: Optional[bool] = None, read_only: Optional[bool] = None, skip_validation: Optional[bool] = None) -> ExternalLocationInfo: """Create an external location. @@ -6617,6 +6727,10 @@ def create(self, User-provided free-form text description. :param encryption_details: :class:`EncryptionDetails` (optional) Encryption options that apply to clients connecting to cloud storage. + :param fallback: bool (optional) + Indicates whether fallback mode is enabled for this external location. When fallback mode is + enabled, the access to the location falls back to cluster credentials if UC credentials are not + sufficient. :param read_only: bool (optional) Indicates whether the external location is read-only. :param skip_validation: bool (optional) @@ -6629,6 +6743,7 @@ def create(self, if comment is not None: body['comment'] = comment if credential_name is not None: body['credential_name'] = credential_name if encryption_details is not None: body['encryption_details'] = encryption_details.as_dict() + if fallback is not None: body['fallback'] = fallback if name is not None: body['name'] = name if read_only is not None: body['read_only'] = read_only if skip_validation is not None: body['skip_validation'] = skip_validation @@ -6736,6 +6851,7 @@ def update(self, comment: Optional[str] = None, credential_name: Optional[str] = None, encryption_details: Optional[EncryptionDetails] = None, + fallback: Optional[bool] = None, force: Optional[bool] = None, isolation_mode: Optional[IsolationMode] = None, new_name: Optional[str] = None, @@ -6759,6 +6875,10 @@ def update(self, Name of the storage credential used with this location. :param encryption_details: :class:`EncryptionDetails` (optional) Encryption options that apply to clients connecting to cloud storage. + :param fallback: bool (optional) + Indicates whether fallback mode is enabled for this external location. When fallback mode is + enabled, the access to the location falls back to cluster credentials if UC credentials are not + sufficient. :param force: bool (optional) Force update even if changing url invalidates dependent external tables or mounts. 
:param isolation_mode: :class:`IsolationMode` (optional) @@ -6781,6 +6901,7 @@ def update(self, if comment is not None: body['comment'] = comment if credential_name is not None: body['credential_name'] = credential_name if encryption_details is not None: body['encryption_details'] = encryption_details.as_dict() + if fallback is not None: body['fallback'] = fallback if force is not None: body['force'] = force if isolation_mode is not None: body['isolation_mode'] = isolation_mode.value if new_name is not None: body['new_name'] = new_name @@ -8178,6 +8299,78 @@ def update(self, return RegisteredModelInfo.from_dict(res) +class ResourceQuotasAPI: + """Unity Catalog enforces resource quotas on all securable objects, which limits the number of resources that + can be created. Quotas are expressed in terms of a resource type and a parent (for example, tables per + metastore or schemas per catalog). The resource quota APIs enable you to monitor your current usage and + limits. For more information on resource quotas see the [Unity Catalog documentation]. + + [Unity Catalog documentation]: https://docs.databricks.com/en/data-governance/unity-catalog/index.html#resource-quotas""" + + def __init__(self, api_client): + self._api = api_client + + def get_quota(self, parent_securable_type: str, parent_full_name: str, + quota_name: str) -> GetQuotaResponse: + """Get information for a single resource quota. + + The GetQuota API returns usage information for a single resource quota, defined as a child-parent + pair. This API also refreshes the quota count if it is out of date. Refreshes are triggered + asynchronously. The updated count might not be returned in the first call. + + :param parent_securable_type: str + Securable type of the quota parent. + :param parent_full_name: str + Full name of the parent resource. Provide the metastore ID if the parent is a metastore. + :param quota_name: str + Name of the quota. Follows the pattern of the quota type, with "-quota" added as a suffix. + + :returns: :class:`GetQuotaResponse` + """ + + headers = {'Accept': 'application/json', } + + res = self._api.do( + 'GET', + f'/api/2.1/unity-catalog/resource-quotas/{parent_securable_type}/{parent_full_name}/{quota_name}', + headers=headers) + return GetQuotaResponse.from_dict(res) + + def list_quotas(self, + *, + max_results: Optional[int] = None, + page_token: Optional[str] = None) -> Iterator[QuotaInfo]: + """List all resource quotas under a metastore. + + ListQuotas returns all quota values under the metastore. There are no SLAs on the freshness of the + counts returned. This API does not trigger a refresh of quota counts. + + :param max_results: int (optional) + The number of quotas to return. + :param page_token: str (optional) + Opaque token for the next page of results. + + :returns: Iterator over :class:`QuotaInfo` + """ + + query = {} + if max_results is not None: query['max_results'] = max_results + if page_token is not None: query['page_token'] = page_token + headers = {'Accept': 'application/json', } + + while True: + json = self._api.do('GET', + '/api/2.1/unity-catalog/resource-quotas/all-resource-quotas', + query=query, + headers=headers) + if 'quotas' in json: + for v in json['quotas']: + yield QuotaInfo.from_dict(v) + if 'next_page_token' not in json or not json['next_page_token']: + return + query['page_token'] = json['next_page_token'] + + class SchemasAPI: """A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace. A schema organizes tables, views and functions. 
To access (or list) a table or view in a schema, users must have diff --git a/databricks/sdk/service/compute.py b/databricks/sdk/service/compute.py index 148ce44e8..567518222 100755 --- a/databricks/sdk/service/compute.py +++ b/databricks/sdk/service/compute.py @@ -690,6 +690,35 @@ def from_dict(cls, d: Dict[str, any]) -> ClusterAttributes: workload_type=_from_dict(d, 'workload_type', WorkloadType)) +@dataclass +class ClusterCompliance: + cluster_id: str + """Canonical unique identifier for a cluster.""" + + is_compliant: Optional[bool] = None + """Whether this cluster is in compliance with the latest version of its policy.""" + + violations: Optional[Dict[str, str]] = None + """An object containing key-value mappings representing the first 200 policy validation errors. The + keys indicate the path where the policy validation error is occurring. The values indicate an + error message describing the policy validation error.""" + + def as_dict(self) -> dict: + """Serializes the ClusterCompliance into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.cluster_id is not None: body['cluster_id'] = self.cluster_id + if self.is_compliant is not None: body['is_compliant'] = self.is_compliant + if self.violations: body['violations'] = self.violations + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ClusterCompliance: + """Deserializes the ClusterCompliance from a dictionary.""" + return cls(cluster_id=d.get('cluster_id', None), + is_compliant=d.get('is_compliant', None), + violations=d.get('violations', None)) + + @dataclass class ClusterDetails: autoscale: Optional[AutoScale] = None @@ -1377,6 +1406,40 @@ def from_dict(cls, d: Dict[str, any]) -> ClusterPolicyPermissionsRequest: cluster_policy_id=d.get('cluster_policy_id', None)) +@dataclass +class ClusterSettingsChange: + """Represents a change to the cluster settings required for the cluster to become compliant with + its policy.""" + + field: Optional[str] = None + """The field where this change would be made.""" + + new_value: Optional[str] = None + """The new value of this field after enforcing policy compliance (either a number, a boolean, or a + string) converted to a string. This is intended to be read by a human. The typed new value of + this field can be retrieved by reading the settings field in the API response.""" + + previous_value: Optional[str] = None + """The previous value of this field before enforcing policy compliance (either a number, a boolean, + or a string) converted to a string. This is intended to be read by a human. 
The type of the + field can be retrieved by reading the settings field in the API response.""" + + def as_dict(self) -> dict: + """Serializes the ClusterSettingsChange into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.field is not None: body['field'] = self.field + if self.new_value is not None: body['new_value'] = self.new_value + if self.previous_value is not None: body['previous_value'] = self.previous_value + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ClusterSettingsChange: + """Deserializes the ClusterSettingsChange from a dictionary.""" + return cls(field=d.get('field', None), + new_value=d.get('new_value', None), + previous_value=d.get('previous_value', None)) + + @dataclass class ClusterSize: autoscale: Optional[AutoScale] = None @@ -2982,6 +3045,52 @@ def from_dict(cls, d: Dict[str, any]) -> EditResponse: return cls() +@dataclass +class EnforceClusterComplianceRequest: + cluster_id: str + """The ID of the cluster you want to enforce policy compliance on.""" + + validate_only: Optional[bool] = None + """If set, previews the changes that would be made to a cluster to enforce compliance but does not + update the cluster.""" + + def as_dict(self) -> dict: + """Serializes the EnforceClusterComplianceRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.cluster_id is not None: body['cluster_id'] = self.cluster_id + if self.validate_only is not None: body['validate_only'] = self.validate_only + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> EnforceClusterComplianceRequest: + """Deserializes the EnforceClusterComplianceRequest from a dictionary.""" + return cls(cluster_id=d.get('cluster_id', None), validate_only=d.get('validate_only', None)) + + +@dataclass +class EnforceClusterComplianceResponse: + changes: Optional[List[ClusterSettingsChange]] = None + """A list of changes that have been made to the cluster settings for the cluster to become + compliant with its policy.""" + + has_changes: Optional[bool] = None + """Whether any changes have been made to the cluster settings for the cluster to become compliant + with its policy.""" + + def as_dict(self) -> dict: + """Serializes the EnforceClusterComplianceResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.changes: body['changes'] = [v.as_dict() for v in self.changes] + if self.has_changes is not None: body['has_changes'] = self.has_changes + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> EnforceClusterComplianceResponse: + """Deserializes the EnforceClusterComplianceResponse from a dictionary.""" + return cls(changes=_repeated_dict(d, 'changes', ClusterSettingsChange), + has_changes=d.get('has_changes', None)) + + @dataclass class Environment: """The environment entity used to preserve serverless environment side panel and jobs' environment @@ -3251,6 +3360,30 @@ def from_dict(cls, d: Dict[str, any]) -> GcsStorageInfo: return cls(destination=d.get('destination', None)) +@dataclass +class GetClusterComplianceResponse: + is_compliant: Optional[bool] = None + """Whether the cluster is compliant with its policy or not. Clusters could be out of compliance if + the policy was updated after the cluster was last edited.""" + + violations: Optional[Dict[str, str]] = None + """An object containing key-value mappings representing the first 200 policy validation errors. The + keys indicate the path where the policy validation error is occurring. 
The values indicate an + error message describing the policy validation error.""" + + def as_dict(self) -> dict: + """Serializes the GetClusterComplianceResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.is_compliant is not None: body['is_compliant'] = self.is_compliant + if self.violations: body['violations'] = self.violations + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> GetClusterComplianceResponse: + """Deserializes the GetClusterComplianceResponse from a dictionary.""" + return cls(is_compliant=d.get('is_compliant', None), violations=d.get('violations', None)) + + @dataclass class GetClusterPermissionLevelsResponse: permission_levels: Optional[List[ClusterPermissionsDescription]] = None @@ -4600,6 +4733,35 @@ def from_dict(cls, d: Dict[str, any]) -> ListAvailableZonesResponse: return cls(default_zone=d.get('default_zone', None), zones=d.get('zones', None)) +@dataclass +class ListClusterCompliancesResponse: + clusters: Optional[List[ClusterCompliance]] = None + """A list of clusters and their policy compliance statuses.""" + + next_page_token: Optional[str] = None + """This field represents the pagination token to retrieve the next page of results. If the value is + "", it means no further results for the request.""" + + prev_page_token: Optional[str] = None + """This field represents the pagination token to retrieve the previous page of results. If the + value is "", it means no further results for the request.""" + + def as_dict(self) -> dict: + """Serializes the ListClusterCompliancesResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.clusters: body['clusters'] = [v.as_dict() for v in self.clusters] + if self.next_page_token is not None: body['next_page_token'] = self.next_page_token + if self.prev_page_token is not None: body['prev_page_token'] = self.prev_page_token + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ListClusterCompliancesResponse: + """Deserializes the ListClusterCompliancesResponse from a dictionary.""" + return cls(clusters=_repeated_dict(d, 'clusters', ClusterCompliance), + next_page_token=d.get('next_page_token', None), + prev_page_token=d.get('prev_page_token', None)) + + @dataclass class ListClustersFilterBy: cluster_sources: Optional[List[ClusterSource]] = None @@ -8584,6 +8746,116 @@ def uninstall(self, cluster_id: str, libraries: List[Library]): self._api.do('POST', '/api/2.0/libraries/uninstall', body=body, headers=headers) +class PolicyComplianceForClustersAPI: + """The policy compliance APIs allow you to view and manage the policy compliance status of clusters in your + workspace. + + A cluster is compliant with its policy if its configuration satisfies all its policy rules. Clusters could + be out of compliance if their policy was updated after the cluster was last edited. + + The get and list compliance APIs allow you to view the policy compliance status of a cluster. The enforce + compliance API allows you to update a cluster to be compliant with the current version of its policy.""" + + def __init__(self, api_client): + self._api = api_client + + def enforce_compliance(self, + cluster_id: str, + *, + validate_only: Optional[bool] = None) -> EnforceClusterComplianceResponse: + """Enforce cluster policy compliance. + + Updates a cluster to be compliant with the current version of its policy. A cluster can be updated if + it is in a `RUNNING` or `TERMINATED` state. 
+ + If a cluster is updated while in a `RUNNING` state, it will be restarted so that the new attributes + can take effect. + + If a cluster is updated while in a `TERMINATED` state, it will remain `TERMINATED`. The next time the + cluster is started, the new attributes will take effect. + + Clusters created by the Databricks Jobs, DLT, or Models services cannot be enforced by this API. + Instead, use the "Enforce job policy compliance" API to enforce policy compliance on jobs. + + :param cluster_id: str + The ID of the cluster you want to enforce policy compliance on. + :param validate_only: bool (optional) + If set, previews the changes that would be made to a cluster to enforce compliance but does not + update the cluster. + + :returns: :class:`EnforceClusterComplianceResponse` + """ + body = {} + if cluster_id is not None: body['cluster_id'] = cluster_id + if validate_only is not None: body['validate_only'] = validate_only + headers = {'Accept': 'application/json', 'Content-Type': 'application/json', } + + res = self._api.do('POST', + '/api/2.0/policies/clusters/enforce-compliance', + body=body, + headers=headers) + return EnforceClusterComplianceResponse.from_dict(res) + + def get_compliance(self, cluster_id: str) -> GetClusterComplianceResponse: + """Get cluster policy compliance. + + Returns the policy compliance status of a cluster. Clusters could be out of compliance if their policy + was updated after the cluster was last edited. + + :param cluster_id: str + The ID of the cluster to get the compliance status + + :returns: :class:`GetClusterComplianceResponse` + """ + + query = {} + if cluster_id is not None: query['cluster_id'] = cluster_id + headers = {'Accept': 'application/json', } + + res = self._api.do('GET', '/api/2.0/policies/clusters/get-compliance', query=query, headers=headers) + return GetClusterComplianceResponse.from_dict(res) + + def list_compliance(self, + policy_id: str, + *, + page_size: Optional[int] = None, + page_token: Optional[str] = None) -> Iterator[ClusterCompliance]: + """List cluster policy compliance. + + Returns the policy compliance status of all clusters that use a given policy. Clusters could be out of + compliance if their policy was updated after the cluster was last edited. + + :param policy_id: str + Canonical unique identifier for the cluster policy. + :param page_size: int (optional) + Use this field to specify the maximum number of results to be returned by the server. The server may + further constrain the maximum number of results returned in a single page. + :param page_token: str (optional) + A page token that can be used to navigate to the next page or previous page as returned by + `next_page_token` or `prev_page_token`. + + :returns: Iterator over :class:`ClusterCompliance` + """ + + query = {} + if page_size is not None: query['page_size'] = page_size + if page_token is not None: query['page_token'] = page_token + if policy_id is not None: query['policy_id'] = policy_id + headers = {'Accept': 'application/json', } + + while True: + json = self._api.do('GET', + '/api/2.0/policies/clusters/list-compliance', + query=query, + headers=headers) + if 'clusters' in json: + for v in json['clusters']: + yield ClusterCompliance.from_dict(v) + if 'next_page_token' not in json or not json['next_page_token']: + return + query['page_token'] = json['next_page_token'] + + class PolicyFamiliesAPI: """View available policy families. A policy family contains a policy definition providing best practices for configuring clusters for a particular use case. 
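# A minimal usage sketch for the PolicyComplianceForClustersAPI defined above.
# It assumes `w` is an authenticated databricks.sdk.WorkspaceClient; the cluster
# and policy IDs below are placeholders.

# Preview the settings changes required for compliance without restarting the cluster.
resp = w.policy_compliance_for_clusters.enforce_compliance(cluster_id="0123-456789-abcdefgh",
                                                           validate_only=True)
for change in resp.changes or []:
    print(f"{change.field}: {change.previous_value} -> {change.new_value}")

# Scan every cluster governed by a policy and report any violations.
for c in w.policy_compliance_for_clusters.list_compliance(policy_id="ABC123DEF456"):
    if not c.is_compliant:
        print(c.cluster_id, c.violations)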
diff --git a/databricks/sdk/service/dashboards.py b/databricks/sdk/service/dashboards.py index 28ddca569..7169531e5 100755 --- a/databricks/sdk/service/dashboards.py +++ b/databricks/sdk/service/dashboards.py @@ -27,10 +27,11 @@ class CreateDashboardRequest: parent_path: Optional[str] = None """The workspace path of the folder containing the dashboard. Includes leading slash and no - trailing slash.""" + trailing slash. This field is excluded in List Dashboards responses.""" serialized_dashboard: Optional[str] = None - """The contents of the dashboard in serialized string form.""" + """The contents of the dashboard in serialized string form. This field is excluded in List + Dashboards responses.""" warehouse_id: Optional[str] = None """The warehouse ID used to run the dashboard.""" @@ -154,23 +155,26 @@ class Dashboard: etag: Optional[str] = None """The etag for the dashboard. Can be optionally provided on updates to ensure that the dashboard - has not been modified since the last read.""" + has not been modified since the last read. This field is excluded in List Dashboards responses.""" lifecycle_state: Optional[LifecycleState] = None """The state of the dashboard resource. Used for tracking trashed status.""" parent_path: Optional[str] = None """The workspace path of the folder containing the dashboard. Includes leading slash and no - trailing slash.""" + trailing slash. This field is excluded in List Dashboards responses.""" path: Optional[str] = None - """The workspace path of the dashboard asset, including the file name.""" + """The workspace path of the dashboard asset, including the file name. This field is excluded in + List Dashboards responses.""" serialized_dashboard: Optional[str] = None - """The contents of the dashboard in serialized string form.""" + """The contents of the dashboard in serialized string form. This field is excluded in List + Dashboards responses.""" update_time: Optional[str] = None - """The timestamp of when the dashboard was last updated by the user.""" + """The timestamp of when the dashboard was last updated by the user. This field is excluded in List + Dashboards responses.""" warehouse_id: Optional[str] = None """The warehouse ID used to run the dashboard.""" @@ -1020,10 +1024,11 @@ class UpdateDashboardRequest: etag: Optional[str] = None """The etag for the dashboard. Can be optionally provided on updates to ensure that the dashboard - has not been modified since the last read.""" + has not been modified since the last read. This field is excluded in List Dashboards responses.""" serialized_dashboard: Optional[str] = None - """The contents of the dashboard in serialized string form.""" + """The contents of the dashboard in serialized string form. This field is excluded in List + Dashboards responses.""" warehouse_id: Optional[str] = None """The warehouse ID used to run the dashboard.""" @@ -1300,9 +1305,10 @@ def create(self, The display name of the dashboard. :param parent_path: str (optional) The workspace path of the folder containing the dashboard. Includes leading slash and no trailing - slash. + slash. This field is excluded in List Dashboards responses. :param serialized_dashboard: str (optional) - The contents of the dashboard in serialized string form. + The contents of the dashboard in serialized string form. This field is excluded in List Dashboards + responses. :param warehouse_id: str (optional) The warehouse ID used to run the dashboard. @@ -1714,9 +1720,10 @@ def update(self, The display name of the dashboard. 
:param etag: str (optional) The etag for the dashboard. Can be optionally provided on updates to ensure that the dashboard has - not been modified since the last read. + not been modified since the last read. This field is excluded in List Dashboards responses. :param serialized_dashboard: str (optional) - The contents of the dashboard in serialized string form. + The contents of the dashboard in serialized string form. This field is excluded in List Dashboards + responses. :param warehouse_id: str (optional) The warehouse ID used to run the dashboard. diff --git a/databricks/sdk/service/jobs.py b/databricks/sdk/service/jobs.py index 6e5b34ad1..ea1bfd880 100755 --- a/databricks/sdk/service/jobs.py +++ b/databricks/sdk/service/jobs.py @@ -58,8 +58,8 @@ def from_dict(cls, d: Dict[str, any]) -> BaseJob: class BaseRun: attempt_number: Optional[int] = None """The sequence number of this run attempt for a triggered job run. The initial attempt of a run - has an attempt_number of 0\. If the initial run attempt fails, and the job has a retry policy - (`max_retries` \> 0), subsequent runs are created with an `original_attempt_run_id` of the + has an attempt_number of 0. If the initial run attempt fails, and the job has a retry policy + (`max_retries` > 0), subsequent runs are created with an `original_attempt_run_id` of the original attempt’s ID and an incrementing `attempt_number`. Runs are retried only until they succeed, and the maximum `attempt_number` is the same as the `max_retries` value for the job.""" @@ -115,6 +115,11 @@ class BaseRun: job_parameters: Optional[List[JobParameter]] = None """Job-level parameters used in the run""" + job_run_id: Optional[int] = None + """ID of the job run that this run belongs to. For legacy and single-task job runs the field is + populated with the job run ID. For task runs, the field is populated with the ID of the job run + that the task run belongs to.""" + number_in_job: Optional[int] = None """A unique identifier for this job run. 
This is set to the same value as `run_id`.""" @@ -201,6 +206,7 @@ def as_dict(self) -> dict: if self.job_clusters: body['job_clusters'] = [v.as_dict() for v in self.job_clusters] if self.job_id is not None: body['job_id'] = self.job_id if self.job_parameters: body['job_parameters'] = [v.as_dict() for v in self.job_parameters] + if self.job_run_id is not None: body['job_run_id'] = self.job_run_id if self.number_in_job is not None: body['number_in_job'] = self.number_in_job if self.original_attempt_run_id is not None: body['original_attempt_run_id'] = self.original_attempt_run_id @@ -236,6 +242,7 @@ def from_dict(cls, d: Dict[str, any]) -> BaseRun: job_clusters=_repeated_dict(d, 'job_clusters', JobCluster), job_id=d.get('job_id', None), job_parameters=_repeated_dict(d, 'job_parameters', JobParameter), + job_run_id=d.get('job_run_id', None), number_in_job=d.get('number_in_job', None), original_attempt_run_id=d.get('original_attempt_run_id', None), overriding_parameters=_from_dict(d, 'overriding_parameters', RunParameters), @@ -827,6 +834,96 @@ def from_dict(cls, d: Dict[str, any]) -> DeleteRunResponse: return cls() +@dataclass +class EnforcePolicyComplianceForJobResponseJobClusterSettingsChange: + """Represents a change to the job cluster's settings that would be required for the job clusters to + become compliant with their policies.""" + + field: Optional[str] = None + """The field where this change would be made, prepended with the job cluster key.""" + + new_value: Optional[str] = None + """The new value of this field after enforcing policy compliance (either a number, a boolean, or a + string) converted to a string. This is intended to be read by a human. The typed new value of + this field can be retrieved by reading the settings field in the API response.""" + + previous_value: Optional[str] = None + """The previous value of this field before enforcing policy compliance (either a number, a boolean, + or a string) converted to a string. This is intended to be read by a human. 
The type of the + field can be retrieved by reading the settings field in the API response.""" + + def as_dict(self) -> dict: + """Serializes the EnforcePolicyComplianceForJobResponseJobClusterSettingsChange into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.field is not None: body['field'] = self.field + if self.new_value is not None: body['new_value'] = self.new_value + if self.previous_value is not None: body['previous_value'] = self.previous_value + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> EnforcePolicyComplianceForJobResponseJobClusterSettingsChange: + """Deserializes the EnforcePolicyComplianceForJobResponseJobClusterSettingsChange from a dictionary.""" + return cls(field=d.get('field', None), + new_value=d.get('new_value', None), + previous_value=d.get('previous_value', None)) + + +@dataclass +class EnforcePolicyComplianceRequest: + job_id: int + """The ID of the job you want to enforce policy compliance on.""" + + validate_only: Optional[bool] = None + """If set, previews changes made to the job to comply with its policy, but does not update the job.""" + + def as_dict(self) -> dict: + """Serializes the EnforcePolicyComplianceRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.job_id is not None: body['job_id'] = self.job_id + if self.validate_only is not None: body['validate_only'] = self.validate_only + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> EnforcePolicyComplianceRequest: + """Deserializes the EnforcePolicyComplianceRequest from a dictionary.""" + return cls(job_id=d.get('job_id', None), validate_only=d.get('validate_only', None)) + + +@dataclass +class EnforcePolicyComplianceResponse: + has_changes: Optional[bool] = None + """Whether any changes have been made to the job cluster settings for the job to become compliant + with its policies.""" + + job_cluster_changes: Optional[List[EnforcePolicyComplianceForJobResponseJobClusterSettingsChange]] = None + """A list of job cluster changes that have been made to the job’s cluster settings in order for + all job clusters to become compliant with their policies.""" + + settings: Optional[JobSettings] = None + """Updated job settings after policy enforcement. Policy enforcement only applies to job clusters + that are created when running the job (which are specified in new_cluster) and does not apply to + existing all-purpose clusters. 
Updated job settings are derived by applying policy default + values to the existing job clusters in order to satisfy policy requirements.""" + + def as_dict(self) -> dict: + """Serializes the EnforcePolicyComplianceResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.has_changes is not None: body['has_changes'] = self.has_changes + if self.job_cluster_changes: + body['job_cluster_changes'] = [v.as_dict() for v in self.job_cluster_changes] + if self.settings: body['settings'] = self.settings.as_dict() + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> EnforcePolicyComplianceResponse: + """Deserializes the EnforcePolicyComplianceResponse from a dictionary.""" + return cls(has_changes=d.get('has_changes', None), + job_cluster_changes=_repeated_dict( + d, 'job_cluster_changes', + EnforcePolicyComplianceForJobResponseJobClusterSettingsChange), + settings=_from_dict(d, 'settings', JobSettings)) + + @dataclass class ExportRunOutput: """Run was exported successfully.""" @@ -914,7 +1011,8 @@ class ForEachTask: """Configuration for the task that will be run for each element in the array""" concurrency: Optional[int] = None - """Controls the number of active iterations task runs. Default is 20, maximum allowed is 100.""" + """An optional maximum allowed number of concurrent runs of the task. Set this value if you want to + be able to execute multiple runs of the task concurrently.""" def as_dict(self) -> dict: """Serializes the ForEachTask into a dictionary suitable for use as a JSON request body.""" @@ -1024,6 +1122,32 @@ def from_dict(cls, d: Dict[str, any]) -> GetJobPermissionLevelsResponse: return cls(permission_levels=_repeated_dict(d, 'permission_levels', JobPermissionsDescription)) +@dataclass +class GetPolicyComplianceResponse: + is_compliant: Optional[bool] = None + """Whether the job is compliant with its policies or not. Jobs could be out of compliance if a + policy they are using was updated after the job was last edited and some of its job clusters no + longer comply with their updated policies.""" + + violations: Optional[Dict[str, str]] = None + """An object containing key-value mappings representing the first 200 policy validation errors. The + keys indicate the path where the policy validation error is occurring. An identifier for the job + cluster is prepended to the path. 
The values indicate an error message describing the policy
+    validation error."""
+
+    def as_dict(self) -> dict:
+        """Serializes the GetPolicyComplianceResponse into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.is_compliant is not None: body['is_compliant'] = self.is_compliant
+        if self.violations: body['violations'] = self.violations
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, any]) -> GetPolicyComplianceResponse:
+        """Deserializes the GetPolicyComplianceResponse from a dictionary."""
+        return cls(is_compliant=d.get('is_compliant', None), violations=d.get('violations', None))
+
+
 class GitProvider(Enum):
 
     AWS_CODE_COMMIT = 'awsCodeCommit'
@@ -1260,6 +1384,36 @@ def from_dict(cls, d: Dict[str, any]) -> JobCluster:
                    new_cluster=_from_dict(d, 'new_cluster', compute.ClusterSpec))
 
 
+@dataclass
+class JobCompliance:
+    job_id: int
+    """Canonical unique identifier for a job."""
+
+    is_compliant: Optional[bool] = None
+    """Whether this job is in compliance with the latest version of its policy."""
+
+    violations: Optional[Dict[str, str]] = None
+    """An object containing key-value mappings representing the first 200 policy validation errors. The
+    keys indicate the path where the policy validation error is occurring. An identifier for the job
+    cluster is prepended to the path. The values indicate an error message describing the policy
+    validation error."""
+
+    def as_dict(self) -> dict:
+        """Serializes the JobCompliance into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.is_compliant is not None: body['is_compliant'] = self.is_compliant
+        if self.job_id is not None: body['job_id'] = self.job_id
+        if self.violations: body['violations'] = self.violations
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, any]) -> JobCompliance:
+        """Deserializes the JobCompliance from a dictionary."""
+        return cls(is_compliant=d.get('is_compliant', None),
+                   job_id=d.get('job_id', None),
+                   violations=d.get('violations', None))
+
+
 @dataclass
 class JobDeployment:
     kind: JobDeploymentKind
@@ -1874,6 +2028,35 @@ def from_dict(cls, d: Dict[str, any]) -> JobsHealthRules:
     return cls(rules=_repeated_dict(d, 'rules', JobsHealthRule))
 
 
+@dataclass
+class ListJobComplianceForPolicyResponse:
+    jobs: Optional[List[JobCompliance]] = None
+    """A list of jobs and their policy compliance statuses."""
+
+    next_page_token: Optional[str] = None
+    """This field represents the pagination token to retrieve the next page of results. If this field
+    is not in the response, it means no further results for the request."""
+
+    prev_page_token: Optional[str] = None
+    """This field represents the pagination token to retrieve the previous page of results. If this
+    field is not in the response, it means no further results for the request."""
+
+    def as_dict(self) -> dict:
+        """Serializes the ListJobComplianceForPolicyResponse into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.jobs: body['jobs'] = [v.as_dict() for v in self.jobs]
+        if self.next_page_token is not None: body['next_page_token'] = self.next_page_token
+        if self.prev_page_token is not None: body['prev_page_token'] = self.prev_page_token
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, any]) -> ListJobComplianceForPolicyResponse:
+        """Deserializes the ListJobComplianceForPolicyResponse from a dictionary."""
+        return cls(jobs=_repeated_dict(d, 'jobs', JobCompliance),
+                   next_page_token=d.get('next_page_token', None),
+                   prev_page_token=d.get('prev_page_token', None))
+
+
 @dataclass
 class ListJobsResponse:
     """List of jobs was retrieved successfully."""
@@ -2568,8 +2751,8 @@ class Run:
 
     attempt_number: Optional[int] = None
     """The sequence number of this run attempt for a triggered job run. The initial attempt of a run
-    has an attempt_number of 0\. If the initial run attempt fails, and the job has a retry policy
-    (`max_retries` \> 0), subsequent runs are created with an `original_attempt_run_id` of the
+    has an attempt_number of 0. If the initial run attempt fails, and the job has a retry policy
+    (`max_retries` > 0), subsequent runs are created with an `original_attempt_run_id` of the
     original attempt’s ID and an incrementing `attempt_number`. Runs are retried only until they
     succeed, and the maximum `attempt_number` is the same as the `max_retries` value for the job."""
@@ -2628,6 +2811,11 @@ class Run:
     job_parameters: Optional[List[JobParameter]] = None
     """Job-level parameters used in the run"""
 
+    job_run_id: Optional[int] = None
+    """ID of the job run that this run belongs to. For legacy and single-task job runs the field is
+    populated with the job run ID. For task runs, the field is populated with the ID of the job run
+    that the task run belongs to."""
+
     next_page_token: Optional[str] = None
     """A token that can be used to list the next page of sub-resources."""
 
@@ -2721,6 +2909,7 @@ def as_dict(self) -> dict:
         if self.job_clusters: body['job_clusters'] = [v.as_dict() for v in self.job_clusters]
         if self.job_id is not None: body['job_id'] = self.job_id
         if self.job_parameters: body['job_parameters'] = [v.as_dict() for v in self.job_parameters]
+        if self.job_run_id is not None: body['job_run_id'] = self.job_run_id
         if self.next_page_token is not None: body['next_page_token'] = self.next_page_token
         if self.number_in_job is not None: body['number_in_job'] = self.number_in_job
         if self.original_attempt_run_id is not None:
@@ -2759,6 +2948,7 @@ def from_dict(cls, d: Dict[str, any]) -> Run:
                    job_clusters=_repeated_dict(d, 'job_clusters', JobCluster),
                    job_id=d.get('job_id', None),
                    job_parameters=_repeated_dict(d, 'job_parameters', JobParameter),
+                   job_run_id=d.get('job_run_id', None),
                    next_page_token=d.get('next_page_token', None),
                    number_in_job=d.get('number_in_job', None),
                    original_attempt_run_id=d.get('original_attempt_run_id', None),
@@ -2832,7 +3022,8 @@ class RunForEachTask:
     """Configuration for the task that will be run for each element in the array"""
 
     concurrency: Optional[int] = None
-    """Controls the number of active iterations task runs. Default is 20, maximum allowed is 100."""
+    """An optional maximum allowed number of concurrent runs of the task. Set this value if you want to
+    be able to execute multiple runs of the task concurrently."""
 
     stats: Optional[ForEachStats] = None
     """Read only field. Populated for GetRun and ListRuns RPC calls and stores the execution stats of
@@ -3429,8 +3620,8 @@ class RunTask:
 
     attempt_number: Optional[int] = None
     """The sequence number of this run attempt for a triggered job run. The initial attempt of a run
-    has an attempt_number of 0\. If the initial run attempt fails, and the job has a retry policy
-    (`max_retries` \> 0), subsequent runs are created with an `original_attempt_run_id` of the
+    has an attempt_number of 0. If the initial run attempt fails, and the job has a retry policy
+    (`max_retries` > 0), subsequent runs are created with an `original_attempt_run_id` of the
     original attempt’s ID and an incrementing `attempt_number`. Runs are retried only until they
     succeed, and the maximum `attempt_number` is the same as the `max_retries` value for the job."""
@@ -6127,3 +6318,102 @@ def update_permissions(
         res = self._api.do('PATCH', f'/api/2.0/permissions/jobs/{job_id}', body=body, headers=headers)
         return JobPermissions.from_dict(res)
+
+
+class PolicyComplianceForJobsAPI:
+    """The compliance APIs allow you to view and manage the policy compliance status of jobs in your workspace.
+    This API currently only supports compliance controls for cluster policies.
+
+    A job is in compliance if its cluster configurations satisfy the rules of all their respective cluster
+    policies. A job could be out of compliance if a cluster policy it uses was updated after the job was last
+    edited. The job is considered out of compliance if any of its clusters no longer comply with their updated
+    policies.
+
+    The get and list compliance APIs allow you to view the policy compliance status of a job. The enforce
+    compliance API allows you to update a job so that it becomes compliant with all of its policies."""
+
+    def __init__(self, api_client):
+        self._api = api_client
+
+    def enforce_compliance(self,
+                           job_id: int,
+                           *,
+                           validate_only: Optional[bool] = None) -> EnforcePolicyComplianceResponse:
+        """Enforce job policy compliance.
+
+        Updates a job so the job clusters that are created when running the job (specified in `new_cluster`)
+        are compliant with the current versions of their respective cluster policies. All-purpose clusters
+        used in the job will not be updated.
+
+        :param job_id: int
+          The ID of the job you want to enforce policy compliance on.
+        :param validate_only: bool (optional)
+          If set, previews changes made to the job to comply with its policy, but does not update the job.
+
+        :returns: :class:`EnforcePolicyComplianceResponse`
+        """
+        body = {}
+        if job_id is not None: body['job_id'] = job_id
+        if validate_only is not None: body['validate_only'] = validate_only
+        headers = {'Accept': 'application/json', 'Content-Type': 'application/json', }
+
+        res = self._api.do('POST', '/api/2.0/policies/jobs/enforce-compliance', body=body, headers=headers)
+        return EnforcePolicyComplianceResponse.from_dict(res)
+
+    def get_compliance(self, job_id: int) -> GetPolicyComplianceResponse:
+        """Get job policy compliance.
+
+        Returns the policy compliance status of a job. Jobs could be out of compliance if a cluster policy
+        they use was updated after the job was last edited and some of its job clusters no longer comply with
+        their updated policies.
+
+        :param job_id: int
+          The ID of the job whose compliance status you are requesting.
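For readers skimming the patch, here is a minimal usage sketch of the new jobs service (illustrative only, not part of the diff): it assumes a `WorkspaceClient` with default auth configured and a hypothetical job ID, and uses only the `get_compliance`/`enforce_compliance` signatures introduced above.

```python
# Illustrative sketch only (not part of this patch). The job ID is hypothetical.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

status = w.policy_compliance_for_jobs.get_compliance(job_id=123)
if not status.is_compliant:
    # validate_only=True previews the cluster-settings changes the service
    # would make, without actually updating the job.
    preview = w.policy_compliance_for_jobs.enforce_compliance(job_id=123,
                                                              validate_only=True)
    print(preview.as_dict())
```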
+
+        :returns: :class:`GetPolicyComplianceResponse`
+        """
+
+        query = {}
+        if job_id is not None: query['job_id'] = job_id
+        headers = {'Accept': 'application/json', }
+
+        res = self._api.do('GET', '/api/2.0/policies/jobs/get-compliance', query=query, headers=headers)
+        return GetPolicyComplianceResponse.from_dict(res)
+
+    def list_compliance(self,
+                        policy_id: str,
+                        *,
+                        page_size: Optional[int] = None,
+                        page_token: Optional[str] = None) -> Iterator[JobCompliance]:
+        """List job policy compliance.
+
+        Returns the policy compliance status of all jobs that use a given policy. Jobs could be out of
+        compliance if a cluster policy they use was updated after the job was last edited and its job clusters
+        no longer comply with the updated policy.
+
+        :param policy_id: str
+          Canonical unique identifier for the cluster policy.
+        :param page_size: int (optional)
+          Use this field to specify the maximum number of results to be returned by the server. The server may
+          further constrain the maximum number of results returned in a single page.
+        :param page_token: str (optional)
+          A page token that can be used to navigate to the next page or previous page as returned by
+          `next_page_token` or `prev_page_token`.
+
+        :returns: Iterator over :class:`JobCompliance`
+        """
+
+        query = {}
+        if page_size is not None: query['page_size'] = page_size
+        if page_token is not None: query['page_token'] = page_token
+        if policy_id is not None: query['policy_id'] = policy_id
+        headers = {'Accept': 'application/json', }
+
+        while True:
+            json = self._api.do('GET', '/api/2.0/policies/jobs/list-compliance', query=query, headers=headers)
+            if 'jobs' in json:
+                for v in json['jobs']:
+                    yield JobCompliance.from_dict(v)
+            if 'next_page_token' not in json or not json['next_page_token']:
+                return
+            query['page_token'] = json['next_page_token']
diff --git a/databricks/sdk/service/sql.py b/databricks/sdk/service/sql.py
index f2526909f..b77e5b5e6 100755
--- a/databricks/sdk/service/sql.py
+++ b/databricks/sdk/service/sql.py
@@ -600,68 +600,6 @@ class ColumnInfoTypeName(Enum):
     USER_DEFINED_TYPE = 'USER_DEFINED_TYPE'
 
 
-@dataclass
-class ContextFilter:
-    dbsql_alert_id: Optional[str] = None
-    """Databricks SQL Alert id"""
-
-    dbsql_dashboard_id: Optional[str] = None
-    """Databricks SQL Dashboard id"""
-
-    dbsql_query_id: Optional[str] = None
-    """Databricks SQL Query id"""
-
-    dbsql_session_id: Optional[str] = None
-    """Databricks SQL Query session id"""
-
-    job_id: Optional[str] = None
-    """Databricks Workflows id"""
-
-    job_run_id: Optional[str] = None
-    """Databricks Workflows task run id"""
-
-    lakeview_dashboard_id: Optional[str] = None
-    """Databricks Lakeview Dashboard id"""
-
-    notebook_cell_run_id: Optional[str] = None
-    """Databricks Notebook runnableCommandId"""
-
-    notebook_id: Optional[str] = None
-    """Databricks Notebook id"""
-
-    statement_ids: Optional[List[str]] = None
-    """Databricks Query History statement ids."""
-
-    def as_dict(self) -> dict:
-        """Serializes the ContextFilter into a dictionary suitable for use as a JSON request body."""
-        body = {}
-        if self.dbsql_alert_id is not None: body['dbsql_alert_id'] = self.dbsql_alert_id
-        if self.dbsql_dashboard_id is not None: body['dbsql_dashboard_id'] = self.dbsql_dashboard_id
-        if self.dbsql_query_id is not None: body['dbsql_query_id'] = self.dbsql_query_id
-        if self.dbsql_session_id is not None: body['dbsql_session_id'] = self.dbsql_session_id
-        if self.job_id is not None: body['job_id'] = self.job_id
-        if self.job_run_id is not None: body['job_run_id'] = self.job_run_id
-        if self.lakeview_dashboard_id is not None: body['lakeview_dashboard_id'] = self.lakeview_dashboard_id
-        if self.notebook_cell_run_id is not None: body['notebook_cell_run_id'] = self.notebook_cell_run_id
-        if self.notebook_id is not None: body['notebook_id'] = self.notebook_id
-        if self.statement_ids: body['statement_ids'] = [v for v in self.statement_ids]
-        return body
-
-    @classmethod
-    def from_dict(cls, d: Dict[str, any]) -> ContextFilter:
-        """Deserializes the ContextFilter from a dictionary."""
-        return cls(dbsql_alert_id=d.get('dbsql_alert_id', None),
-                   dbsql_dashboard_id=d.get('dbsql_dashboard_id', None),
-                   dbsql_query_id=d.get('dbsql_query_id', None),
-                   dbsql_session_id=d.get('dbsql_session_id', None),
-                   job_id=d.get('job_id', None),
-                   job_run_id=d.get('job_run_id', None),
-                   lakeview_dashboard_id=d.get('lakeview_dashboard_id', None),
-                   notebook_cell_run_id=d.get('notebook_cell_run_id', None),
-                   notebook_id=d.get('notebook_id', None),
-                   statement_ids=d.get('statement_ids', None))
-
-
 @dataclass
 class CreateAlert:
     name: str
@@ -3434,12 +3372,12 @@ def from_dict(cls, d: Dict[str, any]) -> QueryEditContent:
 
 @dataclass
 class QueryFilter:
-    context_filter: Optional[ContextFilter] = None
-    """Filter by one or more property describing where the query was generated"""
-
     query_start_time_range: Optional[TimeRange] = None
     """A range filter for query submitted time. The time range must be <= 30 days."""
 
+    statement_ids: Optional[List[str]] = None
+    """A list of statement IDs."""
+
     statuses: Optional[List[QueryStatus]] = None
 
     user_ids: Optional[List[int]] = None
@@ -3451,8 +3389,8 @@ class QueryFilter:
     def as_dict(self) -> dict:
         """Serializes the QueryFilter into a dictionary suitable for use as a JSON request body."""
         body = {}
-        if self.context_filter: body['context_filter'] = self.context_filter.as_dict()
         if self.query_start_time_range: body['query_start_time_range'] = self.query_start_time_range.as_dict()
+        if self.statement_ids: body['statement_ids'] = [v for v in self.statement_ids]
         if self.statuses: body['statuses'] = [v.value for v in self.statuses]
         if self.user_ids: body['user_ids'] = [v for v in self.user_ids]
         if self.warehouse_ids: body['warehouse_ids'] = [v for v in self.warehouse_ids]
@@ -3461,8 +3399,8 @@ def as_dict(self) -> dict:
     @classmethod
     def from_dict(cls, d: Dict[str, any]) -> QueryFilter:
         """Deserializes the QueryFilter from a dictionary."""
-        return cls(context_filter=_from_dict(d, 'context_filter', ContextFilter),
-                   query_start_time_range=_from_dict(d, 'query_start_time_range', TimeRange),
+        return cls(query_start_time_range=_from_dict(d, 'query_start_time_range', TimeRange),
+                   statement_ids=d.get('statement_ids', None),
                    statuses=_repeated_enum(d, 'statuses', QueryStatus),
                    user_ids=d.get('user_ids', None),
                    warehouse_ids=d.get('warehouse_ids', None))
@@ -3944,12 +3882,6 @@ class QuerySource:
 
     notebook_id: Optional[str] = None
 
-    pipeline_id: Optional[str] = None
-    """Id associated with a DLT pipeline"""
-
-    pipeline_update_id: Optional[str] = None
-    """Id associated with a DLT update"""
-
     query_tags: Optional[str] = None
     """String provided by a customer that'll help them identify the query"""
 
@@ -3984,8 +3916,6 @@ def as_dict(self) -> dict:
         if self.job_id is not None: body['job_id'] = self.job_id
         if self.job_managed_by is not None: body['job_managed_by'] = self.job_managed_by.value
         if self.notebook_id is not None: body['notebook_id'] = self.notebook_id
-        if self.pipeline_id is not None: body['pipeline_id'] = self.pipeline_id
-        if self.pipeline_update_id is not None: body['pipeline_update_id'] = self.pipeline_update_id
         if self.query_tags is not None: body['query_tags'] = self.query_tags
         if self.run_id is not None: body['run_id'] = self.run_id
         if self.runnable_command_id is not None: body['runnable_command_id'] = self.runnable_command_id
@@ -4012,8 +3942,6 @@ def from_dict(cls, d: Dict[str, any]) -> QuerySource:
                    job_id=d.get('job_id', None),
                    job_managed_by=_enum(d, 'job_managed_by', QuerySourceJobManager),
                    notebook_id=d.get('notebook_id', None),
-                   pipeline_id=d.get('pipeline_id', None),
-                   pipeline_update_id=d.get('pipeline_update_id', None),
                    query_tags=d.get('query_tags', None),
                    run_id=d.get('run_id', None),
                    runnable_command_id=d.get('runnable_command_id', None),
@@ -6558,8 +6486,8 @@ def update(self,
 
 class QueryHistoryAPI:
-    """A service responsible for storing and retrieving the list of queries run against SQL endpoints, serverless
-    compute, and DLT."""
+    """A service responsible for storing and retrieving the list of queries run against SQL endpoints and
+    serverless compute."""
 
     def __init__(self, api_client):
         self._api = api_client
@@ -6567,11 +6495,12 @@ def __init__(self, api_client):
     def list(self,
              *,
              filter_by: Optional[QueryFilter] = None,
+             include_metrics: Optional[bool] = None,
              max_results: Optional[int] = None,
              page_token: Optional[str] = None) -> ListQueriesResponse:
         """List Queries.
 
-        List the history of queries through SQL warehouses, serverless compute, and DLT.
+        List the history of queries through SQL warehouses and serverless compute.
 
         You can filter by user ID, warehouse ID, status, and time range. Most recently started queries are
         returned first (up to max_results in request). The pagination token returned in response can be used
@@ -6579,6 +6508,9 @@ def list(self,
         :param filter_by: :class:`QueryFilter` (optional)
           A filter to limit query history results. This field is optional.
+        :param include_metrics: bool (optional)
+          Whether to include the query metrics with each query. Only use this for a small subset of queries
+          (max_results). Defaults to false.
         :param max_results: int (optional)
           Limit the number of results returned in one page. Must be less than 1000 and the default is 100.
         :param page_token: str (optional)
@@ -6591,6 +6523,7 @@ def list(self,
 
         query = {}
         if filter_by is not None: query['filter_by'] = filter_by.as_dict()
+        if include_metrics is not None: query['include_metrics'] = include_metrics
         if max_results is not None: query['max_results'] = max_results
         if page_token is not None: query['page_token'] = page_token
         headers = {'Accept': 'application/json', }
diff --git a/databricks/sdk/version.py b/databricks/sdk/version.py
index e187e0aa6..c3d10d7c4 100644
--- a/databricks/sdk/version.py
+++ b/databricks/sdk/version.py
@@ -1 +1 @@
-__version__ = '0.30.0'
+__version__ = '0.31.0'
diff --git a/docs/dbdataclasses/catalog.rst b/docs/dbdataclasses/catalog.rst
index d1195dd44..d15edc813 100644
--- a/docs/dbdataclasses/catalog.rst
+++ b/docs/dbdataclasses/catalog.rst
@@ -249,9 +249,18 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    .. py:attribute:: CONNECTION_BIGQUERY
       :value: "CONNECTION_BIGQUERY"
 
+   .. py:attribute:: CONNECTION_BUILTIN_HIVE_METASTORE
+      :value: "CONNECTION_BUILTIN_HIVE_METASTORE"
+
    .. py:attribute:: CONNECTION_DATABRICKS
       :value: "CONNECTION_DATABRICKS"
 
+   .. py:attribute:: CONNECTION_EXTERNAL_HIVE_METASTORE
+      :value: "CONNECTION_EXTERNAL_HIVE_METASTORE"
+
+   .. py:attribute:: CONNECTION_GLUE
+      :value: "CONNECTION_GLUE"
+
    .. py:attribute:: CONNECTION_MYSQL
       :value: "CONNECTION_MYSQL"
 
@@ -283,6 +292,12 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    .. py:attribute:: DATABRICKS
      :value: "DATABRICKS"
 
+   .. py:attribute:: GLUE
+      :value: "GLUE"
+
+   .. py:attribute:: HIVE_METASTORE
+      :value: "HIVE_METASTORE"
+
    .. py:attribute:: MYSQL
       :value: "MYSQL"
 
@@ -672,6 +687,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    .. py:attribute:: INTERNAL_AND_EXTERNAL
       :value: "INTERNAL_AND_EXTERNAL"
 
+.. autoclass:: GetQuotaResponse
+   :members:
+   :undoc-members:
+
 .. py:class:: IsolationMode
 
    Whether the current securable is accessible from all workspaces or a specific set of workspaces.
@@ -714,6 +733,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: ListQuotasResponse
+   :members:
+   :undoc-members:
+
 .. autoclass:: ListRegisteredModelsResponse
    :members:
    :undoc-members:
@@ -1149,6 +1172,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: QuotaInfo
+   :members:
+   :undoc-members:
+
 .. autoclass:: RegisteredModelAlias
    :members:
    :undoc-members:
diff --git a/docs/dbdataclasses/compute.rst b/docs/dbdataclasses/compute.rst
index 7b280c519..f4e175920 100644
--- a/docs/dbdataclasses/compute.rst
+++ b/docs/dbdataclasses/compute.rst
@@ -103,6 +103,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: ClusterCompliance
+   :members:
+   :undoc-members:
+
 .. autoclass:: ClusterDetails
    :members:
    :undoc-members:
@@ -179,6 +183,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: ClusterSettingsChange
+   :members:
+   :undoc-members:
+
 .. autoclass:: ClusterSize
    :members:
    :undoc-members:
@@ -443,6 +451,14 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: EnforceClusterComplianceRequest
+   :members:
+   :undoc-members:
+
+.. autoclass:: EnforceClusterComplianceResponse
+   :members:
+   :undoc-members:
+
 .. autoclass:: Environment
    :members:
    :undoc-members:
@@ -565,6 +581,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: GetClusterComplianceResponse
+   :members:
+   :undoc-members:
+
 .. autoclass:: GetClusterPermissionLevelsResponse
    :members:
    :undoc-members:
@@ -817,6 +837,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: ListClusterCompliancesResponse
+   :members:
+   :undoc-members:
+
 .. autoclass:: ListClustersFilterBy
    :members:
    :undoc-members:
diff --git a/docs/dbdataclasses/jobs.rst b/docs/dbdataclasses/jobs.rst
index 0f501f77a..0140be948 100644
--- a/docs/dbdataclasses/jobs.rst
+++ b/docs/dbdataclasses/jobs.rst
@@ -111,6 +111,18 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: EnforcePolicyComplianceForJobResponseJobClusterSettingsChange
+   :members:
+   :undoc-members:
+
+.. autoclass:: EnforcePolicyComplianceRequest
+   :members:
+   :undoc-members:
+
+.. autoclass:: EnforcePolicyComplianceResponse
+   :members:
+   :undoc-members:
+
 .. autoclass:: ExportRunOutput
    :members:
    :undoc-members:
@@ -147,6 +159,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: GetPolicyComplianceResponse
+   :members:
+   :undoc-members:
+
 .. py:class:: GitProvider
 
    .. py:attribute:: AWS_CODE_COMMIT
@@ -197,6 +213,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: JobCompliance
+   :members:
+   :undoc-members:
+
 .. autoclass:: JobDeployment
    :members:
    :undoc-members:
@@ -329,6 +349,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    :members:
    :undoc-members:
 
+.. autoclass:: ListJobComplianceForPolicyResponse
+   :members:
+   :undoc-members:
+
 .. autoclass:: ListJobsResponse
    :members:
    :undoc-members:
diff --git a/docs/dbdataclasses/sql.rst b/docs/dbdataclasses/sql.rst
index b39ea9edf..255123067 100644
--- a/docs/dbdataclasses/sql.rst
+++ b/docs/dbdataclasses/sql.rst
@@ -189,10 +189,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo
    .. py:attribute:: USER_DEFINED_TYPE
       :value: "USER_DEFINED_TYPE"
 
-.. autoclass:: ContextFilter
-   :members:
-   :undoc-members:
-
 .. autoclass:: CreateAlert
    :members:
    :undoc-members:
diff --git a/docs/workspace/catalog/external_locations.rst b/docs/workspace/catalog/external_locations.rst
index 3f6114f18..365007b09 100644
--- a/docs/workspace/catalog/external_locations.rst
+++ b/docs/workspace/catalog/external_locations.rst
@@ -15,7 +15,7 @@
     To create external locations, you must be a metastore admin or a user with the
     **CREATE_EXTERNAL_LOCATION** privilege.
 
-    .. py:method:: create(name: str, url: str, credential_name: str [, access_point: Optional[str], comment: Optional[str], encryption_details: Optional[EncryptionDetails], read_only: Optional[bool], skip_validation: Optional[bool]]) -> ExternalLocationInfo
+    .. py:method:: create(name: str, url: str, credential_name: str [, access_point: Optional[str], comment: Optional[str], encryption_details: Optional[EncryptionDetails], fallback: Optional[bool], read_only: Optional[bool], skip_validation: Optional[bool]]) -> ExternalLocationInfo
 
         Usage:
 
@@ -63,6 +63,10 @@
           User-provided free-form text description.
         :param encryption_details: :class:`EncryptionDetails` (optional)
           Encryption options that apply to clients connecting to cloud storage.
+        :param fallback: bool (optional)
+          Indicates whether fallback mode is enabled for this external location. When fallback mode is
+          enabled, the access to the location falls back to cluster credentials if UC credentials are not
+          sufficient.
         :param read_only: bool (optional)
           Indicates whether the external location is read-only.
         :param skip_validation: bool (optional)
@@ -163,7 +167,7 @@
 
         :returns: Iterator over :class:`ExternalLocationInfo`
 
-    .. py:method:: update(name: str [, access_point: Optional[str], comment: Optional[str], credential_name: Optional[str], encryption_details: Optional[EncryptionDetails], force: Optional[bool], isolation_mode: Optional[IsolationMode], new_name: Optional[str], owner: Optional[str], read_only: Optional[bool], skip_validation: Optional[bool], url: Optional[str]]) -> ExternalLocationInfo
+    .. py:method:: update(name: str [, access_point: Optional[str], comment: Optional[str], credential_name: Optional[str], encryption_details: Optional[EncryptionDetails], fallback: Optional[bool], force: Optional[bool], isolation_mode: Optional[IsolationMode], new_name: Optional[str], owner: Optional[str], read_only: Optional[bool], skip_validation: Optional[bool], url: Optional[str]]) -> ExternalLocationInfo
 
         Usage:
 
@@ -210,6 +214,10 @@
           Name of the storage credential used with this location.
        :param encryption_details: :class:`EncryptionDetails` (optional)
           Encryption options that apply to clients connecting to cloud storage.
+        :param fallback: bool (optional)
+          Indicates whether fallback mode is enabled for this external location. When fallback mode is
+          enabled, the access to the location falls back to cluster credentials if UC credentials are not
+          sufficient.
         :param force: bool (optional)
           Force update even if changing url invalidates dependent external tables or mounts.
         :param isolation_mode: :class:`IsolationMode` (optional)
diff --git a/docs/workspace/catalog/index.rst b/docs/workspace/catalog/index.rst
index 935804016..3bf2522d8 100644
--- a/docs/workspace/catalog/index.rst
+++ b/docs/workspace/catalog/index.rst
@@ -18,6 +18,7 @@ Configure data governance with Unity Catalog for metastores, catalogs, schemas,
    online_tables
    quality_monitors
    registered_models
+   resource_quotas
    schemas
    storage_credentials
    system_schemas
diff --git a/docs/workspace/catalog/resource_quotas.rst b/docs/workspace/catalog/resource_quotas.rst
new file mode 100644
index 000000000..3396011f0
--- /dev/null
+++ b/docs/workspace/catalog/resource_quotas.rst
@@ -0,0 +1,45 @@
+``w.resource_quotas``: Resource Quotas
+======================================
+.. currentmodule:: databricks.sdk.service.catalog
+
+.. py:class:: ResourceQuotasAPI
+
+    Unity Catalog enforces resource quotas on all securable objects, which limits the number of resources that
+    can be created. Quotas are expressed in terms of a resource type and a parent (for example, tables per
+    metastore or schemas per catalog). The resource quota APIs enable you to monitor your current usage and
+    limits. For more information on resource quotas, see the [Unity Catalog documentation].
+
+    [Unity Catalog documentation]: https://docs.databricks.com/en/data-governance/unity-catalog/index.html#resource-quotas
+
+    .. py:method:: get_quota(parent_securable_type: str, parent_full_name: str, quota_name: str) -> GetQuotaResponse
+
+        Get information for a single resource quota.
+
+        The GetQuota API returns usage information for a single resource quota, defined as a child-parent
+        pair. This API also refreshes the quota count if it is out of date. Refreshes are triggered
+        asynchronously. The updated count might not be returned in the first call.
+
+        :param parent_securable_type: str
+          Securable type of the quota parent.
+        :param parent_full_name: str
+          Full name of the parent resource. Provide the metastore ID if the parent is a metastore.
+        :param quota_name: str
+          Name of the quota. Follows the pattern of the quota type, with "-quota" added as a suffix.
+
+        :returns: :class:`GetQuotaResponse`
+
+
+    .. py:method:: list_quotas( [, max_results: Optional[int], page_token: Optional[str]]) -> Iterator[QuotaInfo]
+
+        List all resource quotas under a metastore.
+
+        ListQuotas returns all quota values under the metastore. There are no SLAs on the freshness of the
+        counts returned. This API does not trigger a refresh of quota counts.
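As a quick orientation for the new quotas service, a minimal sketch follows (illustrative only, not part of the diff). The catalog name is hypothetical, and `'schema-quota'` simply follows the `"<type>-quota"` naming convention described in the `get_quota` docs above.

```python
# Illustrative sketch only (not part of this patch). 'main' and
# 'schema-quota' are hypothetical example values.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Enumerate every quota under the metastore; counts carry no freshness SLA.
for quota in w.resource_quotas.list_quotas():
    print(quota.as_dict())

# Fetch one child-parent pair. This call also triggers an async refresh,
# so the first response may still carry a stale count.
q = w.resource_quotas.get_quota(parent_securable_type='catalog',
                                parent_full_name='main',
                                quota_name='schema-quota')
```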
+
+        :param max_results: int (optional)
+          The number of quotas to return.
+        :param page_token: str (optional)
+          Opaque token for the next page of results.
+
+        :returns: Iterator over :class:`QuotaInfo`
+
\ No newline at end of file
diff --git a/docs/workspace/compute/index.rst b/docs/workspace/compute/index.rst
index b13a21610..858cf70ff 100644
--- a/docs/workspace/compute/index.rst
+++ b/docs/workspace/compute/index.rst
@@ -14,4 +14,5 @@ Use and configure compute for Databricks
    instance_pools
    instance_profiles
    libraries
+   policy_compliance_for_clusters
    policy_families
\ No newline at end of file
diff --git a/docs/workspace/compute/policy_compliance_for_clusters.rst b/docs/workspace/compute/policy_compliance_for_clusters.rst
new file mode 100644
index 000000000..90c3aeb98
--- /dev/null
+++ b/docs/workspace/compute/policy_compliance_for_clusters.rst
@@ -0,0 +1,71 @@
+``w.policy_compliance_for_clusters``: Policy compliance for clusters
+====================================================================
+.. currentmodule:: databricks.sdk.service.compute
+
+.. py:class:: PolicyComplianceForClustersAPI
+
+    The policy compliance APIs allow you to view and manage the policy compliance status of clusters in your
+    workspace.
+
+    A cluster is compliant with its policy if its configuration satisfies all its policy rules. Clusters could
+    be out of compliance if their policy was updated after the cluster was last edited.
+
+    The get and list compliance APIs allow you to view the policy compliance status of a cluster. The enforce
+    compliance API allows you to update a cluster to be compliant with the current version of its policy.
+
+    .. py:method:: enforce_compliance(cluster_id: str [, validate_only: Optional[bool]]) -> EnforceClusterComplianceResponse
+
+        Enforce cluster policy compliance.
+
+        Updates a cluster to be compliant with the current version of its policy. A cluster can be updated if
+        it is in a `RUNNING` or `TERMINATED` state.
+
+        If a cluster is updated while in a `RUNNING` state, it will be restarted so that the new attributes
+        can take effect.
+
+        If a cluster is updated while in a `TERMINATED` state, it will remain `TERMINATED`. The next time the
+        cluster is started, the new attributes will take effect.
+
+        Clusters created by the Databricks Jobs, DLT, or Models services cannot be enforced by this API.
+        Instead, use the "Enforce job policy compliance" API to enforce policy compliance on jobs.
+
+        :param cluster_id: str
+          The ID of the cluster you want to enforce policy compliance on.
+        :param validate_only: bool (optional)
+          If set, previews the changes that would be made to a cluster to enforce compliance but does not
+          update the cluster.
+
+        :returns: :class:`EnforceClusterComplianceResponse`
+
+
+    .. py:method:: get_compliance(cluster_id: str) -> GetClusterComplianceResponse
+
+        Get cluster policy compliance.
+
+        Returns the policy compliance status of a cluster. Clusters could be out of compliance if their policy
+        was updated after the cluster was last edited.
+
+        :param cluster_id: str
+          The ID of the cluster to get the compliance status of.
+
+        :returns: :class:`GetClusterComplianceResponse`
+
+
+    .. py:method:: list_compliance(policy_id: str [, page_size: Optional[int], page_token: Optional[str]]) -> Iterator[ClusterCompliance]
+
+        List cluster policy compliance.
+
+        Returns the policy compliance status of all clusters that use a given policy. Clusters could be out of
+        compliance if their policy was updated after the cluster was last edited.
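The cluster-side workflow mirrors the jobs service. A minimal sketch (illustrative only, not part of the diff): the policy ID is hypothetical, and it assumes `ClusterCompliance` carries `cluster_id` and `is_compliant` fields, mirroring `JobCompliance` in the jobs module.

```python
# Illustrative sketch only (not part of this patch). 'ABC123DEF456' is a
# hypothetical policy ID; ClusterCompliance fields are assumed to mirror
# JobCompliance (cluster_id, is_compliant, violations).
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

for c in w.policy_compliance_for_clusters.list_compliance(policy_id='ABC123DEF456'):
    if c.is_compliant is False:
        # Preview first: enforcing for real restarts RUNNING clusters.
        w.policy_compliance_for_clusters.enforce_compliance(cluster_id=c.cluster_id,
                                                            validate_only=True)
```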
+
+        :param policy_id: str
+          Canonical unique identifier for the cluster policy.
+        :param page_size: int (optional)
+          Use this field to specify the maximum number of results to be returned by the server. The server may
+          further constrain the maximum number of results returned in a single page.
+        :param page_token: str (optional)
+          A page token that can be used to navigate to the next page or previous page as returned by
+          `next_page_token` or `prev_page_token`.
+
+        :returns: Iterator over :class:`ClusterCompliance`
+
\ No newline at end of file
diff --git a/docs/workspace/dashboards/lakeview.rst b/docs/workspace/dashboards/lakeview.rst
index d3257b79e..92aa8c0e3 100644
--- a/docs/workspace/dashboards/lakeview.rst
+++ b/docs/workspace/dashboards/lakeview.rst
@@ -17,9 +17,10 @@
           The display name of the dashboard.
         :param parent_path: str (optional)
           The workspace path of the folder containing the dashboard. Includes leading slash and no trailing
-          slash.
+          slash. This field is excluded in List Dashboards responses.
         :param serialized_dashboard: str (optional)
-          The contents of the dashboard in serialized string form.
+          The contents of the dashboard in serialized string form. This field is excluded in List Dashboards
+          responses.
         :param warehouse_id: str (optional)
           The warehouse ID used to run the dashboard.
 
@@ -257,9 +258,10 @@
           The display name of the dashboard.
         :param etag: str (optional)
           The etag for the dashboard. Can be optionally provided on updates to ensure that the dashboard has
-          not been modified since the last read.
+          not been modified since the last read. This field is excluded in List Dashboards responses.
         :param serialized_dashboard: str (optional)
-          The contents of the dashboard in serialized string form.
+          The contents of the dashboard in serialized string form. This field is excluded in List Dashboards
+          responses.
         :param warehouse_id: str (optional)
           The warehouse ID used to run the dashboard.
 
diff --git a/docs/workspace/jobs/index.rst b/docs/workspace/jobs/index.rst
index a8f242ea2..0729f8dce 100644
--- a/docs/workspace/jobs/index.rst
+++ b/docs/workspace/jobs/index.rst
@@ -7,4 +7,5 @@ Schedule automated jobs on Databricks Workspaces
 .. toctree::
    :maxdepth: 1
 
-   jobs
\ No newline at end of file
+   jobs
+   policy_compliance_for_jobs
\ No newline at end of file
diff --git a/docs/workspace/jobs/policy_compliance_for_jobs.rst b/docs/workspace/jobs/policy_compliance_for_jobs.rst
new file mode 100644
index 000000000..69f211552
--- /dev/null
+++ b/docs/workspace/jobs/policy_compliance_for_jobs.rst
@@ -0,0 +1,66 @@
+``w.policy_compliance_for_jobs``: Policy compliance for jobs
+============================================================
+.. currentmodule:: databricks.sdk.service.jobs
+
+.. py:class:: PolicyComplianceForJobsAPI
+
+    The compliance APIs allow you to view and manage the policy compliance status of jobs in your workspace.
+    This API currently only supports compliance controls for cluster policies.
+
+    A job is in compliance if its cluster configurations satisfy the rules of all their respective cluster
+    policies. A job could be out of compliance if a cluster policy it uses was updated after the job was last
+    edited. The job is considered out of compliance if any of its clusters no longer comply with their updated
+    policies.
+
+    The get and list compliance APIs allow you to view the policy compliance status of a job. The enforce
+    compliance API allows you to update a job so that it becomes compliant with all of its policies.
+
+    .. py:method:: enforce_compliance(job_id: int [, validate_only: Optional[bool]]) -> EnforcePolicyComplianceResponse
+
+        Enforce job policy compliance.
+
+        Updates a job so the job clusters that are created when running the job (specified in `new_cluster`)
+        are compliant with the current versions of their respective cluster policies. All-purpose clusters
+        used in the job will not be updated.
+
+        :param job_id: int
+          The ID of the job you want to enforce policy compliance on.
+        :param validate_only: bool (optional)
+          If set, previews changes made to the job to comply with its policy, but does not update the job.
+
+        :returns: :class:`EnforcePolicyComplianceResponse`
+
+
+    .. py:method:: get_compliance(job_id: int) -> GetPolicyComplianceResponse
+
+        Get job policy compliance.
+
+        Returns the policy compliance status of a job. Jobs could be out of compliance if a cluster policy
+        they use was updated after the job was last edited and some of its job clusters no longer comply with
+        their updated policies.
+
+        :param job_id: int
+          The ID of the job whose compliance status you are requesting.
+
+        :returns: :class:`GetPolicyComplianceResponse`
+
+
+    .. py:method:: list_compliance(policy_id: str [, page_size: Optional[int], page_token: Optional[str]]) -> Iterator[JobCompliance]
+
+        List job policy compliance.
+
+        Returns the policy compliance status of all jobs that use a given policy. Jobs could be out of
+        compliance if a cluster policy they use was updated after the job was last edited and its job clusters
+        no longer comply with the updated policy.
+
+        :param policy_id: str
+          Canonical unique identifier for the cluster policy.
+        :param page_size: int (optional)
+          Use this field to specify the maximum number of results to be returned by the server. The server may
+          further constrain the maximum number of results returned in a single page.
+        :param page_token: str (optional)
+          A page token that can be used to navigate to the next page or previous page as returned by
+          `next_page_token` or `prev_page_token`.
+
+        :returns: Iterator over :class:`JobCompliance`
+
\ No newline at end of file
diff --git a/docs/workspace/sql/query_history.rst b/docs/workspace/sql/query_history.rst
index 5fa003c0e..2f5520cdf 100644
--- a/docs/workspace/sql/query_history.rst
+++ b/docs/workspace/sql/query_history.rst
@@ -4,10 +4,10 @@
 
 .. py:class:: QueryHistoryAPI
 
-    A service responsible for storing and retrieving the list of queries run against SQL endpoints, serverless
-    compute, and DLT.
+    A service responsible for storing and retrieving the list of queries run against SQL endpoints and
+    serverless compute.
 
-    .. py:method:: list( [, filter_by: Optional[QueryFilter], max_results: Optional[int], page_token: Optional[str]]) -> ListQueriesResponse
+    .. py:method:: list( [, filter_by: Optional[QueryFilter], include_metrics: Optional[bool], max_results: Optional[int], page_token: Optional[str]]) -> ListQueriesResponse
 
         Usage:
 
@@ -24,7 +24,7 @@
 
         List Queries.
 
-        List the history of queries through SQL warehouses, serverless compute, and DLT.
+        List the history of queries through SQL warehouses and serverless compute.
 
         You can filter by user ID, warehouse ID, status, and time range. Most recently started queries are
         returned first (up to max_results in request). The pagination token returned in response can be used
@@ -32,6 +32,9 @@
 
         :param filter_by: :class:`QueryFilter` (optional)
           A filter to limit query history results. This field is optional.
+        :param include_metrics: bool (optional)
+          Whether to include the query metrics with each query. Only use this for a small subset of queries
+          (max_results). Defaults to false.
         :param max_results: int (optional)
           Limit the number of results returned in one page. Must be less than 1000 and the default is 100.
         :param page_token: str (optional)
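Closing this section, a sketch that combines the two query-history additions, the new `statement_ids` filter and `include_metrics` flag (illustrative only, not part of the diff; the statement ID is hypothetical, and per the docs above, metrics should only be requested for small result sets).

```python
# Illustrative sketch only (not part of this patch). The statement ID is a
# hypothetical example value.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import sql

w = WorkspaceClient()

resp = w.query_history.list(
    filter_by=sql.QueryFilter(statement_ids=['01ef0103-5f37-1234-abcd-000000000000']),
    include_metrics=True,  # only for small result sets, per the param docs
    max_results=10,
)
# Each returned query carries its metrics when include_metrics is set.
print(resp.as_dict())
```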