
SeaDatabricksClient: Add Metadata Commands #593

Merged: 119 commits, merged Jun 26, 2025

Changes from all commits

Commits (119)
138c2ae
[squash from exec-sea] bring over execution phase changes
varun-edachali-dbx Jun 9, 2025
3e3ab94
remove excess test
varun-edachali-dbx Jun 9, 2025
4a78165
add docstring
varun-edachali-dbx Jun 9, 2025
0dac4aa
remove exec func in sea backend
varun-edachali-dbx Jun 9, 2025
1b794c7
remove excess files
varun-edachali-dbx Jun 9, 2025
da5a6fe
remove excess models
varun-edachali-dbx Jun 9, 2025
686ade4
remove excess sea backend tests
varun-edachali-dbx Jun 9, 2025
31e6c83
cleanup
varun-edachali-dbx Jun 9, 2025
69ea238
re-introduce get_schema_desc
varun-edachali-dbx Jun 9, 2025
66d7517
remove SeaResultSet
varun-edachali-dbx Jun 9, 2025
71feef9
clean imports and attributes
varun-edachali-dbx Jun 9, 2025
ae9862f
pass CommandId to ExecResp
varun-edachali-dbx Jun 9, 2025
d8aa69e
remove changes in types
varun-edachali-dbx Jun 9, 2025
db139bc
add back essential types (ExecResponse, from_sea_state)
varun-edachali-dbx Jun 9, 2025
b977b12
fix fetch types
varun-edachali-dbx Jun 9, 2025
da615c0
excess imports
varun-edachali-dbx Jun 9, 2025
0da04a6
reduce diff by maintaining logs
varun-edachali-dbx Jun 9, 2025
ea9d456
fix int test types
varun-edachali-dbx Jun 9, 2025
8985c62
[squashed from exec-sea] init execution func
varun-edachali-dbx Jun 9, 2025
d9bcdbe
remove irrelevant changes
varun-edachali-dbx Jun 9, 2025
ee9fa1c
remove ResultSetFilter functionality
varun-edachali-dbx Jun 9, 2025
24c6152
remove more irrelevant changes
varun-edachali-dbx Jun 9, 2025
67fd101
remove more irrelevant changes
varun-edachali-dbx Jun 9, 2025
271fcaf
even more irrelevant changes
varun-edachali-dbx Jun 9, 2025
bf26ea3
remove sea response as init option
varun-edachali-dbx Jun 9, 2025
ed7cf91
exec test example scripts
varun-edachali-dbx Jun 9, 2025
dae15e3
formatting (black)
varun-edachali-dbx Jun 9, 2025
db5bbea
[squashed from sea-exec] merge sea stuffs
varun-edachali-dbx Jun 9, 2025
d5d3699
remove excess changes
varun-edachali-dbx Jun 9, 2025
6137a3d
remove excess removed docstring
varun-edachali-dbx Jun 9, 2025
75b0773
remove excess changes in backend
varun-edachali-dbx Jun 9, 2025
4494dcd
remove excess imports
varun-edachali-dbx Jun 9, 2025
4d0aeca
remove accidentally removed _get_schema_desc
varun-edachali-dbx Jun 9, 2025
7cece5e
remove unnecessary init with sea_response tests
varun-edachali-dbx Jun 9, 2025
8977c06
remove unnecessary changes
varun-edachali-dbx Jun 9, 2025
0216d7a
formatting (black)
varun-edachali-dbx Jun 9, 2025
4cb15fd
improved models and filters from cloudfetch-sea branch
varun-edachali-dbx Jun 9, 2025
dee47f7
filters stuff (align with JDBC)
varun-edachali-dbx Jun 10, 2025
e385d5b
backend from cloudfetch-sea
varun-edachali-dbx Jun 11, 2025
484064e
remove filtering, metadata ops
varun-edachali-dbx Jun 11, 2025
030edf8
raise NotImplementedError for metadata ops
varun-edachali-dbx Jun 11, 2025
30f8266
add metadata commands
varun-edachali-dbx Jun 11, 2025
033ae73
formatting (black)
varun-edachali-dbx Jun 11, 2025
33821f4
add metadata command unit tests
varun-edachali-dbx Jun 11, 2025
3e22c6c
change to valid table name
varun-edachali-dbx Jun 11, 2025
787f1f7
Merge branch 'sea-migration' into sea-test-scripts
varun-edachali-dbx Jun 11, 2025
165c4f3
remove un-necessary changes
varun-edachali-dbx Jun 11, 2025
a6e40d0
simplify test module
varun-edachali-dbx Jun 11, 2025
52e3088
logging -> debug level
varun-edachali-dbx Jun 11, 2025
641c09b
change table name in log
varun-edachali-dbx Jun 11, 2025
8bd12d8
Merge branch 'sea-migration' into exec-models-sea
varun-edachali-dbx Jun 11, 2025
ffded6e
remove un-necessary changes
varun-edachali-dbx Jun 11, 2025
227f6b3
remove un-necessary backend changes
varun-edachali-dbx Jun 11, 2025
68657a3
remove un-needed GetChunksResponse
varun-edachali-dbx Jun 11, 2025
3940eec
remove un-needed GetChunksResponse
varun-edachali-dbx Jun 11, 2025
37813ba
reduce code duplication in response parsing
varun-edachali-dbx Jun 11, 2025
267c9f4
reduce code duplication
varun-edachali-dbx Jun 11, 2025
2967119
more clear docstrings
varun-edachali-dbx Jun 11, 2025
47fd60d
introduce strongly typed ChunkInfo
varun-edachali-dbx Jun 11, 2025
982fdf2
remove is_volume_operation from response
varun-edachali-dbx Jun 12, 2025
9e14d48
add is_volume_op and more ResultData fields
varun-edachali-dbx Jun 12, 2025
be1997e
Merge branch 'exec-models-sea' into exec-phase-sea
varun-edachali-dbx Jun 12, 2025
e8e8ee7
Merge branch 'sea-test-scripts' into exec-phase-sea
varun-edachali-dbx Jun 12, 2025
05ee4e7
add test scripts
varun-edachali-dbx Jun 12, 2025
3ffa898
Merge branch 'exec-models-sea' into metadata-sea
varun-edachali-dbx Jun 12, 2025
2952d8d
Revert "Merge branch 'sea-migration' into exec-models-sea"
varun-edachali-dbx Jun 12, 2025
89e2aa0
Merge branch 'exec-phase-sea' into metadata-sea
varun-edachali-dbx Jun 12, 2025
cbace3f
Revert "Merge branch 'exec-models-sea' into exec-phase-sea"
varun-edachali-dbx Jun 12, 2025
c075b07
change logging level
varun-edachali-dbx Jun 12, 2025
c62f76d
remove un-necessary changes
varun-edachali-dbx Jun 12, 2025
199402e
remove excess changes
varun-edachali-dbx Jun 12, 2025
8ac574b
remove excess changes
varun-edachali-dbx Jun 12, 2025
398ca70
Merge branch 'sea-migration' into exec-phase-sea
varun-edachali-dbx Jun 12, 2025
b1acc5b
remove _get_schema_bytes (for now)
varun-edachali-dbx Jun 12, 2025
ef2a7ee
redundant comments
varun-edachali-dbx Jun 12, 2025
699942d
Merge branch 'sea-migration' into exec-phase-sea
varun-edachali-dbx Jun 12, 2025
af8f74e
remove fetch phase methods
varun-edachali-dbx Jun 12, 2025
5540c5c
reduce code repetition + introduce gaps after multi-line pydocs
varun-edachali-dbx Jun 12, 2025
efe3881
remove unused imports
varun-edachali-dbx Jun 12, 2025
36ab59b
move description extraction to helper func
varun-edachali-dbx Jun 12, 2025
1d57c99
formatting (black)
varun-edachali-dbx Jun 12, 2025
df6dac2
add more unit tests
varun-edachali-dbx Jun 12, 2025
ad0e527
streamline unit tests
varun-edachali-dbx Jun 12, 2025
ed446a0
test getting the list of allowed configurations
varun-edachali-dbx Jun 12, 2025
38e4b5c
reduce diff
varun-edachali-dbx Jun 12, 2025
94879c0
reduce diff
varun-edachali-dbx Jun 12, 2025
1809956
house constants in enums for readability and immutability
varun-edachali-dbx Jun 13, 2025
da5260c
add note on hybrid disposition
varun-edachali-dbx Jun 13, 2025
0385ffb
remove redundant note on arrow_schema_bytes
varun-edachali-dbx Jun 16, 2025
349c021
Merge branch 'exec-phase-sea' into metadata-sea
varun-edachali-dbx Jun 17, 2025
6229848
remove irrelevant changes
varun-edachali-dbx Jun 17, 2025
fd52356
remove un-necessary test changes
varun-edachali-dbx Jun 17, 2025
64e58b0
remove un-necessary changes in thrift backend tests
varun-edachali-dbx Jun 17, 2025
0a2cdfd
remove unimplemented methods test
varun-edachali-dbx Jun 17, 2025
90bb09c
Merge branch 'sea-migration' into exec-phase-sea
varun-edachali-dbx Jun 17, 2025
cd22389
remove invalid import
varun-edachali-dbx Jun 17, 2025
82e0f8b
Merge branch 'sea-migration' into exec-phase-sea
varun-edachali-dbx Jun 17, 2025
e64b81b
Merge branch 'exec-phase-sea' into metadata-sea
varun-edachali-dbx Jun 17, 2025
5ab9bbe
better align queries with JDBC impl
varun-edachali-dbx Jun 18, 2025
1ab6e87
line breaks after multi-line PRs
varun-edachali-dbx Jun 18, 2025
f469c24
remove unused imports
varun-edachali-dbx Jun 18, 2025
68ec65f
fix: introduce ExecuteResponse import
varun-edachali-dbx Jun 18, 2025
ffd478e
Merge branch 'sea-migration' into metadata-sea
varun-edachali-dbx Jun 18, 2025
f6d873d
remove unimplemented metadata methods test, un-necessary imports
varun-edachali-dbx Jun 18, 2025
28675f5
introduce unit tests for metadata methods
varun-edachali-dbx Jun 18, 2025
3578659
remove verbosity in ResultSetFilter docstring
varun-edachali-dbx Jun 20, 2025
8713023
remove un-necessary info in ResultSetFilter docstring
varun-edachali-dbx Jun 20, 2025
22dc252
remove explicit type checking, string literals around forward annotat…
varun-edachali-dbx Jun 20, 2025
390f592
house SQL commands in constants
varun-edachali-dbx Jun 20, 2025
35f1ef0
remove catalog requirement in get_tables
varun-edachali-dbx Jun 26, 2025
a515d26
move filters.py to SEA utils
varun-edachali-dbx Jun 26, 2025
59b1330
ensure SeaResultSet
varun-edachali-dbx Jun 26, 2025
293e356
Merge branch 'sea-migration' into metadata-sea
varun-edachali-dbx Jun 26, 2025
dd40beb
prevent circular imports
varun-edachali-dbx Jun 26, 2025
14057ac
remove unused imports
varun-edachali-dbx Jun 26, 2025
a4d5bdb
remove cast, throw error if not SeaResultSet
varun-edachali-dbx Jun 26, 2025
e9b1314
make SEA backend methods return SeaResultSet
varun-edachali-dbx Jun 26, 2025
8ede414
use spec-aligned Exceptions in SEA backend
varun-edachali-dbx Jun 26, 2025
09a1b11
remove defensive row type check
varun-edachali-dbx Jun 26, 2025
163 changes: 130 additions & 33 deletions src/databricks/sql/backend/sea/backend.py
@@ -1,3 +1,5 @@
from __future__ import annotations

import logging
import time
import re
@@ -10,11 +12,12 @@
ResultDisposition,
ResultCompression,
WaitTimeout,
MetadataCommands,
)

if TYPE_CHECKING:
from databricks.sql.client import Cursor
from databricks.sql.result_set import ResultSet
from databricks.sql.result_set import SeaResultSet

from databricks.sql.backend.databricks_client import DatabricksClient
from databricks.sql.backend.types import (
@@ -24,7 +27,7 @@
BackendType,
ExecuteResponse,
)
from databricks.sql.exc import DatabaseError, ServerOperationError
from databricks.sql.exc import DatabaseError, ProgrammingError, ServerOperationError
from databricks.sql.backend.sea.utils.http_client import SeaHttpClient
from databricks.sql.types import SSLOptions

@@ -169,7 +172,7 @@ def _extract_warehouse_id(self, http_path: str) -> str:
f"Note: SEA only works for warehouses."
)
logger.error(error_message)
raise ValueError(error_message)
raise ProgrammingError(error_message)

@property
def max_download_threads(self) -> int:
@@ -241,14 +244,14 @@ def close_session(self, session_id: SessionId) -> None:
session_id: The session identifier returned by open_session()

Raises:
ValueError: If the session ID is invalid
ProgrammingError: If the session ID is invalid
OperationalError: If there's an error closing the session
"""

logger.debug("SeaDatabricksClient.close_session(session_id=%s)", session_id)

if session_id.backend_type != BackendType.SEA:
raise ValueError("Not a valid SEA session ID")
raise ProgrammingError("Not a valid SEA session ID")
sea_session_id = session_id.to_sea_session_id()

request_data = DeleteSessionRequest(
@@ -400,12 +403,12 @@ def execute_command(
max_rows: int,
max_bytes: int,
lz4_compression: bool,
cursor: "Cursor",
cursor: Cursor,
use_cloud_fetch: bool,
parameters: List[Dict[str, Any]],
async_op: bool,
enforce_embedded_schema_correctness: bool,
) -> Union["ResultSet", None]:
) -> Union[SeaResultSet, None]:
"""
Execute a SQL command using the SEA backend.

@@ -426,7 +429,7 @@
"""

if session_id.backend_type != BackendType.SEA:
raise ValueError("Not a valid SEA session ID")
raise ProgrammingError("Not a valid SEA session ID")

sea_session_id = session_id.to_sea_session_id()

@@ -501,11 +504,11 @@ def cancel_command(self, command_id: CommandId) -> None:
command_id: Command identifier to cancel

Raises:
ValueError: If the command ID is invalid
ProgrammingError: If the command ID is invalid
"""

if command_id.backend_type != BackendType.SEA:
raise ValueError("Not a valid SEA command ID")
raise ProgrammingError("Not a valid SEA command ID")

sea_statement_id = command_id.to_sea_statement_id()

@@ -524,11 +527,11 @@ def close_command(self, command_id: CommandId) -> None:
command_id: Command identifier to close

Raises:
ValueError: If the command ID is invalid
ProgrammingError: If the command ID is invalid
"""

if command_id.backend_type != BackendType.SEA:
raise ValueError("Not a valid SEA command ID")
raise ProgrammingError("Not a valid SEA command ID")

sea_statement_id = command_id.to_sea_statement_id()

@@ -550,7 +553,7 @@ def get_query_state(self, command_id: CommandId) -> CommandState:
CommandState: The current state of the command

Raises:
ValueError: If the command ID is invalid
ProgrammingError: If the command ID is invalid
"""

if command_id.backend_type != BackendType.SEA:
@@ -572,8 +575,8 @@ def get_query_state(self, command_id: CommandId) -> CommandState:
def get_execution_result(
self,
command_id: CommandId,
cursor: "Cursor",
) -> "ResultSet":
cursor: Cursor,
) -> SeaResultSet:
"""
Get the result of a command execution.

@@ -582,14 +585,14 @@ def get_execution_result(
cursor: Cursor executing the command

Returns:
ResultSet: A SeaResultSet instance with the execution results
SeaResultSet: A SeaResultSet instance with the execution results

Raises:
ValueError: If the command ID is invalid
"""

if command_id.backend_type != BackendType.SEA:
raise ValueError("Not a valid SEA command ID")
raise ProgrammingError("Not a valid SEA command ID")

sea_statement_id = command_id.to_sea_statement_id()

@@ -626,47 +629,141 @@ def get_catalogs(
session_id: SessionId,
max_rows: int,
max_bytes: int,
cursor: "Cursor",
):
"""Not implemented yet."""
raise NotImplementedError("get_catalogs is not yet implemented for SEA backend")
cursor: Cursor,
) -> SeaResultSet:
"""Get available catalogs by executing 'SHOW CATALOGS'."""
result = self.execute_command(
operation=MetadataCommands.SHOW_CATALOGS.value,
session_id=session_id,
max_rows=max_rows,
max_bytes=max_bytes,
lz4_compression=False,
[Review thread on lz4_compression=False]
Contributor: not using compression for metadata?
Collaborator (author): This is a side effect of setting use_cloud_fetch=False: compression is not supported for INLINE + JSON in SEA.

cursor=cursor,
use_cloud_fetch=False,
parameters=[],
async_op=False,
enforce_embedded_schema_correctness=False,
[Review thread on enforce_embedded_schema_correctness=False]
Contributor: this is a thrift-specific param?
Collaborator (author): Yes, but it is a param passed to Cursor's execute method, so I don't see a way to not include it in our execute_command. If it were passed as a connection-level property then we may have been able to access it that way. Should I try to avoid passing it to the SEA backend?

)
assert result is not None, "execute_command returned None in synchronous mode"
return result

def get_schemas(
self,
session_id: SessionId,
max_rows: int,
max_bytes: int,
cursor: "Cursor",
cursor: Cursor,
catalog_name: Optional[str] = None,
schema_name: Optional[str] = None,
):
"""Not implemented yet."""
raise NotImplementedError("get_schemas is not yet implemented for SEA backend")
) -> SeaResultSet:
"""Get schemas by executing 'SHOW SCHEMAS IN catalog [LIKE pattern]'."""
if not catalog_name:
raise DatabaseError("Catalog name is required for get_schemas")

operation = MetadataCommands.SHOW_SCHEMAS.value.format(catalog_name)

if schema_name:
operation += MetadataCommands.LIKE_PATTERN.value.format(schema_name)

result = self.execute_command(
operation=operation,
session_id=session_id,
max_rows=max_rows,
max_bytes=max_bytes,
lz4_compression=False,
cursor=cursor,
use_cloud_fetch=False,
parameters=[],
async_op=False,
enforce_embedded_schema_correctness=False,
)
assert result is not None, "execute_command returned None in synchronous mode"
return result

def get_tables(
self,
session_id: SessionId,
max_rows: int,
max_bytes: int,
cursor: "Cursor",
cursor: Cursor,
catalog_name: Optional[str] = None,
schema_name: Optional[str] = None,
table_name: Optional[str] = None,
table_types: Optional[List[str]] = None,
):
"""Not implemented yet."""
raise NotImplementedError("get_tables is not yet implemented for SEA backend")
) -> SeaResultSet:
"""Get tables by executing 'SHOW TABLES IN catalog [SCHEMA LIKE pattern] [LIKE pattern]'."""
operation = (
MetadataCommands.SHOW_TABLES_ALL_CATALOGS.value
if catalog_name in [None, "*", "%"]
else MetadataCommands.SHOW_TABLES.value.format(
MetadataCommands.CATALOG_SPECIFIC.value.format(catalog_name)
)
)

if schema_name:
operation += MetadataCommands.SCHEMA_LIKE_PATTERN.value.format(schema_name)

if table_name:
operation += MetadataCommands.LIKE_PATTERN.value.format(table_name)

result = self.execute_command(
operation=operation,
session_id=session_id,
max_rows=max_rows,
max_bytes=max_bytes,
lz4_compression=False,
cursor=cursor,
use_cloud_fetch=False,
parameters=[],
async_op=False,
enforce_embedded_schema_correctness=False,
)
assert result is not None, "execute_command returned None in synchronous mode"

# Apply client-side filtering by table_types
from databricks.sql.backend.sea.utils.filters import ResultSetFilter

result = ResultSetFilter.filter_tables_by_type(result, table_types)

return result

def get_columns(
self,
session_id: SessionId,
max_rows: int,
max_bytes: int,
cursor: "Cursor",
cursor: Cursor,
catalog_name: Optional[str] = None,
schema_name: Optional[str] = None,
table_name: Optional[str] = None,
column_name: Optional[str] = None,
):
"""Not implemented yet."""
raise NotImplementedError("get_columns is not yet implemented for SEA backend")
) -> SeaResultSet:
"""Get columns by executing 'SHOW COLUMNS IN CATALOG catalog [SCHEMA LIKE pattern] [TABLE LIKE pattern] [LIKE pattern]'."""
if not catalog_name:
raise DatabaseError("Catalog name is required for get_columns")

operation = MetadataCommands.SHOW_COLUMNS.value.format(catalog_name)

if schema_name:
operation += MetadataCommands.SCHEMA_LIKE_PATTERN.value.format(schema_name)

if table_name:
operation += MetadataCommands.TABLE_LIKE_PATTERN.value.format(table_name)

if column_name:
operation += MetadataCommands.LIKE_PATTERN.value.format(column_name)

result = self.execute_command(
operation=operation,
session_id=session_id,
max_rows=max_rows,
max_bytes=max_bytes,
lz4_compression=False,
cursor=cursor,
use_cloud_fetch=False,
parameters=[],
async_op=False,
enforce_embedded_schema_correctness=False,
)
assert result is not None, "execute_command returned None in synchronous mode"
return result
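
Taken together, the four methods above map the DB-API metadata calls onto plain SQL executed through execute_command. Below is a minimal usage sketch: the host, HTTP path, token, and object names are placeholders, the cursor-level wrappers (catalogs, schemas, tables, columns) are assumed to mirror the backend signatures shown above, and the connection-level switch that routes these calls to the SEA backend is not part of this diff.

from databricks import sql

# Placeholder connection values; how the connection is configured to use the
# SEA backend is outside the scope of this diff.
with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<personal-access-token>",
) as connection:
    with connection.cursor() as cursor:
        # Backed by get_catalogs -> "SHOW CATALOGS"
        cursor.catalogs()
        print(cursor.fetchall())

        # Backed by get_schemas -> "SHOW SCHEMAS IN main LIKE 'de%'"
        cursor.schemas(catalog_name="main", schema_name="de%")
        print(cursor.fetchall())

        # Backed by get_tables -> "SHOW TABLES IN CATALOG main SCHEMA LIKE 'default'",
        # then filtered client-side by ResultSetFilter.filter_tables_by_type
        cursor.tables(catalog_name="main", schema_name="default", table_types=["VIEW"])
        print(cursor.fetchall())

        # Backed by get_columns ->
        # "SHOW COLUMNS IN CATALOG main SCHEMA LIKE 'default' TABLE LIKE 'orders'"
        cursor.columns(catalog_name="main", schema_name="default", table_name="orders")
        print(cursor.fetchall())
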
20 changes: 20 additions & 0 deletions src/databricks/sql/backend/sea/utils/constants.py
@@ -45,3 +45,23 @@ class WaitTimeout(Enum):

ASYNC = "0s"
SYNC = "10s"


class MetadataCommands(Enum):
"""SQL commands used in the SEA backend.

These constants are used for metadata operations and other SQL queries
to ensure consistency and avoid string literal duplication.
"""

SHOW_CATALOGS = "SHOW CATALOGS"
SHOW_SCHEMAS = "SHOW SCHEMAS IN {}"
SHOW_TABLES = "SHOW TABLES IN {}"
SHOW_TABLES_ALL_CATALOGS = "SHOW TABLES IN ALL CATALOGS"
SHOW_COLUMNS = "SHOW COLUMNS IN CATALOG {}"

LIKE_PATTERN = " LIKE '{}'"
SCHEMA_LIKE_PATTERN = " SCHEMA" + LIKE_PATTERN
TABLE_LIKE_PATTERN = " TABLE" + LIKE_PATTERN

CATALOG_SPECIFIC = "CATALOG {}"