I'm running into this currently while attempting to build an incremental model with vector embeddings for use in a simple RAG pipeline in Snowflake. Is there any update on the prioritization of a fix here, or a workaround in the short term? It's untenable and wasteful to not run models like these in an incremental fashion.
Also seeing this issue with Snowflake. It's blocking us from adding data tests to sources that contain any field of either Array or Object type. Even if I add a test to a completely different field on the source table, I'll still get the same error:

```
Could not interpret data_type "ARRAY(VARCHAR(16777216))": could not convert "VARCHAR(16777216" to an integer
```
Is this a new bug?
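For what it's worth, the failure appears to come from how the adapter parses the type string, not from anything Snowflake-specific. A minimal sketch, assuming a single-level regex parse roughly like the base adapter's (which is what the error text suggests), shows why a nested type trips it up:

```python
import re

raw_data_type = "ARRAY(VARCHAR(16777216))"

# A single-level parse stops the size group at the first closing paren.
match = re.match(r"([^(]+)(\([^)]+\))?", raw_data_type)
data_type, size_info = match.groups()
print(data_type)  # ARRAY
print(size_info)  # (VARCHAR(16777216)  <- truncated at the first ")"

# Stripping the parens leaves "VARCHAR(16777216", which is not an integer:
int(size_info[1:-1])  # raises ValueError; dbt surfaces this as the error above
```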
Current Behavior
If a given column is a complex ARRAY type such as `ARRAY(VARCHAR(16777216))`, the macro `get_columns_in_relation` raises an error.
Expected Behavior
We should handle this case gracefully instead of raising an error.
Steps To Reproduce
Do a `dbt run` on a model which calls `adapter.get_columns_in_relation()` on a table with a column of type `ARRAY(VARCHAR(16777216))`.
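The error can likely also be reproduced without a warehouse connection by calling the parsing entry point directly (an assumption that `Column.from_description` is where the type string is interpreted, based on the error message):

```python
from dbt.adapters.base.column import Column

# Expected to raise the same error as a full dbt run, i.e.:
#   Could not interpret data_type "ARRAY(VARCHAR(16777216))":
#   could not convert "VARCHAR(16777216" to an integer
Column.from_description("my_array_col", "ARRAY(VARCHAR(16777216))")
```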
Relevant log output
Environment
Additional Context
Relevant code that can more gracefully handle this case:
https://github.com/dbt-labs/dbt-adapters/blob/v1.6.1/dbt/adapters/base/column.py#L142-L146
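One possible direction, sketched under the same assumption about how parsing works (illustrative only, not the actual fix): when the size portion is not an integer, fall back to the raw type with no size instead of raising.

```python
import re
from typing import Optional, Tuple

def parse_data_type(raw_data_type: str) -> Tuple[str, Optional[int]]:
    """Lenient parse: return (data_type, char_size), leaving char_size as
    None for nested types instead of raising. Hypothetical helper; ignores
    numeric precision/scale for brevity."""
    match = re.match(r"([^(]+)\((.+)\)$", raw_data_type)
    if match is None:
        # No size portion at all, e.g. "TEXT"
        return raw_data_type, None
    data_type, size_info = match.groups()
    try:
        return data_type, int(size_info)
    except ValueError:
        # Complex/nested type like ARRAY(VARCHAR(16777216)):
        # keep the full raw type and report no char size.
        return raw_data_type, None

print(parse_data_type("VARCHAR(16777216)"))         # ('VARCHAR', 16777216)
print(parse_data_type("ARRAY(VARCHAR(16777216))"))  # ('ARRAY(VARCHAR(16777216))', None)
```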