-
Hello, thank you for this suggestion. We had a lot of internal discussions about it. In our view, it would be absolutely fantastic to manage everything Snowflake-related in SnowDDL, but it is currently not possible for INTEGRATIONs. I'll explain why. For example, consider the notes in the Snowflake documentation for integration with S3: a storage integration is tied to an AWS IAM role, and Snowflake generates an IAM user ARN and an external ID that must be added to that role's trust policy.
So every time a STORAGE INTEGRATION is created or replaced, the trust policy outside of Snowflake has to be updated to make it actually work. But one of the main features of SnowDDL is the ability to quickly create and destroy "dev" and "testing" environments with the env prefix feature, and that includes objects like integrations. If SnowDDL attempted to create a brand new env-prefixed integration for every environment, each one would need its own trust policy update on the AWS side, which SnowDDL cannot perform. However, if STORAGE (or any other type of) INTEGRATIONs are managed externally (e.g. via Terraform), most of these issues are resolved automatically.
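For concreteness, here is a minimal sketch of that S3 flow in plain Snowflake SQL; the integration name, role ARN, and bucket below are hypothetical placeholders, not SnowDDL output:

```sql
-- Create the integration; Snowflake generates an IAM user ARN and an
-- external ID for it on the Snowflake side.
CREATE STORAGE INTEGRATION my_s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-access'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/path/');

-- DESC exposes STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID;
-- both must be copied into the IAM role's trust policy in AWS, and a
-- CREATE OR REPLACE generates a fresh external ID, breaking the trust
-- relationship until the policy is updated again.
DESC STORAGE INTEGRATION my_s3_int;
```

That trust policy update is exactly the step SnowDDL cannot perform when it spins environments up and down.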
One more important consideration is security. It is related to a set of security parameters, like this one: https://docs.snowflake.com/en/sql-reference/parameters.html#require-storage-integration-for-stage-creation

Enabling these parameters is highly recommended. It greatly improves the security of a Snowflake account, and it prevents developers in a "dev" environment from accidentally (or intentionally) exposing your data to arbitrary locations. But if we allowed storage integrations to be configured and created from SnowDDL, developers would be able to create storage integrations as well, which would ultimately defeat the whole purpose of this extra security check.

All things considered, managing all integrations outside the scope of SnowDDL and the "env prefix" feature seems like the best approach right now. It should not be a major issue, since integrations change far less often than tables or views.
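As a sketch of what enabling this kind of parameter looks like (run with ACCOUNTADMIN privileges; the second statement assumes you also want the companion REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION parameter):

```sql
-- Reject CREATE STAGE statements that embed raw cloud credentials,
-- forcing stages to go through an admin-approved storage integration.
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION = TRUE;

-- Also require an integration when loading from or unloading to
-- private cloud storage via pre-existing stages.
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION = TRUE;
```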
-
Hello,
Thank you for open sourcing this tool, it is much better to use than the Terraform connector or rolling your own.
I noticed the documentation for integrations states that these objects should be handled manually because their scope extends outside of Snowflake. I understand the logic, but I believe storage integrations can be created and altered by SnowDDL with relative ease. I am happy to implement this from an AWS perspective, but wanted to get other thoughts, to see if anyone else can implement and test the Azure/GCP sides, and to make sure there aren't larger blockers or use cases I am missing.
In our use case, we create storage integrations one time and use the output to modify a trust relationship in an AWS role. Occasionally, we alter the storage integration (to update the assumed role or the allowed locations), as sketched below. Although the AWS-side updates are outside the scope of SnowDDL, creating and modifying storage integrations are not. This would prove very useful in manual SnowDDL runs and CI/CD pipelines, and for managing Snowflake infrastructure in one place.
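For illustration, the occasional changes mentioned above could look like the following in plain Snowflake SQL; `my_s3_int` and the ARN/bucket values are hypothetical placeholders:

```sql
-- Point the existing integration at a different IAM role and widen the
-- allowed locations; names and ARNs below are made up for this sketch.
ALTER STORAGE INTEGRATION my_s3_int SET
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-access-v2'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/path/', 's3://other-bucket/');

-- After changing the role ARN, the new role's trust policy in AWS still
-- has to be updated by hand; that part stays outside SnowDDL's scope.
DESC STORAGE INTEGRATION my_s3_int;
```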