Onboarding a Flipside share is a simple three-step process, with an optional fourth step:
1. Get the listing
2. Mount the shared data
3. Run the provided script to finish the setup
4. OPTIONAL: Create an alert to notify you if the refresh task fails
Flipside data shares contain the underlying tables; the script in step 3 recreates the final presentation layer (the views) within the consumer's Snowflake account.
{% embed url="https://www.loom.com/share/72b376e9ed1b4933b6da3ec0e4517586?sid=919df848-e828-41f4-be80-ae65e34ab3c" %} Quick walkthrough of mounting a Data Share listing. {% endembed %}
Step 1: Get the listing
In the Snowflake account you shared with Flipside, navigate to Data > Data Products, then click the Shared With Me tab. Find the listing titled Flipside_[Chain Name] and click the blue button labeled 'Get'.
Step 2: Mount the shared data
Mount the data and make sure the database is named Flipside_[Chain Name] (for a trial listing, remove _Trial from the end).
- Example: Flipside_Ethereum
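If you prefer SQL to the UI, the same mount can be done from a worksheet. This is a minimal sketch: the provider account and share name are placeholders you would take from the listing details.

```sql
-- Placeholders: take the provider account and share name from the listing details
CREATE DATABASE FLIPSIDE_ETHEREUM FROM SHARE <provider_account>.<share_name>;

-- Confirm the database mounted under the expected name
SHOW DATABASES LIKE 'FLIPSIDE_%';
```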
Step 3: Run the SQL script to recreate the presentation layer as ACCOUNTADMIN
Open a new worksheet in Snowflake and run the script linked here.
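The share exposes a `_datashare._create_gold` table that stores the generated DDL (this is the table the refresh procedures below read). As a quick sanity check after running the script, assuming an Ethereum share mounted as FLIPSIDE_ETHEREUM, you can inspect the latest DDL entry:

```sql
-- FLIPSIDE_ETHEREUM is an example share name; the columns match what the
-- refresh procedure below selects from this table
SELECT ddl_hash, ddl_created_at
FROM FLIPSIDE_ETHEREUM._datashare._create_gold
ORDER BY ddl_created_at DESC
LIMIT 1;
```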
(Optional) Step 4: Modify and execute the sample code to set up email alerting
```sql
-- create utility db
create database if not exists _flipside;
-- create base proc that uses the db and updated_since
CREATE OR REPLACE PROCEDURE _flipside.public.sp_ddl_refresh(db string, UPDATED_SINCE TIMESTAMP_NTZ(9))
RETURNS VARCHAR NOT NULL
LANGUAGE SQL
AS
DECLARE
full_table varchar DEFAULT :db || '._datashare._create_gold';
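-- the destination (presentation) database name is the share name without the FLIPSIDE_ prefix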
destination_db varchar default REPLACE(:db,'FLIPSIDE_');
QUERY varchar default '';
DDL_HASH STRING;
BEGIN
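-- swap the __SOURCE__ / __NEW__ placeholders in the stored DDL for the actual database names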
select replace(
replace(
replace(ddl, '\\$\\$', '$$'),
'__SOURCE__',
UPPER(:db)
),
'__NEW__',
UPPER(:destination_db)
) ,
ddl_hash into :QUERY,
:DDL_HASH
from
IDENTIFIER(:full_table)
WHERE DDL_CREATED_AT >= :UPDATED_SINCE
order by
ddl_created_at desc
limit
1;
IF (:QUERY is not null) then
EXECUTE IMMEDIATE :QUERY;
end if;
RETURN :DDL_HASH;
END;
-- create proc that uses just the db
CREATE OR REPLACE PROCEDURE _FLIPSIDE.PUBLIC.SP_DDL_REFRESH(DB string)
RETURNS TABLE (ddl_hash STRING)
LANGUAGE SQL
AS DECLARE
results resultset;
BEGIN
results := (call _flipside.public.sp_ddl_refresh(:db, '2000-01-01'::TIMESTAMP_NTZ) );
RETURN table(results);
END;
-- create proc that takes just updated_since and runs for all Flipside dbs
CREATE OR REPLACE PROCEDURE _flipside.public.sp_ddl_refresh("UPDATED_SINCE" TIMESTAMP_NTZ(9))
RETURNS TABLE (dbs string)
LANGUAGE SQL
-- EXECUTE AS OWNER
AS DECLARE
cur CURSOR FOR select
database_name
from SNOWFLAKE.INFORMATION_SCHEMA.DATABASES
where database_name like 'FLIPSIDE_%';
BEGIN
create or replace temporary table results as
select ''::string as db
from dual
limit 0;
FOR cur_row IN cur DO
let db varchar:= cur_row.database_name;
call _flipside.public.sp_ddl_refresh(:db, :updated_since);
insert into results (db) values (:db);
END FOR;
let rs resultset := (select * from results order by db);
RETURN TABLE(rs);
END;
-- create proc that will recreate all dbs regardless of updated_since date
CREATE OR REPLACE PROCEDURE _flipside.public.sp_ddl_refresh()
RETURNS TABLE (dbs string)
LANGUAGE SQL
AS
DECLARE
results resultset;
BEGIN
results := (call _flipside.public.sp_ddl_refresh( '2000-01-01'::TIMESTAMP_NTZ) );
RETURN table(results);
END;
-- create task that runs every 10 minutes (the procs only apply changes from the last 10 minutes)
create or replace task _flipside.public.TK_ddl_refresh
schedule='10 MINUTE'
as DECLARE
rs resultset;
output string;
BEGIN
rs := (call _flipside.public.sp_ddl_refresh(sysdate() - interval '10 mins'));
select listagg($1, ';') into :output from table(result_scan(last_query_id())) limit 1;
call SYSTEM$SET_RETURN_VALUE( :output );
END;
-- set the task to resume; by default tasks are created suspended
alter task _flipside.public.TK_ddl_refresh resume;
-- call the proc manually once to create the new dbs
call _flipside.public.sp_ddl_refresh();
-- query to confirm the task was created and runs successfully
select *
from table(_flipside.information_schema.task_history())
where name = 'TK_DDL_REFRESH'
order by scheduled_time desc;
```
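With the procedures in place, you can also refresh a single share on demand (FLIPSIDE_ETHEREUM below is an example name), or force a full rebuild of every mounted Flipside database:

```sql
-- refresh one share on demand via the single-argument overload
CALL _flipside.public.sp_ddl_refresh('FLIPSIDE_ETHEREUM');

-- force a rebuild of every mounted Flipside database
CALL _flipside.public.sp_ddl_refresh();
```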
Alerting/monitoring for failed tasks can be done in many ways. One option is to use Snowflake's built-in email alerting. Snowflake docs: alerting documentation | email notification.
Below is an example that checks the task history every 10 minutes and looks for failures.
```sql
-- create the email integration
CREATE NOTIFICATION INTEGRATION my_email_int
TYPE=EMAIL
ENABLED=TRUE
ALLOWED_RECIPIENTS=('[email address]');
-- create the alert
CREATE OR REPLACE ALERT myalert
WAREHOUSE = [warehouse name]
SCHEDULE = '10 MINUTE'
IF (EXISTS (
SELECT *
from table(_flipside.information_schema.task_history())
where name ='TK_DDL_REFRESH' and STATE ='FAILED'
and COMPLETED_TIME BETWEEN SNOWFLAKE.ALERT.LAST_SUCCESSFUL_SCHEDULED_TIME()
AND SNOWFLAKE.ALERT.SCHEDULED_TIME()
))
THEN CALL SYSTEM$SEND_EMAIL(
'my_email_int',
'[email address]',
'Email Alert: The data share ddl refresh failed',
'Task A has ...'
);
-- make sure to resume the alert; by default it is suspended
ALTER ALERT myalert RESUME;
```
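To confirm the alert is scheduled and firing, you can check its execution history (this assumes the alert was created in your current database, so its INFORMATION_SCHEMA is in scope):

```sql
-- confirm the alert exists and check its recent executions
SHOW ALERTS LIKE 'MYALERT';

SELECT *
FROM TABLE(INFORMATION_SCHEMA.ALERT_HISTORY())
WHERE name = 'MYALERT'
ORDER BY scheduled_time DESC;
```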
If a role other than ACCOUNTADMIN needs access to the data, the scripts below grant access to the current views and ensure any new schemas and views are also accessible.
```sql
GRANT usage on database [blockchain] to role [your_role];
GRANT usage on all schemas in database [blockchain] to role [your_role];
GRANT select on all views in database [blockchain] to role [your_role];
GRANT usage on future schemas in database [blockchain] to role [your_role];
GRANT select on future views in database [blockchain] to role [your_role];
```
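Note that the refresh procedure strips the FLIPSIDE_ prefix when creating the presentation database, so the grants target e.g. ETHEREUM rather than FLIPSIDE_ETHEREUM. A concrete example with a hypothetical ANALYST role:

```sql
-- ANALYST is a hypothetical role; ETHEREUM is the recreated presentation database
GRANT usage on database ETHEREUM to role ANALYST;
GRANT usage on all schemas in database ETHEREUM to role ANALYST;
GRANT select on all views in database ETHEREUM to role ANALYST;
GRANT usage on future schemas in database ETHEREUM to role ANALYST;
GRANT select on future views in database ETHEREUM to role ANALYST;
```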