diff --git a/docs/_snippets/cloud/features.mdx b/docs/_snippets/cloud/features.mdx
new file mode 100644
index 000000000..4d4442eef
--- /dev/null
+++ b/docs/_snippets/cloud/features.mdx
@@ -0,0 +1,81 @@
+
+### Detection & Coverage
+
+Elementary integrates powerful anomaly detection and dbt tests into a unified detection strategy.
+Effective detection of data issues requires a comprehensive approach,
+including both pipeline and data monitoring, validation tests,
+anomaly detection for unexpected behavior, and a single interface to manage it all at scale.
+
+
+
+ ML-powered monitors automatically detect data quality issues.
+ Out-of-the-box for volume and freshness, and opt-in for data quality metrics.
+
+
+ Validate data and track the results of dbt tests, dbt package tests (dbt-utils, dbt-expectations, elementary) and custom SQL tests.
+
+
+ Validate there are no breaking changes in table schemas, JSON schema, and downstream exposures such as dashboards.
+
+
+ Track failures and runs of jobs, models, and tests over time.
+ Pipeline failures and performance issues can cause data incidents, and create unnecessary costs.
+
+
+ Configure Elementary in code, or via the UI for non-technical users or for adding tests in bulk.
+ The platform opens PRs to your repo, saving hours of tedious YAML edits.
+
+
+ Coming soon!
+
+
+
+### Triage & Response
+
+Detecting issues is just the first step toward achieving data reliability.
+Elementary offers tools to create an effective response plan, for faster recovery.
+This includes investigating the root cause and impact of issues, communicating issues to the relevant people, assigning owners to fix issues, keeping track of open incidents and more.
+
+
+
+ Column-level lineage that spans through sources, models and BI tools, enriched with monitoring results. Enables granular root cause and impact analysis.
+
+
+ Define clear ownership of data assets and enable owners to be informed and accountable for the health and status of their data.
+
+
+ Distribute highly configurable alerts to different channels and integrations.
+ Automatically tag owners, and enable setting status and assignees at the alert level.
+
+
+ Different failures related to the same issue are grouped automatically into a single incident.
+ This accelerates triage and response, and reduces alert fatigue.
+
+
+ Manage all open incidents in a single interface, with a clear view of status and assignees.
+ Track historical incidents and high-level incident metrics.
+
+
+
+### Collaboration & Communication
+
+The data team doesn’t live in a silo - you have many stakeholders.
+The only way to improve data trust is by bringing in more team members, users and stakeholders to the data health process.
+Elementary fosters collaboration by allowing you to easily share and communicate the status of issues,
+the overall health of the data platform and progress made to improve it with the broader organization.
+
+
+
+ Up-to-date dashboard with current status and trends of data issues.
+ Share the dashboard with others, enable them to slice results and stay informed.
+
+
+ Enable effective collaboration and communication by grouping related data assets and tests by business domains, data products, priority, etc.
+
+
+ Search and explore your dataset information - descriptions, columns, column descriptions, compiled code, dataset health and more.
+
+
+ Coming soon!
+
+
\ No newline at end of file
diff --git a/docs/_snippets/cloud/features/alerts-and-incidents/alert-types.mdx b/docs/_snippets/cloud/features/alerts-and-incidents/alert-types.mdx
new file mode 100644
index 000000000..bd5e74083
--- /dev/null
+++ b/docs/_snippets/cloud/features/alerts-and-incidents/alert-types.mdx
@@ -0,0 +1,7 @@
+Elementary can be configured to send alerts on:
+
+- Model run failures
+- Failures and/or warnings of dbt tests (including the Elementary dbt package and other packages)
+- Failures and/or warnings of Elementary Anomaly Detection monitors
+- Failures and/or warnings of custom SQL tests
+- dbt source freshness failures
\ No newline at end of file
diff --git a/docs/_snippets/cloud/features/anomaly-detection/automated-monitors-cards.mdx b/docs/_snippets/cloud/features/anomaly-detection/automated-monitors-cards.mdx
new file mode 100644
index 000000000..578c45146
--- /dev/null
+++ b/docs/_snippets/cloud/features/anomaly-detection/automated-monitors-cards.mdx
@@ -0,0 +1,10 @@
+
+
+ Monitors how frequently a table is updated,
+ and fails if there is an unexpected delay.
+
+
+ Monitors how many rows were added to or removed from a table on each update,
+ and fails if there is an unexpected drop or spike in rows.
+
+
\ No newline at end of file
diff --git a/docs/_snippets/cloud/features/anomaly-detection/automated-monitors-intro.mdx b/docs/_snippets/cloud/features/anomaly-detection/automated-monitors-intro.mdx
new file mode 100644
index 000000000..ad7abf84a
--- /dev/null
+++ b/docs/_snippets/cloud/features/anomaly-detection/automated-monitors-intro.mdx
@@ -0,0 +1,5 @@
+Out-of-the-box ML-powered monitoring for freshness and volume issues on all production tables.
+The automated monitors feature provides broad coverage and detection of critical pipeline issues, without any configuration effort.
+
+These monitors track updates to tables, and will detect data delays, incomplete updates, and significant volume changes.
+Additionally, there will be no increase in compute costs, as the monitors leverage only warehouse metadata (e.g. information schema, query history).
\ No newline at end of file
diff --git a/docs/_snippets/cloud/how-it-works.mdx b/docs/_snippets/cloud/how-it-works.mdx
new file mode 100644
index 000000000..941bef9a2
--- /dev/null
+++ b/docs/_snippets/cloud/how-it-works.mdx
@@ -0,0 +1,12 @@
+1. You install the Elementary dbt package in your dbt project and configure it to write to its own schema, the Elementary schema (see the sketch below).
+2. The package writes test results, run results, logs and metadata to the Elementary schema.
+3. The cloud service only requires `read access` to the Elementary schema, not to schemas where your sensitive data is stored.
+4. The cloud service connects to sync the Elementary schema using an **encrypted connection** and a **static IP address** that you will need to add to your allowlist.
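+
+As a reference for step 1, here is a minimal sketch of the `dbt_project.yml` entry (the package name is fixed, but the schema suffix shown here is an assumption - adjust it to your project's conventions):
+
+```yaml
+models:
+  elementary:
+    # writes the package's models to a dedicated "<your_schema>_elementary" schema
+    +schema: "elementary"
+```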
+ + + Elementary cloud security + \ No newline at end of file diff --git a/docs/_snippets/cloud/integrations/cards-groups/alerts-destination-cards.mdx b/docs/_snippets/cloud/integrations/cards-groups/alerts-destination-cards.mdx index bf9a777ac..1c369051d 100644 --- a/docs/_snippets/cloud/integrations/cards-groups/alerts-destination-cards.mdx +++ b/docs/_snippets/cloud/integrations/cards-groups/alerts-destination-cards.mdx @@ -51,6 +51,14 @@ } > + + } + > + - - - - } - > - Click for details - -### Communication and collaboration +### Alerts & incidents \ No newline at end of file diff --git a/docs/_snippets/cloud/integrations/snowflake.mdx b/docs/_snippets/cloud/integrations/snowflake.mdx index 6eb762ada..6a2ba656b 100644 --- a/docs/_snippets/cloud/integrations/snowflake.mdx +++ b/docs/_snippets/cloud/integrations/snowflake.mdx @@ -12,7 +12,7 @@ Provide the following fields: - **Elementary schema**: The name of your Elementary schema. Usually `[schema name]_elementary`. - **Role (optional)**: e.g. `ELEMENTARY_ROLE`. -Elementary cloud supports the user password and key pair authentication connection methods. +Elementary Cloud supports the user password and key pair authentication connection methods. - **User password**: - User: The user created for Elementary. diff --git a/docs/_snippets/guides/collect-job-data.mdx b/docs/_snippets/guides/collect-job-data.mdx index 85b8e38ea..b221413ea 100644 --- a/docs/_snippets/guides/collect-job-data.mdx +++ b/docs/_snippets/guides/collect-job-data.mdx @@ -15,7 +15,7 @@ The goal is to provide context that is useful to triage and resolve data issues, - The ID of a specific run execution: `job_run_id` - Job run results URL: `job_run_url` -## How Elementary collects jobs metadata? +## How Elementary collects jobs metadata #### Environment variables @@ -31,7 +31,7 @@ To configure `env_var` for your orchestrator, refer to your orchestrator's docs. Elementary also supports passing job metadata as dbt vars. If `env_var` and `var` exist, the `var` will be prioritized. -To pass job data to elementary using `var`, use the `--vars` flag in your invocations: +To pass job data to Elementary using `var`, use the `--vars` flag in your invocations: ```shell dbt run --vars '{"orchestrator": "Airflow", "job_name": "dbt_marketing_night_load"}' @@ -57,7 +57,7 @@ The following default environment variables are supported out of the box: | Github actions | orchestrator
job_run_id: `GITHUB_RUN_ID`
job_url: generated from `GITHUB_SERVER_URL`, `GITHUB_REPOSITORY`, `GITHUB_RUN_ID` | | Airflow | orchestrator | -## What if I use dbt cloud + orchestrator? +## What if I use dbt Cloud + orchestrator? By default, Elementary will collect the dbt cloud jobs info. If you wish to override that, change your dbt cloud invocations to pass the orchestrator job info using `--vars`: diff --git a/docs/_snippets/guides/dbt-source-freshness.mdx b/docs/_snippets/guides/dbt-source-freshness.mdx index 85bc288c3..eafe5ac37 100644 --- a/docs/_snippets/guides/dbt-source-freshness.mdx +++ b/docs/_snippets/guides/dbt-source-freshness.mdx @@ -1,5 +1,5 @@ Unlike dbt and Elementary tests, the results of the command `dbt source-freshness` are not automatically collected. -You can collect the results using Elementary CLI tool. +You can collect the results using the Elementary CLI tool. If dbt source freshness results are collected, they will be presented in the UI, and in alerts upon failure. @@ -21,7 +21,7 @@ This operation will upload the results to a table, and the execution of `edr mon - Note that `dbt source freshness` and `upload-source-freshness` needs to run from the same machine. - Note that `upload-source-freshness` requires passing `--project-dir` argument. -#### dbt cloud users +#### dbt Cloud users -The results can't be collected from dbt cloud. +The results can't be collected from dbt Cloud. Here is a [suggestion from an Elementary user](https://elementary-community.slack.com/archives/C02CTC89LAX/p1688113609829869) for a solution you can implement. \ No newline at end of file diff --git a/docs/_snippets/quickstart/quickstart-cards.mdx b/docs/_snippets/quickstart/quickstart-cards.mdx index 5d4ecca61..871de0acd 100644 --- a/docs/_snippets/quickstart/quickstart-cards.mdx +++ b/docs/_snippets/quickstart/quickstart-cards.mdx @@ -3,7 +3,7 @@ title="Elementary Cloud Platform" icon="cloud" iconType="solid" - href="https://elementary-data.frontegg.com/oauth/account/sign-up" + href="/cloud/introduction" >
Built on top of the OSS package, ideal for teams monitoring mission-critical data pipelines, requiring guaranteed uptime and reliability, short-time-to-value, advanced features, collaboration, and professional support. diff --git a/docs/cloud/features.mdx b/docs/cloud/features.mdx new file mode 100644 index 000000000..4a27276bb --- /dev/null +++ b/docs/cloud/features.mdx @@ -0,0 +1,6 @@ +--- +title: "Platform features" +icon: "browsers" +--- + + \ No newline at end of file diff --git a/docs/cloud/general/security-and-privacy.mdx b/docs/cloud/general/security-and-privacy.mdx index cd513679f..c665454bd 100644 --- a/docs/cloud/general/security-and-privacy.mdx +++ b/docs/cloud/general/security-and-privacy.mdx @@ -6,7 +6,7 @@ icon: "lock" ## Security highlights -Our product is designed with security and compliance in mind. +Our product is designed with security and privacy in mind. - Elementary Cloud does not have read access to raw data in your data warehouse. - Elementary Cloud only extracts and stores metadata, logs and aggregated metrics. diff --git a/docs/cloud/guides/collect-job-data.mdx b/docs/cloud/guides/collect-job-data.mdx index 72361e2b1..cfd3749bc 100644 --- a/docs/cloud/guides/collect-job-data.mdx +++ b/docs/cloud/guides/collect-job-data.mdx @@ -1,5 +1,5 @@ --- -title: "Collect jobs info from orchestrator" +title: "Collect Jobs Info From Orchestrator" sidebarTitle: "Collect jobs data" --- diff --git a/docs/cloud/guides/sync-scheduling.mdx b/docs/cloud/guides/sync-scheduling.mdx index 666c29b28..6e55c36da 100644 --- a/docs/cloud/guides/sync-scheduling.mdx +++ b/docs/cloud/guides/sync-scheduling.mdx @@ -1,10 +1,10 @@ --- -title: "Environment syncs schedule" +title: "Environment Syncs Schedule" --- ## Synchronizing the Elementary schema -The data on your Elementary cloud environments is updated by syncing the local Elementary schema from the data warehouse. +The data on your Elementary Cloud environments is updated by syncing the local Elementary schema from the data warehouse. There are 2 available scheduling options: @@ -24,9 +24,9 @@ In the _Schedule Settings_, you're provided with a webhook URL. Next, you will n -Heading to dbt Cloud, you can [create a webhook subscription](https://docs.getdbt.com/docs/deploy/webhooks#create-a-webhook-subscription) that would trigger a sync after your jobs are done. +Heading to dbt Cloud, you can [create a webhook subscription](https://docs.getdbt.com/docs/deploy/webhooks#create-a-webhook-subscription) that will trigger a sync after your jobs are done. -- Make sure the webhook is triggered on `Run completed` events +- Make sure the webhook is triggered on `Run completed` events. - Select **only** the main jobs of the relevant environment. Make sure to select only the main jobs of the relevant environment. Selecting all jobs will trigger a sync for each job, which may result in unnecessary updates and therefore increased cost on the data warehouse. diff --git a/docs/cloud/guides/troubleshoot.mdx b/docs/cloud/guides/troubleshoot.mdx index df12af1c3..3a808c2be 100644 --- a/docs/cloud/guides/troubleshoot.mdx +++ b/docs/cloud/guides/troubleshoot.mdx @@ -4,7 +4,7 @@ title: "Troubleshooting" ### I connected my data warehouse but I don't see any test results -If you already connected your data warehouse to Elementary but don't see anything in Elementary UI, there could be several reasons. +If you already connected your data warehouse to Elementary but are not seeing anything in the Elementary UI, there could be several reasons. 
 Try following these steps to troubleshoot:
@@ -18,15 +18,15 @@ Try following these steps to troubleshoot:
   - If you have, make sure the table was created as an incremental table (not a regular table or view).
   - If not, there is a materialization configuration in your `dbt_project.yml` file that overrides the package config. Remove it, and run `dbt run --select elementary --full-refresh` to recreate the tables. After that run `dbt test` again and check if there is data.
 
-**4. Still no data in the table? Reach out to the elementary team by starting an intercom chat from Elementary UI.**
+**4. Still no data in the table? Reach out to the Elementary team by starting an Intercom chat from the Elementary UI.**
 
 ### Column information cannot be retrieved
 
 This error can happen because of a few reasons:
 
-1. check that your elementary dbt package version is 0.12.0 or higher
-2. check that the user you are using to connect to your database has permission to access the information schema of all the schemas built or used by your dbt project
+1. Check that your Elementary dbt package version is 0.12.0 or higher.
+2. Check that the user you are using to connect to your database has permission to access the information schema of all the schemas built or used by your dbt project.
 
 For more information on the permissions required by each data warehouse:
 
@@ -39,3 +39,14 @@ For more information on the permissions required by each data warehouse:
 [Databricks](/cloud/integrations/dwh/databricks#permissions-and-security)
 
 [Postgres](/cloud/integrations/dwh/postgres#permissions-and-security)
+
+
+### How do I set up the table name of my Singular test?
+
+Singular tests are SQL queries that can reference more than one table, but are often intended to test logic that relates to one table in particular.
+To have that table name appear in the UI on the test results, test executions and other screens, add the following to the config block of your singular test file:
+```
+{{ config(
+    override_primary_test_model_id="model_name"
+) }}
+```
\ No newline at end of file
diff --git a/docs/cloud/integrations/alerts/ms-teams.mdx b/docs/cloud/integrations/alerts/ms-teams.mdx
index 542a1dfaa..9cfbbe15c 100644
--- a/docs/cloud/integrations/alerts/ms-teams.mdx
+++ b/docs/cloud/integrations/alerts/ms-teams.mdx
@@ -1,6 +1,128 @@
 ---
-title: "MS Teams (Beta)"
+title: "Microsoft Teams"
 ---
 
-Routing alerts to MS Teams is supported as a beta integration.
-Reach out to us to enable it for your instance!
\ No newline at end of file
+Elementary's Microsoft Teams integration enables sending alerts when data issues happen.
+
+The alerts include rich context, and you can create [alert rules](/features/alerts-and-incidents/alert-rules) to distribute alerts to different channels and destinations.
+
+
+
+ MS teams alert screenshot +
+ + +## Enabling Microsoft Teams alerts + +1. Go to the `Environments` page on the sidebar. +2. Select an environment and click connect on the `Connect messaging app` card (first card), and select `Microsoft Teams`. + + +
+ Connect messaging app +
+ + +3. For each MS Teams channel you connect to Elementary, you will need to create a Webhook. + + + 1. Go to a channel in your Team and choose `Manage channel` + + +
+ Teams manage channel +
+ + + +2. Click on `Edit` connectors. + + +
+ Teams edit connectors +
+ + +3. Search for `Incoming webhook` and choose `Add`. + + +
+ Teams add incoming webhook +
+ + +4. Choose `Add` again and add a name to your webhook, then click on `Create`. + + +
+ +
+ + +5. Copy the URL of the webhook. + + +
+ +
+ + + +
+
+4. Configure your Microsoft Teams webhooks, and give each one a name indicating its connected channel:
+
+
+ Provide webhooks +
+
+
+5. Select a default channel for alerts, and set the suppression interval.
+
+
+The default channel you select will automatically add a default [alert rule](/features/alerts-and-incidents/alert-rules)
+that sends all failures to this channel. Alerts on warnings are not sent by default. To modify or add rules, navigate to the `Alert Rules` page.
+
+
+
+ Select channel and suppression interval +
+
diff --git a/docs/cloud/integrations/alerts/opsgenie.mdx b/docs/cloud/integrations/alerts/opsgenie.mdx
index e78907cf8..74df5e663 100644
--- a/docs/cloud/integrations/alerts/opsgenie.mdx
+++ b/docs/cloud/integrations/alerts/opsgenie.mdx
@@ -2,14 +2,68 @@
 title: "Opsgenie"
 ---
 
-
-
-
-}
->
-  Click for details
-
\ No newline at end of file
+Elementary's Opsgenie integration enables sending alerts when data issues happen.
+
+It is recommended to create [alert rules](/features/alerts-and-incidents/alert-rules) to filter and select the alerts that will create incidents in Opsgenie.
+
+
+
+
+ Opsgenie alerts screen +
+ + + +
+ Opsgenie alerts detail +
+
+
+## Enabling Opsgenie alerts
+
+### Create an Opsgenie API key
+
+To create an `Opsgenie API key`, go to `Opsgenie` and follow these steps:
+
+- Create or select an `Opsgenie` team - this team will be responsible for alerts generated by Elementary.
+- On the selected team, go to the `Integrations` tab and press `Add Integration`:
+  - Select `API` and press `Add`
+  - Select a name for the `API integration` - “Elementary” for example
+  - Make sure `Create and update access` is selected
+  - Press `Save Integration`
+  - Copy the `API key` and provide it in the Elementary UI.
+
+### Add API key to an environment
+
+1. Go to the `Environments` page on the sidebar.
+
+2. Select an environment and click connect on the `Connect incident management tool` card (second card), and select `Opsgenie`.
+
+
+ Connect incident management tool +
+
+
+3. Fill in the `API key`, select the `API URL`, and save the integration:
+
+
+ Enter Opsgenie API key +
+
+
+4. `Opsgenie` will now be available as a destination on the [`alert rules`](/features/alerts-and-incidents/alert-rules) page. You can add rules to create Opsgenie incidents out of alerts that match your rules.
\ No newline at end of file
diff --git a/docs/cloud/integrations/alerts/pagerduty.mdx b/docs/cloud/integrations/alerts/pagerduty.mdx
index 331ef7fde..5b629aa6e 100644
--- a/docs/cloud/integrations/alerts/pagerduty.mdx
+++ b/docs/cloud/integrations/alerts/pagerduty.mdx
@@ -1,6 +1,45 @@
 ---
-title: "PagerDuty (Beta)"
+title: "PagerDuty"
 ---
 
-Routing alerts to PagerDuty is supported as a beta integration.
-Reach out to us to enable it for your instance!
\ No newline at end of file
+Elementary's PagerDuty integration enables sending alerts when data issues happen.
+
+It is recommended to create [alert rules](/features/alerts-and-incidents/alert-rules) to filter and select the alerts that will create incidents in PagerDuty.
+
+
+ PagerDuty Alerts +
+ + +## Enabling PagerDuty alerts + +1. Go to the `Environments` page on the sidebar. + +2. Select an environment and click connect on the `Connect incident management tool` card (second card), and select `PagerDuty`. + + +
+ Connect incident management tool +
+
+
+3. Authorize the Elementary app for your account. **This step may require admin approval.**
+
+
+ PagerDuty approval +
+
+
+4. `PagerDuty` will now be available as a destination on the [`alert rules`](/features/alerts-and-incidents/alert-rules) page. You can add rules to create PagerDuty incidents out of alerts that match your rules.
\ No newline at end of file
diff --git a/docs/cloud/integrations/alerts/slack.mdx b/docs/cloud/integrations/alerts/slack.mdx
index a364f3725..3738e79cf 100644
--- a/docs/cloud/integrations/alerts/slack.mdx
+++ b/docs/cloud/integrations/alerts/slack.mdx
@@ -2,13 +2,78 @@
 title: "Slack"
 ---
 
-
+Elementary's Slack integration enables sending Slack alerts when data issues happen.
 
-## Enable Slack alerts
+The alerts include rich context, and you can change the incident status and assignee from the alert itself.
+You can also create [alert rules](/features/alerts-and-incidents/alert-rules) to distribute alerts to different channels and destinations.
 
-On the environments page, select an environment and click `connect` on the **Connect Slack** card.
-After connecting your workspace, you will need to select a default channel for alerts.
+
+
+ Slack alert screenshot +
+ -## Alerts configuration +## Enabling Slack alerts - \ No newline at end of file +1. Go to the `Environments` page on the sidebar. +2. Select an environment and click connect on the `Connect messaging app` card (first card), and select `Slack`. + + +
+ Connect messaging app +
+
+
+3. Authorize the Elementary app for your workspace. **This step may require workspace admin approval.**
+
+
+ Select Slack channel and alert suppression +
+
+
+4. Select a default channel for alerts, and set the suppression interval.
+
+
+The default channel you select will automatically add a default [alert rule](/features/alerts-and-incidents/alert-rules)
+that sends all failures to this channel. Alerts on warnings are not sent by default. To modify or add rules, navigate to the `Alert Rules` page.
+
+
+
+ Select Slack channel and alert suppression +
+
+
+
+## Alerts to private channels
+
+If the channel you want to send alerts to is private (🔒), it will not appear in the channels dropdown on the onboarding or alert rules screens.
+
+You will need to invite the Elementary bot to the private channel by typing `@Elementary` in the channel and clicking to invite the bot; the channel will then appear in the UI.
+
+
+ Add Elementary to private channel +
+
\ No newline at end of file
diff --git a/docs/cloud/integrations/bi/tableau.mdx b/docs/cloud/integrations/bi/tableau.mdx
index 9a1a5a362..5c61b2407 100644
--- a/docs/cloud/integrations/bi/tableau.mdx
+++ b/docs/cloud/integrations/bi/tableau.mdx
@@ -3,7 +3,7 @@ title: "Tableau"
 ---
 
 After you connect Tableau, Elementary will automatically and continuously extend the lineage to the dashboard level.
-This will provide you end-to-end data lineage to understand your downstream dependencies, called exposures.
+This will provide end-to-end data lineage to help you understand your downstream dependencies, called exposures.
 
 In order for Elementary to extract your metadata from Tableau you must meet all of the Tableau Metadata GraphQL requirements (most are set by default):
 
@@ -15,7 +15,7 @@ In order for Elementary to extract your metadata from Tableau you must meet all
 
 ## Tableau Cloud
 
-### Creating Personal Access Token
+### Creating a Personal Access Token
 
 Create a Personal Access Token in Tableau. For details on how to create a user token please refer to the **[Tableau guide](https://help.tableau.com/current/pro/desktop/en-us/useracct.htm#create-a-personal-access-token)**.
 
diff --git a/docs/cloud/integrations/code-repo/github.mdx b/docs/cloud/integrations/code-repo/github.mdx
index 67b37af4a..1cca0f38a 100644
--- a/docs/cloud/integrations/code-repo/github.mdx
+++ b/docs/cloud/integrations/code-repo/github.mdx
@@ -4,8 +4,16 @@ title: "Github"
 
 Elementary connects to the code repository where your dbt project code is managed, and opens PRs with configuration changes.
 
+### Recommended: Connect using the Elementary Github App
+
+Click the blue button that says "Connect with Elementary Github App" and follow the instructions.
+In the menu that opens, select the repository where your dbt project is stored, and, if needed, the branch and path to the dbt project.
+
 ### Create a Github [fine-grained token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-fine-grained-personal-access-token)
 
+If you prefer, you can connect to Github using a fine-grained token managed by your team instead.
+Here is how you can create one:
+
 1. In the upper-right corner of any page, click your profile photo, then click **Settings**.
 2. On the bottom of the left sidebar, click **Developer settings**.
 3. On the left sidebar, select **Personal access tokens > Fine-grained tokens**.
diff --git a/docs/cloud/introduction.mdx b/docs/cloud/introduction.mdx
index 0ced5c12c..e593ff0b6 100644
--- a/docs/cloud/introduction.mdx
+++ b/docs/cloud/introduction.mdx
@@ -1,20 +1,44 @@
 ---
-title: "Elementary Cloud"
+title: "Elementary Cloud Platform"
 sidebarTitle: "Introduction"
 icon: "cloud"
 ---
 
-
+**Elementary is a data observability platform tailored for dbt-first data organizations.**
 
-
-
-  Start 30 days free trial, no credit card is required.
-
-
+The unique dbt-native architecture seamlessly integrates into engineers' workflows, ensuring ease of use and smooth adoption.
+The platform provides out-of-the-box monitoring for critical issues, tools to effortlessly increase coverage, and integrations for end-to-end visibility across the data stack.
 
-
+Elementary promotes ownership and collaboration on incidents, and enables the whole data organization to take an active role in the data quality process.
+By automatically measuring and tracking data health, it helps teams transition from reactive firefighting to proactively communicating data health to consumers and stakeholders. + + + + + +## Cloud Platform Features + + + +## Architecture and Security + + + +Our product is designed with [Security and Privacy](/cloud/general/security-and-privacy) in mind. + + +**SOC 2 certification:** Elementary is SOC2 type II certified! + + + +## How to Start? + + + + \ No newline at end of file diff --git a/docs/cloud/onboarding/signup.mdx b/docs/cloud/onboarding/signup.mdx index 04b510088..dea505618 100644 --- a/docs/cloud/onboarding/signup.mdx +++ b/docs/cloud/onboarding/signup.mdx @@ -6,7 +6,7 @@ icon: "square-1" - [Signup to Elementary](https://elementary-data.frontegg.com/oauth/account/sign-up) using Google SSO or email. + [Sign up to Elementary](https://elementary-data.frontegg.com/oauth/account/sign-up) using Google SSO or email. If you are interested in advanced authentication such as MFA, Okta SSO, Microsoft AD - please contact us at cloud@elementary-data.com diff --git a/docs/data-tests/anomaly-detection-configuration/anomaly-params.mdx b/docs/data-tests/anomaly-detection-configuration/anomaly-params.mdx index 0765fd48e..4a9951d8b 100644 --- a/docs/data-tests/anomaly-detection-configuration/anomaly-params.mdx +++ b/docs/data-tests/anomaly-detection-configuration/anomaly-params.mdx @@ -36,6 +36,9 @@ sidebarTitle: "All configuration params"     period: [hour | day | week | month]     count: int + dimension_anomalies, column_anomalies, all_columns_anomalies tests: + -- dimensions: sql expression + volume_anomalies test: -- fail_on_zero: [true | false] @@ -45,7 +48,7 @@ sidebarTitle: "All configuration params" -- exclude_regexp: regex dimension_anomalies test: - -- dimensions: sql expression + -- exclude_final_results: [SQL where expression on fields value / average] event_freshness_anomalies: -- event_timestamp_column: column name diff --git a/docs/data-tests/anomaly-detection-configuration/column-anomalies.mdx b/docs/data-tests/anomaly-detection-configuration/column-anomalies.mdx index 3614c1d6b..0eb68820e 100644 --- a/docs/data-tests/anomaly-detection-configuration/column-anomalies.mdx +++ b/docs/data-tests/anomaly-detection-configuration/column-anomalies.mdx @@ -8,7 +8,7 @@ sidebarTitle: "column_anomalies" Select which monitors to activate as part of the test. - _Default: default monitors_ -- _Relevant tests: `all_column_anomalies`, `column_anomalies`_ +- _Relevant tests: `all_columns_anomalies`, `column_anomalies`_ - _Configuration level: test_ diff --git a/docs/data-tests/anomaly-detection-configuration/dimensions.mdx b/docs/data-tests/anomaly-detection-configuration/dimensions.mdx index d5218e3b6..748ae262f 100644 --- a/docs/data-tests/anomaly-detection-configuration/dimensions.mdx +++ b/docs/data-tests/anomaly-detection-configuration/dimensions.mdx @@ -5,15 +5,18 @@ sidebarTitle: "dimensions" `dimensions: [list of SQL expressions]` -Configuration for the tests `dimension_anomalies`, `column_anomalies` and `all_columns_anomalies`. -The test counts rows grouped by given column / columns / valid select sql expression. +The test will group the results by a given column / columns / valid select sql expression. Under `dimensions` you can configure the group by expression. -This test monitors the frequency of values in the configured dimension over time, and alerts on unexpected changes in the distribution. -It is best to configure it on low-cardinality fields. 
+Using this param segments the tested data per dimension, and each dimension is monitored separately.
+
+For example:
+A `column_anomalies` test monitoring for `null_rate` with `dimensions` configured will monitor the
+`null_rate` of values in the column, grouped by dimension, and will fail if there is an anomaly in `null_rate` in a specific dimension.
+It is best to configure low-cardinality fields as `dimensions`.
 
 - _Default: None_
-- _Relevant tests: `dimension_anomalies`_
+- _Relevant tests: `dimension_anomalies`, `column_anomalies`, `all_columns_anomalies`_
 - _Configuration level: test_
 
diff --git a/docs/data-tests/anomaly-detection-configuration/exclude-final-results.mdx b/docs/data-tests/anomaly-detection-configuration/exclude-final-results.mdx
index fac867d5a..3152cf031 100644
--- a/docs/data-tests/anomaly-detection-configuration/exclude-final-results.mdx
+++ b/docs/data-tests/anomaly-detection-configuration/exclude-final-results.mdx
@@ -5,15 +5,15 @@ sidebarTitle: "exclude_final_results"
 
 `exclude_final_results: [SQL where expression on fields value / average]`
 
-Failures in dimension anomaly tests consist of outliers in row counts across all dimensions during the training period.
-Some dimensions may contribute metrics that are considered insignificant compared to others, and you may prefer not to receive alerts for them.
-With this parameter, you can disregard such failures.
+Failures in dimension anomaly tests consist of outliers in the row count of each dimension.
+Some dimensions may be considered insignificant compared to others, and you may prefer not to receive alerts for them.
+With this parameter, you can exclude these dimensions from the results set and avoid such failures.
 
-1. `value` - Outlier row count of a dimension during the detection period.
+1. `value` - Max row count of a dimension during the detection period.
 2. `average` - The average rows count of a dimension during the training period.
 
 - _Supported values: valid SQL where expression on the columns value / average_
-- _Relevant tests: Dimension anomalies _
+- _Relevant tests: Dimension anomalies_
 
diff --git a/docs/data-tests/anomaly-detection-configuration/exclude_prefix.mdx b/docs/data-tests/anomaly-detection-configuration/exclude_prefix.mdx
index ae9b3c2e8..56109822b 100644
--- a/docs/data-tests/anomaly-detection-configuration/exclude_prefix.mdx
+++ b/docs/data-tests/anomaly-detection-configuration/exclude_prefix.mdx
@@ -8,7 +8,7 @@ sidebarTitle: "exclude_prefix"
 Param for the `all_columns_anomalies` test only, which enables to exclude a column from the tests based on prefix match.
 
 - _Default: None_
-- _Relevant tests: `all_column_anomalies`_
+- _Relevant tests: `all_columns_anomalies`_
 - _Configuration level: test_
 
diff --git a/docs/data-tests/anomaly-detection-configuration/exclude_regexp.mdx b/docs/data-tests/anomaly-detection-configuration/exclude_regexp.mdx
index 8bc02fcaf..02f27769b 100644
--- a/docs/data-tests/anomaly-detection-configuration/exclude_regexp.mdx
+++ b/docs/data-tests/anomaly-detection-configuration/exclude_regexp.mdx
@@ -8,7 +8,7 @@ sidebarTitle: "exclude_regexp"
 Param for the `all_columns_anomalies` test only, which enables to exclude a column from the tests based on regular expression match.
- _Default: None_ -- _Relevant tests: `all_column_anomalies`_ +- _Relevant tests: `all_columns_anomalies`_ - _Configuration level: test_ diff --git a/docs/data-tests/anomaly-detection-tests/all-columns-anomalies.mdx b/docs/data-tests/anomaly-detection-tests/all-columns-anomalies.mdx index 613a45d2e..546ea6ebb 100644 --- a/docs/data-tests/anomaly-detection-tests/all-columns-anomalies.mdx +++ b/docs/data-tests/anomaly-detection-tests/all-columns-anomalies.mdx @@ -24,7 +24,7 @@ No mandatory configuration, however it is highly recommended to configure a `tim   -- elementary.all_columns_anomalies:     timestamp_column: column name     column_anomalies: column monitors list -     dimensions: list +     dimensions: sql expression     exclude_prefix: string     exclude_regexp: regex     where_expression: sql expression diff --git a/docs/data-tests/anomaly-detection-tests/column-anomalies.mdx b/docs/data-tests/anomaly-detection-tests/column-anomalies.mdx index e48b157fc..fd88dbab9 100644 --- a/docs/data-tests/anomaly-detection-tests/column-anomalies.mdx +++ b/docs/data-tests/anomaly-detection-tests/column-anomalies.mdx @@ -22,7 +22,7 @@ No mandatory configuration, however it is highly recommended to configure a `tim tests:   -- elementary.column_anomalies:     column_anomalies: column monitors list -     dimensions: list +     dimensions: sql expression     timestamp_column: column name     where_expression: sql expression     anomaly_sensitivity: int diff --git a/docs/data-tests/anomaly-detection-tests/volume-anomalies.mdx b/docs/data-tests/anomaly-detection-tests/volume-anomalies.mdx index fffea01c1..b564514fc 100644 --- a/docs/data-tests/anomaly-detection-tests/volume-anomalies.mdx +++ b/docs/data-tests/anomaly-detection-tests/volume-anomalies.mdx @@ -24,7 +24,7 @@ No mandatory configuration, however it is highly recommended to configure a `tim
  
   tests:
-      -- elementary.volume_anomalies:
+      - elementary.volume_anomalies:
           timestamp_column: column name
           where_expression: sql expression
           anomaly_sensitivity: int
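+
+For reference, a filled-in version of the configuration above might look like the following sketch (the model name, column names and values are hypothetical):
+
+```yaml
+models:
+  - name: orders
+    tests:
+      - elementary.volume_anomalies:
+          # column used to bucket rows over time
+          timestamp_column: updated_at
+          # optionally narrow the monitored rows
+          where_expression: "order_status != 'test'"
+          anomaly_sensitivity: 3
+```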
diff --git a/docs/data-tests/how-anomaly-detection-works.mdx b/docs/data-tests/how-anomaly-detection-works.mdx
index 4a2817968..f2c2663f1 100644
--- a/docs/data-tests/how-anomaly-detection-works.mdx
+++ b/docs/data-tests/how-anomaly-detection-works.mdx
@@ -54,7 +54,7 @@ If a value in the detection set is an outlier to the expected range, it will be
 ### Expected range
 
-Based of the values in the training test, we calculate an expected range for the monitor.
+Based on the values in the training set, we calculate an expected range for the monitor.
-Each data point in the detection period will be compared to the expected range calculated based on it’s training set.
+Each data point in the detection period will be compared to the expected range calculated based on its training set.
 
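+
+As a rough sketch (an assumption based on the `anomaly_sensitivity` parameter, not an exact specification of the model): if the training set has mean $\mu$ and standard deviation $\sigma$, a value is within the expected range when
+
+$$
+\mu - s \cdot \sigma \leq \text{value} \leq \mu + s \cdot \sigma
+$$
+
+where $s$ is the configured sensitivity.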
 ### Training period
 
diff --git a/docs/data-tests/introduction.mdx b/docs/data-tests/introduction.mdx
index 1f28c4238..f844d255b 100644
--- a/docs/data-tests/introduction.mdx
+++ b/docs/data-tests/introduction.mdx
@@ -41,7 +41,7 @@ Tests to detect anomalies in data quality metrics such as volume, freshness, nul
   title="Event freshness anomalies"
   href="/data-tests/anomaly-detection-tests/event-freshness-anomalies"
 >
-  Monitors the gap between the latest event timestamp and it's loading time, to
+  Monitors the gap between the latest event timestamp and its loading time, to
   detect event freshness issues.
 
 
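+
+As a hedged sketch of how the test behind this card might be configured (the parameter name follows the anomaly detection configuration pages; the model and column names are assumptions):
+
+```yaml
+models:
+  - name: raw_events
+    tests:
+      - elementary.event_freshness_anomalies:
+          # when the event actually occurred, per the event payload
+          event_timestamp_column: event_time
+```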
diff --git a/docs/features/alerts-and-incidents/alert-configuration.mdx b/docs/features/alerts-and-incidents/alert-configuration.mdx
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/cloud/guides/alert-rules.mdx b/docs/features/alerts-and-incidents/alert-rules.mdx
similarity index 94%
rename from docs/cloud/guides/alert-rules.mdx
rename to docs/features/alerts-and-incidents/alert-rules.mdx
index 3716c2893..199820163 100644
--- a/docs/cloud/guides/alert-rules.mdx
+++ b/docs/features/alerts-and-incidents/alert-rules.mdx
@@ -2,7 +2,7 @@
 title: "Alert rules"
 ---
 
-Elementary cloud allows you to create rules that route your alerts.
+Elementary Cloud allows you to create rules that route your alerts.
 Each rule is a combination of a filter and a destination.
 
 The Slack channel you choose when connecting your Slack workspace is automatically added as a default alert rule, that sends all the alerts to that channel without any filtering.
diff --git a/docs/features/alerts-and-incidents/alerts-and-incidents-overview.mdx b/docs/features/alerts-and-incidents/alerts-and-incidents-overview.mdx
new file mode 100644
index 000000000..c20f9a1f0
--- /dev/null
+++ b/docs/features/alerts-and-incidents/alerts-and-incidents-overview.mdx
@@ -0,0 +1,37 @@
+---
+title: Alerts and Incidents Overview
+sidebarTitle: Alerts & incidents overview
+---
+
+
+
+Alerts and incidents in Elementary are designed to shorten your time to response and time to resolution when data issues occur.
+
+- **Alert -** Notification about an event that indicates a data issue.
+- **[Incident](/features/alerts-and-incidents/incidents) -** A data issue that starts with an event, but can include several events grouped into an incident. An incident has a start time, status, severity, assignee and end time.
+
+Alerts provide information and context for recipients to quickly triage, prioritize and resolve issues. 
+For collaboration and promoting ownership, alerts include owners and tags. 
+You can create distribution rules to route alerts to the relevant people and channels, for faster response. 
+
+An alert will either open a new incident, or be automatically grouped into an ongoing incident.
+From the alert itself, you can update the status and assignee of an incident. In the [incidents page](/features/alerts-and-incidents/incident-management),
+you will be able to track all open and historical incidents, and get metrics on the quality of your response.
+
+## Alerts & incidents core functionality 
+
+- **Alert distribution rules** - Route alerts to the relevant channels and destinations using filters.
+- **Incident status and assignee** - Set the status and assignee of an incident, directly from the alert or from the incidents page.
+- **Owners and subscribers** - Tag the owners and subscribers of the impacted assets, so the right people are informed.
+- **Severity and tags** - Prioritize and route alerts using severity and tags.
+- **Alerts customization** - Configure the content and grouping of alerts.
+- **Group alerts to incidents** - Related failures are automatically grouped into a single incident.
+- **Alerts suppression** - Set a suppression interval to avoid duplicate alerts.
+
+## Alert types
+
+
+
+## Supported alert integrations
+
+
\ No newline at end of file
diff --git a/docs/features/alerts-and-incidents/effective-alerts-setup.mdx b/docs/features/alerts-and-incidents/effective-alerts-setup.mdx
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/features/alerts-and-incidents/incident-management.mdx b/docs/features/alerts-and-incidents/incident-management.mdx
new file mode 100644
index 000000000..dac4ae81a
--- /dev/null
+++ b/docs/features/alerts-and-incidents/incident-management.mdx
@@ -0,0 +1,54 @@
+---
+title: Incident Management
+sidebarTitle: Incident management
+---
+
+
+
+The `Incidents` page is designed to enable your team to stay on top of open incidents and collaborate on resolving them.
+The page gives a comprehensive overview of all current and previous incidents, where users can view the status, prioritize, assign and resolve incidents.
+
+## Incidents view and filters
+
+The page provides a view of all incidents, and useful filters:
+
+- **Quick Filters:** Preset quick filters for all, unresolved and “open and unassigned” incidents.
+- **Filter:** Allows users to filter incidents based on various criteria such as status, severity, model name and assignee.
+- **Time frame:** Filter incidents that were open during a certain time frame.
+
+
+
+
+## Interacting with Incidents
+
+An incident has a status, assignee and severity.
+These can be set in the Incidents page, or from an alert in integrations that support alert actions.
+
+- **Incident status**: Will be set to `Open` by default, and can be changed to `Acknowledged` and back to `Open`. When an incident is manually or automatically set to `Resolved`, it will close and will no longer be modified.
+- **Incident assignee**: An incident can be assigned to any user on the team, and they will be notified.
+    - If you assign an incident to a user, it is recommended to leave the incident `Open` until the user changes status to `Acknowledged`.
+- **Incident severity**: Users can set a severity level (High, Low, Normal, Critical) for an incident. _Coming soon:_ Severity will be automated based on an analysis of the impacted assets.
+
+## Incidents overview and metrics
+
+The top bar of the page presents aggregated metrics on incidents, to provide an overall status.
+You will also be able to track your average resolution time.
+
+_Coming soon:_ The option to create and share a periodic summary of incidents.
+
+
+
+ Incidents overview +
+
\ No newline at end of file
diff --git a/docs/features/alerts-and-incidents/incidents.mdx b/docs/features/alerts-and-incidents/incidents.mdx
new file mode 100644
index 000000000..910827430
--- /dev/null
+++ b/docs/features/alerts-and-incidents/incidents.mdx
@@ -0,0 +1,53 @@
+---
+title: Incidents in Elementary
+sidebarTitle: Incidents
+---
+
+
+
+One of the challenges data teams face is tracking, understanding, and collaborating on the status of data issues.
+Tests fail daily, pipelines are executed frequently, alerts are sent to different channels.
+There is a need for a centralized place to track:
+- What data issues are open? Which issues were already resolved?
+- Who is on it, and what's the latest status?
+- Are multiple failures part of the same issue?
+- What actions and events happened since the incident started?
+- Did such an issue happen before? Who resolved it and how?
+
+In Elementary, these are solved with `Incidents`.
+
+A comprehensive view of all incidents can be found in the [Incidents page](/features/alerts-and-incidents/incident-management).
+
+## How incidents work
+
+Every failure or warning in Elementary will automatically open a new incident or be added as an event to an ongoing incident.
+Based on grouping rules, different failures are grouped into the same incident.
+
+An incident has a [status, assignee and severity](/features/alerts-and-incidents/incident-management#interacting-with-incidents).
+These can be set in the [Incidents page](/features/alerts-and-incidents/incident-management), or from an alert in integrations that support alert actions.
+
+
+ Elementary Incidents +
+
+
+## How incidents are resolved
+
+Each incident starts at the first failure, and ends when the status is changed manually or automatically to `Resolved`.
+An incident is **automatically resolved** when the failing tests, monitors, and/or models are successful again.
+
+## Incident grouping rules
+
+Different failures and warnings are grouped into the same incident by the following grouping rules:
+
+1. Additional failures of the same test / monitor on a table that has an active incident.
+2. _Coming soon:_ Freshness and volume issues that are downstream of an open incident on a model failure.
+3. _Coming soon:_ Failures of the same test / monitor that are on downstream tables of an active incident.
+
+## Incident deep dive
+
+_Coming soon_
diff --git a/docs/features/alerts-and-incidents/owners-and-subscribers.mdx b/docs/features/alerts-and-incidents/owners-and-subscribers.mdx
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/features/anomaly-detection/automated-freshness.mdx b/docs/features/anomaly-detection/automated-freshness.mdx
new file mode 100644
index 000000000..d6bd1e775
--- /dev/null
+++ b/docs/features/anomaly-detection/automated-freshness.mdx
@@ -0,0 +1,6 @@
+---
+title: Automated Freshness Monitor
+sidebarTitle: "Automated freshness"
+---
+
+_🚧 Under construction 🚧_
\ No newline at end of file
diff --git a/docs/features/anomaly-detection/automated-monitors.mdx b/docs/features/anomaly-detection/automated-monitors.mdx
new file mode 100644
index 000000000..0a459495f
--- /dev/null
+++ b/docs/features/anomaly-detection/automated-monitors.mdx
@@ -0,0 +1,27 @@
+---
+title: Automated Freshness & Volume Monitors
+sidebarTitle: "Introduction"
+---
+
+
+
+
+
+### How it works
+
+The monitors collect metadata, and the [anomaly detection model](cloud/features/anomaly-detection/monitors-overview#how-anomaly-detection-works) adjusts based on update frequency, seasonality and trends.
+
+As soon as you connect Elementary Cloud Platform to your data warehouse, a backfill process will begin to collect historical metadata.
+Within an average of a few hours, your automated monitors will be operational.
+
+You can fine-tune the [configuration](cloud/features/anomaly-detection/monitors-configuration) and [provide feedback](cloud/features/anomaly-detection/monitors-feedback) to adjust the detection to your needs.
+
+As views are stateless, automated volume and freshness monitors only apply to tables.
+
+## Automated Monitors
+
+
+
+## Alerts on Failures
+
+_🚧 Under construction 🚧_
\ No newline at end of file
diff --git a/docs/features/anomaly-detection/automated-volume.mdx b/docs/features/anomaly-detection/automated-volume.mdx
new file mode 100644
index 000000000..5ccf6e8ea
--- /dev/null
+++ b/docs/features/anomaly-detection/automated-volume.mdx
@@ -0,0 +1,6 @@
+---
+title: Automated Volume Monitor
+sidebarTitle: "Automated volume"
+---
+
+_🚧 Under construction 🚧_
\ No newline at end of file
diff --git a/docs/features/anomaly-detection/disable-or-mute-monitors.mdx b/docs/features/anomaly-detection/disable-or-mute-monitors.mdx
new file mode 100644
index 000000000..7c3d52cb8
--- /dev/null
+++ b/docs/features/anomaly-detection/disable-or-mute-monitors.mdx
@@ -0,0 +1,6 @@
+---
+title: Mute or Delete Monitors
+sidebarTitle: "Mute or delete"
+---
+
+_🚧 Under construction 🚧_
\ No newline at end of file
diff --git a/docs/features/anomaly-detection/monitors-configuration.mdx b/docs/features/anomaly-detection/monitors-configuration.mdx
new file mode 100644
index 000000000..e5970c29e
--- /dev/null
+++ b/docs/features/anomaly-detection/monitors-configuration.mdx
@@ -0,0 +1,6 @@
+---
+title: Monitors Configuration
+sidebarTitle: "Monitors configuration"
+---
+
+_🚧 Under construction 🚧_
\ No newline at end of file
diff --git a/docs/features/anomaly-detection/monitors-feedback.mdx b/docs/features/anomaly-detection/monitors-feedback.mdx
new file mode 100644
index 000000000..b2e270eb2
--- /dev/null
+++ b/docs/features/anomaly-detection/monitors-feedback.mdx
@@ -0,0 +1,6 @@
+---
+title: Monitors Feedback
+sidebarTitle: "Monitors feedback"
+---
+
+_🚧 Under construction 🚧_
\ No newline at end of file
diff --git a/docs/features/anomaly-detection/monitors-overview.mdx b/docs/features/anomaly-detection/monitors-overview.mdx
new file mode 100644
index 000000000..aa25ca4c3
--- /dev/null
+++ b/docs/features/anomaly-detection/monitors-overview.mdx
@@ -0,0 +1,32 @@
+---
+title: Anomaly Detection Monitors
+sidebarTitle: "Monitors overview"
+---
+
+
+
+ML-powered anomaly detection monitors automatically identify outliers and unexpected patterns in your data.
+These are useful for detecting issues such as incomplete data, delays, a drop in a specific dimension, or a spike in null values.
+
+Elementary offers two types of monitors:
+
+- **Automated Monitors** - Out-of-the-box monitors that are activated automatically and query metadata only.
+- **Opt-in Monitors** - Monitors that query raw data and require configuration.
+
+## [Automated monitors](/features/anomaly-detection/automated-monitors)
+
+
+
+
+
+## Opt-in monitors
+
+_Coming soon_
+
+## How anomaly detection works
+
+_🚧 Under construction 🚧_
+
+## Monitor results
+
+_🚧 Under construction 🚧_
\ No newline at end of file
diff --git a/docs/features/anomaly-detection/opt-in-monitors.mdx b/docs/features/anomaly-detection/opt-in-monitors.mdx
new file mode 100644
index 000000000..114c9a4d8
--- /dev/null
+++ b/docs/features/anomaly-detection/opt-in-monitors.mdx
@@ -0,0 +1,9 @@
+---
+title: Opt-In Monitors
+sidebarTitle: "Opt-in monitors"
+---
+
+_Coming Soon_
+
+For now, please refer to the [Elementary Anomaly Detection dbt tests](/data-tests/introduction#anomaly-detection-tests).
+ diff --git a/docs/features/automated-monitors.mdx b/docs/features/automated-monitors.mdx deleted file mode 100644 index c963763a2..000000000 --- a/docs/features/automated-monitors.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Automated freshness, volume and schema monitoring -sidebarTitle: "Automated Monitors" -icon: "wand-magic-sparkles" ---- - - - -Elementary offers out-of-the-box automated monitors to detect freshness, volume and schema issues. -This provides broad coverage and a basic level of observability, without any configuration effort. - -Additionally, these monitors will not increase compute costs as they leverage only warehouse metadata (information schema, query history). - -The monitors are trained on historical metadata, and adjust based on updates frequency, seasonality and trends. - -As views are stateless, automated volume and freshness monitors only apply on tables. - - - Elementary Automated Monitors - - -## Supported automated monitors - -### Volume - -Monitors how much data was added / removed / updated to the table with each update. -The monitor alerts you if there is an unexpected drop or spike in rows. - -### Freshness - -Monitors how frequently a table is updated, and alerts you if there is an unexpected delay. - -### Schema changes - -_Coming soon_ diff --git a/docs/features/ci.mdx b/docs/features/ci.mdx index c17753e90..588ea4df6 100644 --- a/docs/features/ci.mdx +++ b/docs/features/ci.mdx @@ -1,7 +1,6 @@ --- title: "Elementary CI" sidebarTitle: "Elementary CI" -icon: "code-pull-request" --- @@ -15,7 +14,7 @@ You'll also be able to see if any of your dbt tests are failing or your models a -Elementary CI automations will help you make changes with confidence and seeing the full picture before merging your pull request. +Elementary CI automations help you make changes with confidence by providing a comprehensive view before merging your pull request. ## Want to join the beta? diff --git a/docs/features/catalog.mdx b/docs/features/collaboration-and-communication/catalog.mdx similarity index 84% rename from docs/features/catalog.mdx rename to docs/features/collaboration-and-communication/catalog.mdx index 5f61a58b4..3004b4018 100644 --- a/docs/features/catalog.mdx +++ b/docs/features/collaboration-and-communication/catalog.mdx @@ -1,13 +1,11 @@ --- title: "Data Catalog" -icon: "folder-tree" -iconType: "solid" --- On the Catalog tab you can now explore your datasets information - descriptions, columns, columns descriptions, latest update time and datasets health. -From the dataset you can navigate directly to it’s lineage and test results. +From the dataset you can navigate directly to its lineage and test results. The catalog content is generated from the descriptions you maintain in your dbt project YML files. diff --git a/docs/features/data-observability-dashboard.mdx b/docs/features/collaboration-and-communication/data-observability-dashboard.mdx similarity index 97% rename from docs/features/data-observability-dashboard.mdx rename to docs/features/collaboration-and-communication/data-observability-dashboard.mdx index 7dd10b357..afb567124 100644 --- a/docs/features/data-observability-dashboard.mdx +++ b/docs/features/collaboration-and-communication/data-observability-dashboard.mdx @@ -1,6 +1,5 @@ --- title: Data Observability Dashboard -icon: "browsers" --- Managing data systems can be a complex task, especially when there are hundreds (or even thousands) of models being orchestrated separately across multiple DAGs. 
 These models serve different data consumers, including internal stakeholders, clients, and reverse-ETL pipelines.
diff --git a/docs/features/config-as-code.mdx b/docs/features/config-as-code.mdx
index 804899bdf..de7526a49 100644
--- a/docs/features/config-as-code.mdx
+++ b/docs/features/config-as-code.mdx
@@ -1,11 +1,10 @@
 ---
-title: "Configuration as Code"
-icon: "code"
+title: "Configuration-as-Code"
 ---
 
-All Elementary configuration is managed in your dbt code.
+All Elementary configurations are managed in your dbt code.
 Configuring observability becomes a part of the development process that includes version control, continuous integration, and a review process.
 
-In Elementary Cloud, you can save time by adding tests in bulk from the UI that will be added to your code. Additionally, you can allow data analysts to create quality tests without writing any code. Elementary will take care of it for them and open pull requests for them.
+In Elementary Cloud, you can save time by adding tests in bulk from the UI that will be added to your code. Additionally, you can allow data analysts to create quality tests without writing any code. Elementary will take care of it and open pull requests on their behalf.
diff --git a/docs/features/data-governance/define-ownership.mdx b/docs/features/data-governance/define-ownership.mdx
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/features/data-governance/leverage-tags.mdx b/docs/features/data-governance/leverage-tags.mdx
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/features/data-governance/overview-and-best-practices.mdx b/docs/features/data-governance/overview-and-best-practices.mdx
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/features/column-level-lineage.mdx b/docs/features/data-lineage/column-level-lineage.mdx
similarity index 63%
rename from docs/features/column-level-lineage.mdx
rename to docs/features/data-lineage/column-level-lineage.mdx
index d94b067ed..a88474b56 100644
--- a/docs/features/column-level-lineage.mdx
+++ b/docs/features/data-lineage/column-level-lineage.mdx
@@ -1,38 +1,42 @@
 ---
-title: Column Level Lineage
-sidebarTitle: Column Level Lineage
+title: Column-Level Lineage
+sidebarTitle: Column level lineage
 ---
 
+
 The table nodes in Elementary lineage can be expanded to show the columns.
 When you select a column, the lineage of that specific column will be highlighted.
 
-Column level lineage is useful for answering questions such as:
+Column-level lineage is useful for answering questions such as:
 
 - Which downstream columns are actually impacted by a data quality issue?
 - Can we deprecate or rename a column?
 - Will changing this column impact a dashboard?
 
-
-  Elementary Column Level Lineage
-
-
 ### Filter and highlight columns path
 
-To help navigate graphs with large amount of columns per table, use the `...` menu right to the column:
+To help navigate graphs with a large number of columns per table, use the `...` menu to the right of the column:
+
+- **Filter**: Will show a graph of only the selected column and its dependencies.
+- **Highlight**: Will highlight only the selected column and its dependencies.
 
-- **Filter**: Will show a graph of only the selected column and it's dependencies.
-- **Highlight**: Will highlight only the selected column and it's dependencies.
+
 
-### Column level lineage generation
+### Column-level lineage generation
 
 Elementary parses SQL queries to determine the dependencies between columns.
Note that the lineage includes only the columns that directly contribute data to the target column. -For example for the query: +For example, for the query: ```sql create or replace table db.schema.users as @@ -46,4 +50,4 @@ where user_type != 'test_user' The direct dependency of `total_logins` is `login_events.login_time`. The column `login_events.user_type` filters the data of `total_logins`, but it is an indirect dependency and will not show in lineage. -If you want a different approach in your Elementary Cloud instance - Contact us. +If you want a different approach in your Elementary Cloud instance - contact us. diff --git a/docs/features/exposures-lineage.mdx b/docs/features/data-lineage/exposures-lineage.mdx similarity index 53% rename from docs/features/exposures-lineage.mdx rename to docs/features/data-lineage/exposures-lineage.mdx index c99f90666..634427c2d 100644 --- a/docs/features/exposures-lineage.mdx +++ b/docs/features/data-lineage/exposures-lineage.mdx @@ -1,17 +1,17 @@ --- -title: Lineage to Downstream Dashboards -sidebarTitle: BI Integrations +title: Lineage to Downstream Dashboards and Tools +sidebarTitle: Lineage to BI --- Some of your data is used downstream in dashboards, applications, data science pipelines, reverse ETLs, etc. These downstream data consumers are called _exposures_. -Elementary lineage graph presents downstream exposures of two origins: +The Elementary lineage graph presents downstream exposures from two origins: -1. Elementary Cloud Automated BI integrations +1. Elementary automated BI integrations 2. Exposures configured in your dbt project. Read about [how to configure exposures](https://docs.getdbt.com/docs/build/exposures) in code. - + ```yaml exposures: @@ -41,29 +41,24 @@ exposures: -### Automated BI lineage +### Automated lineage to BI tools -Elementary will automatically and continuously extend the column-level-lineage to the dashboard level of your data visualization tool. +Elementary will automatically and continuously extend the column-level lineage to the dashboard level of your data visualization tool. - +frameborder="0" +allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" +allowfullscreen +alt="Elementary Lineage" +> ### Supported BI tools: - - -### Why is lineage to exposures useful? - -- **Incidents impact analysis**: You could explore which exposures are impacted by each data issue. -- **Exposure health**: By selecting an exposure and filtering on upstream nodes, you could see the status of all it’s upstream datasets. -- **Prioritize data issues**: Prioritize the triage and resolution of issues that are impacting your critical downstream assets. -- **Change impact**: Analyze which exposures will be impacted by a planned change. -- **Unused datasets**: Detect datasets that no exposure consumes, that could be removed to save costs. + \ No newline at end of file diff --git a/docs/features/data-lineage/lineage.mdx b/docs/features/data-lineage/lineage.mdx new file mode 100644 index 000000000..2420ccf77 --- /dev/null +++ b/docs/features/data-lineage/lineage.mdx @@ -0,0 +1,42 @@ +--- +title: End-to-End Data Lineage +sidebarTitle: Lineage overview +--- + + + +Elementary offers automated [Column-Level Lineage](/features/column-level-lineage) functionality, enriched with the latest test and monitor results. +It is built with usability and performance in mind.
+The column-level lineage is built from the metadata of your data warehouse, and from integrations with [BI tools](/features/exposures-lineage#automated-bi-lineage) such as Looker and Tableau. + +Elementary updates your lineage view frequently, ensuring it is always current. +This up-to-date lineage data is essential for supporting several critical workflows, including: + +- **Effective data issue debugging**: Identify and trace data issues back to their sources. +- **Incident impact analysis**: Explore which downstream assets are impacted by each data issue. +- **Prioritize data issues**: Prioritize the triage and resolution of issues that are impacting your critical downstream assets. +- **Public assets health**: By selecting an exposure and filtering on upstream nodes, you can see the status of all its upstream datasets. +- **Change impact**: Analyze which exposures will be impacted by a planned change. +- **Unused datasets**: Detect datasets that are not consumed downstream, and could be removed to reduce costs. + + + +## Node info and test results + +To view additional information in the lineage view, use the `...` menu to the right of the column: + +- **Test results**: Access the table's latest test results in the lineage view. +- **Node info**: See details such as description, owner and tags. If collected, it will include the latest job info. + + +## Job info in lineage + +You can [configure Elementary to collect job information](/cloud/guides/collect-job-data) to present in the lineage _Node info_ tab. Job names can also be used to filter the lineage graph. diff --git a/docs/features/data-tests.mdx b/docs/features/data-tests.mdx deleted file mode 100644 index 0c3ef16d8..000000000 --- a/docs/features/data-tests.mdx +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "Elementary Data Tests" -icon: "monitor-waveform" -sidebarTitle: "Data Tests" --- - -Elementary provides tests for detection of data quality issues. -Elementary data tests are configured and executed like native tests in your dbt project. - -Elementary tests can be used in addition to dbt tests, packages tests (such as dbt-expectations), and custom tests. -All of these test results will be presented in the Elementary UI and alerts. - - diff --git a/docs/features/data-tests/custom-sql-tests.mdx b/docs/features/data-tests/custom-sql-tests.mdx new file mode 100644 index 000000000..4f7205478 --- /dev/null +++ b/docs/features/data-tests/custom-sql-tests.mdx @@ -0,0 +1,6 @@ +--- +title: Custom SQL Tests +sidebarTitle: Custom SQL test +--- + +_🚧 Under construction 🚧_ \ No newline at end of file diff --git a/docs/features/data-tests/data-tests-overview.mdx b/docs/features/data-tests/data-tests-overview.mdx new file mode 100644 index 000000000..c848aa572 --- /dev/null +++ b/docs/features/data-tests/data-tests-overview.mdx @@ -0,0 +1,15 @@ +--- +title: Data Tests Overview +sidebarTitle: Overview and configuration +--- + +Data tests are useful for validating and enforcing explicit expectations on your data. + +Elementary enables data validation and result tracking by leveraging dbt tests and dbt packages such as dbt-utils, dbt-expectations, and Elementary. +This rich ecosystem of tests covers various use cases, and is widely adopted as a standard for data validations. +Any custom dbt generic or singular test you develop will also be included. + +Additionally, users can create custom SQL tests in Elementary.
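+
+As a minimal sketch of what a combined configuration can look like (the model and column names here are hypothetical), a single dbt properties file can mix native dbt tests, package tests, and Elementary tests:
+
+```yaml
+version: 2
+
+models:
+  - name: orders                        # hypothetical model name
+    tests:
+      - elementary.volume_anomalies     # test from the Elementary package
+    columns:
+      - name: order_id
+        tests:
+          - unique                      # native dbt test
+          - not_null                    # native dbt test
+      - name: amount
+        tests:
+          - dbt_utils.accepted_range:   # test from the dbt-utils package
+              min_value: 0
+```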
+ + +_🚧 Under construction 🚧_ \ No newline at end of file diff --git a/docs/features/data-tests/dbt-tests.mdx b/docs/features/data-tests/dbt-tests.mdx new file mode 100644 index 000000000..1578b4b26 --- /dev/null +++ b/docs/features/data-tests/dbt-tests.mdx @@ -0,0 +1,24 @@ +--- +title: dbt, Packages and Elementary Tests +sidebarTitle: dbt tests +--- + +_🚧 Under construction 🚧_ + + +## Elementary dbt package tests + +The Elementary dbt package also provides tests for detecting data quality issues. +Elementary data tests are configured and executed like native tests in your dbt project. + + + + +## Supported dbt packages + +Elementary collects and monitors the results of all dbt tests. + +The following packages are supported in the test configuration wizard: + +- dbt-expectations +- dbt-utils \ No newline at end of file diff --git a/docs/features/data-tests/schema-validation-test.mdx b/docs/features/data-tests/schema-validation-test.mdx new file mode 100644 index 000000000..8b30d021d --- /dev/null +++ b/docs/features/data-tests/schema-validation-test.mdx @@ -0,0 +1,6 @@ +--- +title: Schema Validation Tests +sidebarTitle: Schema validation +--- + +_🚧 Under construction 🚧_ \ No newline at end of file diff --git a/docs/features/elementary-alerts.mdx b/docs/features/elementary-alerts.mdx index 4e6f06cc7..e69de29bb 100644 --- a/docs/features/elementary-alerts.mdx +++ b/docs/features/elementary-alerts.mdx @@ -1,14 +0,0 @@ ---- -title: "Alerts" -icon: "bell-exclamation" --- - - - -## Alerts destinations - - - -## Alerts configuration - - diff --git a/docs/features/lineage.mdx b/docs/features/lineage.mdx deleted file mode 100644 index db015dc93..000000000 --- a/docs/features/lineage.mdx +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: End-to-End Data Lineage -sidebarTitle: Data Lineage ---- - -Elementary Cloud UI and Elementary OSS Report include a rich data lineage graph. -The graph is enriched with the latest test results, to enable easy impact and root cause analysis of data issues. - -In Elementary Cloud lineage includes [Column Level Lineage](/features/column-level-lineage) and [BI integrations](/features/exposures-lineage#automated-bi-lineage). - -## Node info and test results - -To see additional information in the lineage view, use the `...` menu right to the column: - -- **Test results**: Access the table latest test results in the lineage view. - -**Node info**: See details such as description, owner and tags. If collected, it will include the latest job info. - - - -## Job info in lineage - -You can configure Elementary to collect jobs names and information to present in the lineage _Node info_ tab. Job names can also be used to filter the lineage graph. - -Read how to configure jobs info collection for [Elementary Cloud](/cloud/guides/collect-job-data) or [OSS](/oss/guides/collect-job-data). diff --git a/docs/features/multi-env.mdx b/docs/features/multi-env.mdx index fb6d55921..4fb533ac2 100644 --- a/docs/features/multi-env.mdx +++ b/docs/features/multi-env.mdx @@ -1,11 +1,10 @@ --- title: "Multiple Environments" -icon: "rectangle-history-circle-plus" --- -An environment in Elementary is a combination of dbt project and target. -For example: If you have a single dbt project with three targets, prod, staging and dev, you could create 3 environments in Elementary and monitor these envs. +An environment in Elementary is a combination of a dbt project and a target.
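+
+As an illustrative sketch of such a setup (the project name and connection details below are hypothetical placeholders), a dbt `profiles.yml` with three targets might look like this:
+
+```yaml
+# profiles.yml -- one dbt project ("analytics") with three targets
+analytics:
+  target: prod                # default target
+  outputs:
+    prod:
+      type: snowflake
+      account: my_account     # placeholder connection details
+      user: dbt_prod
+      password: "{{ env_var('DBT_PROD_PASSWORD') }}"
+      database: analytics
+      schema: prod
+    staging:
+      type: snowflake
+      account: my_account
+      user: dbt_staging
+      password: "{{ env_var('DBT_STAGING_PASSWORD') }}"
+      database: analytics_staging
+      schema: staging
+    dev:
+      type: snowflake
+      account: my_account
+      user: dbt_dev
+      password: "{{ env_var('DBT_DEV_PASSWORD') }}"
+      database: analytics_dev
+      schema: dev
+```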
+For example, if you have a single dbt project with three targets (prod, staging, and dev), as in the sketch above, you can create three environments in Elementary and monitor each of them. If you have several dbt projects and even different data warehouses, Elementary enables monitoring the data quality of all these environments in a single interface. diff --git a/docs/features/performance-monitoring/performance-monitoring.mdx b/docs/features/performance-monitoring/performance-monitoring.mdx new file mode 100644 index 000000000..77ce069ba --- /dev/null +++ b/docs/features/performance-monitoring/performance-monitoring.mdx @@ -0,0 +1,39 @@ +--- +title: Performance Monitoring +sidebarTitle: Performance monitoring +--- + +Monitoring the performance of your data pipeline is critical for maintaining data quality, reliability, and operational efficiency. +Proactively monitoring performance issues enables you to detect bottlenecks and opportunities for optimization, prevent data delays, and avoid unnecessary costs. + +Elementary monitors and logs the execution times of: +- dbt models +- dbt tests + +## Models performance + +Navigate to the `Model Duration` tab. + +The table displays the latest execution time, median execution time, and execution time trend for each model. You can sort the table by these metrics and explore the execution times over time for the models with the longest durations. + +You can also use the navigation bar to filter the results and see run times per tag, owner, or folder. + + + +## Tests performance + +Navigate to the `Test Execution History` tab. + +The table shows the median execution time and fail rate per test. +You can sort the table by the execution time column to detect tests that are compute-heavy. + +You can also use the navigation bar to filter the results and see run times per tag, owner, or folder. \ No newline at end of file diff --git a/docs/introduction.mdx b/docs/introduction.mdx index 1fc128423..8bd67abcd 100644 --- a/docs/introduction.mdx +++ b/docs/introduction.mdx @@ -12,7 +12,8 @@ icon: "fire" alt="Elementary banner" /> - + + Elementary includes two products: diff --git a/docs/key-features.mdx b/docs/key-features.mdx index 581b8d25d..75068c125 100644 --- a/docs/key-features.mdx +++ b/docs/key-features.mdx @@ -72,3 +72,43 @@ icon: "stars" Explore and discover data sets, manage your documentation in code. + + + +#### Anomaly Detection + + + Out-of-the-box ML-powered monitoring for freshness and volume issues on all production tables. + The monitors track updates to tables, and will detect data delays, incomplete updates, and significant volume changes. + By querying only metadata (e.g. information schema, query history), the monitors don't add compute costs. + + + + ML-powered anomaly detection on data quality metrics such as null rate, empty values, string length, numeric metrics (sum, max, min, avg), etc. + Elementary also supports monitoring for anomalies by dimensions. + The monitors are activated for specific data sets, and require minimal configuration (e.g. timestamp column, dimensions). + + +#### Schema Validation + + + Elementary offers a set of schema tests for validating that there are no breaking changes. + The tests support detecting any schema changes, only detecting changes from a configured baseline, JSON schema validation, + and schema changes that break downstream exposures such as dashboards. + + + + Coming soon!
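+
+As a rough sketch of how such schema tests are configured in a dbt properties file (the model name is hypothetical, and the test arguments vary; see the Elementary package docs for the exact options), using the Elementary package tests `elementary.schema_changes` and `elementary.schema_changes_from_baseline`:
+
+```yaml
+version: 2
+
+models:
+  - name: customers                              # hypothetical model name
+    tests:
+      - elementary.schema_changes                # fail on any schema change
+      - elementary.schema_changes_from_baseline: # compare columns to a configured baseline
+          fail_on_added: true                    # assumed option; check the package docs
+```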
+ + +#### Data Tests + +Custom SQL Tests + +dbt tests + +Python tests + +#### Tests Coverage + +#### Performance monitoring diff --git a/docs/mint.json b/docs/mint.json index 9d9bb9820..a4b1acf83 100644 --- a/docs/mint.json +++ b/docs/mint.json @@ -29,7 +29,7 @@ }, "tabs": [ { - "name": "Data tests", + "name": "Elementary Tests", "url": "data-tests" }, { @@ -61,7 +61,6 @@ "pages": [ "introduction", "quickstart", - "cloud/general/security-and-privacy", { "group": "dbt package", "icon": "cube", @@ -75,27 +74,98 @@ ] }, { - "group": "Features", + "group": "Cloud Platform", "pages": [ - "features/data-tests", - "features/automated-monitors", - "features/elementary-alerts", - "features/data-observability-dashboard", + "cloud/introduction", + "cloud/features", + "features/integrations", + "cloud/general/security-and-privacy" + ] + }, + { + "group": "Anomaly Detection Monitors", + "pages": [ + "features/anomaly-detection/monitors-overview", { - "group": "End-to-End Lineage", - "icon": "arrow-progress", - "iconType": "solid", + "group": "Automated monitors", "pages": [ - "features/lineage", - "features/exposures-lineage", - "features/column-level-lineage" + "features/anomaly-detection/automated-monitors", + "features/anomaly-detection/automated-freshness", + "features/anomaly-detection/automated-volume" ] }, + "features/anomaly-detection/opt-in-monitors", + { + "group": "Configuration and Feedback", + "pages": [ + "features/anomaly-detection/monitors-configuration", + "features/anomaly-detection/monitors-feedback", + "features/anomaly-detection/disable-or-mute-monitors" + ] + } + ] + }, + { + "group": "Data Tests", + "pages": [ + "features/data-tests/data-tests-overview", + "features/data-tests/dbt-tests", + "features/data-tests/custom-sql-tests", + "features/data-tests/schema-validation-test" + ] + }, + { + "group": "Data Lineage", + "pages": [ + "features/data-lineage/lineage", + "features/data-lineage/column-level-lineage", + "features/data-lineage/exposures-lineage" + ] + }, + { + "group": "Alerts and Incidents", + "pages": [ + "features/alerts-and-incidents/alerts-and-incidents-overview", + { + "group": "Setup & configure alerts", + "pages": [ + "features/alerts-and-incidents/effective-alerts-setup", + "features/alerts-and-incidents/alert-rules", + "features/alerts-and-incidents/owners-and-subscribers", + "features/alerts-and-incidents/alert-configuration" + ] + }, + "features/alerts-and-incidents/incidents", + "features/alerts-and-incidents/incident-management" + ] + }, + { + "group": "Performance & Cost", + "pages": [ + "features/performance-monitoring/performance-monitoring" + ] + }, + { + "group": "Data Governance", + "pages": [ + "features/data-governance/overview-and-best-practices", + "features/data-governance/define-ownership", + "features/data-governance/leverage-tags" + ] + }, + { + "group": "Collaboration & Communication", + "pages": [ + "features/collaboration-and-communication/data-observability-dashboard", + "features/collaboration-and-communication/catalog" + ] + }, + { + "group": "Additional features", + "pages": [ "features/config-as-code", - "features/catalog", "features/multi-env", - "features/ci", - "features/integrations" + "features/ci" ] }, { @@ -181,7 +251,7 @@ ] }, { - "group": "Communication & collaboration", + "group": "Alerts & Incidents", "pages": [ "cloud/integrations/alerts/slack", "cloud/integrations/alerts/ms-teams", @@ -197,6 +267,7 @@ { "group": "Resources", "pages": [ + "resources/business-case-data-observability-platform", "overview/cloud-vs-oss", 
"resources/pricing", "resources/community" @@ -240,7 +311,8 @@ "data-tests/anomaly-detection-configuration/ignore_small_changes", "data-tests/anomaly-detection-configuration/fail_on_zero", "data-tests/anomaly-detection-configuration/detection-delay", - "data-tests/anomaly-detection-configuration/anomaly-exclude-metrics" + "data-tests/anomaly-detection-configuration/anomaly-exclude-metrics", + "data-tests/anomaly-detection-configuration/exclude-final-results" ] }, "data-tests/anomaly-detection-tests/volume-anomalies", @@ -262,7 +334,9 @@ }, { "group": "Other Tests", - "pages": ["data-tests/python-tests"] + "pages": [ + "data-tests/python-tests" + ] }, { "group": "Elementary OSS", @@ -308,7 +382,10 @@ }, { "group": "Configuration & usage", - "pages": ["oss/cli-install", "oss/cli-commands"] + "pages": [ + "oss/cli-install", + "oss/cli-commands" + ] }, { "group": "Deployment", @@ -369,7 +446,8 @@ ] } ], - "footerSocials": { + "footerSocials": + { "website": "https://www.elementary-data.com", "slack": "https://elementary-data.com/community" }, @@ -383,5 +461,43 @@ "gtm": { "tagId": "GTM-TKR4HS3Q" } - } + }, + "redirects": [ + { + "source": "/features/lineage", + "destination": "/features/data-lineage/lineage" + }, + { + "source": "/features/exposures-lineage", + "destination": "/features/data-lineage/exposures-lineage" + }, + { + "source": "/features/column-level-lineage", + "destination": "/features/data-lineage/column-level-lineage" + }, + { + "source": "/features/automated-monitors", + "destination": "/features/anomaly-detection/automated-monitors" + }, + { + "source": "/features/data-tests", + "destination": "/features/data-tests/dbt-tests" + }, + { + "source": "/features/elementary-alerts", + "destination": "/features/alerts-and-incidents/alerts-and-incidents-overview" + }, + { + "source": "/cloud/guides/alert-rules", + "destination": "/features/alerts-and-incidents/alert-rules" + }, + { + "source": "/features/catalog", + "destination": "/features/collaboration-and-communication/catalog" + }, + { + "source": "/features/data-observability-dashboard", + "destination": "/features/collaboration-and-communication/data-observability-dashboard" + } + ] } diff --git a/docs/resources/business-case-data-observability-platform.mdx b/docs/resources/business-case-data-observability-platform.mdx new file mode 100644 index 000000000..a78f11580 --- /dev/null +++ b/docs/resources/business-case-data-observability-platform.mdx @@ -0,0 +1,25 @@ +--- +title: "When do I need a data observability platform?" +sidebarTitle: "When to add data observability" +--- + + +### If the consequences of data issues are high +If you are running performance marketing budgets of $millions, a data issue can result in a loss of hundreds of thousands of dollars. +In these cases, the ability to detect and resolve issues fast is business-critical. It typically involves multiple teams and the ability to measure, track, and report on data quality. + +### If data is scaling faster than the data team +The scale and complexity of modern data environments make it impossible for teams to manually manage quality without expanding the team. A data observability platform enables automation and collaboration, ensuring data quality is maintained as data continues to grow, without impacting team efficiency. 
+ +### Common use cases +If your data is being used in one of the following use cases, you should consider adding a data observability platform: +- Self-service analytics +- Data activation +- Powering AI & ML products +- Embedded analytics +- Performance marketing +- Regulatory reporting +- A/B testing and experiments + +## Why isn't the open-source package enough? +The open-source package was designed for engineers who want to monitor their dbt project. The Cloud Platform was designed to support the complex, multifaceted requirements of larger teams and organizations, providing a holistic observability solution. \ No newline at end of file diff --git a/docs/resources/how-does-elementary-work.mdx b/docs/resources/how-does-elementary-work.mdx new file mode 100644 index 000000000..937fb3500 --- /dev/null +++ b/docs/resources/how-does-elementary-work.mdx @@ -0,0 +1,28 @@ +--- +title: "How does Elementary work?" +sidebarTitle: "Elementary Cloud Platform" +--- +## Cloud platform architecture +The Elementary open-source package creates a schema that collects the test results and model run data from your dbt projects. The package runs as part of your dbt pipeline and writes to its own dataset in the data warehouse; the platform then syncs that dataset to the cloud. The platform also integrates directly with your data warehouse, so it has access to the information schema, the query history, and the metadata. + +We also integrate with your dbt code repository, so we understand how your project is built: tags, owners, and which tables are (and are not) part of your dbt project. By connecting to your BI tool, we also see daily usage. + + + Elementary Cloud Platform Architecture + + + +## How does it work? +1. You install the Elementary dbt package in your dbt project and configure it to write to its own schema, the Elementary schema. +2. The package writes test results, run results, logs and metadata to the Elementary schema. +3. The cloud service only requires `read access` to the Elementary schema, not to schemas where your sensitive data is stored. +4. The cloud service connects to sync the Elementary schema using an **encrypted connection** and a **static IP address** that you will need to add to your allowlist. + + +[Read about Security and Privacy](/cloud/general/security-and-privacy) \ No newline at end of file
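+
+To make step 1 above concrete, here is a minimal sketch of the relevant dbt configuration (the package version below is a placeholder; pin the current release from the installation guide):
+
+```yaml
+# packages.yml -- install the Elementary dbt package
+packages:
+  - package: elementary-data/elementary
+    version: x.y.z   # placeholder -- use the latest released version
+```
+
+In `dbt_project.yml`, the package's models are then routed to the dedicated Elementary schema:
+
+```yaml
+# dbt_project.yml -- write the package's tables to its own schema
+models:
+  elementary:
+    +schema: "elementary"
+```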