From 787e6408d9010db7e554ca71662f721ca97630e3 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 8 Apr 2024 14:04:57 -0400 Subject: [PATCH 01/51] Edits to PGD PR5384 --- product_docs/docs/pgd/5/scaling.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx index 610e840b0f7..99d7e80c0e6 100644 --- a/product_docs/docs/pgd/5/scaling.mdx +++ b/product_docs/docs/pgd/5/scaling.mdx @@ -23,10 +23,10 @@ function to create or alter the definition of automatic range partitioning for a no definition exists, it's created. Otherwise, later executions will alter the definition. -PGD Autopartition in PGD 5 currently locks the actual table while performing +PGD AutoPartition in PGD 5 currently locks the actual table while performing new partition maintenance operations. -An ERROR is raised if the table isn't RANGE partitioned or a multi-column +An error is raised if the table isn't range partitioned or a multi-column partition key is used. By default, AutoPartition manages partitions globally. In other words, when a @@ -57,7 +57,7 @@ managed by AutoPartition. Doing so can make the AutoPartition metadata inconsistent and might cause it to fail. -## AutoPartition Examples +## AutoPartition examples Daily partitions, keep data for one month: @@ -83,7 +83,7 @@ bdr.autopartition('Orders', '1000000000', ``` -## RANGE-partitioned tables +## Range-partitioned tables A new partition is added for every `partition_increment` range of values, with lower and upper bound `partition_increment` apart. For tables with a partition @@ -174,12 +174,12 @@ function to find the partition for the given partition key value. If partition to hold that value doesn't exist, then the function returns NULL. Otherwise Oid of the partition is returned. -## Enabling or disabling AutoPartitioning +## Enabling or disabling autopartitioning Use [`bdr.autopartition_enable()`](/pgd/latest/reference/autopartition#bdrautopartition_enable) -to enable AutoPartitioning on the given table. If AutoPartitioning is already +to enable autopartitioning on the given table. If autopartitioning is already enabled, then no action occurs. Similarly, use [`bdr.autopartition_disable()`](/pgd/latest/reference/autopartition#bdrautopartition_disable) -to disable AutoPartitioning on the given table. +to disable autopartitioning on the given table. From 7b13ecc74e618ed3cb7f72d200d20dba189393a2 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 8 Apr 2024 14:27:52 -0400 Subject: [PATCH 02/51] Update product_docs/docs/pgd/5/scaling.mdx --- product_docs/docs/pgd/5/scaling.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx index 99d7e80c0e6..2adccc33c09 100644 --- a/product_docs/docs/pgd/5/scaling.mdx +++ b/product_docs/docs/pgd/5/scaling.mdx @@ -83,7 +83,7 @@ bdr.autopartition('Orders', '1000000000', ``` -## Range-partitioned tables +## RANGE-partitioned tables A new partition is added for every `partition_increment` range of values, with lower and upper bound `partition_increment` apart. 
For tables with a partition From 642c68261fa9c74bbfc877ca65438247676c9ff3 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 8 Apr 2024 14:28:27 -0400 Subject: [PATCH 03/51] Update product_docs/docs/pgd/5/scaling.mdx --- product_docs/docs/pgd/5/scaling.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx index 2adccc33c09..608ff744dcd 100644 --- a/product_docs/docs/pgd/5/scaling.mdx +++ b/product_docs/docs/pgd/5/scaling.mdx @@ -26,7 +26,7 @@ definition. PGD AutoPartition in PGD 5 currently locks the actual table while performing new partition maintenance operations. -An error is raised if the table isn't range partitioned or a multi-column +An error is raised if the table isn't RANGE partitioned or a multi-column partition key is used. By default, AutoPartition manages partitions globally. In other words, when a From e6e64cdb34cd3b906ca39ba8016456e746b5aa3e Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 23 May 2024 14:56:54 -0400 Subject: [PATCH 04/51] Edits to Migration Portal PR5627 --- .../mp_4.9.0_rel_notes.mdx | 8 +++---- .../01_mp_overview_home.mdx | 2 +- .../02_mp_overview_project.mdx | 7 +++--- .../03_mp_using_portal/03_mp_quick_start.mdx | 4 ++-- .../mp_ai_copilot/ai_good_prompts.mdx | 22 +++++++++---------- .../mp_ai_copilot/enable_ai_copilot.mdx | 22 +++++++++---------- .../mp_ai_copilot/index.mdx | 20 ++++++++--------- .../02_mp_schema_assessment.mdx | 8 +++---- .../migration_portal/4/known_issues_notes.mdx | 6 ++--- 9 files changed, 48 insertions(+), 51 deletions(-) diff --git a/product_docs/docs/migration_portal/4/01_mp_release_notes/mp_4.9.0_rel_notes.mdx b/product_docs/docs/migration_portal/4/01_mp_release_notes/mp_4.9.0_rel_notes.mdx index a18cf601efd..b7704ac8312 100644 --- a/product_docs/docs/migration_portal/4/01_mp_release_notes/mp_4.9.0_rel_notes.mdx +++ b/product_docs/docs/migration_portal/4/01_mp_release_notes/mp_4.9.0_rel_notes.mdx @@ -9,9 +9,7 @@ New features, enhancements, bug fixes, and other changes in Migration Portal 4.9 | Type | Description | |--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Feature | Added a new AI Copilot feature that you can use to ask questions related to database migrations and request suggestions to resolve specific Oracle to EPAS schema incompatibility issues. Consult the [Migration Portal AI Copilot documentation](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) for more information. | -| Enhancement | Added a new repair handler ERH-2108 that removes the `PARALLEL/NOPARALLEL` clause from `PARTITIONED TABLE` and `MVIEW` definitions as it is not supported in EPAS. | -| Bug fix | Fixed an issue where Migration Portal was not able to remove the `XMLTYPE` column property clause, which is not supported in EPAS, from the `TABLE` definition. | +| Feature | Added the AI Copilot feature that you can use to ask questions related to database migrations and request suggestions to resolve specific Oracle-to-EDB Postgres Advanced Server schema incompatibility issues. See the [Migration Portal AI Copilot documentation](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) for more information. 
| +| Enhancement | Added a repair handler ERH-2108 that removes the `PARALLEL/NOPARALLEL` clause from `PARTITIONED TABLE` and `MVIEW` definitions, as it isn't supported in EDB Postgres Advanced Server. | +| Bug fix | Fixed an issue where Migration Portal couldn't remove the `XMLTYPE` column property clause from the `TABLE` definition. This clause isn't supported in EDB Postgres Advanced Server. | | Bug fix | Fixed an issue where an end portion of DDL files was removed due to a parsing failure. | - - diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx index 86411c489c4..82a39b3aba5 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx @@ -39,4 +39,4 @@ The Migration Portal home page allows access to the following Migration Portal f - **Portal Wiki**: Select **Portal Wiki** to access links to product information and more help guides. -- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI chat interface to obtain information about DDL compatibility, Postgres query syntax, EPAS equivalents, and more. Prior to using the AI Copilot, users are prompted to agree to its terms of use in order to opt-in to using the feature. +- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI chat interface. This interface helps you to get information about DDL compatibility, Postgres query syntax, EDB Postgres Advanced Server equivalents, and more. Before using the AI Copilot, you're prompted to agree to its terms of use to opt in to using the feature. diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx index c0b2b3113d8..a9795f01567 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx @@ -15,7 +15,7 @@ The Migration Portal Projects page provides detailed information about your migr Use the following resources to gather information about your migration projects: -- **Compatible**: The **Project Compatibility** gauge displays the color on the basis of the compatibility percentage of the assessed schema. +- **Project Compatibility**: The **Project Compatibility** gauge displays the color based on the compatibility percentage of the assessed schema. - **Schema Count**: Displays the number of schemas in a project. @@ -23,7 +23,7 @@ Use the following resources to gather information about your migration projects: - **Search objects**: Use the **Search** box to search for objects. -- **Filters**: From the left panel of the Projects page, you can filter the failed, system-repaired, user-repaired, and automatically-passed objects. You can select one or more filter combinations to refine the information. +- **Filters**: From the left panel of the Projects page, you can filter the failed, system-repaired, user-repaired, and automatically passed objects. You can select one or more filter combinations to refine the information. - **Objects**: Tab that displays the objects for the selected schemas. 
@@ -39,5 +39,4 @@ Use the following resources to gather information about your migration projects: - **Quick help**: The Quick help panel displays links to Knowledge Base articles and repair handler documentation. Use the **Search** box to search the Knowledge Base entries or repair handler documentation for specific information. -- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI interface to obtain information about DDL compatibility, Postgres query syntax, EPAS equivalents, and more. Prior to using the AI Copilot, users are prompted to agree to its terms of use in order to opt-in to using the feature. - +- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI interface. This interface helps you to obtain information about DDL compatibility, Postgres query syntax, EDB Postgres Advanced Server equivalents, and more. Before using the AI Copilot, you're prompted to agree to its terms of use to opt in to using the feature. diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx index 618fff960a9..d640383779d 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx @@ -22,9 +22,9 @@ To migrate Oracle schemas using Migration Portal: 1. Select the objects that aren't compatible with EDB Postgres Advanced Server. -1. Refer to the Knowledge Base or interact with the [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to look up and understand possible workarounds for the objects that aren't compatible in EDB Postgres Advanced. +1. To look up and understand possible workarounds for the objects that aren't compatible in EDB Postgres Advanced Server, refer to the Knowledge Base or interact with the [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/). - !!! Important Important + !!! Important Ensure you test all suggested solutions to confirm the converted schemas behave as expected. !!! diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx index ee3a95a3557..1ff226ff43e 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx @@ -3,21 +3,21 @@ title: How to create good prompts --- -The quality and accuracy of the answers the AI Copilot provides depends highly on your prompt. +The quality and accuracy of the answers that the AI Copilot provides depend on the quality of your prompt. The AI Copilot works best if it has large context around a question, details or specifics about the situation, and examples. -The more detailed and precise you can make your prompt, the better answers you will obtain. +The more detailed and precise you can make your prompt, the better the answers will be. Like with other language models, you can use prompt engineering techniques with the AI Copilot to improve your query input. !!! Note Data considerations - All prompts inserted in the chatbox are stored in an EDB-managed backend database and stored by the AI service provider. 
+ All prompts inserted in the chatbot are stored in an EDB-managed backend database and stored by the AI service provider. - Don't enter sensitive information like full names, user names, passwords, addresses, etc. into the chatbox. + Don't enter sensitive information like full names, user names, passwords, and addresses into the chatbot. ## Example queries -!!! Important Important +!!! Important Before applying any suggested solutions in production environments, we strongly recommend testing the solutions in a controlled test environment to ensure the proposed fixes align with your specific migration requirements. @@ -31,7 +31,7 @@ What tools are there to migrate databases? ``` Since the AI Copilot is trained on EDB product documentation and knowledge base, this general question will still -give you an answer in the context of EDB's product offerings. However, the chatbot focusses on the most common use case, +give you an answer in the context of EDB's product offerings. However, the chatbot focuses on the most common use case, migrating from an Oracle database to a Postgres database, which might not be your use case. To improve this prompt, be more specific about the source and target databases. @@ -39,7 +39,7 @@ To improve this prompt, be more specific about the source and target databases. ### Better prompt ``` -What tools does EDB provide to migrate from open-source Postgres to EPAS? +What tools does EDB provide to migrate from open-source Postgres to EDB Postgres Advanced Server? ``` This prompt will produce a more accurate answer, listing EDB's product offerings that support that specific use case. @@ -70,13 +70,13 @@ In this case, copy the entire query that contains the line with the issue: Can you correct the syntax of this query? `CREATE OR REPLACE VIEW HRPLUS.VW_STRING_LIST (STRING_VAL) AS SELECT COLUMN_VALUE FROM SYS.ODCIVARCHAR2LIST('a','b', 'c');` ``` -Alternatively, you can request an EPAS target equivalent for an Oracle source query: +Alternatively, you can request an EDB Postgres Advanced Server target equivalent for an Oracle source query: ``` -Can you provide an EPAS-compatible equivalent for `CREATE OR REPLACE FORCE EDITIONABLE VIEW "HRPLUS"."VW_STRING_LIST" ("STRING_VAL") AS SELECT "COLUMN_VALUE" FROM SYS.ODCIVARCHAR2LIST('a','b', 'c');` +Can you provide an EDB Postgres Advanced Server-compatible equivalent for `CREATE OR REPLACE FORCE EDITIONABLE VIEW "HRPLUS"."VW_STRING_LIST" ("STRING_VAL") AS SELECT "COLUMN_VALUE" FROM SYS.ODCIVARCHAR2LIST('a','b', 'c');` ``` -Both prompts will suggest a similar syntax to create an EPAS-compatible view: +Both prompts will suggest a similar syntax to create an EDB Postgres Advanced Server-compatible view: ``` CREATE OR REPLACE VIEW HRPLUS.VW_STRING_LIST (STRING_VAL) AS @@ -90,7 +90,7 @@ Sometimes, the Migration Portal provides an imprecise or vague error message for ![Imprecise syntax error message](../../images/mp_vague_error.png) -You can ask ask the AI Copilot to explain the issue: +You can ask the AI Copilot to explain the issue: ``` What is the issue with the following argument when used in a Postgres query? 
`PIVOT ( COUNT(expense_type_id) FOR expense_type_id IN (10, 20, 30) ) ORDER BY employee_ref;` diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx index 3ddb3678b70..b2202adbd9f 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx @@ -7,23 +7,23 @@ To use the AI Copilot, you first must agree to its terms and conditions. ## Enabling the AI Copilot -After logging in to the [Migration Portal](https://migration.enterprisedb.com/) from the web browser. -1. Select **AI Copilot** at the top menu bar. - You can also access the **AI Copilot** from the right-hand pane. This view is only available after selecting a Project and Schema. +1. Log in to the [Migration Portal](https://migration.enterprisedb.com/) from the web browser. +1. From the top menu bar, select **AI Copilot**. + You can also access the AI Copilot from the right-hand pane. This view is available only after selecting a project and schema. 1. Select **Opt in for AI Copilot**. - A pop-up window displays the terms and conditions for the usage of the AI Copilot. -1. Read the terms and conditions, check the **I have read and agree to the terms and conditions of the agreement**, and close the window. + A pop-up window displays the terms and conditions for the use of the AI Copilot. +1. Read the terms and conditions, select **I have read and agree to the terms and conditions of the agreement**, and close the window. -The AI Copilot is enabled. -You can now use the AI Copilot chat to ask questions about the Migration Portal, migration strategies, syntax error resolution, etc. -Find usage examples in [How to create good prompts](/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts/). +The AI Copilot is enabled. +You can now use the AI Copilot chat to ask questions about the Migration Portal, migration strategies, syntax error resolution, and so on. +For usage examples, see [How to create good prompts](/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts/). -## Disabling the AI Copilot +## Disabling the AI Copilot You can disable the AI Copilot at any time. -After logging in to the [Migration Portal](https://migration.enterprisedb.com/) from the web browser. -1. Select your user name at the top right corner of the Migration Portal. +1. Log in to the [Migration Portal](https://migration.enterprisedb.com/) from the web browser. +1. At the top-right corner of the Migration Portal, select your user name. 1. Select **Settings**. 1. Select **Opt out from the AI Copilot**. diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx index 2ba99d4332d..33acb392382 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx @@ -5,31 +5,31 @@ navigation: - ai_good_prompts --- -The AI Copilot is an AI-driven chatbot tool, designed to assist you with issues that can occur during the schema assesment of DDL files. -It uses third-party AI services and is trained on both EDB's product documentation and support knowledge base to deliver use-case specific solutions. 
+The AI Copilot is an AI-driven chatbot tool, designed to assist you with issues that can occur during the schema assessment of DDL files. +It uses third-party AI services and is trained on both EDB's product documentation and Support knowledge base to deliver use-case-specific solutions. -The chatbot is embedded into the Migration Portal, offering an easily accesible interface where you can interact with the AI Copilot. +The chatbot is embedded into the Migration Portal, offering an easily accessible interface where you can interact with the AI Copilot. ## Use cases -Among others, you can ask questions about: +You can ask questions about topics such as: - Migration strategies -- Oracle and EDB Postgres Advanced Server (EPAS) or PostgreSQL DDL compatibility -- EPAS or PostgreSQL equivalents for Oracle queries +- Oracle and EDB Postgres Advanced Server or PostgreSQL DDL compatibility +- EDB Postgres Advanced Server or PostgreSQL equivalents for Oracle queries - Syntax errors - Usage examples for specific procedures or functions ## Use the AI Copilot -1. To start using the AI Copilot, [enable the service by agreeing to the terms and conditions](/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot/). +1. To start using the AI Copilot, [enable the service by agreeing to the terms and conditions](enable_ai_copilot/). -1. For usage examples, see [How to create good prompts](/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts/). +1. Enter questions. For examples, see [How to create good prompts](ai_good_prompts/). ## Feedback -You can provide general feedback on a specific AI Copilot answer by selecting the thumbs-up or thumbs-down icon depending on whether you thought the answer was helpful. +You can provide general feedback on a specific AI Copilot answer by selecting the thumbs-up or thumbs-down icon, depending on whether you thought the answer was helpful. -You can also provide more detailed feedback or report an issue by copying the conversation ID and sharing it with the support team. +You can also provide more detailed feedback or report an issue by copying the conversation ID and sharing it with the Support team. ![Feedback via conversation ID and icon](../../images/mp_ai_copilot_feedback_updated.png) diff --git a/product_docs/docs/migration_portal/4/04_mp_migrating_database/02_mp_schema_assessment.mdx b/product_docs/docs/migration_portal/4/04_mp_migrating_database/02_mp_schema_assessment.mdx index 568961053b6..083d104e201 100644 --- a/product_docs/docs/migration_portal/4/04_mp_migrating_database/02_mp_schema_assessment.mdx +++ b/product_docs/docs/migration_portal/4/04_mp_migrating_database/02_mp_schema_assessment.mdx @@ -50,13 +50,13 @@ You can assess an Oracle database schema for compatibility with EDB Postgres Adv ![Incompatible objects are identified](../images/mp_schema_assessment_incompatible.png) -1. Refer to the **Quick Help** (Knowledge Base) information or interact with the [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) in the right panel to locate the possible workarounds for the objects that aren't immediately compatible with EDB Postgres Advanced Server. +1. To locate the possible workarounds for the objects that aren't immediately compatible with EDB Postgres Advanced Server, refer to the **Quick Help** (Knowledge Base) information or interact with the [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) in the right panel. 
![Incompatible objects are identified](../images/mp_schema_assessment_errors.png)

   When using the **Quick Help** (Knowledge Base), enter the error message for the incompatible objects with EDB Postgres Advanced Server and select **Search**.

-   When using the [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/), use the chat window to interact with the AI enabled chat feature. You can ask questions about compatibility, enter the error message, request syntax Postgres equivalents for DDL queries, etc.
+   When using the [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/), use the chat window to interact with the AI-enabled chat feature. You can ask questions about compatibility, enter the error message, request Postgres syntax equivalents for DDL queries, and so on.

   ![Suggestions provided by AI Copilot](../images/mp_schema_assessment_copilot.png)

@@ -66,8 +66,8 @@ You can assess an Oracle database schema for compatibility with EDB Postgres Adv

1. Use the information you obtained from the Knowledge Base and AI Copilot to make all incompatible objects compatible. Manually make the changes in the **Target** panel for that object and select **Reassess**.

-    !!! ImportantImportant
-    Ensure you test all suggested solutions to confirm the converted schemas behave as expected.
+    !!! Important
+    Ensure that you test all suggested solutions to confirm the converted schemas behave as expected.

   ![Workaround or resolution for incompatible objects](../images/mp_schema_assessment_workaround.png)

diff --git a/product_docs/docs/migration_portal/4/known_issues_notes.mdx b/product_docs/docs/migration_portal/4/known_issues_notes.mdx
index 695ba91fb88..c26d7892937 100644
--- a/product_docs/docs/migration_portal/4/known_issues_notes.mdx
+++ b/product_docs/docs/migration_portal/4/known_issues_notes.mdx
@@ -238,9 +238,9 @@ While using the Oracle default case, you may experience a lower compatibility ra

 ## AI Copilot

-The AI Copilot is a tool designed to assist you with issues that come up during the migration of DDLs.
-While this tool can greatly aid in problem-solving, it's important to understand that generative AI technology will sometimes generate inaccurate or irrelevant responses.
-The accuracy and quality of recommended solutions is heavily influenced by the user's [prompt and query strategies](/03_mp_using_portal/ai_good_prompts/).
+The AI Copilot is a tool designed to assist you with issues that come up while migrating DDLs.
+While this tool can greatly aid in problem solving, it's important to understand that generative AI technology will sometimes generate inaccurate or irrelevant responses.
+The accuracy and quality of recommended solutions is heavily influenced by your [prompt and query strategies](/03_mp_using_portal/ai_good_prompts/).
Before applying any suggested solutions in production environments, we strongly recommend testing the solutions in a controlled test environment From 32046399e347d88c0af4b59fa3bdc9a27bcfb201 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 23 May 2024 15:06:54 -0400 Subject: [PATCH 05/51] Update product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx --- .../4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx index b2202adbd9f..92ebd5edca4 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx @@ -16,7 +16,7 @@ To use the AI Copilot, you first must agree to its terms and conditions. The AI Copilot is enabled. You can now use the AI Copilot chat to ask questions about the Migration Portal, migration strategies, syntax error resolution, and so on. -For usage examples, see [How to create good prompts](/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts/). +For usage examples, see [How to create good prompts](ai_good_prompts/). ## Disabling the AI Copilot From 79571a8df74e6dd42b43b38d1802b5efd5193128 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 28 May 2024 15:26:01 +0530 Subject: [PATCH 06/51] EDB_JOB_SCHEDULER - minor fixes Removed a wrong statement and fixed the catalog name --- advocacy_docs/pg_extensions/edb_job_scheduler/index.mdx | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/advocacy_docs/pg_extensions/edb_job_scheduler/index.mdx b/advocacy_docs/pg_extensions/edb_job_scheduler/index.mdx index b6ad476accf..437e60a77f0 100644 --- a/advocacy_docs/pg_extensions/edb_job_scheduler/index.mdx +++ b/advocacy_docs/pg_extensions/edb_job_scheduler/index.mdx @@ -6,8 +6,6 @@ directoryDefaults: EDB Job Scheduler is an extension that runs the job scheduler as a background process for the `DBMS_SCHEDULER` and `DBMS_JOB` packages. -By default, the `edb_job_scheduler` extension resides in the `contrib/dbms_scheduler_ext` subdirectory under the EDB Postgres Advanced Server installation. - The extension has a main background process called the *launcher*. The launcher process starts when the database cluster loads. It forks the scheduler processes, creating one for each configured database. The databases are configured by the GUC `edb_job_scheduler.database_list`. If a database doesn't have any jobs to schedule or is done with all the schedules, after waiting for a minute, the scheduler process shuts down. Whenever a new job is added or there is any update to the existing jobs in `sys.jobs`, the launcher process starts again. @@ -22,7 +20,7 @@ All the recurring job scheduling is done through the `DBMS_SCHEDULER` and `DBMS_ - `job_run_details` — Holds information about the job status. The status can be `'r' - running`, `'s' - success`, or `'f' - failure` for the respective `jobid`. 
-The following are the columns in the `sys.job` table: +The following are the columns in the `sys.jobs` table: ``` Column | Type From 4ff7f86fde2692ccd186d7f49277f26f4cbd75eb Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 28 May 2024 10:39:43 -0400 Subject: [PATCH 07/51] Update product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx Co-authored-by: gvasquezvargas --- .../4/03_mp_using_portal/mp_ai_copilot/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx index 33acb392382..b34378f8f09 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx @@ -28,7 +28,7 @@ You can ask questions about topics such as: ## Feedback -You can provide general feedback on a specific AI Copilot answer by selecting the thumbs-up or thumbs-down icon, depending on whether you thought the answer was helpful. +You can provide general feedback on a specific AI Copilot answer by selecting the thumbs-up or thumbs-down icon. You can also provide more detailed feedback or report an issue by copying the conversation ID and sharing it with the Support team. From 1e4758f51fb15badc3202cd5cd51752eb8b111b2 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 28 May 2024 10:48:15 -0400 Subject: [PATCH 08/51] Update enable_ai_copilot.mdx --- .../4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx index 92ebd5edca4..396f505b869 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx @@ -7,8 +7,7 @@ To use the AI Copilot, you first must agree to its terms and conditions. ## Enabling the AI Copilot -1. Log in to the [Migration Portal](https://migration.enterprisedb.com/) from the web browser. -1. From the top menu bar, select **AI Copilot**. +1. In the top menu bar of the [Migration Portal](https://migration.enterprisedb.com/), select **AI Copilot**. You can also access the AI Copilot from the right-hand pane. This view is available only after selecting a project and schema. 1. Select **Opt in for AI Copilot**. A pop-up window displays the terms and conditions for the use of the AI Copilot. @@ -22,7 +21,6 @@ For usage examples, see [How to create good prompts](ai_good_prompts/). You can disable the AI Copilot at any time. -1. Log in to the [Migration Portal](https://migration.enterprisedb.com/) from the web browser. 1. At the top-right corner of the Migration Portal, select your user name. 1. Select **Settings**. 1. Select **Opt out from the AI Copilot**. 
From 283cdaab6e5a1da238b576bbf7ad4b9b00855aa6 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 28 May 2024 10:51:00 -0400 Subject: [PATCH 09/51] Update product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx --- .../4/03_mp_using_portal/02_mp_overview_project.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx index a9795f01567..a1b457f5f9d 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx @@ -15,7 +15,7 @@ The Migration Portal Projects page provides detailed information about your migr Use the following resources to gather information about your migration projects: -- **Project Compatibility**: The **Project Compatibility** gauge displays the color based on the compatibility percentage of the assessed schema. +- **Project Compatibility**: The **Project Compatibility** symbol shows how compatible the assessed schema is, displaying a numeric percentage and color gradient to show the range. - **Schema Count**: Displays the number of schemas in a project. From 4581e33b400fcfe08293d617ad09718a657c82a4 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 3 Jun 2024 13:34:33 +0100 Subject: [PATCH 10/51] Tiny date fix Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/rel_notes/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/rel_notes/index.mdx b/product_docs/docs/pgd/5/rel_notes/index.mdx index 3b4d614d3e0..28d6ded0412 100644 --- a/product_docs/docs/pgd/5/rel_notes/index.mdx +++ b/product_docs/docs/pgd/5/rel_notes/index.mdx @@ -26,7 +26,7 @@ that introduced the feature. | Release Date | EDB Postgres Distributed | BDR extension | PGD CLI | PGD Proxy | |--------------|------------------------------|---------------|---------|-----------| -| 31 Mar 2024 | [5.5.1](pgd_5.5.1_rel_notes) | 5.5.1 | 5.5.0 | 5.5.0 | +| 31 May 2024 | [5.5.1](pgd_5.5.1_rel_notes) | 5.5.1 | 5.5.0 | 5.5.0 | | 16 May 2024 | [5.5.0](pgd_5.5.0_rel_notes) | 5.5.0 | 5.5.0 | 5.5.0 | | 03 Apr 2024 | [5.4.1](pgd_5.4.1_rel_notes) | 5.4.1 | 5.4.0 | 5.4.0 | | 05 Mar 2024 | [5.4.0](pgd_5.4.0_rel_notes) | 5.4.0 | 5.4.0 | 5.4.0 | From 88fc066cb6009a2c56934c26e2f4527d5ae4f7aa Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 3 Jun 2024 11:09:22 -0400 Subject: [PATCH 11/51] Update product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx --- .../4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx index 396f505b869..672e3692bf7 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx @@ -8,7 +8,7 @@ To use the AI Copilot, you first must agree to its terms and conditions. ## Enabling the AI Copilot 1. In the top menu bar of the [Migration Portal](https://migration.enterprisedb.com/), select **AI Copilot**. - You can also access the AI Copilot from the right-hand pane. 
This view is available only after selecting a project and schema. + You can also access AI Copilot from the right-hand pane. This view is available only after selecting a project and schema. 1. Select **Opt in for AI Copilot**. A pop-up window displays the terms and conditions for the use of the AI Copilot. 1. Read the terms and conditions, select **I have read and agree to the terms and conditions of the agreement**, and close the window. From 893b808471e5b462622b67f419ccf7b4052ea40f Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 3 Jun 2024 11:09:30 -0400 Subject: [PATCH 12/51] Update product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx --- .../4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx index 672e3692bf7..812990d967f 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx @@ -13,7 +13,7 @@ To use the AI Copilot, you first must agree to its terms and conditions. A pop-up window displays the terms and conditions for the use of the AI Copilot. 1. Read the terms and conditions, select **I have read and agree to the terms and conditions of the agreement**, and close the window. -The AI Copilot is enabled. +AI Copilot is enabled. You can now use the AI Copilot chat to ask questions about the Migration Portal, migration strategies, syntax error resolution, and so on. For usage examples, see [How to create good prompts](ai_good_prompts/). From 131293a425dd1a79a5e7ecb6a6fd760cb8b531e4 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 3 Jun 2024 11:12:37 -0400 Subject: [PATCH 13/51] Apply suggestions from code review --- .../4/03_mp_using_portal/01_mp_overview_home.mdx | 2 +- .../4/03_mp_using_portal/02_mp_overview_project.mdx | 2 +- .../mp_ai_copilot/ai_good_prompts.mdx | 10 +++++----- .../mp_ai_copilot/enable_ai_copilot.mdx | 10 +++++----- .../4/03_mp_using_portal/mp_ai_copilot/index.mdx | 8 ++++---- .../docs/migration_portal/4/known_issues_notes.mdx | 2 +- 6 files changed, 17 insertions(+), 17 deletions(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx index 82a39b3aba5..5ab24451740 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/01_mp_overview_home.mdx @@ -39,4 +39,4 @@ The Migration Portal home page allows access to the following Migration Portal f - **Portal Wiki**: Select **Portal Wiki** to access links to product information and more help guides. -- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI chat interface. This interface helps you to get information about DDL compatibility, Postgres query syntax, EDB Postgres Advanced Server equivalents, and more. Before using the AI Copilot, you're prompted to agree to its terms of use to opt in to using the feature. +- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI chat interface. 
This interface helps you to get information about DDL compatibility, Postgres query syntax, EDB Postgres Advanced Server equivalents, and more. Before using AI Copilot, you're prompted to agree to its terms of use to opt in to using the feature. diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx index a1b457f5f9d..ec9ca9ac3c6 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/02_mp_overview_project.mdx @@ -39,4 +39,4 @@ Use the following resources to gather information about your migration projects: - **Quick help**: The Quick help panel displays links to Knowledge Base articles and repair handler documentation. Use the **Search** box to search the Knowledge Base entries or repair handler documentation for specific information. -- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI interface. This interface helps you to obtain information about DDL compatibility, Postgres query syntax, EDB Postgres Advanced Server equivalents, and more. Before using the AI Copilot, you're prompted to agree to its terms of use to opt in to using the feature. +- **AI Copilot**: Select [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to interact with an AI interface. This interface helps you to obtain information about DDL compatibility, Postgres query syntax, EDB Postgres Advanced Server equivalents, and more. Before using AI Copilot, you're prompted to agree to its terms of use to opt in to using the feature. diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx index 1ff226ff43e..2dcd8df4ad6 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx @@ -3,11 +3,11 @@ title: How to create good prompts --- -The quality and accuracy of the answers that the AI Copilot provides depend on the quality of your prompt. -The AI Copilot works best if it has large context around a question, details or specifics about the situation, and examples. +The quality and accuracy of the answers that AI Copilot provides depend on the quality of your prompt. +AI Copilot works best if it has large context around a question, details or specifics about the situation, and examples. The more detailed and precise you can make your prompt, the better the answers will be. -Like with other language models, you can use prompt engineering techniques with the AI Copilot to improve your query input. +Like with other language models, you can use prompt engineering techniques with AI Copilot to improve your query input. !!! Note Data considerations All prompts inserted in the chatbot are stored in an EDB-managed backend database and stored by the AI service provider. @@ -30,7 +30,7 @@ Like with other language models, you can use prompt engineering techniques with What tools are there to migrate databases? 
``` -Since the AI Copilot is trained on EDB product documentation and knowledge base, this general question will still +Since AI Copilot is trained on EDB product documentation and knowledge base, this general question will still give you an answer in the context of EDB's product offerings. However, the chatbot focuses on the most common use case, migrating from an Oracle database to a Postgres database, which might not be your use case. @@ -90,7 +90,7 @@ Sometimes, the Migration Portal provides an imprecise or vague error message for ![Imprecise syntax error message](../../images/mp_vague_error.png) -You can ask the AI Copilot to explain the issue: +You can ask AI Copilot to explain the issue: ``` What is the issue with the following argument when used in a Postgres query? `PIVOT ( COUNT(expense_type_id) FOR expense_type_id IN (10, 20, 30) ) ORDER BY employee_ref;` diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx index 812990d967f..aa08fddfe26 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx @@ -5,21 +5,21 @@ title: Enabling the AI Copilot To use the AI Copilot, you first must agree to its terms and conditions. -## Enabling the AI Copilot +## Enabling AI Copilot 1. In the top menu bar of the [Migration Portal](https://migration.enterprisedb.com/), select **AI Copilot**. You can also access AI Copilot from the right-hand pane. This view is available only after selecting a project and schema. 1. Select **Opt in for AI Copilot**. - A pop-up window displays the terms and conditions for the use of the AI Copilot. + A pop-up window displays the terms and conditions for the use of AI Copilot. 1. Read the terms and conditions, select **I have read and agree to the terms and conditions of the agreement**, and close the window. AI Copilot is enabled. -You can now use the AI Copilot chat to ask questions about the Migration Portal, migration strategies, syntax error resolution, and so on. +You can now use AI Copilot chat to ask questions about the Migration Portal, migration strategies, syntax error resolution, and so on. For usage examples, see [How to create good prompts](ai_good_prompts/). -## Disabling the AI Copilot +## Disabling AI Copilot -You can disable the AI Copilot at any time. +You can disable AI Copilot at any time. 1. At the top-right corner of the Migration Portal, select your user name. 1. Select **Settings**. diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx index b34378f8f09..481d9dea311 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/index.mdx @@ -5,10 +5,10 @@ navigation: - ai_good_prompts --- -The AI Copilot is an AI-driven chatbot tool, designed to assist you with issues that can occur during the schema assessment of DDL files. +AI Copilot is an AI-driven chatbot tool, designed to assist you with issues that can occur during the schema assessment of DDL files. It uses third-party AI services and is trained on both EDB's product documentation and Support knowledge base to deliver use-case-specific solutions. 
-The chatbot is embedded into the Migration Portal, offering an easily accessible interface where you can interact with the AI Copilot. +The chatbot is embedded into the Migration Portal, offering an easily accessible interface where you can interact with AI Copilot. ## Use cases @@ -20,9 +20,9 @@ You can ask questions about topics such as: - Syntax errors - Usage examples for specific procedures or functions -## Use the AI Copilot +## Use AI Copilot -1. To start using the AI Copilot, [enable the service by agreeing to the terms and conditions](enable_ai_copilot/). +1. To start using AI Copilot, [enable the service by agreeing to the terms and conditions](enable_ai_copilot/). 1. Enter questions. For examples, see [How to create good prompts](ai_good_prompts/). diff --git a/product_docs/docs/migration_portal/4/known_issues_notes.mdx b/product_docs/docs/migration_portal/4/known_issues_notes.mdx index c26d7892937..c336582544a 100644 --- a/product_docs/docs/migration_portal/4/known_issues_notes.mdx +++ b/product_docs/docs/migration_portal/4/known_issues_notes.mdx @@ -238,7 +238,7 @@ While using the Oracle default case, you may experience a lower compatibility ra ## AI Copilot -The AI Copilot is a tool designed to assist you with issues that come up while migrating DDLs. +AI Copilot is a tool designed to assist you with issues that come up while migrating DDLs. While this tool can greatly aid in problem solving, it's important to understand that generative AI technology will sometimes generate inaccurate or irrelevant responses. The accuracy and quality of recommended solutions is heavily influenced by your [prompt and query strategies](/03_mp_using_portal/ai_good_prompts/). From 89614e719058d731a1217d5dc184807dc665525b Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 3 Jun 2024 11:15:16 -0400 Subject: [PATCH 14/51] Update 03_mp_quick_start.mdx --- .../migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx index d640383779d..2549d370b22 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/03_mp_quick_start.mdx @@ -22,7 +22,7 @@ To migrate Oracle schemas using Migration Portal: 1. Select the objects that aren't compatible with EDB Postgres Advanced Server. -1. To look up and understand possible workarounds for the objects that aren't compatible in EDB Postgres Advanced Server, refer to the Knowledge Base or interact with the [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/). +1. To look up and understand possible workarounds for the objects that aren't compatible in EDB Postgres Advanced Server, refer to the Knowledge Base or interact with [AI Copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/). !!! Important Ensure you test all suggested solutions to confirm the converted schemas behave as expected. 
From 29d55f2f3d56fb1e0f81a730c94dd81fa0d80ef4 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 3 Jun 2024 11:16:17 -0400 Subject: [PATCH 15/51] Update ai_good_prompts.mdx --- .../4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx index 2dcd8df4ad6..d0b61cc16b6 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/ai_good_prompts.mdx @@ -63,7 +63,7 @@ What are type casts in Postgres? How do I use `SYS.ODCIVARCHAR2LIST`? ``` -You can ask the AI Copilot to correct a query with a syntax error. +You can ask AI Copilot to correct a query with a syntax error. In this case, copy the entire query that contains the line with the issue: ``` @@ -102,7 +102,7 @@ And to suggest an equivalent: Can you provide a Postgres-compatible version of this Oracle query? `SELECT EMPLOYEE_REF,"10","20","30" FROM ( SELECT employee_ref, expense_type_id FROM expenses ) PIVOT ( COUNT(expense_type_id) FOR expense_type_id IN (10, 20, 30) ) ORDER BY employee_ref;` ``` -The suggestions generated by the AI Copilot for these example prompts are different. +The suggestions generated by AI Copilot for these example prompts are different. In this case, the recommended path is to use the provided suggestions to create an equivalent query. Then test the behavior of the original against the created query, and ensure the target query fulfills the original purpose before using it in production. From 42c62d09c154dcfc801e0976c94d119172b9fd6a Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 3 Jun 2024 11:17:37 -0400 Subject: [PATCH 16/51] Update enable_ai_copilot.mdx --- .../03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx index aa08fddfe26..2b74ec6744d 100644 --- a/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx +++ b/product_docs/docs/migration_portal/4/03_mp_using_portal/mp_ai_copilot/enable_ai_copilot.mdx @@ -1,9 +1,9 @@ --- -title: Enabling the AI Copilot +title: Enabling AI Copilot --- -To use the AI Copilot, you first must agree to its terms and conditions. +To use AI Copilot, you first must agree to its terms and conditions. ## Enabling AI Copilot @@ -25,4 +25,4 @@ You can disable AI Copilot at any time. 1. Select **Settings**. 1. Select **Opt out from the AI Copilot**. -The AI Copilot is disabled. +AI Copilot is disabled. From 1c0fb7741f0b8eefa746749bd3077de77172c04f Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 3 Jun 2024 16:45:06 +0100 Subject: [PATCH 17/51] Update scaling.mdx --- product_docs/docs/pgd/5/scaling.mdx | 1 + 1 file changed, 1 insertion(+) diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx index 171c983c050..52a51bb5e51 100644 --- a/product_docs/docs/pgd/5/scaling.mdx +++ b/product_docs/docs/pgd/5/scaling.mdx @@ -185,3 +185,4 @@ enabled, then no action occurs. 
Similarly, use [`bdr.autopartition_disable()`](/pgd/latest/reference/autopartition#bdrautopartition_disable)
to disable autopartitioning on the given table.
+

From c85d74fffb4e8aea022bcdf8efff36243b6ef33f Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Mon, 8 Apr 2024 15:03:40 -0400
Subject: [PATCH 18/51] Edits to PGD PR5436

---
 product_docs/docs/pgd/5/reference/functions.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/product_docs/docs/pgd/5/reference/functions.mdx b/product_docs/docs/pgd/5/reference/functions.mdx
index c0e7bbd7e74..de9ce918c55 100644
--- a/product_docs/docs/pgd/5/reference/functions.mdx
+++ b/product_docs/docs/pgd/5/reference/functions.mdx
@@ -257,9 +257,9 @@ issues). We therefore recommend that you always set a `statement_timeout` with
 `wait_for_completion` to prevent an infinite loop.
 
 The `node_group_name` is optional and can be used to specify the name of the node group where the
-leadership transfer should happen. If not specified, it defaults to NULL which
+leadership transfer happens. If not specified, it defaults to NULL, which
 is interpreted as the top-level group in the cluster. If the `node_group_name` is
-specified, the function will only transfer leadership within the specified node
+specified, the function transfers leadership only within the specified node
 group.
 
 ## Utility functions

From d00fba2d741ea9e20c99fb2ab61b6d8044005df3 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Tue, 28 May 2024 14:56:35 -0400
Subject: [PATCH 19/51] Edits to EDB Postgres AI mostly for capitalization

This pull request makes the heads consistent with the style guide.
Also some errant ASCII character issues are addressed and some other very basic edits (removal of "please," for example).

Fixed one additional typo

Update advocacy_docs/edb-postgres-ai/analytics/concepts.mdx

Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>

Update advocacy_docs/edb-postgres-ai/analytics/concepts.mdx

Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>

Update advocacy_docs/edb-postgres-ai/analytics/concepts.mdx

Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>

Update advocacy_docs/edb-postgres-ai/analytics/concepts.mdx

Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>

Update advocacy_docs/edb-postgres-ai/analytics/index.mdx

Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>

Update install-tech-preview.mdx
---
 .../ai-ml/install-tech-preview.mdx | 9 ++--
 .../additional_functions.mdx | 2 +-
 .../working-with-ai-data-in-S3.mdx | 4 +-
 .../working-with-ai-data-in-postgres.mdx | 14 +++---
 .../edb-postgres-ai/analytics/concepts.mdx | 36 +++++++-------
 .../edb-postgres-ai/analytics/index.mdx | 33 +++++++------
 .../edb-postgres-ai/analytics/quick_start.mdx | 36 +++++++-------
 .../edb-postgres-ai/analytics/reference.mdx | 48 +++++++++----------
 .../console/agent/agent-as-a-service.mdx | 2 +-
 .../console/agent/install-agent.mdx | 2 +-
 .../edb-postgres-ai/console/estate.mdx | 4 +-
 .../edb-postgres-ai/console/getstarted.mdx | 6 +--
 .../edb-postgres-ai/databases/databases.mdx | 12 ++---
 .../edb-postgres-ai/databases/index.mdx | 4 +-
 .../edb-postgres-ai/databases/options.mdx | 14 +++---
 .../edb-postgres-ai/overview/concepts.mdx | 20 ++++----
 .../edb-postgres-ai/overview/guide.mdx | 3 +-
 .../edb-postgres-ai/overview/index.mdx | 6 +--
 .../edb-postgres-ai/overview/releasenotes.mdx | 35 +++++++-------
 .../edb-postgres-ai/tools/backup.mdx | 6 +--
.../tools/migration-and-ai.mdx | 8 ++-- 21 files changed, 148 insertions(+), 156 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx index 23c4d5a7843..719db4771ea 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx @@ -9,10 +9,9 @@ The preview release of pgai is distributed as a self-contained Docker container ## Configuring and running the container image -If you haven’t already, sign up for an EDB account and log in to the EDB container registry. +If you haven't already, sign up for an EDB account and log in to the EDB container registry. - -Log in to docker with your the username tech-preview and your EDB Repo 2.0 Subscription Token as your password: +Log in to Docker with the username tech-preview and your EDB Repo 2.0 subscription token as your password: ```shell docker login docker.enterprisedb.com -u tech-preview -p @@ -65,13 +64,13 @@ docker run -d --name pgai \ ## Connect to Postgres -If you haven’t yet, install the Postgres command-line tools. If you’re on a Mac, using Homebrew, you can install it as follows: +If you haven't yet, install the Postgres command-line tools. If you're on a Mac, using Homebrew, you can install it as follows: ```shell brew install libpq ``` -Connect to the tech preview PostgreSQL running in the container. Note that this relies on $PGPASSWORD being set - if you’re using a different terminal for this part, make sure you re-export the password: +Connect to the tech preview PostgreSQL running in the container. Note that this relies on $PGPASSWORD being set - if you're using a different terminal for this part, make sure you re-export the password: ```shell psql -h localhost -p 15432 -U postgres postgres diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx index 893657651e0..ddf57844f58 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx @@ -1,5 +1,5 @@ --- -title: Additional functions and stand-alone embedding in pgai +title: Additional functions and standalone embedding in pgai navTitle: Additional functions description: Other pgai extension functions and how to generate embeddings for images and text. --- diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx index fab9c220596..e3de4b43a57 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx @@ -8,7 +8,7 @@ We recommend you to prepare your own S3 compatible object storage bucket with so In addition we use image data and an according image encoder LLM in this example instead of text data. But you could also use plain text data on object storage similar to the examples in the previous section. -First let’s create a retriever for images stored on s3-compatible object storage as the source. We specify torsten as the bucket name and an endpoint URL where the bucket is created. We specify an empty string as prefix because we want all the objects in that bucket. 
We use the [`clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32) open encoder model for image data from HuggingFace. We provide a name for the retriever so that we can identify and reference it subsequent operations: +First let's create a retriever for images stored on s3-compatible object storage as the source. We specify torsten as the bucket name and an endpoint URL where the bucket is created. We specify an empty string as prefix because we want all the objects in that bucket. We use the [`clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32) open encoder model for image data from HuggingFace. We provide a name for the retriever so that we can identify and reference it subsequent operations: ```sql SELECT pgai.create_s3_retriever( @@ -39,7 +39,7 @@ __OUTPUT__ (1 row) ``` -Finally, run the retrieve_via_s3 function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Please be aware that the object type is currently limited to image and text files. +Finally, run the retrieve_via_s3 function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Be aware that the object type is currently limited to image and text files. ```sql SELECT data from pgai.retrieve_via_s3( diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx index aec055f14c4..863ea18a1ec 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx @@ -6,9 +6,9 @@ description: How to work with AI data stored in Postgres tables using the pgai e We will first look at working with AI data stored in columns in the Postgres table. -To see how to use AI data stored in S3-compatible object storage, please skip to the next section. +To see how to use AI data stored in S3-compatible object storage, skip to the next section. -First let’s create a Postgres table for some test AI data: +First let's create a Postgres table for some test AI data: ```sql CREATE TABLE products ( @@ -22,7 +22,7 @@ CREATE TABLE ``` -Now let’s create a retriever with the just created products table as the source. We specify product_id as the unique key column to and we define the product_name and description columns to use for the similarity search by the retriever. We use the `all-MiniLM-L6-v2` open encoder model from HuggingFace. We set `auto_embedding` to True so that any future insert, update or delete to the source table will automatically generate, update or delete also the corresponding embedding. We provide a name for the retriever so that we can identify and reference it subsequent operations: +Now let's create a retriever with the just created products table as the source. We specify product_id as the unique key column to and we define the product_name and description columns to use for the similarity search by the retriever. We use the `all-MiniLM-L6-v2` open encoder model from HuggingFace. We set `auto_embedding` to True so that any future insert, update or delete to the source table will automatically generate, update or delete also the corresponding embedding. 
We provide a name for the retriever so that we can identify and reference it subsequent operations: ```sql SELECT pgai.create_pg_retriever( @@ -44,7 +44,7 @@ __OUTPUT__ -Now let’s insert some AI data records into the products table. Since we have set auto_embedding to True, the retriever will automatically generate all embeddings in real-time for each inserted record: +Now let's insert some AI data records into the products table. Since we have set auto_embedding to True, the retriever will automatically generate all embeddings in real-time for each inserted record: ```sql INSERT INTO products (product_name, description) VALUES @@ -80,7 +80,7 @@ __OUTPUT__ (5 rows) ``` -Now let’s try a retriever without auto embedding. This means that the application has control over when the embeddings are computed in a bulk fashion. For demonstration we can simply create a second retriever for the same products table that we just created above: +Now let's try a retriever without auto embedding. This means that the application has control over when the embeddings are computed in a bulk fashion. For demonstration we can simply create a second retriever for the same products table that we just created above: ```sql SELECT pgai.create_pg_retriever( @@ -115,7 +115,7 @@ __OUTPUT__ (0 rows) ``` -That’s why we first need to run a bulk generation of embeddings. This is achieved via the `refresh_retriever()` function: +That's why we first need to run a bulk generation of embeddings. This is achieved via the `refresh_retriever()` function: ```sql SELECT pgai.refresh_retriever( @@ -148,7 +148,7 @@ __OUTPUT__ (5 rows) ``` -Now let’s see what happens if we add additional AI data records: +Now let's see what happens if we add additional AI data records: ```sql INSERT INTO products (product_name, description) VALUES diff --git a/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx b/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx index 85b27735624..bb53a540f41 100644 --- a/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx +++ b/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx @@ -7,37 +7,37 @@ description: Learn about the ideas and terminology behind EDB Postgres Lakehouse EDB Postgres Lakehouse is the solution for running Rapid Analytics against operational data on the EDB Postgres® AI platform. -## Major Concepts +## Major concepts -* **Lakehouse Nodes** query **Lakehouse Tables** in **Managed Storage Locations**. -* **Lakehouse Sync** can create **Lakehouse Tables** from **Transactional Tables** in a source database. +* **Lakehouse nodes** query **Lakehouse tables** in **managed storage locations**. +* **Lakehouse Sync** can create **Lakehouse tables** from **Transactional tables** in a source database. Here's how it fits together: ![Level 50 basic architecture](./images/level-50-architecture.png) -### Lakehouse Node +### Lakehouse node -A Postgres Lakehouse Node is Postgres, with a Vectorized Query Engine that's -optimized to query Lakehouse Tables, but still fall back to Postgres for full +A Postgres Lakehouse node is Postgres, with a Vectorized Query Engine that's +optimized to query Lakehouse tables, but still fall back to Postgres for full compatibility. Lakehouse nodes are stateless and ephemeral. Scale them up or down based on workload requirements. 
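+
+Since a Lakehouse node is still Postgres on the wire, any Postgres client can
+run analytical SQL against it unchanged. As a minimal sketch, using the
+`tpch_sf_1` benchmark schema that ships with every node:
+
+```sql
+-- An ordinary aggregate: the vectorized engine executes it when it can,
+-- and the node falls back to the regular Postgres executor when it can't.
+SELECT l_returnflag, count(*) AS line_items
+FROM tpch_sf_1.lineitem
+GROUP BY l_returnflag;
+```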
-### Lakehouse Tables +### Lakehouse tables -Lakehouse Tables are stored using highly compresible, columnar storage formats +Lakehouse Tables are stored using highly compressible, columnar storage formats optimized for analytics and interoperable with the rest of the Analytics ecosystem. -Currently, Postgres Lakehouse Nodes can read tables stored using the Delta +Currently, Postgres Lakehouse nodes can read tables stored using the Delta Protocol ("delta tables"), and Lakehouse Sync can write them. -### Managed Storage Location +### Managed storage location -A Managed Storage Location is where you can organize Lakehouse Tables in +A *managed storage location* is where you can organize Lakehouse tables in object storage, so that Postgres Lakehouse can query them. -A "Managed Storage Location" is a location in object storage where we control +A managed storage location is a location in object storage where we control the file layout and write Lakehouse Tables on your behalf. Technically, it's an implementation detail that we store these in buckets. This is really a subset of an upcoming "Storage Location" feature that will also support @@ -45,7 +45,7 @@ of an upcoming "Storage Location" feature that will also support ### Lakehouse Sync -Lakehouse Sync is a Data Migration Service offered as part of the EDB +Lakehouse Sync is a data migration service offered as part of the EDB Postgres AI platform. It can "sync" tables from a transactional database, to Lakehouse Tables in a destination Storage Location. Currently, it supports source databases hosted in the EDB Postgres AI Cloud Service (formerly known as @@ -58,28 +58,28 @@ It's built using [Debezium](https://debezium.io). ### Lakehouse The -"[Lakehouse Architecture](https://15721.courses.cs.cmu.edu/spring2023/papers/02-modern/armbrust-cidr21.pdf)" +"[Lakehouse architecture](https://15721.courses.cs.cmu.edu/spring2023/papers/02-modern/armbrust-cidr21.pdf)" is a data engineering practice, which is a portmanteau of "Data _Lake_" and "Data Ware_house_," offering the best of both. The central tenet of the architecture is that data is stored in Object Storage, generally in columnar formats like Parquet, where different query engines can process it for their own specialized purposes, using the optimal compute resources for a given query. -### Vectorized Query Engine +### Vectorized query engine A vectorized query engine is a query engine that's optimized for running queries on columnar data. Most analytics engines use vectorized query execution. Postgres Lakehouse uses [Apache DataFusion](https://datafusion.apache.org/). -### Delta Tables +### Delta tables -We use the term "Lakehouse Tables" to avoid overcommitting to a particular +We use the term "Lakehouse tables" to avoid overcommitting to a particular format (since we might eventually support Iceberg or Hudi, for example). But technically, we're using [Delta Tables](https://delta.io/). A Delta Table is a well-defined container of Parquet files and JSON metadata, according to the "Delta Lake" spec and open protocol. Delta Lake is a Linux Foundation project. 
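+
+Concretely, a Delta Table is just a prefix in object storage. A hypothetical
+layout, with the bucket and table names invented for illustration:
+
+```shell
+# One Delta Table = Parquet data files plus a _delta_log of JSON commit files.
+aws s3 ls --recursive s3://my-bucket/my_schema/my_table/
+# my_schema/my_table/_delta_log/00000000000000000000.json
+# my_schema/my_table/part-00000-a1b2c3.parquet
+```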
-## How it Works +## How it works Postgres Lakehouse is built using a number of technologies: diff --git a/advocacy_docs/edb-postgres-ai/analytics/index.mdx b/advocacy_docs/edb-postgres-ai/analytics/index.mdx index aa1f0279fee..2d4bff1e9d4 100644 --- a/advocacy_docs/edb-postgres-ai/analytics/index.mdx +++ b/advocacy_docs/edb-postgres-ai/analytics/index.mdx @@ -1,6 +1,6 @@ --- -title: Lakehouse Analytics -navTitle: Lakehouse Analytics +title: Lakehouse analytics +navTitle: Lakehouse analytics indexCards: simple iconName: Improve navigation: @@ -11,19 +11,19 @@ navigation: EDB Postgres Lakehouse extends the power of Postgres to analytical workloads, by adding a vectorized query engine and separating storage from compute. Building -a Data Lakehouse has never been easier – just use Postgres. +a data Lakehouse has never been easier: just use Postgres. -## Rapid Analytics for Postgres +## Rapid analytics for Postgres Postgres Lakehouse is a core offering of the EDB Postgres® AI platform, extending Postgres to support analytical queries over columnar data in object storage, while keeping the simplicity and ease of use that Postgres users love. -With Postgres Lakehouse, you can query your Postgres data with a Lakehouse Node, -an ephemeral, scale-to-zero compute resource powered by Postgres that’s optimized for +With Postgres Lakehouse, you can query your Postgres data with a Lakehouse node, +an ephemeral, scale-to-zero compute resource powered by Postgres that's optimized for vectorized query execution over columnar data. -## Postgres Native +## Postgres native Never leave the Postgres ecosystem. @@ -33,16 +33,16 @@ columnar tables in object storage using the open source Delta Lake protocol. EDB Postgres Lakehouse is “just Postgres” – you can query it with any Postgres client, and it fully supports all Postgres queries, functions and statements, so -there’s no need to change existing queries or reconfigure business +there's no need to change existing queries or reconfigure business intelligence software. -## Vectorized Execution +## Vectorized execution -Postgres Lakehouse uses Apache DataFusion’s vectorized SQL query engine to +Postgres Lakehouse uses Apache DataFusion's vectorized SQL query engine to execute analytical queries 5-100x faster (30x on average) compared to native Postgres, while still falling back to native execution when necessary. -## Columnar Storage +## Columnar storage Postgres Lakehouse is optimized to query "Lakehouse Tables" in object storage, extending the power of open source database to open table formats. Currently, @@ -54,10 +54,10 @@ You can sync your own data from tables in transactional sources (initially, EDB Postgres® AI Cloud Service databases) into Lakehouse Tables in Storage Locations (initially, managed locations in S3 object storage). -## Fully Managed Service +## Fully managed service You can launch Postgres Lakehouse nodes using the EDB Postgres AI Cloud -Service (formerly EDB BigAnimal). Point a Lakehouse Node at a storage bucket +Service (formerly EDB BigAnimal). Point a Lakehouse node at a storage bucket with some Delta Tables in it, and get results of analytical (OLAP) queries in less time than if you queried the same data in a transactional Postgres database. @@ -65,9 +65,8 @@ Postgres Lakehouse nodes are available now for customers using EDB Postgres AI - Hosted environments on AWS, and will be rolling out to additional cloud environments soon. -## Try Today +## Try it today -It’s easy to start using Postgres Lakehouse. 
Provision a Lakehouse Node in five -minutes, and start qureying pre-loaded benchmark data like TPC-H, TPC-DS, +It's easy to start using Postgres Lakehouse. Provision a Lakehouse node in five +minutes, and start querying pre-loaded benchmark data like TPC-H, TPC-DS, Clickbench, and the 1 Billion Row challenge. - diff --git a/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx b/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx index c2012886381..4e177f4bdf1 100644 --- a/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx +++ b/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx @@ -1,12 +1,12 @@ --- title: Quick Start - EDB Postgres Lakehouse navTitle: Quick Start -description: Launch a Lakehouse Node and query sample data. +description: Launch a Lakehouse node and query sample data. --- In this guide, you will: -1. Create a Lakehouse Node +1. Create a Lakehouse node 2. Connect to the node with your preferred Postgres client 3. Query sample data (TPC-H, TPC-DS, Clickbench, or 1BRC) in object storage @@ -14,7 +14,7 @@ For more details and advanced use cases, see [reference](./reference). ## Introduction -Postgres Lakehouse is a new type of Postgres “cluster” (it’s really just one +Postgres Lakehouse is a new type of Postgres “cluster” (it's really just one node) that you can provision in EDB Postgres® AI Cloud Services (formerly known as "BigAnimal"). It includes a vectorized query engine (based on Apache [DataFusion](https://github.com/apache/datafusion)) for fast queries over @@ -39,18 +39,18 @@ restarts and will be saved as part of backup/restore operations. Otherwise, Lakehouse tables will not be part of backups, since they are ultimately stored in object storage. -### Basic Architecture +### Basic architecture -Here's "what's in the box of a Lakehouse Node: +Here's what's in the box of a Lakehouse node: -![Level 300 Architecture of Postgres Lakehouse Node](./images/level-300-architecture.png) +![Level 300 Architecture of Postgres Lakehouse node](./images/level-300-architecture.png) -## Getting Started +## Getting started -You will need an EDB Postgres AI account. Once you’ve logged in and created +You will need an EDB Postgres AI account. Once you've logged in and created a project, you can create a cluster. -### Create a Lakehouse Node +### Create a Lakehouse node You will see a “Lakehouse Analytics” option under the “Create New” dropdown on your project page: @@ -79,13 +79,13 @@ block storage device and will survive a restart or backup/restore cycle. * Only Postgres 16 is supported. For more notes about supported instance sizes, -see [reference - supported AWS instances](./reference/#supported-aws-instances). +see [Reference - Supported AWS instances](./reference/#supported-aws-instances). -## Operating a Lakehouse Node +## Operating a Lakehouse node -### Connect to the Node +### Connect to the node -You can connect to the Lakehouse Node with any Postgres client, in the same way +You can connect to the Lakehouse node with any Postgres client, in the same way that you connect to any other cluster from EDB Postgres AI Cloud Service (formerly known as BigAnimal): navigate to the cluster detail page and copy its connection string. @@ -121,9 +121,9 @@ remain untouched. storage (but it supports write queries to system tables for creating users, etc.). You cannot write directly to object storage. You cannot create new tables. * If you want to load your own data into object storage, -see [reference - bring your own data](./reference/#advanced-bring-your-own-data). 
+see [Reference - Bring your own data](./reference/#advanced-bring-your-own-data). -## Inspect the Benchmark Datasets +## Inspect the benchmark datasets Inspect the Benchmark Datasets. Every cluster has some benchmarking data available out of the box. If you are using pgcli, you can run `\dn` to see @@ -137,9 +137,9 @@ The available benchmarking datsets are: * 1 Billion Row Challenge For more details on benchmark datasets, -see [reference - available benchmarking datasets](./reference/#available-benchmarking-datasets). +see Reference - Available benchmarking datasets](./reference/#available-benchmarking-datasets). -## Query the Benchmark Datasets +## Query the benchmark datasets You can try running some basic queries: @@ -164,5 +164,5 @@ SELECT 1 Time: 0.651s ``` -Note: Do not use `search_path`! Please read the [reference](./reference) +Note: Do not use `search_path`! Read the [reference](./reference) page for more gotchas and information about syntax/query compatibility. diff --git a/advocacy_docs/edb-postgres-ai/analytics/reference.mdx b/advocacy_docs/edb-postgres-ai/analytics/reference.mdx index 8ef4c8c4e16..40632349eef 100644 --- a/advocacy_docs/edb-postgres-ai/analytics/reference.mdx +++ b/advocacy_docs/edb-postgres-ai/analytics/reference.mdx @@ -10,17 +10,17 @@ limited in terms of where you can deploy it and what data you can query with it. To get the best experience with Postgres Lakehouse, you should follow the "quick start" guide to query benchmarking data. Then you can try loading your -own data with Lakehouse Sync. If you're intrigued, please reach out to us and +own data with Lakehouse Sync. If you're intrigued, reach out to us and we can talk more about your use case and potential opportunities. This page details some of the important bits to know. -## Supported Cloud Providers and Regions +## Supported cloud providers and regions -**AWS Only**: Currently, support for all Lakehouse features (Lakehouse Nodes, +**AWS only**: Currently, support for all Lakehouse features (Lakehouse nodes, Managed Storage Locations, and Lakehouse Sync) is limited to AWS. -**EDB-Hosted Only**: "Bring Your Own Account" (BYOA) regions are NOT currently +**EDB-hosted only**: "Bring Your Own Account" (BYOA) regions are NOT currently supported for Lakehouse resources. Support is limited to ONLY **EDB Postgres® AI - Hosted** environments on AWS (a.k.a. "EDB-Hosted AWS regions"). @@ -41,7 +41,7 @@ This means you can select from one of the following regions: To be precise: -* Lakehouse Nodes can only be provisioned in EDB-hosted AWS regions +* Lakehouse nodes can only be provisioned in EDB-hosted AWS regions * Managed Storage Locations can only be created in EDB-hosted AWS regions * Lakehouse Sync can only sync from source databases in EDB-hosted AWS regions @@ -49,9 +49,9 @@ These limitations will be removed as we continue to improve the product. Eventua we will support BYOA, as well as Azure and GCP, for all Lakehouse use cases. We will also add better support for "external" buckets ("bring your own bucket"). -## Supported AWS Instances +## Supported AWS instances -When deploying a Lakehouse Node, you must choose an instance type from +When deploying a Lakehouse node, you must choose an instance type from the `m6id` family of instances. Importantly, these instances come with NVMe drives attached to them. @@ -63,7 +63,7 @@ All data on the NVMe drives will be lost when the cluster is shutdown. *etc.) is stored in an attached block storage device, and will survive a pause/resume cycle. 
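+
+Because spilled and cached data lands on the local NVMe drives, you can keep an
+eye on scratch usage with standard Postgres statistics, at least for queries
+that run through the regular Postgres executor. A sketch:
+
+```sql
+-- temp_files/temp_bytes count data written to local temporary storage.
+SELECT datname, temp_files, temp_bytes
+FROM pg_stat_database
+WHERE datname = current_database();
+```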
-**Supported Instances** +**Supported instances** | API Name | Memory | vCPUs | Cores | Storage | | --------------- | --------- | --------- | ----- | ------------------------------- | @@ -77,9 +77,9 @@ block storage device, and will survive a pause/resume cycle. | `m6id.24xlarge` | 384.0 GiB | 96 vCPUs | 48 | 5700 GB (4 \* 1425 GB NVMe SSD) | | `m6id.32xlarge` | 512.0 GiB | 128 vCPUs | 64 | 7600 GB (4 \* 1900 GB NVMe SSD) | -## Available Benchmarking Datasets +## Available benchmarking datasets -When you provision a Lakehouse Node, it comes pre-configured to point to a public +When you provision a Lakehouse node, it comes pre-configured to point to a public S3 bucket in its same region, containing sample benchmarking datasets. You can query tables in these datasets by referencing them with their schema @@ -110,9 +110,9 @@ but unquoted identifiers in Postgres are case-insensitive. For example: 🚫 `select Title from clickbench.hits;` !!! -## User Management +## User management -When you provision a Lakehouse Node, you must provide a password. We do not +When you provision a Lakehouse node, you must provide a password. We do not save this password. You will need it to login as the `edb_admin` user. This is not a superuser account, but it does have the ability to create users and roles and grants. Thus, you can either share the credentials for `edb_admin` itself, @@ -142,7 +142,7 @@ SELECT COUNT(*) FROM lineitem; SELECT COUNT(*) FROM tpch_sf_10.lineitem ``` -## Supported Queries +## Supported queries In general, **READ ONLY** queries are supported. You cannot write directly to object storage. This includes all Postgres built-in functions, statements @@ -160,7 +160,7 @@ roles, and grants. These tables are stored on the local block device, which is included in backups and restores. So you can `CREATE USER` or `CREATE ROLE` or `GRANT USAGE`, and these users/roles/grants will survive restarts and restores. -## DirectScan vs. Fallback Modes and EXPLAIN +## DirectScan vs. fallback modes and EXPLAIN Postgres Lakehouse is fastest when it can "push down" your entire query to DataFusion, the vectorized query used for handling queries when possible. (In the @@ -168,12 +168,12 @@ future, this will be more fine-grained as we add support for partial pushdowns.) Postgres Lakehouse can execute your query in two modes. First, it attempts to run the entire query using Seafowl (a dedicated columnar database based on -DataFusion). If Seafowl can’t run the entire query, for example, because it +DataFusion). If Seafowl can't run the entire query, for example, because it uses PostgreSQL-specific operations like JSON, then Postgres Lakehouse will fall back to using the PostgreSQL executor, with Seafowl streaming full table contents to it. -If your query is extremely slow, it’s possible that’s what’s happening. +If your query is extremely slow, it's possible that's what's happening. You can check which mode is being used by running an `EXPLAIN` on the query and making sure that the top-most query node is `SeafowlDirectScan`. For example: @@ -212,10 +212,10 @@ edb_admin=> explain select count from (select count(*) as count from tpch_sf_1.l Here, we can see the `SeafowlDirectScan` at the top, which means that Seafowl is running the entire query. -If you’re having trouble rewording your query to make it run fully on Seafowl, -please open a support ticket. +If you're having trouble rewording your query to make it run fully on Seafowl, +open a support ticket. 
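+
+As a rule of thumb, run a quick `EXPLAIN` before launching a long analytical
+query and check that `SeafowlDirectScan` is the top plan node. A sketch against
+the preloaded benchmark data:
+
+```sql
+-- SeafowlDirectScan on top: the whole query is pushed down to DataFusion.
+-- Anything else on top: the query will use the slower fallback path.
+EXPLAIN SELECT l_returnflag, sum(l_extendedprice)
+FROM tpch_sf_1.lineitem
+GROUP BY l_returnflag;
+```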
-## Load Data with Lakehouse Sync +## Load data with Lakehouse sync If you have a transactional database running in EDB Postgres AI Cloud Service, then you can sync tables from this database into a Managed Storage Location. @@ -223,9 +223,9 @@ then you can sync tables from this database into a Managed Storage Location. A more detailed guide for this is forthcoming. If you want to try it yourself, look in the UI for "Migrations" or "Sync to Lakehouse." -## Advanced: Bring Your Own Data +## Advanced: Bring your own data -It's possible to point your Lakehouse Node at an arbitrary S3 bucket with Delta +It's possible to point your Lakehouse node at an arbitrary S3 bucket with Delta Tables inside of it. However, this comes with some major caveats (which will eventually be resolved): @@ -234,7 +234,7 @@ eventually be resolved): * The bucket must be publicly accessible. * If you want to use a private bucket, this is technically possible, but requires some manual action on our side and your side (to assign the correct -IAM policies). Please let us know if you want to try it. We will be adding +IAM policies). Let us know if you want to try it. We will be adding proper support for private, external buckets in the near future. * The tables must be stored as [Delta Tables](http://github.com/delta-io/delta/blob/master/PROTOCOL.md) within the location * A “Delta Table” is a folder of Parquet files along with some JSON metadata. @@ -268,7 +268,7 @@ export AWS_SECRET_ACCESS_KEY="..." ### Pointing to your bucket -By default, each Lakehouse Node is configured to point to a bucket with +By default, each Lakehouse node is configured to point to a bucket with benchmarking datasets inside. To point it to a different bucket, you can call the `seafowl.set_bucket_location` function: @@ -285,7 +285,7 @@ to query data in `my_schema.my_table`: SELECT * FROM some_table; ``` -Note that using an S3 bucket that isn’t in the same region as your node +Note that using an S3 bucket that isn't in the same region as your node will 1) be slow because of cross-region latencies, and 2) will incur AWS costs (between $0.01 and $0.02 / GB) for data transfer! Currently these egress costs are not passed through to you but we do track them and reserve diff --git a/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx b/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx index a25ab20e208..3ea57971429 100644 --- a/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx +++ b/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx @@ -27,7 +27,7 @@ What follows is an example of how to run Beacon Agent as a service, specifically exit ``` - To complete the setup for local authentication with Postgres, you need to ensure your `pg_hba.conf` file is configured to allow Unix-domain socket connections. Please verify or update the following line in your `pg_hba.conf` file: + To complete the setup for local authentication with Postgres, you need to ensure your `pg_hba.conf` file is configured to allow Unix-domain socket connections. 
Verify or update the following line in your `pg_hba.conf` file: ``` local all all peer diff --git a/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx b/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx index a0285208005..ec2763cbff2 100644 --- a/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx +++ b/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx @@ -69,7 +69,7 @@ Before you begin, you need to have the following: mkdir ${HOME}/.beacon ``` - Next, configure Beacon Agent by setting the access key (the one you obtained the [Creating a machine user](create_machine_user)) and project ID: + Next, configure Beacon Agent by setting the access key (the one you obtained while [Creating a machine user](create_machine_user)) and project ID: ``` export BEACON_AGENT_ACCESS_KEY= diff --git a/advocacy_docs/edb-postgres-ai/console/estate.mdx b/advocacy_docs/edb-postgres-ai/console/estate.mdx index 156bcce627e..46434c02703 100644 --- a/advocacy_docs/edb-postgres-ai/console/estate.mdx +++ b/advocacy_docs/edb-postgres-ai/console/estate.mdx @@ -1,9 +1,9 @@ --- title: EDB Postgres AI Console - Estate navTitle: Estate -description: How to manage and integrate EDB Postgres AI Databases and more with EDB Postgres AI Console's single pane of glass. +description: How to manage and integrate EDB Postgres AI databases and more with EDB Postgres AI Console's single pane of glass. --- -## What is EDB Postgres AI Estate +## What is EDB Postgres AI Estate? The EDB Postgres® AI Estate is a component of the EDB Postgres AI Console that provides a single pane of glass for managing and integrating EDB Postgres AI Databases and EDB Postgres AI Agents. The Estate provides a centralized location for managing the lifecycle of EDB Postgres AI Databases and EDB Postgres AI Agents, including provisioning, scaling, and monitoring. The Estate also provides a centralized location for managing the integration of EDB Postgres AI Databases and EDB Postgres AI Agents with the EDB Postgres AI Console's single pane of glass. \ No newline at end of file diff --git a/advocacy_docs/edb-postgres-ai/console/getstarted.mdx b/advocacy_docs/edb-postgres-ai/console/getstarted.mdx index ffb9e5ed18e..a04cde879ed 100644 --- a/advocacy_docs/edb-postgres-ai/console/getstarted.mdx +++ b/advocacy_docs/edb-postgres-ai/console/getstarted.mdx @@ -1,10 +1,10 @@ --- -title: EDB Postgres AI Console - Get Started -navTitle: Get Started +title: EDB Postgres AI Console - Get started +navTitle: Get started description: Get started with the EDB Postgres AI Console. --- -The EDB Postgres® AI Console is a web-based user interface that provides a single pane of glass for managing and monitoring EDB Postgres AI Database Cloud Service and EDB Postgres AI Databases. The EDB Postgres AI Console provides a unified view of the EDB Postgres AI Database Cloud Service and EDB Postgres AI Databases, allowing users to manage and monitor their databases, users, and resources from a single interface. +The EDB Postgres® AI Console is a web-based user interface that provides a single pane of glass for managing and monitoring EDB Postgres AI Database Cloud Service and EDB Postgres AI databases. The EDB Postgres AI Console provides a unified view of the EDB Postgres AI Database Cloud Service and EDB Postgres AI databases, allowing users to manage and monitor their databases, users, and resources from a single interface. 
## Accessing the EDB Postgres AI Console diff --git a/advocacy_docs/edb-postgres-ai/databases/databases.mdx b/advocacy_docs/edb-postgres-ai/databases/databases.mdx index 33969ef56b7..171adf5f6ed 100644 --- a/advocacy_docs/edb-postgres-ai/databases/databases.mdx +++ b/advocacy_docs/edb-postgres-ai/databases/databases.mdx @@ -1,5 +1,5 @@ --- -title: EDB Postgres AI Databases +title: EDB Postgres AI databases navTitle: Databases description: Deploy EDB Postgres AI Databases on-premises with the EDB Postgres AI Estate and Agent components. --- @@ -8,20 +8,18 @@ EDB Postgres® databases are the core of the EDB Postgres AI platform. EDB Postg ## EDB Postgres Advanced Server (EPAS) -EDB Postgres Advanced Server (EPAS) is an enhanced version of PostgreSQL that is designed to meet the needs of large-scale, mission-critical enterprise workloads. EPAS is built on the open source PostgreSQL database, and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments. These include Oracle compatibility and transparent data encryption. EPAS is available for self-managed deployment and on the EDB Postgres AI Cloud Service. +EDB Postgres Advanced Server is an enhanced version of PostgreSQL that is designed to meet the needs of large-scale, mission-critical enterprise workloads. EDB Postgres Advanced Server is built on the open source PostgreSQL database, and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments. These include Oracle compatibility and transparent data encryption. EDB Postgres Advanced Server is available for self-managed deployment and on the EDB Postgres AI Cloud Service. * Read more about [EDB Postgres Advanced Server](/epas/latest/). ## EDB Postgres Extended Server (PGE) -EDB Postgres Extended Server (PGE) is an enhanced version of PostgreSQL that is designed to meet the needs of large-scale, mission-critical enterprise workloads. PGE is built on the open source PostgreSQL database, and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments. This includes transparent data encryption. PGE is available for self-managed deployment and on the EDB Postgres AI Cloud Service. +EDB Postgres Extended Server is an enhanced version of PostgreSQL that is designed to meet the needs of large-scale, mission-critical enterprise workloads. PGE is built on the open source PostgreSQL database, and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments. This includes transparent data encryption. PGE is available for self-managed deployment and on the EDB Postgres AI Cloud Service. * Read more about [EDB Postgres Extended Server](/pge/latest/). -## EDB Postgres Distributed +## EDB Postgres Distributed (PGD) -EDB Postgres Distributed (PGD) is a high availability solution for EDB Postgres databases. PGD provides a distributed database environment that is designed to ensure high availability and fault tolerance for mission-critical workloads. PGD can be used with EPAS, PGE or PostgreSQL databases. PGD is available for self-managed deployment and on the EDB Postgres AI Cloud Service (as the Distributed High Availability option). +EDB Postgres Distributed is a high availability solution for EDB Postgres databases. PGD provides a distributed database environment that is designed to ensure high availability and fault tolerance for mission-critical workloads. 
PGD can be used with EPAS, PGE or PostgreSQL databases. PGD is available for self-managed deployment and on the EDB Postgres AI Cloud Service (as the Distributed High Availability option). * Read more about [EDB Postgres Distributed](/pgd/latest/). - - diff --git a/advocacy_docs/edb-postgres-ai/databases/index.mdx b/advocacy_docs/edb-postgres-ai/databases/index.mdx index ef7bd16c72e..ff20568cef7 100644 --- a/advocacy_docs/edb-postgres-ai/databases/index.mdx +++ b/advocacy_docs/edb-postgres-ai/databases/index.mdx @@ -1,5 +1,5 @@ --- -title: EDB Postgres AI Databases +title: EDB Postgres AI databases navTitle: Databases indexCards: simple iconName: Database @@ -11,7 +11,7 @@ navigation: Building on decades of Postgres expertise, the EDB Postgres® databases are the core of the EDB Postgres AI platform. EDB Postgres Advanced Server can take on Oracle workloads, while EDB Postgres Extended Server is designed for large-scale, mission-critical enterprise workloads. EDB Postgres Distributed provides high availability and fault tolerance for mission-critical workloads. -For here you can read more about the [databases](databases) that power EDB Postgres AI, and how they can be deployed on-premises with the EDB Postgres AI Estate and Agent components. +From here you can read more about the [databases](databases) that power EDB Postgres AI, and how they can be deployed on-premises with the EDB Postgres AI Estate and Agent components. You can also learn about the [EDB Postgres AI Cloud Service](cloudservice) and how it can be used to manage your database estate. diff --git a/advocacy_docs/edb-postgres-ai/databases/options.mdx b/advocacy_docs/edb-postgres-ai/databases/options.mdx index 78068d666ad..3aaccfb577f 100644 --- a/advocacy_docs/edb-postgres-ai/databases/options.mdx +++ b/advocacy_docs/edb-postgres-ai/databases/options.mdx @@ -1,17 +1,17 @@ --- -title: EDB Postgres AI Databases - Deployment Options -navTitle: Deployment Options -description: High Availability and other options available for EDB Postgres AI Databases and on EDB Postgres AI Cloud Service. +title: EDB Postgres AI databases - Deployment options +navTitle: Deployment options +description: High availability and other options available for EDB Postgres AI databases and on EDB Postgres AI Cloud Service. deepToC: true --- -## Availability Options +## Availability options -### Single Instance +### Single instance Single instance databases are great for development and testing, but for production workloads, you need to consider high availability and fault tolerance. -### Primary/Secondary Replication +### Primary/secondary replication Primary/Secondary replication is a common high availability solution for databases. In this configuration, a primary database server is responsible for processing read and write requests. A secondary database server is configured to replicate the primary database server. If the primary database server fails, the secondary database server can take over and become the primary database server. @@ -19,7 +19,7 @@ This configuration provides fault tolerance and high availability in a particula This is a standard configuration option on EDB Postgres AI Cloud Service. -### Distributed High Availability +### Distributed high availability High availability is a critical requirement for mission-critical workloads. EDB Postgres Distributed (PGD) provides a distributed database environment that is designed to ensure high availability and fault tolerance for mission-critical workloads. 
PGD can be used with EPAS, PGE or PostgreSQL databases. PGD is available for self-managed deployment and on the EDB Postgres AI Cloud Service (as the Distributed High Availability option). diff --git a/advocacy_docs/edb-postgres-ai/overview/concepts.mdx b/advocacy_docs/edb-postgres-ai/overview/concepts.mdx index c2c54d8ad5f..712a5d5d54f 100644 --- a/advocacy_docs/edb-postgres-ai/overview/concepts.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/concepts.mdx @@ -1,10 +1,10 @@ --- -title: EDB Postgres AI Overview - Concepts +title: EDB Postgres AI overview - Concepts navTitle: Concepts description: A look at the concepts that underpin EDB Postgres AI. --- -EDB Postgres® AI takes EDB’s leading expertise in Postgres and expands the scope of Postgres to address modern challenges. From simplifying your database estate management to infusing AI deep into Postgres and putting it to work to bring all your data under one analytical eye. +EDB Postgres® AI takes EDB's leading expertise in Postgres and expands the scope of Postgres to address modern challenges. From simplifying your database estate management to infusing AI deep into Postgres and putting it to work to bring all your data under one analytical eye. EDB Postgres AI is composed of multiple elements which come together to deliver a unified and powerful experience: @@ -15,7 +15,7 @@ EDB Postgres AI is composed of multiple elements which come together to deliver * ### [EDB Postgres AI Agent](/edb-postgres-ai/console/agent) * On premises databases can be brought under one manageable Console view with the Agent enabling an unprecedented view of diverse deployments. ## [EDB Postgres AI - Databases](/edb-postgres-ai/databases) -* All of EDB’s database expertise can be found in EDB Postgres Advanced Server and EDB Postgres Extended Server. +* All of EDB's database expertise can be found in EDB Postgres Advanced Server and EDB Postgres Extended Server. * Oracle compatibility, transparent data encryption and more. They provide the data fabric on which EDB Postgres AI operates. * Combined with EDB Postgres Distributed, they can also provide a high availability environment for your data. * All of these components are available on the EDB Postgres AI Cloud Service, and managed through the EDB Postgres AI Console. @@ -25,13 +25,13 @@ EDB Postgres AI is composed of multiple elements which come together to deliver * High availability with an active-active mesh of Postgres instances, EDB Postgres Distributed provides a robust and scalable environment for your data. * ### [EDB Postgres AI Cloud Service](/edb-postgres-ai/databases/cloudservice) * Not just databases, but driven by databases, Cloud Service provides a global platform for delivering new elements of EDB Postgres AI efficiently and effectively. -## [EDB Postgres AI - Lakehouse Analytics](/edb-postgres-ai/analytics) -* Filtering out the data noise and revealing insights and value, Lakehouse Analytics brings both structured relational data in Postgres and unstructured data in object storage together for exploration. At the heart of Analytics is a custom built store for this data: -* Built to bring structured and unstructured data together, Lakehouse Nodes support numerous formats to bring your data in from the cold, ready to be analyzed. 
+## [EDB Postgres AI - Lakehouse analytics](/edb-postgres-ai/analytics) +* Filtering out the data noise and revealing insights and value, Lakehouse analytics brings both structured relational data in Postgres and unstructured data in object storage together for exploration. At the heart of Analytics is a custom built store for this data: +* Built to bring structured and unstructured data together, Lakehouse nodes support numerous formats to bring your data in from the cold, ready to be analyzed. ## [EDB Postgres AI - AI/ML](/edb-postgres-ai/ai-ml) * Postgres has proven its capability as a flexible data environment, and Vector data, the core of generative AI, is already infused into EDB Postgres AI providing a platform for a range of practical and effective AI/ML solutions. A technical preview of this capability is available for the pgai extension. -## [EDB Postgres AI - Platforms and Tools](/edb-postgres-ai/tools) -* Postgres’s extensions are a source of its power and popularity, and are one of the categories that fall within this element of EDB Postgres AI. -* Extensions sit alongside existing applications like Postgres Enterprise Manager, Barman, and Query Advisor as tools that allow you to leverage Postgres’s capabilities. -* Also within this element are EDB’s Migration tools, Migration Toolkit and Migration Portal. The Migration Portal is among the first EDB tools to include embedded AI with an AI copilot that can assist users in developing migration strategies. +## [EDB Postgres AI - Platforms and tools](/edb-postgres-ai/tools) +* Postgres extensions are a source of its power and popularity, and are one of the categories that fall within this element of EDB Postgres AI. +* Extensions sit alongside existing applications like Postgres Enterprise Manager, Barman, and Query Advisor as tools that allow you to leverage Postgres's capabilities. +* Also within this element are EDB's Migration tools, Migration Toolkit and Migration Portal. The Migration Portal is among the first EDB tools to include embedded AI with an AI copilot that can assist users in developing migration strategies. diff --git a/advocacy_docs/edb-postgres-ai/overview/guide.mdx b/advocacy_docs/edb-postgres-ai/overview/guide.mdx index dde1ee8f16b..287b2076acd 100644 --- a/advocacy_docs/edb-postgres-ai/overview/guide.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/guide.mdx @@ -1,5 +1,5 @@ --- -title: EDB Postgres AI Overview - Guide +title: EDB Postgres AI overview - Guide navTitle: Guide description: What do you want to use EDB Postgres AI for? Start navigating the documentation here. --- @@ -27,4 +27,3 @@ You'll want to look at the [EDB Postgres® AI Platform Agent](/edb-postgres-ai/c ## Do you want to know more about the EDB Postgres AI Cloud Service? You'll want to look at the [EDB Postgres® AI Cloud Service](/edb-postgres-ai/databases/cloudservice) documentation, which covers the Cloud Service and its databases. - diff --git a/advocacy_docs/edb-postgres-ai/overview/index.mdx b/advocacy_docs/edb-postgres-ai/overview/index.mdx index 79037cd1aed..0f5bfc7368c 100644 --- a/advocacy_docs/edb-postgres-ai/overview/index.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/index.mdx @@ -1,17 +1,15 @@ --- -title: EDB Postgres AI Overview +title: EDB Postgres AI overview navTitle: Overview indexCards: simple iconName: Earth deepToC: true --- -EDB Postgres® AI is a new era for EDB. 
With EDB Postgres AI, customers can now leverage EDB’s enterprise-grade Postgres offerings to support not just their mission critical transactional workloads, but also their analytical and AI applications. This also means that, in addition to the core transactional database releases you have come to expect from EDB, we will now be delivering regular updates to our analytics, AI, and platform capabilities. +EDB Postgres® AI is a new era for EDB. With EDB Postgres AI, customers can now leverage EDB's enterprise-grade Postgres offerings to support not just their mission critical transactional workloads, but also their analytical and AI applications. This also means that, in addition to the core transactional database releases you have come to expect from EDB, we will now be delivering regular updates to our analytics, AI, and platform capabilities. In this overview section we will: * [Introduce the concepts that underpin EDB Postgres AI](concepts) * [Provide a guide to help you navigate the documentation](guide) * [Share the latest features released and updated in EDB Postgres AI](releasenotes) - - diff --git a/advocacy_docs/edb-postgres-ai/overview/releasenotes.mdx b/advocacy_docs/edb-postgres-ai/overview/releasenotes.mdx index d2cf0d36642..fb885564e59 100644 --- a/advocacy_docs/edb-postgres-ai/overview/releasenotes.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/releasenotes.mdx @@ -1,11 +1,11 @@ --- -title: EDB Postgres AI Overview - Release Notes -navTitle: Release Notes +title: EDB Postgres AI Overview - Release notes +navTitle: Release notes description: The current features released and updated in EDB Postgres AI. --- EDB Postgres® AI is a a new era for EDB. With EDB Postgres AI, customers can now -leverage EDB’s enterprise-grade Postgres offerings to support not just their +leverage EDB's enterprise-grade Postgres offerings to support not just their mission critical transactional workloads, but also their analytical and AI applications. This also means that, in addition to the core transactional database releases you have come to expect from EDB, we will now be delivering @@ -16,7 +16,7 @@ functionality to support the platform's vision. This release includes analytical and vector database capabilities, single pane of glass management and observability for hybrid data estates, and an AI migration copilot. -## EDB Analytics and AI Updates +## EDB analytics and AI updates Customers can now launch Postgres Lakehouse nodes using the EDB Postgres AI Cloud Service (formerly EDB BigAnimal) to get results of analytical (OLAP) @@ -25,16 +25,16 @@ nodes are available now for customers using EDB Postgres AI - Hosted environments on AWS, and will be rolling out to additional cloud environments soon. -Postgres Lakehouse uses Apache DataFusion’s vectorized SQL query engine to +Postgres Lakehouse uses Apache DataFusion's vectorized SQL query engine to execute analytical queries 5-100x faster (30x on average) compared to native Postgres, while still falling back to native execution when necessary. Postgres -Lakehouse nodes run either EPAS or PGE as the Postgres engine, with data for +Lakehouse nodes run either EDB Postgres Advanced Server or PGE as the Postgres engine, with data for analytics stored as columnar tables in object storage using the open source Delta Lake protocol. Customers can sync tables from transactional sources -(initially, EDB Postgres AI Cloud Service databases) into Lakehouse Tables in -Managed Storage Locations (initially, S3 object storage buckets). 
+(initially, EDB Postgres AI Cloud Service databases) into Lakehouse tables in +managed storage locations (initially, S3 object storage buckets). -### Technical Preview of [EDB pgai extension](/edb-postgres-ai/ai-ml) +### Technical preview of [EDB pgai extension](/edb-postgres-ai/ai-ml) Customers can now access a technical preview of the new EDB pgai extension, which seamlessly integrates and manages AI data for enterprise workloads with @@ -51,7 +51,7 @@ applications utilize a powerful combination of retrieval systems and language models to provide accurate and context-aware responses to user queries. Learn more and enroll in the tech preview [here](https://info.enterprisedb.com/pgai-preview). -## EDB Platform updates +## EDB platform updates ### [EDB Postgres AI Platform Agent](/edb-postgres-ai/console) release and platform support @@ -63,21 +63,21 @@ Postgres AI Platform interface, with data collected from each database at a configurable level. Additionally, EDB Postgres All Platform is available on EDB-supported x86 Linux distros. -## [EDB Postgres AI Database](/edb-postgres-ai/databases) updates +## [EDB Postgres AI database](/edb-postgres-ai/databases) updates -### EDB Database Server updates +### EDB database server updates -As part of EDB’s support for the open source community’s quarterly release -schedule, we completed PGE and EPAS merge updates from the latest upstream +As part of EDB's support for the open source community's quarterly release +schedule, we completed PGE and EDB Postgres Advanced Server merge updates from the latest upstream PostgreSQL, including the following: -| Database Distributions | Versions Supported | +| Database distributions | Versions supported | |------------------------------|------------------------------------------------| | PostgreSQL | 16.3, 15.7, 14.12, 13.15 and 12.19 | | EDB Postgres Extended Server | 16.3.0, 15.7.0, 14.12, 13.15 and 12.19 | | EDB Postgres Advanced Server | 16.3.0, 15.7.0, 14.12.0, 13.15.21 and 12.19.24 | -### EDB Postgres® Distributed 5.5 Release Enhancements +### EDB Postgres® Distributed 5.5 release enhancements #### Read scalability enhancements EDB Postgres Distributed users can now increase client application performance @@ -94,5 +94,4 @@ Distributed (and all EDB database version types), which enables other SELECT queries to be executed on the parent table while the DETACH operation is underway. -For all the Q2 EDB announcements, please visit the [EDB blog](https://www.enterprisedb.com/blog/edb-postgres-ai-q2-release-highlights). - +For all the Q2 EDB announcements, visit the [EDB blog](https://www.enterprisedb.com/blog/edb-postgres-ai-q2-release-highlights). diff --git a/advocacy_docs/edb-postgres-ai/tools/backup.mdx b/advocacy_docs/edb-postgres-ai/tools/backup.mdx index c8d85d56056..bbb704aa24b 100644 --- a/advocacy_docs/edb-postgres-ai/tools/backup.mdx +++ b/advocacy_docs/edb-postgres-ai/tools/backup.mdx @@ -1,7 +1,7 @@ --- -title: EDB Postgres AI Tools - Backup and Recovery -navTitle: Backup and Recovery -description: The backup and recovery tools available in EDB Postgres AI Tools +title: EDB Postgres AI Tools - Backup and recovery +navTitle: Backup and recovery +description: The backup and recovery tools available in EDB Postgres AI Tools. --- [Barman](/supported-open-source/barman/) is a tool for managing backup and recovery of PostgreSQL databases. It is designed for business critical databases and provides features such as backup catalogues, incremental backup, retention policies, and remote recovery. 
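+
+As a sketch of the day-to-day Barman workflow (the server name `pg` is
+hypothetical and assumes a server already configured in Barman):
+
+```shell
+barman backup pg                              # take a new base backup
+barman list-backup pg                         # list the backup catalogue
+barman recover pg latest /var/lib/pgsql/data  # restore the latest backup
+```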
diff --git a/advocacy_docs/edb-postgres-ai/tools/migration-and-ai.mdx b/advocacy_docs/edb-postgres-ai/tools/migration-and-ai.mdx index 4ce65a71b7a..f8655c3373f 100644 --- a/advocacy_docs/edb-postgres-ai/tools/migration-and-ai.mdx +++ b/advocacy_docs/edb-postgres-ai/tools/migration-and-ai.mdx @@ -1,17 +1,17 @@ --- title: EDB Postgres AI Tools - Migration and AI navTitle: Migration and AI -description: The Migration offering of EDB Postgres AI Tools includes an innovative migration copilot. +description: The migration offering of EDB Postgres AI Tools includes an innovative migration copilot. --- EDB Postgres® AI Tools Migration Portal offers an [AI copilot](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/) to assist users who are migrating their databases to EDB Postgres. The AI copilot is an AI-driven chatbot tool that helps users with the migration process. The AI copilot is designed to help users with the following tasks: -- **General Migration Assistance**: The AI copilot can help users with general migration questions. +- **General migration assistance**: The AI copilot can help users with general migration questions. For example, users can request information about available tools, and source and target database compatibility. -- **Migration Planning**: The AI copilot can help users plan their migration, and obtain an overview of the end-to-end migration paths. -- **Migration Assessment**: The AI copilot can help users assess their migration readiness. +- **Migration planning**: The AI copilot can help users plan their migration, and obtain an overview of the end-to-end migration paths. +- **Migration assessment**: The AI copilot can help users assess their migration readiness. For example, if there are compatibility issues between source and target databases, the AI Copilot can suggest compatible query alternatives. The AI copilot is designed to be user-friendly and easy to use. Users can interact with the AI copilot using natural language and improve the quality of answers with [good prompting](/migration_portal/latest/03_mp_using_portal/mp_ai_copilot/ai_good_prompts/). The AI copilot is also designed to be context-aware, so it can provide users with relevant information based on the context of the conversation. \ No newline at end of file From 5f41200d4743c73472933fad8c560f59e0dd1d98 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 22 Feb 2024 16:05:08 -0500 Subject: [PATCH 20/51] Edits to datadog PR5238 --- .../third_party_integrations/datadog.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx index 4fd8d07ea66..d7630aa9efa 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx @@ -51,8 +51,8 @@ Datadog provides [usage metrics](https://docs.datadoghq.com/account_management/b Be aware of the following cost considerations: -* Datadog bills for each monitored Kubernetes node as an "Infrastructure Host". -* Datadog may also bill for each Postgres container and monitoring agent container at the "Container Monitoring" rate. +* Datadog bills for each monitored Kubernetes node as an "Infrastructure Host." 
+* Datadog might also bill for each Postgres container and monitoring agent container at the "Container Monitoring" rate. * Datadog counts some of the metrics sent by BigAnimal as custom metrics. Custom metrics dimensions above the free limit are billable at a rate set in the Datadog price list. * The Datadog [metrics without limits feature](https://docs.datadoghq.com/metrics/metrics-without-limits/) can limit cardinality-based billing for custom metrics. However, it enables ingestion-based billing instead, so the overall price might actually be greater. From ae2ede91f2c672b7738cf9f30f0548dd003f3488 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 7 Mar 2024 15:27:20 -0500 Subject: [PATCH 21/51] Edits to PGD PR5293 --- product_docs/docs/pgd/5/consistency/eager.mdx | 8 +-- .../pgd/5/durability/commit-scope-rules.mdx | 70 ++++++++++--------- .../docs/pgd/5/durability/group-commit.mdx | 52 +++++++------- .../pgd/5/durability/synchronous_commit.mdx | 17 +++-- product_docs/docs/pgd/5/known_issues.mdx | 4 +- .../reference/nodes-management-interfaces.mdx | 18 +++-- product_docs/docs/pgd/5/reference/routing.mdx | 1 - .../pgd/5/rel_notes/pgd_5.4.0_rel_notes.mdx | 52 +++++++------- 8 files changed, 109 insertions(+), 113 deletions(-) diff --git a/product_docs/docs/pgd/5/consistency/eager.mdx b/product_docs/docs/pgd/5/consistency/eager.mdx index 47d9285aa65..eff14926bec 100644 --- a/product_docs/docs/pgd/5/consistency/eager.mdx +++ b/product_docs/docs/pgd/5/consistency/eager.mdx @@ -40,7 +40,7 @@ SELECT bdr.add_commit_scope( !!! note Upgrading? The old `global` commit scope doesn't exist anymore. The above command creates a scope that's the same as the old `global` scope with `bdr.global_commit_timeout` set to `60s`. -The commit scope group for the eager conflict resolution rule can only be `ALL` or `MAJORITY`. Where `ALL` is used, the `commit_decision` setting must also be set to `raft`. +The commit scope group for the Eager conflict resolution rule can only be `ALL` or `MAJORITY`. Where `ALL` is used, the `commit_decision` setting must also be set to `raft`. ## Error handling @@ -54,9 +54,9 @@ In case of an origin node failure, the remaining nodes eventually (after at leas With single-node Postgres, or even with PGD in its default asynchronous replication mode, errors at `COMMIT` time are rare. The added synchronization -step due to the use of a commit scope using eager +step due to the use of a commit scope using Eager for conflict resolution also adds a source of errors. Applications need to be -prepared to properly handle such errors (usually by applying a retry loop). +prepared to properly handle such errors, usually by applying a retry loop. The rate of aborts depends solely on the workload. Large transactions changing many rows are much more likely to conflict with other concurrent transactions. @@ -66,7 +66,7 @@ The rate of aborts depends solely on the workload. Large transactions changing m Adding a synchronization step due to the use of a commit scope means more communication between the nodes, resulting in more latency at commit time. When -ALL is used in the commit scope, this also means that the availability of the +`ALL` is used in the commit scope, this also means that the availability of the system is reduced, since any node going down causes transactions to fail. 
If one or more nodes are lagging behind, the round-trip delay in getting diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx index 26cfb4fcee5..bce995c9ce6 100644 --- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx @@ -2,49 +2,49 @@ title: Commit scope rules --- -Commit scope rules are at the core of the commit scope mechanism. They define what the commit scope will enforce. +Commit scope rules are at the core of the commit scope mechanism. They define what the commit scope enforces. -Commit scope rules are composed of one or more operations combined with an `AND`. +Commit scope rules are composed of one or more operations connected by an `AND`. -Each operation is made up of two (or three) parts, the commit scope group, an optional confirmation level, and the kind of commit scope (which may have it's own parameters). +Each operation is made up of two or three parts: the commit scope group, an optional confirmation level, and the kind of commit scope, which can have its own parameters. ``` -commit_scope_group [ confimation_level ] commit_scope_kind +commit_scope_group [ confirmation_level ] commit_scope_kind ``` -A full formal syntax diagram is available in the [commit scope reference section.](/pgd/latest/reference/commit-scopes/#commit-scope-syntax). +A full formal syntax diagram is available in the [Commit scopes](/pgd/latest/reference/commit-scopes/#commit-scope-syntax) reference. -If we look at a typical commit scope rule, we can now break it down into its components: +This typical commit scope rule can be broken down into its components: ``` ANY 2 (group) GROUP COMMIT ``` -The `ANY 2 (group)` is the commit scope group, specifying, for the rule, of which nodes need to respond, confirming they have processed the transaction. Specifically, here, any two nodes from the named group must confirm. +The `ANY 2 (group)` is the commit scope group specifying, for the rule, which nodes need to respond and confirm they processed the transaction. In this example, any two nodes from the named group must confirm. -There is no confirmation level here, but that only means that the default is used. You can think of the rule in full as: +No confirmation level is specified, which means that the default is used. You can think of the rule in full, then, as: ``` ANY 2 (group) ON visible GROUP COMMIT ``` -The visible setting means the nodes are able to confirm once the all the transaction's changes are flushed to disk and visible to other transactions. +The `visible` setting means the nodes can confirm once all the transaction's changes are flushed to disk and visible to other transactions. -The last part of this operation is the commit scope kind, which here is `GROUP COMMIT`, a synchronous two-phase commit which will be confirmed when any two nodes in the named group confirm they have flushed and made visible the transactions changes. +The last part of this operation is the commit scope kind, which in this example is `GROUP COMMIT`. `GROUP COMMIT` is a synchronous two-phase commit that's confirmed when any two nodes in the named group confirm they've flushed the transactions changes and made them visible. ## The commit scope group -There are three kinds of commit scope group, `ANY`, `ALL` and `MAJORITY`. They are all followed by a parenthesized list of one or more groups, which combine to make a pool of nodes that this operation will apply to. 
This list can be preceded by `NOT` which inverts to pool to all other groups apart from those in the list. Witness nodes are not eligible to be included in this pool as they do not replicate data. +There are three kinds of commit scope groups: `ANY`, `ALL`, and `MAJORITY`. They're all followed by a list of one or more groups in parentheses. This list of groups combines to make a pool of nodes this this operation applies to. This list can be preceded by `NOT`, which inverts to pool to all other groups apart from those in the list. Witness nodes aren't eligible to be included in this pool, as they don't replicate data. -- `ANY n` — is followed by an integer value, "n". It translates to any "n" nodes in the listed groups nodes. -- `ALL` — is followed by the groups and translates to all nodes in the listed groups nodes. -- `MAJORITY` — is followed by the groups and translates to requiring a half, plus one, of the listed groups nodes to confirm to give a majority. -- `ANY n NOT` — is followed by an integer value, "n". It translates to any "n" nodes that are not in the listed groups nodes. -- `ALL NOT` — is followed by the groups and translates to all nodes that are not in the listed group's nodes. -- `MAJORITY NOT` — is followed by the groups and translates to requiring a half, plus one, of the nodes that are not in the listed groups nodes to confirm to give a majority. +- `ANY n` is followed by an integer value, `n`. It translates to any `n` nodes in the listed group's nodes. +- `ALL` is followed by the groups and translates to all nodes in the listed groups nodes. +- `MAJORITY` is followed by the groups and translates to requiring a half, plus one, of the listed group's nodes to confirm, to give a majority. +- `ANY n NOT` is followed by an integer value, `n`. It translates to any `n` nodes that aren't in the listed group's nodes. +- `ALL NOT` is followed by the groups and translates to all nodes that aren't in the listed group's nodes. +- `MAJORITY NOT` is followed by the groups and translates to requiring a half, plus one, of the nodes that aren't in the listed group's nodes to confirm, to give a majority. -## The confirmation Level +## The confirmation level PGD nodes can send confirmations for a transaction at different times. In increasing levels of protection, from the perspective of the confirming node, these are: @@ -53,52 +53,54 @@ PGD nodes can send confirmations for a transaction at different times. In increa - `durable` — Confirms the transaction after all of its changes are flushed to disk. - `visible` (default) — Confirms the transaction after all of its changes are flushed to disk and it's visible to concurrent transactions. -In rules for commit scopes, you can append these confirmation levels to the node group definition in parentheses with `ON` as follows: +In rules for commit scopes, you can append these confirmation levels to the node group definition in parentheses with `ON`, as follows: - `ANY 2 (right_dc) ON replicated` - `ALL (left_dc) ON visible` (default) - `ALL (left_dc) ON received AND ANY 1 (right_dc) ON durable` !!! Note -If you are familiar with Postgresql's `synchronous_standby_names` feature, be aware that while the grammar for `synchronous_standby_names` and commit scopes can look similar, there is a subtle difference. The former doesn't account for the origin node, but the latter does. For example `synchronous_standby_names = 'ANY 1 (..)'` is equivalent to a commit scope of `ANY 2 (...)`. 
This difference makes reasoning about majority easier and reflects that the origin node also contributes to the durability and visibility of the transaction. +If you're familiar with PostgreSQL's `synchronous_standby_names` feature, be aware that while the grammar for `synchronous_standby_names` and commit scopes can look similar, there's a subtle difference. The former doesn't account for the origin node, but the latter does. For example, `synchronous_standby_names = 'ANY 1 (..)'` is equivalent to a commit scope of `ANY 2 (...)`. This difference makes reasoning about majority easier and reflects that the origin node also contributes to the durability and visibility of the transaction. !!! -## The Commit Scope kind +## The commit scope kinds -There are, currently, four commit scope kinds. Each of them has their own page, so we'll be summarizing and linking to them here: +Currently, there are four commit scope kinds. The following is a summary, with links to more details. ### `GROUP COMMIT` -Group commit is a synchronous two-phase commit which will be confirmed according to the requirements of the commit scope group. `GROUP COMMIT` has a number of options which control whether transactions should be tracked over interruptions (boolean, defaults to off), how conflicts should be resolved (`async` or `eager`, defaults to `async`) and how a consensus is obtained (`group`, `partner` or `raft`, defaults to `group`). +Group Commit is a synchronous, two-phase commit that's confirmed according to the requirements of the commit scope group. `GROUP COMMIT` has options that control: -For further details see [`GROUP COMMIT`](group-commit). +- Whether to track transactions over interruptions (Boolean, defaults to off) +- How to resolve conflicts (`async` or `eager`, defaults to `async`) +- How to obtain a consensus (`group`, `partner` or `raft`, defaults to `group`) + +For more details, see [`GROUP COMMIT`](group-commit). ### `CAMO` -Camo, Commit At Most Once, allows the client/application, origin node and partner node to ensure that a transaction is committed to the database at most once. Because the client is involved in the process, the application will require modifications to participate in the CAMO process. +Commit At Most Once, or CAMO, allows the client/application, origin node, and partner node to ensure that a transaction is committed to the database at most once. Because the client is involved in the process, the application requires modifications to participate in the CAMO process. -For further details see [`CAMO`](camo). +For more details, see [`CAMO`](camo). ### `LAG CONTROL` -With Lag control, when the system's replication performance exceeds specified limits, a commit delay can be automatically injected into client interaction with the database, providing a back pressure on clients. Lag control has parameters to set the maximum commit delay that can be exerted, and limits in terms of time to process or queue size which will trigger increases in that commit delay. +With Lag Control, when the system's replication performance exceeds specified limits, a commit delay can be automatically injected into client interaction with the database, providing a back pressure on clients. Lag Control has parameters to set the maximum commit delay that can be exerted. It also has limits in terms of time to process or queue size that trigger increases in that commit delay. -For further details see [`LAG CONTROL`](lag-control) +For more details, see [`LAG CONTROL`](lag-control). 
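To make the Lag Control summary above concrete, here's a minimal sketch of creating a commit scope that uses it. It reuses the `bdr.add_commit_scope()` call shape shown elsewhere in these patches; the scope name and group name are placeholders, and the 50MB threshold is only an example value, not a recommendation:

```sql
SELECT bdr.add_commit_scope(
    commit_scope_name := 'example_lag_scope',   -- placeholder name
    origin_node_group := 'left_dc',             -- placeholder group
    rule := 'ANY 1 (left_dc) LAG CONTROL (MAX_LAG_SIZE = ''50MB'')',
    wait_for_ready := true
);
```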
### `SYNCHRONOUS_COMMIT`
 
-Synchronous Commit is a commit scope option which is designed to be like the legacy `synchronous_commit` option, but accessible within the commit scope environment. Unlike `GROUP COMMIT` it is a synchronous non-two-phase commit operation, with no parameters. The preceding commit scope group controls what groups and confirmation requirements the `SYNCHRONOUS_COMMIT` will use.
+Synchronous Commit is a commit scope option that's designed to be like the legacy `synchronous_commit` option, but it's accessible in the commit scope environment. Unlike `GROUP COMMIT`, it's a synchronous non-two-phase commit operation, and it has no parameters. The preceding commit scope group controls the groups and confirmation requirements the `SYNCHRONOUS_COMMIT` uses.
 
-For further details see [`SYNCHRONOUS_COMMIT`](synchronous_commit)
+For more details, see [`SYNCHRONOUS_COMMIT`](synchronous_commit).
 
 ## Combining rules
 
-A rule can have multiple operations, combined with an `AND` to form a single rule. For example,
+A rule can have multiple operations connected by an `AND` to form a single rule. For example:
 
 ```
 MAJORITY (Region_A) SYNCHRONOUS_COMMIT AND ANY 1 (Region_A) LAG CONTROL (MAX_LAG_SIZE = '50MB')
 ```
 
-The first operation sets up a synchronous commit against a majority of `Region_A`, the second operation adds lag control which will start pushing the commit delay up when any one of the nodes in `Region_A` has more than 50MB of lag. This combination of operations allows the lag control to operate when any node is lagging.
-
-
+The first operation sets up a synchronous commit against a majority of `Region_A`. The second operation adds lag control that starts pushing the commit delay up when any one of the nodes in `Region_A` has more than 50MB of lag. This combination of operations allows the lag control to operate when any node is lagging.
diff --git a/product_docs/docs/pgd/5/durability/group-commit.mdx b/product_docs/docs/pgd/5/durability/group-commit.mdx
index f705ff2b6f7..2c0c095dff2 100644
--- a/product_docs/docs/pgd/5/durability/group-commit.mdx
+++ b/product_docs/docs/pgd/5/durability/group-commit.mdx
@@ -56,7 +56,7 @@ determines the PGD nodes involved in the commit of a transaction.
 
 ## Confirmation
 
- Confirmation Level | Group Commit Handling
+ Confirmation level | Group Commit handling
 -------------------------|-------------------------------
 `received` | A remote PGD node confirms the transaction immediately after receiving it, prior to starting the local application.
 `replicated` | Confirms after applying changes of the transaction but before flushing them to disk.
@@ -86,11 +86,11 @@ to commit something. This approach requires application changes to use
the CAMO transaction protocol to work correctly, as the application is in some way part of the consensus. For more on this approach, see [CAMO](camo).

The `raft` decision uses PGD's built-in Raft consensus for commit decisions. Use of the `raft` decision can reduce performance. It's currently required only when using `GROUP COMMIT`
with an ALL commit scope group.

Using an ALL commit scope group requires that the commit decision must be set to
`raft` to avoid [reconciliation](#transaction-reconciliation) issues.
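Tying the commit-decision discussion above to the commit scope syntax, a sketch of an `ALL` scope that pairs Eager conflict resolution with the Raft commit decision it requires might look like the following. The scope and group names are placeholders:

```sql
SELECT bdr.add_commit_scope(
    commit_scope_name := 'example_eager_scope', -- placeholder name
    origin_node_group := 'left_dc',             -- placeholder group
    rule := 'ALL (left_dc) GROUP COMMIT (conflict_resolution = eager, commit_decision = raft)',
    wait_for_ready := true
);
```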
### Conflict resolution
@@ -124,46 +124,44 @@ does eventually COMMIT even though the client might receive an abort message.
 
 See also [Limitations](limitations).
 
-### Transaction Reconciliation
+### Transaction reconciliation
 
-A group commit transaction’s commit on the origin node is implicitly converted
-into a two phase commit.
+A Group Commit transaction's commit on the origin node is implicitly converted
+into a two-phase commit.
 
-In the first phase (prepare) the transaction is prepared locally and made ready to commit.
-The data is made durable but uncomitted at this stage so other transactions
-cannot see the changes made by this transaction. This prepared transaction gets
+In the first phase (prepare), the transaction is prepared locally and made ready to commit.
+The data is made durable but is uncommitted at this stage, so other transactions
+can't see the changes made by this transaction. This prepared transaction gets
 copied to all remaining nodes through normal logical replication.
 
-The origin node seeks confirmations from other nodes, as per rules in the group
-commit grammar. If it gets confirmations from minimum required nodes in the
-cluster, it decides to commit this transaction moving onto the second phase (commit)
-where it also sends this decision via replication to other nodes which will also eventually commit on getting this message.
+The origin node seeks confirmations from other nodes, as per rules in the Group
+Commit grammar. If it gets confirmations from the minimum required nodes in the
+cluster, it decides to commit this transaction, moving on to the second phase (commit).
+In the commit phase, it also sends this decision by way of replication to other nodes. Those nodes will also eventually commit on getting this message.
 
-There is a possibility of failure at various stages. For example, the origin
+There's a possibility of failure at various stages. For example, the origin
 node may crash after preparing the transaction. Or the origin and one or more
-replicas may crash.
+replicas may crash.
 
 This leaves the prepared transactions in the system. The `pg_prepared_xacts`
 view in Postgres can show prepared transactions on a system. The prepared
-transactions could be holding locks and other resources and they therefore need
-to be either aborted or committed. That decision has to be made with a consensus
+transactions might be holding locks and other resources and they therefore need
+to be either aborted or committed. That decision must be made with a consensus
 of nodes.
 
-When commit_decision is `raft` then, raft acts as the reconcilator and these
-transactions are eventually, automatically, reconciled.
+When `commit_decision` is `raft`, Raft acts as the reconciliator, and these
+transactions are eventually reconciled automatically.
 
-When the commit_decision is `group` then, transactions do not use raft. Instead
+When the `commit_decision` is `group`, transactions don't use Raft. Instead
 the write lead in the cluster performs the role of reconciliator. This is because
-it is the node that is most ahead with respect to changes in its sub-group. It
-detects when a node is down and initiates reconciliation for such node by looking
-for prepared transactions it may have with the down node as the origin.
+it's the node that's most ahead with respect to changes in its subgroup. It
+detects when a node is down and initiates reconciliation for such a node by looking
+for prepared transactions it has with the down node as the origin.
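Because the reconciliation description above leans on `pg_prepared_xacts`, a quick way to look for lingering prepared transactions while troubleshooting is:

```sql
-- Lists prepared transactions that are still awaiting a commit or
-- abort decision; an empty result means nothing is pending.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;
```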
For all such transactions, it sees if the nodes as per the rules of the commit scope have the prepared transaction, it takes a decision. This decision is -conveyed over raft and needs majority of the nodes to be up to do +conveyed over Raft and needs the majority of the nodes to be up to do reconciliation. -This process happens in the background and there is no user command needed to +This process happens in the background. There's no command for you to use to control or issue this. - - diff --git a/product_docs/docs/pgd/5/durability/synchronous_commit.mdx b/product_docs/docs/pgd/5/durability/synchronous_commit.mdx index 36007904e6f..654adbbc933 100644 --- a/product_docs/docs/pgd/5/durability/synchronous_commit.mdx +++ b/product_docs/docs/pgd/5/durability/synchronous_commit.mdx @@ -7,12 +7,14 @@ Commit scope kind: `SYNCHRONOUS_COMMIT` ## Overview -PGD's `SYNCHRONOUS_COMMIT` is a commit scope kind that works in a way that is more like PostgreSQL's [Synchronous commit](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) option in its underlying operation. Unlike the PostgreSQL option though, it is configured as a commit scope and is easier to configure and interact with within PGD. +PGD's `SYNCHRONOUS_COMMIT` is a commit scope kind that works in a way that's more like PostgreSQL's [`synchronous_commit`](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) option in its underlying operation. Unlike the PostgreSQL option, though, it's configured as a commit scope and is easier to configure and interact with in PGD. -Unlike other commit scope kinds such as GROUP COMMIT and CAMO, the transactions in a SYNCHRONOUS_COMMIT operation will not be transformed into a two phase commit (2PC) transaction, but work more like a Postgres Synchronous commit. +Unlike other commit scope kinds, such as `GROUP COMMIT` and `CAMO`, the transactions in a `SYNCHRONOUS_COMMIT` operation aren't transformed into a two-phase commit (2PC) transaction. They work more like a Postgres `synchronous_commit`. ## Example +In this example, when this commit scope is in use, any node in the `left_dc` group uses `SYNCHRONOUS_COMMIT` to replicate changes to the other nodes in the `left_dc` group. It looks for a majority of nodes in the `left_dc` group to confirm that they committed the transaction. + ``` SELECT bdr.add_commit_scope( commit_scope_name := 'example_sc_scope', @@ -22,22 +24,19 @@ SELECT bdr.add_commit_scope( ); ``` -In this example, any node in the `left_dc` group, when this commit scope is in use, will use `SYNCHRONOUS_COMMIT` to replicate changes to the other nodes in the `left_dc` group. It will look for a majority of nodes in the `left_dc` group to confirm that they have committed the transaction. - ## Configuration -There are no parameters for `SYNCHRONOUS_COMMIT` and therefore no configuration. +`SYNCHRONOUS_COMMIT` has no parameters to configure. ## Confirmation - Confirmation Level | PGD Synchronous Commit Handling + Confirmation level | PGD Synchronous Commit handling -------------------------|------------------------------- - `received` | A remote PGD node confirms the transaction once it's been fully received and is in in-memory write queue. + `received` | A remote PGD node confirms the transaction once it's been fully received and is in the in-memory write queue. `replicated` | Same behavior as `received`. `durable` | Confirms the transaction after all of its changes are flushed to disk. 
Analogous to `synchronous_commit = on` in legacy synchronous replication. `visible` (default) | Confirms the transaction after all of its changes are flushed to disk and it's visible to concurrent transactions. Analogous to `synchronous_commit = remote_apply` in legacy synchronous replication. ## Details -Currently `SYNCHRONOUS_COMMIT` does not use the confirmation levels of the commit scope rule syntax. - +Currently `SYNCHRONOUS_COMMIT` doesn't use the confirmation levels of the commit scope rule syntax. diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx index c9c75df33b8..633290321d2 100644 --- a/product_docs/docs/pgd/5/known_issues.mdx +++ b/product_docs/docs/pgd/5/known_issues.mdx @@ -56,7 +56,7 @@ release. scope. Running transactions in a commit scope that's concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to - apply the transaction. Ensure that any transactions using a specific commit + apply the transaction. Make sure that any transactions using a specific commit scope have finished before altering or removing it. - The [PGD CLI](cli) can return stale data on the state of the cluster if it's @@ -68,7 +68,7 @@ release. connection. - When using - [`bdr.add_commit_scope`](/pgd/latest/reference/functions#bdradd_commit_scope) + [`bdr.add_commit_scope`](/pgd/latest/reference/functions#bdradd_commit_scope), if a new commit scope is added which has the same name as a commit scope on any group, then the commit scope silently overwrites the commit scope but retains the original group the scope was associated with (if any). To modify diff --git a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx index 992776f83f6..0d5b20643d2 100644 --- a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx +++ b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx @@ -6,8 +6,6 @@ indexdepth: 2 You can add and remove nodes dynamically using the SQL interfaces. - - ## `bdr.alter_node_group_option` Modify a PGD node group configuration. @@ -55,14 +53,14 @@ An `ERROR` is raised if any of the provided parameters is invalid. ### Notes -The current state of node group options can be examined via the view +You can examine the current state of node group options by way of the view [`bdr.node_group_summary`](catalogs-visible#bdrnode_group_summary). This function passes a request to the group consensus mechanism to change the defaults. The changes made are replicated globally using the consensus mechanism. -The function isn't transactional. The request is processed in the background +The function isn't transactional. The request is processed in the background, so you can't roll back the function call. Also, the changes might not be immediately visible to the current transaction. @@ -113,11 +111,11 @@ bdr.alter_node_option(node_name text, - `config_value` — New value to be set for the given key. The node options that can be changed using this function are: -- `route_priority` — Relative routing priority of the node against other nodes in the same node group. Default is '-1'. -- `route_fence` — Whether the node is fenced from routing; when true, the node can't receive connections from PGD Proxy. Default is 'f' (false). -- `route_writes` — Whether writes can be routed to this node, that is, whether the node can become write leader. 
Default is 't' (true) for data nodes and 'f' (false) for other node types. -- `route_reads` — Whether read-only connections can be routed to this node. Currently reserved for future use. Default is 't' (true) for data and subscriber-only nodes, 'f' (false) for witness and standby nodes. -- `route_dsn` — The dsn that the proxy will use to connect to this node. This is optional; if not set it defaults to the node's node_dsn value. +- `route_priority` — Relative routing priority of the node against other nodes in the same node group. Default is `-1`. +- `route_fence` — Whether the node is fenced from routing. When true, the node can't receive connections from PGD Proxy. Default is `f` (false). +- `route_writes` — Whether writes can be routed to this node, that is, whether the node can become write leader. Default is `t` (true) for data nodes and `f` (false) for other node types. +- `route_reads` — Whether read-only connections can be routed to this node. Currently reserved for future use. Default is `t` (true) for data and subscriber-only nodes, `f` (false) for witness and standby nodes. +- `route_dsn` — The dsn for the proxy to use to connect to this node. This option is optional. If not set, it defaults to the node's `node_dsn` value. ## `bdr.alter_subscription_enable` @@ -301,7 +299,7 @@ bdr.join_node_group ( to do during the join. The default setting is `all`, which synchronizes the complete database structure, The other available setting is `none`, which doesn't synchronize any structure. However, it still synchronizes data (except for witness - nodes, which by design do not synchronize data). + nodes, which by design don't synchronize data). - `pause_in_standby` — Optionally tells the join process to join only as a logical standby node, which can be later promoted to a full member. This option is deprecated and will be disabled or removed in future diff --git a/product_docs/docs/pgd/5/reference/routing.mdx b/product_docs/docs/pgd/5/reference/routing.mdx index 50c3e19933f..76bdaaed46d 100644 --- a/product_docs/docs/pgd/5/reference/routing.mdx +++ b/product_docs/docs/pgd/5/reference/routing.mdx @@ -44,7 +44,6 @@ bdr.alter_proxy_option(proxy_name text, config_key text, config_value text); | `config_key` | text | | Key of the option in the proxy to be changed. | | `config_value` | text | | New value to be set for the given key. | - The table shows the proxy options (`config_key`) that can be changed using this function. | Option | Description | diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.4.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.4.0_rel_notes.mdx index 9eacf591222..88fc0325b3a 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.4.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.4.0_rel_notes.mdx @@ -16,8 +16,8 @@ We recommend that all users of PGD 5 upgrade to PGD 5.4. See [PGD/TPA upgrades]( Highlights of this 5.4.0 release include improvements to: -* group commit, aiming to optimize performance by minimizing the effect of a node’s downtime and simplifying overall operating of PGD clusters. -* apply_delay, enabling the creation of a delayed read-only [replica](https://www.enterprisedb.com/docs/pgd/latest/node_management/subscriber_only/) for additional options for disaster recovery and to mitigate the impact of human error such as accidental DROP table statements +* Group Commit, aiming to optimize performance by minimizing the effect of a node's downtime and simplifying overall operating of PGD clusters. 
+* `apply_delay`, enabling the creation of a delayed read-only [replica](https://www.enterprisedb.com/docs/pgd/latest/node_management/subscriber_only/) for additional options for disaster recovery and to mitigate the impact of human error, such as accidental DROP table statements. ## Compatibility @@ -35,10 +35,10 @@ Postgres Distributed. | Component | Version | Description | Addresses | |-----------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| -| BDR | 5.4.0 | Automatically detect and synchronize all available nodes to the furthest ahead node for transactions originating from failed or disconnected node. | | -| BDR | 5.4.0 | Automatic resolution of pending group commit transactions when the originating node fails or disconnects, ensuring uninterrupted transaction processing within the cluster. | | -| BDR | 5.4.0 | Added ability to set `apply_delay` group option on sub-groups, enabling adding of delayed subscriber-only nodes. | | -| BDR | 5.4.0 | Loading data using EDB*Loader (except direct mode) is now supported. | | +| BDR | 5.4.0 | PGD now automatically detects and synchronizes all available nodes to the furthest ahead node for transactions originating from failed or disconnected node. | | +| BDR | 5.4.0 | PGD now automatically resolves pending Group Commit transactions when the originating node fails or disconnects, ensuring uninterrupted transaction processing within the cluster. | | +| BDR | 5.4.0 | Added ability to set the `apply_delay` group option on subgroups, enabling adding of delayed subscriber-only nodes. | | +| BDR | 5.4.0 | Loading data using EDB\*Loader (except direct mode) is now supported. | | ## Bug fixes @@ -47,28 +47,28 @@ Postgres Distributed. | BDR | 5.4.0 | Fixed memory leaks when running a query on some or all nodes. | | | BDR | 5.4.0 | Resolved an issue of high CPU usage for consensus processes. | RT97649 | | BDR | 5.4.0 | Improved WAL retention logic when a part_node occurs. | | -| BDR | 5.4.0 | Witness nodes will automatically not synchronize structure when joining a group. | | +| BDR | 5.4.0 | Witness nodes will now automatically not synchronize structure when joining a group. | | | BDR | 5.4.0 | bdr.create_node() / bdr.alter_node() now give a hint when an invalid node kind is used. | | | BDR | 5.4.0 | Fixed transactions PREPARE/COMMIT/ABORT order with Parallel Apply enabled. | | -| BDR | 5.4.0 | DDL replication now takes into account more of Postgres configuration options that are set in the original session or transaction in order to provide more consistent results of the DDL execution. Added `standard_conforming_strings`, `edb_redwood_date`, `default_with_rowids` and `check_function_bodies`. | | -| BDR | 5.4.0 | Improved `pgd_bench` cluster initialization and commandline help output. | | -| BDR | 5.4.0 | Restoring a node group from a consensus snapshot correctly applies option changes (number of writers, streaming and apply_delay) to local subscriptions. | | -| BDR | 5.4.0 | bdr_init_physical: fixed debug logging of pg_ctl enabling output capture for debugging purposes. | | -| BDR | 5.4.0 | Fix assertion failure when TargetColumnMissing conflict occurs in a Group Commit transaction. | | -| BDR | 5.4.0 | Fix detection of UpdateOriginChange conflict to be more accurate. | | -| BDR | 5.4.0 | Support timeout for normal Group Commit transaction. 
| | -| BDR | 5.4.0 | Fix error handling in writer when there are lock timeouts or conflicts or deadlocks with and without group commit transactions. | | -| BDR | 5.4.0 | Allow the origin of group commit transactions to wait for responses from all the required nodes before taking an abort decision. | | +| BDR | 5.4.0 | DDL replication now takes into account more of the Postgres configuration options that are set in the original session or transaction to provide more consistent results of the DDL execution. Added `standard_conforming_strings`, `edb_redwood_date`, `default_with_rowids`, and `check_function_bodies`. | | +| BDR | 5.4.0 | Improved `pgd_bench` cluster initialization and command line help output. | | +| BDR | 5.4.0 | Restoring a node group from a consensus snapshot now correctly applies option changes (number of writers, streaming, and apply_delay) to local subscriptions. | | +| BDR | 5.4.0 | Fixed debug logging of pg_ctl enabling output capture for debugging purposes in `bdr_init_physical`. | | +| BDR | 5.4.0 | Fixed assertion failure when TargetColumnMissing conflict occurs in a Group Commit transaction. | | +| BDR | 5.4.0 | Fixed detection of UpdateOriginChange conflict to be more accurate. | | +| BDR | 5.4.0 | Added support for timeout for normal Group Commit transaction. | | +| BDR | 5.4.0 | Fixed error handling in writer when there are lock timeouts, conflicts, or deadlocks with and without Group Commit transactions. | | +| BDR | 5.4.0 | Now allow the origin of Group Commit transactions to wait for responses from all the required nodes before taking an abort decision. | | | BDR | 5.4.0 | Eager transactions abort correctly after Raft was disabled or not working and has recovered. | RT101055 | -| BDR | 5.4.0 | Increase default bdr.raft_keep_min_entries to 1000 from 100. | | -| BDR | 5.4.0 | Allow the origin of group commit transactions to wait for responses from all the required nodes before taking an abort decision | | -| BDR | 5.4.0 | Run ANALYZE on the internal raft tables. | RT97735 | -| BDR | 5.4.0 | Fix segfault in I2PC concurrent abort case. | RT93962 | -| BDR | 5.4.0 | Avoid bypassing other extensions in BdrProcessUtility when processing COPY..TO. | RT99345 | -| BDR | 5.4.0 | Ensure that consensus connection are handled correctly. | RT97649 | -| BDR | 5.4.0 | Fix memory leaks while running monitoring queries. | RT99231, RT95314 | +| BDR | 5.4.0 | Increased default `bdr.raft_keep_min_entries` to 1000 from 100. | | +| BDR | 5.4.0 | Now allow the origin of Group Commit transactions to wait for responses from all the required nodes before taking an abort decision. | | +| BDR | 5.4.0 | Now run ANALYZE on the internal Raft tables. | RT97735 | +| BDR | 5.4.0 | Fixed segfault in I2PC concurrent abort case. | RT93962 | +| BDR | 5.4.0 | Now avoid bypassing other extensions in BdrProcessUtility when processing COPY..TO. | RT99345 | +| BDR | 5.4.0 | Ensured that consensus connection are handled correctly. | RT97649 | +| BDR | 5.4.0 | Fixed memory leaks while running monitoring queries. | RT99231, RT95314 | | BDR | 5.4.0 | The `bdr.metrics_otel_http_url` and `bdr.trace_otel_http_url` options are now validated at assignment time. | | -| BDR | 5.4.0 | When `bdr.metrics_otel_http_url` and `bdr.trace_otel_http_url` don't include paths, `/v1/metrics` and `/v1/traces` are used respectively. | | +| BDR | 5.4.0 | When `bdr.metrics_otel_http_url` and `bdr.trace_otel_http_url` don't include paths, `/v1/metrics` and `/v1/traces` are used, respectively. 
| | | BDR | 5.4.0 | Setting `bdr.trace_enable` to `true` is no longer required to enable OTEL metrics collection. | | -| Proxy | 5.4.0 | Use route_dsn and perform sslpassword processing while extracting write leader address. | RT99700 | -| Proxy | 5.4.0 | Log client and server addresses at debug level in proxy logs. | | +| Proxy | 5.4.0 | Now use route_dsn and perform sslpassword processing while extracting write leader address. | RT99700 | +| Proxy | 5.4.0 | Now log client and server addresses at debug level in proxy logs. | | From dfb6d5fe74fa005872fcc5945a2804496e2c62d6 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 12 Mar 2024 10:19:31 -0400 Subject: [PATCH 22/51] Update product_docs/docs/pgd/5/durability/commit-scope-rules.mdx Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- product_docs/docs/pgd/5/durability/commit-scope-rules.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx index bce995c9ce6..0dc92cf7a38 100644 --- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx @@ -14,7 +14,7 @@ commit_scope_group [ confirmation_level ] commit_scope_kind A full formal syntax diagram is available in the [Commit scopes](/pgd/latest/reference/commit-scopes/#commit-scope-syntax) reference. -This typical commit scope rule can be broken down into its components: +A typical commit scope rule, such as `ANY 2 (group) GROUP COMMIT`, can be broken down into its components: ``` ANY 2 (group) GROUP COMMIT From afda10c8831fd859c9ca038d4af5161da943eeaa Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 12 Mar 2024 10:21:45 -0400 Subject: [PATCH 23/51] Update commit-scope-rules.mdx --- product_docs/docs/pgd/5/durability/commit-scope-rules.mdx | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx index 0dc92cf7a38..5bd4a73731c 100644 --- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx @@ -14,13 +14,7 @@ commit_scope_group [ confirmation_level ] commit_scope_kind A full formal syntax diagram is available in the [Commit scopes](/pgd/latest/reference/commit-scopes/#commit-scope-syntax) reference. -A typical commit scope rule, such as `ANY 2 (group) GROUP COMMIT`, can be broken down into its components: - -``` -ANY 2 (group) GROUP COMMIT -``` - -The `ANY 2 (group)` is the commit scope group specifying, for the rule, which nodes need to respond and confirm they processed the transaction. In this example, any two nodes from the named group must confirm. +A typical commit scope rule, such as `ANY 2 (group) GROUP COMMIT`, can be broken down into its components. `ANY 2 (group)` is the commit scope group specifying, for the rule, which nodes need to respond and confirm they processed the transaction. In this example, any two nodes from the named group must confirm. No confirmation level is specified, which means that the default is used. 
You can think of the rule in full, then, as: From 684914d53a45e85c973d8848e9cb7409940704c0 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 12 Mar 2024 10:25:38 -0400 Subject: [PATCH 24/51] Update commit-scope-rules.mdx --- .../docs/pgd/5/durability/commit-scope-rules.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx index 5bd4a73731c..8e2de1a2cba 100644 --- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx @@ -30,12 +30,12 @@ The last part of this operation is the commit scope kind, which in this example There are three kinds of commit scope groups: `ANY`, `ALL`, and `MAJORITY`. They're all followed by a list of one or more groups in parentheses. This list of groups combines to make a pool of nodes this this operation applies to. This list can be preceded by `NOT`, which inverts to pool to all other groups apart from those in the list. Witness nodes aren't eligible to be included in this pool, as they don't replicate data. -- `ANY n` is followed by an integer value, `n`. It translates to any `n` nodes in the listed group's nodes. -- `ALL` is followed by the groups and translates to all nodes in the listed groups nodes. -- `MAJORITY` is followed by the groups and translates to requiring a half, plus one, of the listed group's nodes to confirm, to give a majority. -- `ANY n NOT` is followed by an integer value, `n`. It translates to any `n` nodes that aren't in the listed group's nodes. -- `ALL NOT` is followed by the groups and translates to all nodes that aren't in the listed group's nodes. -- `MAJORITY NOT` is followed by the groups and translates to requiring a half, plus one, of the nodes that aren't in the listed group's nodes to confirm, to give a majority. +- `ANY n` is followed by an integer value, `n`. It translates to any `n` nodes in the listed groups' nodes. +- `ALL` is followed by the groups and translates to all nodes in the listed groups' nodes. +- `MAJORITY` is followed by the groups and translates to requiring a half, plus one, of the listed groups' nodes to confirm, to give a majority. +- `ANY n NOT` is followed by an integer value, `n`. It translates to any `n` nodes that aren't in the listed groups' nodes. +- `ALL NOT` is followed by the groups and translates to all nodes that aren't in the listed groups' nodes. +- `MAJORITY NOT` is followed by the groups and translates to requiring a half, plus one, of the nodes that aren't in the listed groups' nodes to confirm, to give a majority. ## The confirmation level From 606cfa57c8037f3c4b845539290ebea31840066c Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 12 Mar 2024 10:28:53 -0400 Subject: [PATCH 25/51] Update product_docs/docs/pgd/5/durability/commit-scope-rules.mdx Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- product_docs/docs/pgd/5/durability/commit-scope-rules.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx index 8e2de1a2cba..af66be71411 100644 --- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx @@ -73,7 +73,7 @@ For more details, see [`GROUP COMMIT`](group-commit). 
### `CAMO`
 
-Commit At Most Once, or CAMO, allows the client/application, origin node, and partner node to ensure that a transaction is committed to the database at most once. Because the client is involved in the process, the application requires modifications to participate in the CAMO process.
+Commit At Most Once, or CAMO, allows the client/application, origin node, and partner node to ensure that a transaction is committed to the database at most once. Because the client is involved in the process, an application will require modifications to participate in the CAMO process.
 
 For more details, see [`CAMO`](camo).
 
From 0949ecac73a6891400d5ccebe21f87aa87fdf596 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Tue, 12 Mar 2024 10:30:52 -0400
Subject: [PATCH 26/51] Update
 product_docs/docs/pgd/5/durability/commit-scope-rules.mdx

A little easier to follow the logic this way.
---
 product_docs/docs/pgd/5/durability/commit-scope-rules.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx
index af66be71411..f531849495c 100644
--- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx
+++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx
@@ -85,7 +85,7 @@ For more details, see [`LAG CONTROL`](lag-control).
 
 ### `SYNCHRONOUS_COMMIT`
 
-Synchronous Commit is a commit scope option that's designed to be like the legacy `synchronous_commit` option, but it's accessible in the commit scope environment. Unlike `GROUP COMMIT`, it's a synchronous non-two-phase commit operation, and it has no parameters. The preceding commit scope group controls the groups and confirmation requirements the `SYNCHRONOUS_COMMIT` uses.
+Synchronous Commit is a commit scope option that's designed to be like the legacy `synchronous_commit` option, but it's accessible in the commit scope environment. Unlike `GROUP COMMIT`, it's a synchronous non-two-phase commit operation, and it has no parameters. The commit scope group that comes before this option controls the groups and confirmation requirements the `SYNCHRONOUS_COMMIT` uses.
 
 For more details, see [`SYNCHRONOUS_COMMIT`](synchronous_commit).
 
From e9f3d02d26aedadb129b40035d9640241769177a Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Tue, 12 Mar 2024 10:42:19 -0400
Subject: [PATCH 27/51] Update
 product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx

---
 .../docs/pgd/5/reference/nodes-management-interfaces.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx
index 0d5b20643d2..be5e50fcf3f 100644
--- a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx
+++ b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx
@@ -111,7 +111,7 @@ bdr.alter_node_option(node_name text,
 - `config_value` — New value to be set for the given key.
 
 The node options that can be changed using this function are:
-- `route_priority` — Relative routing priority of the node against other nodes in the same node group. Default is `-1`.
+- `route_priority` — Relative routing priority of the node against other nodes in the same node group. Default is `'-1'`.
 - `route_fence` — Whether the node is fenced from routing. When true, the node can't receive connections from PGD Proxy. Default is `f` (false).
- `route_writes` — Whether writes can be routed to this node, that is, whether the node can become write leader. Default is `t` (true) for data nodes and `f` (false) for other node types. - `route_reads` — Whether read-only connections can be routed to this node. Currently reserved for future use. Default is `t` (true) for data and subscriber-only nodes, `f` (false) for witness and standby nodes. From 013e55f4c8398c9878d860c0a6db7b45483d1c9a Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 12 Mar 2024 10:46:09 -0400 Subject: [PATCH 28/51] Apply suggestions from code review --- .../docs/pgd/5/reference/nodes-management-interfaces.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx index be5e50fcf3f..36d06d3657d 100644 --- a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx +++ b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx @@ -112,9 +112,9 @@ bdr.alter_node_option(node_name text, The node options that can be changed using this function are: - `route_priority` — Relative routing priority of the node against other nodes in the same node group. Default is `'-1'`. -- `route_fence` — Whether the node is fenced from routing. When true, the node can't receive connections from PGD Proxy. Default is `f` (false). -- `route_writes` — Whether writes can be routed to this node, that is, whether the node can become write leader. Default is `t` (true) for data nodes and `f` (false) for other node types. -- `route_reads` — Whether read-only connections can be routed to this node. Currently reserved for future use. Default is `t` (true) for data and subscriber-only nodes, `f` (false) for witness and standby nodes. +- `route_fence` — Whether the node is fenced from routing. When true, the node can't receive connections from PGD Proxy. Default is `'f'` (false). +- `route_writes` — Whether writes can be routed to this node, that is, whether the node can become write leader. Default is `'t'` (true) for data nodes and `'f'` (false) for other node types. +- `route_reads` — Whether read-only connections can be routed to this node. Currently reserved for future use. Default is `'t'` (true) for data and subscriber-only nodes, `'f'` (false) for witness and standby nodes. - `route_dsn` — The dsn for the proxy to use to connect to this node. This option is optional. If not set, it defaults to the node's `node_dsn` value. ## `bdr.alter_subscription_enable` From 41f5d1e98d62a8d001d329f96ff020eb541ac017 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Tue, 9 Apr 2024 06:28:44 +0100 Subject: [PATCH 29/51] Update product_docs/docs/pgd/5/durability/commit-scope-rules.mdx --- product_docs/docs/pgd/5/durability/commit-scope-rules.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx index f531849495c..9903835380b 100644 --- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx @@ -28,7 +28,7 @@ The last part of this operation is the commit scope kind, which in this example ## The commit scope group -There are three kinds of commit scope groups: `ANY`, `ALL`, and `MAJORITY`. They're all followed by a list of one or more groups in parentheses. 
This list of groups combines to make a pool of nodes this operation applies to. This list can be preceded by `NOT`, which inverts the pool to be all other groups that aren't in the list. Witness nodes aren't eligible to be included in this pool, as they don't replicate data.
 
 - `ANY n` is followed by an integer value, `n`. It translates to any `n` nodes in the listed groups' nodes.
 - `ALL` is followed by the groups and translates to all nodes in the listed groups' nodes.
 
From 043618f707c1fde1b6b56e017d38d6d0c561bb84 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Tue, 9 Apr 2024 06:32:37 +0100
Subject: [PATCH 30/51] Update product_docs/docs/pgd/5/durability/group-commit.mdx

---
 product_docs/docs/pgd/5/durability/group-commit.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/product_docs/docs/pgd/5/durability/group-commit.mdx b/product_docs/docs/pgd/5/durability/group-commit.mdx
index 2c0c095dff2..88f24e27958 100644
--- a/product_docs/docs/pgd/5/durability/group-commit.mdx
+++ b/product_docs/docs/pgd/5/durability/group-commit.mdx
@@ -145,7 +145,7 @@ replicas may crash.
 
 This leaves the prepared transactions in the system. The `pg_prepared_xacts`
 view in Postgres can show prepared transactions on a system. The prepared
-transactions might be holding locks and other resources and they therefore need
-to be either aborted or committed. That decision must be made with a consensus
+transactions might be holding locks and other resources. To release those locks and resources, either abort or commit the transaction.
+That decision must be made with a consensus
 of nodes.
 
From 8764ebfa0e2b9bff5cc7e359c3cf33a4b86657ce Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Tue, 9 Apr 2024 06:34:28 +0100
Subject: [PATCH 31/51] Update product_docs/docs/pgd/5/consistency/eager.mdx

---
 product_docs/docs/pgd/5/consistency/eager.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/5/consistency/eager.mdx b/product_docs/docs/pgd/5/consistency/eager.mdx
index eff14926bec..2519faa1dfc 100644
--- a/product_docs/docs/pgd/5/consistency/eager.mdx
+++ b/product_docs/docs/pgd/5/consistency/eager.mdx
@@ -54,7 +54,7 @@ In case of an origin node failure, the remaining nodes eventually (after at leas
 
 With single-node Postgres, or even with PGD in its default asynchronous
 replication mode, errors at `COMMIT` time are rare. The added synchronization
-step due to the use of a commit scope using Eager
+step due to the use of a commit scope using `eager`
 for conflict resolution also adds a source of errors. Applications need to be
 prepared to properly handle such errors, usually by applying a retry loop.
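As a companion to the `NOT` variants of commit scope groups reworked a few patches above, this sketch requires confirmation from a majority of nodes outside a named group. The scope and group names are hypothetical:

```sql
SELECT bdr.add_commit_scope(
    commit_scope_name := 'example_not_scope',   -- placeholder name
    origin_node_group := 'top_group',           -- placeholder group
    rule := 'MAJORITY NOT (dc_remote) GROUP COMMIT',
    wait_for_ready := true
);
```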
From 6258b0d242318eeb9bc36b8d6047c0ee7601b3c0 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Mon, 3 Jun 2024 13:06:10 -0400 Subject: [PATCH 32/51] Apply suggestions from code review --- product_docs/docs/pgd/5/durability/commit-scope-rules.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx index 9903835380b..5749fa76a2a 100644 --- a/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx +++ b/product_docs/docs/pgd/5/durability/commit-scope-rules.mdx @@ -4,7 +4,7 @@ title: Commit scope rules Commit scope rules are at the core of the commit scope mechanism. They define what the commit scope enforces. -Commit scope rules are composed of one or more operations connected by an `AND`. +Commit scope rules are composed of one or more operations that work in combination. Use an AND between rules. Each operation is made up of two or three parts: the commit scope group, an optional confirmation level, and the kind of commit scope, which can have its own parameters. @@ -91,7 +91,7 @@ For more details, see [`SYNCHRONOUS_COMMIT`](synchronous_commit). ## Combining rules -A rule can have multiple operations connected by an `AND` to form a single rule. For example: +Commit scope rules are composed of one or more operations that work in combination. Use an AND to form a single rule. For example: ``` MAJORITY (Region_A) SYNCHRONOUS_COMMIT AND ANY 1 (Region_A) LAG CONTROL (MAX_LAG_SIZE = '50MB') From 1f0feab022d6470ddbcaa480759e40d2f93df6f6 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 9 Apr 2024 10:55:27 -0400 Subject: [PATCH 33/51] Edits to BigAnimal PR5376 --- .../identity_provider/index.mdx | 26 +++++++++---------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx index 2833e9b4b9f..8be42123b31 100644 --- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx @@ -80,23 +80,22 @@ Once your identity provider is set up, you can view your connection status, ID, You need a verified domain so your users can have a streamlined login experience with their email address. 1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**. -2. Copy the TXT record and follow the instructions in the on screen verify box (repeated below), to add it as a TXT record on that domain within your DNS provider's management console. +2. To add it as a TXT record on that domain in your DNS provider's management console, copy the TXT record and follow the instructions in the on-screen verify box: - - Log in to your domain registrar or web host account. - - Navigate to the DNS settings for the domain you want to verify. - - Add a TXT record. - - In the Name field, enter @. - - In the Value field, enter the verification string provided, eg. - - “edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku” - - Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. + 1. Log in to your domain registrar or web host account. + 1. Navigate to the DNS settings for the domain you want to verify. + 1. Add a TXT record. + 1. In the **Name** field, enter `@`. + 1. 
In the **Value** field, enter the verification string provided, for example, `"edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku"`.
+   1. Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours.
 
 3. Select **Done**.
 
 Your domain and its status appear on the **Domains** tab, where you can delete or verify it.
 Domains can take up to 48 hours for the change of the domain record by the DNS provider to propagate before you can verify it.
 
-4. If your domain has not verified after a day, you can debug whether your domain has the matching verification text field.
-   Select **Verify** next to the domain at `/settings/domains` to check the exact value of the required TXT field.
+4. If your domain hasn't verified after a day, you can debug whether your domain has the matching verification text field.
+   To check the exact value of the required TXT field, select **Verify** next to the domain at `/settings/domains`.
    Query your domain directly with DNS tools, such as nslookup, to check if you have an exact match for a text = "verification" field.
    Domains can have many TXT fields. As long as one matches, it should verify.
 
@@ -113,7 +112,7 @@ mydomain.com text = “edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCEx
 
 To add another domain, select **Add Domain**.
 
-When you have at least one verified domain (with Status = Verified, in green), the identity provider status becomes **Active** on the **Identity Providers** tab.
+When you have at least one verified domain (with **Status = Verified**, in green), the identity provider status becomes **Active** on the **Identity Providers** tab.
 When the domain is no longer verified, the status becomes **Inactive**.
 
 !!! Note
@@ -131,9 +130,8 @@ it appears as **Status = Expired** (in red).
 You can't reinstate an expired domain because expiry means you might no longer own the domain.
 You need to verify it again.
 
-To delete the domain, select the bin icon.
-To re-create it, select **Add Domain**.
-Set a new verification key for the domain and update the TXT record for it in your DNS provider's management console, as described in [Add a doman](#add-a-domain).
+- To delete the domain, select the bin icon.
+- To re-create the domain, select **Add Domain**. Set a new verification key for the domain and update the TXT record for it in your DNS provider's management console, as described in [Add a domain](#add-a-domain).
 
 ### Manage roles for added users

From 7dde352b21c417f4714665ca2dd991ab772d41cf Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Mon, 3 Jun 2024 13:10:26 -0400
Subject: [PATCH 34/51] Update product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx

---
 .../release/getting_started/identity_provider/index.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
index 8be42123b31..eb5f867d5e5 100644
--- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
@@ -86,7 +86,7 @@ You need a verified domain so your users can have a streamlined login experience
    1. Navigate to the DNS settings for the domain you want to verify.
    1. Add a TXT record.
    1. In the **Name** field, enter `@`.
-   1. In the **Value** field, enter the verification string provided, for example, `"edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku"`.
+   1. 
In the **Value** field, enter the verification string provided, for example, `"edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku"`. + 1. In the **Value** field, enter the verification string provided, for example, `edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku`. 1. Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. 3. Select **Done**. From 19332ab65b5655b506211294fb8d715e29d38dc0 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 18 Apr 2024 13:51:14 -0400 Subject: [PATCH 35/51] Edits to BigAnimal PR5479 --- .../getting_started/managing_cluster.mdx | 4 +- .../reference/cli/managing_clusters.mdx | 52 ++++++++++--------- 2 files changed, 29 insertions(+), 27 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx b/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx index 4c4b4dd568f..8afdf613c5b 100644 --- a/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx +++ b/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx @@ -16,7 +16,7 @@ While paused, clusters aren't upgraded or patched, but upgrades are applied when After seven days, single-node and high-availability clusters automatically resume. Resuming a cluster applies any pending maintenance upgrades. Monitoring begins again. -With CLI 3.7.0 and later, you can [pause and resume cluster using CLI](../../getting_started/managing_cluster/#pausing-a-cluster). +With CLI 3.7.0 and later, you can [pause and resume a cluster using the CLI](../../reference/cli/managing_clusters/#pausing-a-cluster). You can enable in-app inbox or email notifications to get alerted when the paused cluster is or will be reactivated. For more information, see [managing notifications](../administering_cluster/notifications/#manage-notifications). @@ -37,5 +37,5 @@ You can enable in-app inbox or email notifications to get alerted when the pause 3. Confirm that you want to resume the cluster. The process might take a few minutes. When it finishes, the cluster status appears as Healthy. !!!note -A TDE enabled cluster, resumes only if the TDE key status is ready or available. Clusters are automatically paused if there is any issue with the TDE key. You need to resolve/give permissions to the key in your respective cloud region. Resume the cluster manually after resolving the issues. +A TDE-enabled cluster resumes only if the TDE key status is ready or available. Clusters are automatically paused if there is any issue with the TDE key. You need to resolve/give permissions to the key in your respective cloud region. Resume the cluster manually after resolving the issues. !!! diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index 42c6ba48740..2ed283f55c2 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -298,7 +298,7 @@ To restore a deleted cluster, use the `--from-deleted` flag in the command. You can restore a cluster in a single cluster to a primary/standby high-availability cluster and vice versa. You can restore a distributed high-availability cluster only to a cluster using the same architecture. !!! -### Pausing a cluster +### Pause a cluster To pause a cluster, use the `cluster pause` command. The `cluster pause` command supports `flag` or `interactive` mode. 
The syntax for the command is:
 
 ```
 biganimal cluster pause {--id | --provider --region --name}
 ```
 
-Where `id` is a valid cluster ID. The `id` is mandatory.
-  `provider` is a cloud provider of the cluster.
-  `region` is the region of the cluster.
-  `name` is the name of the cluster.
+Where:
+- `id` is a valid cluster ID. The `id` is mandatory.
+- `provider` is a cloud provider of the cluster.
+- `region` is the cluster region.
+- `name` is the name of the cluster.
 
-If `id` of the cluster isn't known then use `--provider --region --name` to identify the cluster.
+If you don't know the `id` of the cluster, use `--provider --region --name` to identify the cluster.
 
-Examples:
+The following examples show common uses of the `cluster pause` command.
 
-Pausing a cluster using ID:
+To pause a cluster using the ID:
 
 ```
 biganimal cluster pause --id p-c5fh47nf
 ```
 
-Pausing a cluster using name, provider, and region:
+To pause a cluster using name, provider, and region:
 
 ```
 biganimal cluster pause
   --name my-biganimal-cluster
   --provider azure
   --region eastus2
 ```
 
-Pausing a cluster in interactive mode:
+To pause a cluster in interactive mode:
 
 ```
 ./biganimal cluster pause
@@ -341,7 +342,7 @@ __OUTPUT__
 Pause Cluster operation succeeded, "p-94pjd2w0ty"
 ```
 
-### Resuming a cluster
+### Resume a cluster
 
 To resume a cluster, use the `cluster resume` command. The `cluster resume` command supports `flag` and `interactive` mode. The syntax for the command is:
 
 ```
 biganimal cluster resume {--id | --provider --region --name}
 ```
 
-Where `id` is a valid cluster ID. The `id` is mandatory.
-  `provider` is a cloud provider of the cluster.
-  `region` is the region of the cluster.
-  `name` is the name of the cluster.
+Where:
+- `id` is a valid cluster ID. The `id` is mandatory.
+- `provider` is a cloud provider of the cluster.
+- `region` is the cluster region.
+- `name` is the name of the cluster.
 
-If `id` of the cluster isn't known then use `--provider --region --name` to identify the cluster.
+You don't know the `id` of the cluster, use `--provider --region --name` to identify the cluster.
 
-Examples:
+The following examples show common uses of the `cluster pause` command.
 
-Resuming a cluster using ID:
+To resume a cluster using the ID:
 
 ```
 biganimal cluster resume --id p-c5fh47nf
 ```
 
-Resuming a cluster using name, provider, and region:
+To resume a cluster using the name, provider, and region:
 
 ```
 biganimal cluster resume
   --name my-biganimal-cluster
   --provider azure
   --region eastus2
 ```
 
-Resuming a cluster using interactive mode:
+To resume a cluster using interactive mode:
 
 ```
 ./biganimal cluster resume
@@ -582,9 +584,9 @@ The `--id` and `--group-id` flags are mandatory. For example:
 
 biganimal pgd delete-group --id clusterID --group-id clusterDataGroupID
 ```
 
-### Pausing a distributed high-availability cluster
+### Pause a distributed high-availability cluster
 
-To pause a distributed high-availability cluster, use `pgd pause` command. The `pgd pause` command supports the `flag` mode only. The syntax for the command is:
+To pause a distributed high-availability cluster, use the `pgd pause` command. The `pgd pause` command supports `flag` mode only. The syntax for the command is:
 
 ```
 biganimal pgd pause {--id}
 ```
 
 Where `id` is a valid cluster ID. The `id` flag is mandatory. 
-Example: +For example: ``` biganimal pgd pause --id p-c5fh47nf @@ -606,9 +608,9 @@ To resume a distributed high-availability cluster, use the `pgd resume` command. biganimal pgd resume {--id} ``` -Where, `id` is a valid cluster ID. The `id` flag is mandatory. +Where `id` is a valid cluster ID. The `id` flag is mandatory. -Example: +For example: ``` biganimal pgd resume --id p-c5fh47nf From ef4e21deffb4cd663528d2e1a4780eeee854d04b Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 18 Apr 2024 13:53:59 -0400 Subject: [PATCH 36/51] Update product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx --- .../docs/biganimal/release/reference/cli/managing_clusters.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index 2ed283f55c2..a26371973c9 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -322,7 +322,7 @@ To pausing a cluster using the ID: biganimal cluster pause --id p-c5fh47nf ``` -To pause a cluster using name, provider, and region: +To pause a cluster using the name, provider, and region: ``` biganimal cluster pause From 296bf55c4ca4051f5942ba2dd31b2a15cdb90a97 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 18 Apr 2024 13:55:15 -0400 Subject: [PATCH 37/51] Update product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx --- .../docs/biganimal/release/reference/cli/managing_clusters.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index a26371973c9..dc38327ffa0 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -356,7 +356,7 @@ Where: - `region` is the cluster region. - `name` is the name of the cluster. -You don't know the `id` of the cluster, use `--provider --region --name` to identify the cluster. +If you don't know the `id` of the cluster, use `--provider --region --name` to identify the cluster. The following examples show common uses of the `cluster pause` command. From b44d1e097cf863a23be64b548f68c9b1a7d297c7 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 18 Apr 2024 13:55:52 -0400 Subject: [PATCH 38/51] Update product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx --- .../docs/biganimal/release/reference/cli/managing_clusters.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index dc38327ffa0..161bfa434f4 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -358,7 +358,7 @@ Where: If you don't know the `id` of the cluster, use `--provider --region --name` to identify the cluster. -The following examples show common uses of the `cluster pause` command. +The following examples show common uses of the `cluster resume` command. 
To resume a cluster using the ID: From 15e8f9375e820c069904573c907aff37ab72ac9e Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 18 Apr 2024 13:57:02 -0400 Subject: [PATCH 39/51] Update product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx --- .../docs/biganimal/release/reference/cli/managing_clusters.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index 161bfa434f4..5bf59a35dc1 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -592,7 +592,7 @@ To pause a distributed high-availability cluster, use the `pgd pause` command. T biganimal pgd pause {--id} ``` -Where `id` is a valid cluster ID. The `id` flag is mandatory. +Where `id` is a valid cluster ID. The `id` is mandatory. For example: From 3ef8c72f0ed6efc6abf68d60498b705a9cc9b486 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 18 Apr 2024 13:57:33 -0400 Subject: [PATCH 40/51] Update product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx --- .../docs/biganimal/release/reference/cli/managing_clusters.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index 5bf59a35dc1..0a179dd2665 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -608,7 +608,7 @@ To resume a distributed high-availability cluster, use the `pgd resume` command. biganimal pgd resume {--id} ``` -Where `id` is a valid cluster ID. The `id` flag is mandatory. +Where `id` is a valid cluster ID. The `id` is mandatory. For example: From 8870be70707baf85f7372c4712fc9771f5bc826f Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 23 May 2024 16:35:38 -0400 Subject: [PATCH 41/51] Edits to BigAnimal PR5660 (health status --- .../health_status.mdx | 43 +++++++++---------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/health_status.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/health_status.mdx index f3e78c1028d..faa40baa537 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/health_status.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/health_status.mdx @@ -3,18 +3,18 @@ title: "Health Status" deepToC: true --- -The Health Status dashboard provides real-time insight into the topology and health of Postgres high-availability clusters. It supports both Primary/Standby and Distributed high-availability clusters. +The Health Status dashboard provides real-time insight into the topology and health of Postgres high-availability clusters. It supports both primary/standby and distributed high-availability clusters. The Health Status dashboard displays: - A set of cluster-wide health indicators that helps to draw your attention immediately to the critical issues. -- A schematic view of all the nodes organized in a cluster distributed across regions displaying health and role(s) of each node. 
+- A schematic view of all the nodes organized in a cluster distributed across regions displaying the health and roles of each node. - A replication status view displaying the status of each replication slot and the associated replication lags. ## Viewing Health Status dashboard To view the **Health Status** dashboard from the BigAnimal portal: -1. In the left navigation of BigAnimal Portal, go to **Clusters**. +1. In the left navigation of the BigAnimal portal, go to **Clusters**. 2. Select any ready high-availability or PGD cluster. @@ -27,41 +27,41 @@ The **Health Status** tab displays the dashboard with health status categorized ### Global cluster health -The global cluster health section displays the cluster-wide view including the following metrics: +The global cluster health section displays the cluster-wide view, including the following metrics: - **Raft Status** (PGD only) indicates whether the Raft consensus algorithm is running correctly in the cluster. It verifies that one node is elected as the global leader and the Raft roles such as RAFT_LEADER and RAFT_FOLLOWER are defined correctly across all the nodes. -- **Replication Slot Status** (PGD only) indicates whether all the replication slots are in streaming state or not. +- **Replication Slot Status** (PGD only) indicates whether all the replication slots are in streaming state. - **Clock Skew** indicates whether the node's clock is in sync and doesn't exceed a threshold of 60 seconds. -- **Proxy Status** (PGD only) provides the number of PGD proxies up and running as compared to the available proxies. +- **Proxy Status** (PGD only) provides the number of PGD proxies up and running compared to the available proxies. - **Node Status** provides the number of nodes that are up and running. -- **Transaction rate** provides the total number of transactions including committed and rolled back transactions per second in the cluster. +- **Transaction rate** provides the total number of transactions, including committed and rolled back transactions per second in the cluster. ### Regional nodes health and roles -The regional nodes health and roles section displays fine-grained health status at regional and node level. It is structured as an accordion, with each element representing a group of nodes deployed in the same region. Each item displays basic information including: -- **Proxy Status** (only PGD) indicates the number of active proxies compared to the available proxies in the specified regions. -- **Node Status** indicates the number of nodes up and running as compared to the available nodes in the specified region in a text chart. It provides the status of all nodes in the specified region using boolean indicator (green (OK)/red (KO)) in a text chart. +The regional nodes health and roles section displays fine-grained health status at the regional and node level. It's structured as an accordion, with each element representing a group of nodes deployed in the same region. Each item displays basic information including: +- **Proxy Status** (PGD only) indicates the number of active proxies compared to the available proxies in the specified regions. +- **Node Status** indicates the number of nodes up and running compared to the available nodes in the specified region in a text chart. It provides the status of all nodes in the specified region using a Boolean indicator (green (OK)/red (KO)) in a text chart. 
-On expanding each item, it provides a list of nodes with information like:
-- Total number of active connections as compared to the maximum number of configured connections for each node.
-- A **Node Ko** tag for each node if it is down.
+Expanding each item provides a list of nodes with information like:
+- Total number of active connections compared to the maximum number of configured connections for each node.
+- A **Node Ko** tag for each node if it's down.
 - Memory usage percentage on a progress bar.
 - Storage usage percentage on a progress bar.
 
 For PGD, it provides the tags below the node name:
-- **Raft Leader** indicates that the node is a Raft Leader locally in the region.
-- **Raft Follower** indicates that the node is a Raft Follower locally in the region.
-- **Global Raft Leader** indicates that the nodes is a Raft Leader globally in the cluster.
-- **Global Raft Follower** indicates that the node is a Raft Follower globally in the cluster.
-- **Witness** indicates that the node is a witness in the PGD cluster. See [witness node docs](/pgd/latest/node_management/witness_nodes/) for more information.
+- **Raft Leader** indicates that the node is a Raft leader locally in the region.
+- **Raft Follower** indicates that the node is a Raft follower locally in the region.
+- **Global Raft Leader** indicates that the node is a Raft leader globally in the cluster.
+- **Global Raft Follower** indicates that the node is a Raft follower globally in the cluster.
+- **Witness** indicates that the node is a witness in the PGD cluster. See [Witness nodes](/pgd/latest/node_management/witness_nodes/) for more information.
 
-For high-availability, it provides the tags below the node name:
+For high availability, it provides the tags below the node name:
 - **Primary** indicates if the node role is primary.
 
 ### Replication status
 
-The Replication Status section has a matrix displaying the replication lag across all the nodes of the cluster. The matrix provides different types of replication lags for **Write**, **Replay**, **Flush**, and **Sent**. It provides the lag in both bytes and time for **Write**, **Replay**, and **Flush** whereas only in bytes for **Sent**.
+The replication status section has a matrix displaying the replication lag across all the nodes of the cluster. The matrix provides different types of replication lags for **Write**, **Replay**, **Flush**, and **Sent**. It provides the lag in both bytes and time for **Write**, **Replay**, and **Flush**. It provides the lag only in bytes for **Sent**.
 
 !!!note
 - In high-availability clusters, replication occurs only from the primary (source) to the replicas (target). So the matrix displays only one row for the source and multiple columns for the targets.
 !!!
 
 !!!note
-The data on the Health Status dashboard is dynamic and is updated continuously. However, the cluster architecture is based on a snapshot. If a new node is added or an existing node is removed, you must reload the Health Status dashboard by clicking the tab again or reloading the browser page.
+The data on the Health Status dashboard is dynamic and is updated continuously. However, the cluster architecture is based on a snapshot. If a new node is added or an existing node is removed, you must reload the Health Status dashboard by selecting the tab again or reloading the browser page.
 !!!
-

From 4da93985f14fde32dc27670071584076863e1cca Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Tue, 30 Apr 2024 16:51:49 -0400
Subject: [PATCH 42/51] Edits to BigAnimal PR5621

---
 .../administering_cluster/notifications.mdx   | 18 +++++++++---------
 .../biganimal/release/overview/updates.mdx    |  2 +-
 .../release/reference/access_key/index.mdx    |  2 +-
 .../third_party_integrations/index.mdx        |  6 +++---
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
index 6a6fc4375ea..cf0b8ef4f93 100644
--- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
+++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
@@ -6,27 +6,27 @@ description: "Use notifications to get an alert for the different types of event
 
 With BigAnimal, you can opt to get specific types of notifications and receive both in-app and email notifications. Different types of events are sent as notifications. Users with different roles can configure the preferences to receive these notifications in the in-app inbox, by email, or both.
 
-The notifications are categorized into following preference sections:
+The notifications are categorized into the following preference sections:
 
 - Account
 - Organizations
 - Projects
 
-The notifications under **Account** preference section:
+The notifications under the **Account** preference section are:
 
 - New project role assigned to you
 - Project role unassigned from you
 - Personal access key is about to expire
 
-The notifications under **Organizations** preference section:
+The notifications under the **Organizations** preference section are:
 
-- Payment method added (specific to organizations that opted credit card payment option)
+- Payment method added (specific to organizations that opted for the credit card payment option)
 - Machine user access key is expiring
 
 !!!note
 This section is visible only to the organization owner. If the current user is owner of more than one organization, then this section lists the preferences for all the organizations.
 !!!
 
-The notifications under **Projects** preference section:
+The notifications under the **Projects** preference section are:
 
 - Upcoming cluster maintenance upgrade
 - Successful cluster maintenance upgrade
@@ -54,18 +54,18 @@ Users in the following roles can view the notifications:
 
 - Project owners/editors can view the project notifications.
 - User can view their own account notifications.
 
-Each notification indicates the level and project it belongs to for the user having multiple roles in BigAnimal.
+For users with multiple roles in BigAnimal, each notification indicates the level and project it belongs to.
 
 Select the bell at the top of your BigAnimal portal to view the in-app notifications. By selecting the bell, you can read the notification, mark it as unread, and archive it. To view the email notifications, check the inbox of your configured email addresses.
 
-## Manage notifications
+## Managing notifications
 
 To manage the notifications:
 
 1. Log in to the BigAnimal portal.
 1. From the menu under your name in the top-right panel, select **My Account**.
 1. Select the **Notifications** tab. Notifications are grouped by account, organizations, and projects available to you.
-1. Select account, organization, or project to manage the notifications.
- Enable/disable the notification for a particular event using the toggle. - - Select **Email** and **Inbox** next to an event to enable/disable the email and in-app notifications for the event. \ No newline at end of file + - To enable/disable the email and in-app notifications for the event, select **Email** and **Inbox** next to an event. \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/overview/updates.mdx b/product_docs/docs/biganimal/release/overview/updates.mdx index edd07c97b78..e18d887171f 100644 --- a/product_docs/docs/biganimal/release/overview/updates.mdx +++ b/product_docs/docs/biganimal/release/overview/updates.mdx @@ -6,7 +6,7 @@ EDB performs periodic maintenance to ensure stability and security of your clust ## Notification of upcoming maintenance -You're notified in the BigAnimal portal before maintenance occurs. Details are available on the [BigAnimal status page](https://status.biganimal.com/). You can subscribe to get these updates in a feed by selecting **Subscribe to Updates** on the status page. You can also enable the notifications to receive in-app or email notifications for upcoming, successful, and failed cluster maintenance upgrade. For more information, see [managing notifications](../administering_cluster/notifications/#manage-notifications). +You're notified in the BigAnimal portal before maintenance occurs. Details are available on the [BigAnimal status page](https://status.biganimal.com/). You can subscribe to get these updates in a feed by selecting **Subscribe to Updates** on the status page. You can also enable the notifications to receive in-app or email notifications for upcoming, successful, and failed cluster maintenance upgrade. For more information, see [Managing notifications](../administering_cluster/notifications/#managing-notifications). EDB reserves the right to upgrade customers to the latest minor version without prior notice in an extraordinary circumstance. You can't configure minor versions. diff --git a/product_docs/docs/biganimal/release/reference/access_key/index.mdx b/product_docs/docs/biganimal/release/reference/access_key/index.mdx index 8f0ef1961e2..4d1357c1b00 100644 --- a/product_docs/docs/biganimal/release/reference/access_key/index.mdx +++ b/product_docs/docs/biganimal/release/reference/access_key/index.mdx @@ -27,7 +27,7 @@ To create an access key: Copy this access key and save it in a secure location. The access key is available only when you create it. If you lose your access key, you must delete it and create a new one. -You can enable in-app inbox or email notifications to get alerted when your personal key is about to expire. For more information, see [manage notifications](../../administering_cluster/notifications/#manage-notifications). +You can enable in-app inbox or email notifications so that you're alerted when your personal key is about to expire. For more information, see [Managing notifications](../../administering_cluster/notifications/#managing-notifications). 
## Manage access key diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx index c54a218a28a..84f7376ad5c 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx @@ -8,9 +8,9 @@ Monitoring integrations are configured at the project level in BigAnimal. You ca By default, all the integrations are disabled. After creating the project, enable an integration using the **Integrations** tab. -All the metrics collected from all the clusters in the project are sent to the integrated tool and displayed in the BigAnimal [**Monitoring and logging** tab using BigAnimal's Observability](../monitoring_using_biganimal_observability). The collected logs are exported to the object storage by default. +All the metrics collected from all the clusters in the project are sent to the integrated tool and displayed in the BigAnimal **Monitoring and logging** tab [using BigAnimal's Observability](../monitoring_using_biganimal_observability). The collected logs are exported to the object storage by default. -You can enable in-app inbox or email notifications to get notified incase third-party monitoring integration fails. For more information, see [managing notifications](../../../administering_cluster/notifications/#manage-notifications). +You can enable in-app inbox or email notifications so that you are notified if third-party monitoring integration fails. For more information, see [Managing notifications](../../../administering_cluster/notifications/#managing-notifications). The third-party integrations available in BigAnimal are: @@ -21,7 +21,7 @@ The third-party integrations available in BigAnimal are: When metrics from BigAnimal are exported to third-party monitoring services, they're renamed according to the naming conventions of the target platform. -The following table provides a mapping between [BigAnimal metric names](/biganimal/release/using_cluster/05_monitoring_and_logging/metrics/) +The following table provides a mapping between [BigAnimal metric names](../metrics/) and the name that metric will be assigned when exported to a third-party service. !!! Note Kubernetes metrics From 216dec2b3ea0feceb2ffe03ab61b367ce709ac4f Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 30 Apr 2024 16:54:59 -0400 Subject: [PATCH 43/51] Update product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx --- .../third_party_integrations/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx index 84f7376ad5c..0400c5d2f30 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx @@ -10,7 +10,7 @@ By default, all the integrations are disabled. 
After creating the project, enabl All the metrics collected from all the clusters in the project are sent to the integrated tool and displayed in the BigAnimal **Monitoring and logging** tab [using BigAnimal's Observability](../monitoring_using_biganimal_observability). The collected logs are exported to the object storage by default. -You can enable in-app inbox or email notifications so that you are notified if third-party monitoring integration fails. For more information, see [Managing notifications](../../../administering_cluster/notifications/#managing-notifications). +You can enable in-app inbox or email notifications so that you're notified if third-party monitoring integration fails. For more information, see [Managing notifications](../../../administering_cluster/notifications/#managing-notifications). The third-party integrations available in BigAnimal are: From 8f52bc02b1f65942fc3ca031a5a7e51e83758954 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 25 Apr 2024 12:11:59 -0400 Subject: [PATCH 44/51] Edits to epas15 pr3884 --- .../planning/deployment_options/aws_epas.mdx | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/product_docs/docs/epas/15/planning/deployment_options/aws_epas.mdx b/product_docs/docs/epas/15/planning/deployment_options/aws_epas.mdx index 5d7930dbe73..78185045782 100644 --- a/product_docs/docs/epas/15/planning/deployment_options/aws_epas.mdx +++ b/product_docs/docs/epas/15/planning/deployment_options/aws_epas.mdx @@ -23,16 +23,16 @@ To deploy an EDB Postgres Advanced Server instance on AWS: 1. On the **Launch an instance** page, select **Choose an Amazon Machine Image(AMI)**. -1. On the **Choose an Amazon Machine Image(AMI)** page, go to **AWS Marketplace AMIs** tab, type **EDB** in the search bar and choose the EDB Postgres Advanced Server image. +1. On the **Choose an Amazon Machine Image(AMI)** page, go to the **AWS Marketplace AMIs** tab. Enter **EDB** in the search bar. -1. Select the **EDB Postgres Advanced Server image** and review the all the tabs: - - Overview - - Product details - - Pricing - - Usage - - Support +1. Select the EDB Postgres Advanced Server image and review the all the tabs: + - **Overview** + - **Product details** + - **Pricing** + - **Usage** + - **Support** -1. Select continue to go on the **Launch an instance** page, and specify the following: +1. Select **Continue**. On the **Launch an instance** page, specify the following: - **Name and tags** — Provide the name of the server, for example, `EDB test server`. @@ -48,7 +48,7 @@ To deploy an EDB Postgres Advanced Server instance on AWS: - **Key pair (login)** — Select an existing key pair or create a new key pair. If you create a new key pair, enter a key-pair name, select a key-pair type, and select a private-key file format. Download the new key pair and move it to a location where you can access it. You need a key pair to securely connect to your instance. - - **Network settings** — For **Firewall**, select an existing security group or create a new security group. For more information, see [Network settings](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-networking.html). + - **Network settings** — For **Firewall**, select an existing security group or create a new security group. For more information, see [Network settings](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-networking.html) in the Amazon documentation. - **Configure storage** — Allocate the amount of storage you need for your instance. 
@@ -56,11 +56,11 @@ To deploy an EDB Postgres Advanced Server instance on AWS: Review the instance details in the **Summary** section on the right panel, and select **Launch instance**. - At last, you see the success message along with the instance id. Select the instance id to view the instance and see the auto-assigned IP address. + You see the success message along with the instance id. Select the instance id to view the instance and see the auto-assigned IP address. ## Connecting to an instance -You need the auto-assigned IP address to connect to your instance. To find the IP address, select **Instances** in the navigation pane on the EC2 home page. To view the complete details of your instance, including the IP address, select the instance ID next to your instance name. +You need the auto-assigned IP address to connect to your instance. To find the IP address, in the navigation pane on the EC2 home page, select **Instances**. To view the complete details of your instance, including the IP address, select the instance id next to your instance name. 1. Open a terminal window. @@ -78,7 +78,7 @@ You need the auto-assigned IP address to connect to your instance. To find the I ssh -i your_key_pair ec2-user@instance_ip_address ``` - You are now connected to the AWS EC2 instance where EDB Postgres Advanced Server is installed. + You're now connected to the AWS EC2 instance where EDB Postgres Advanced Server is installed. ## Getting started with a cluster From 88f2a7f2b1e4c7e763baf640b2c3cceeb95e5c91 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 16 Apr 2024 12:58:23 -0400 Subject: [PATCH 45/51] Edits to BigAnimal PR5462 --- .../release/migration/dha_bulk_migration.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx index d07712871e9..f59184bed93 100644 --- a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx +++ b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx @@ -30,13 +30,13 @@ Make note of the target's proxy hostname (target-proxy) and port (target-port). The following instructions give examples for a cluster named `ab-cluster` with an `ab-group` subgroup and three nodes: `ab-node-1`, `ab-node-2`, and `ab-node3`. The cluster is accessed through a host named `ab-proxy` (the target-proxy). On BigAnimal, a cluster is configured, by default, with an `edb_admin` user (the target-user) that can be used for the bulk upload. -The target-password for the target-user will be available from the BigAnimal dashboard for the cluster. +The target-password for the target-user is available from the BigAnimal dashboard for the cluster. A database named `bdrdb` (the target-dbname) will also have been created. ## Identify your data source -You need the source hostname (source-host), port (source-port), database name (source-dbname), user , and password for your source database. +You need the source hostname (source-host), port (source-port), database name (source-dbname), user, and password for your source database. Also, you currently need a list of tables in the database that you want to migrate to the target database. @@ -112,7 +112,7 @@ target-proxy:target-port:target-dbname:target-user:target-password ``` Create the file in your home directory and change its permissions to read/write only for the owner. -Ensure that your passwords are appropriately escaped in the .pgpass file. 
If an entry needs to contain : or \\, escape this character with \\. +Ensure that your passwords are appropriately escaped in the `.pgpass` file. If an entry needs to contain : or \\, escape this character with \\. ```shell chmod 0600 $HOME/.pgpass @@ -156,7 +156,7 @@ See also [Installing PGD CLI](/pgd/latest/cli/installing_cli/). #### Installing Migration Toolkit -EDB's Migration Toolkit (MTK) is a command-line tool that can be used to migrate data from a source database to a target database. It's a Java application and requires a Java runtime environment to be installed. +EDB's Migration Toolkit (MTK) is a command-line tool you can use to migrate data from a source database to a target database. It's a Java application and requires a Java runtime environment to be installed. * Ubuntu ```shell @@ -173,7 +173,7 @@ See also [Installing Migration Toolkit](/migration_toolkit/latest/installing/) #### Installing LiveCompare -EDB LiveCompare is an application that can be used to compare two databases and generate a report of the differences. It will be used later on in this process to verify the data migration. +EDB LiveCompare is an application you can use to compare two databases and generate a report of the differences. You'll use later in this process to verify the data migration. * Ubuntu ``` @@ -213,7 +213,7 @@ The next time you connect with psql, you're directed to the write leader, which To minimize the possibility of disconnections, move the raft and write leader roles to the destination node. -Make the destination node the raft leader using `bdr.raft_leadership_transfer`. You need to specify the node and the group name that the node is a member of.: +Make the destination node the raft leader using `bdr.raft_leadership_transfer`. You need to specify the node and the group name that the node is a member of: ``` bdr.raft_leadership_transfer('ab-node-1',true,'ab-group'); @@ -326,7 +326,7 @@ Consult `predatarestore.log` to ensure that the restore was successful. If it fa ### Transferring role definitions -Use the `pg_dumpall` utility to dump the role definitions from the source database: +Use the pg_dumpall utility to dump the role definitions from the source database: ```shell pg_dumpall -r -h -p -U > roles.sql >> rolesdump.log @@ -460,7 +460,7 @@ dsn = host= port= dbname= user= port= dbname= user= ``` -This configuration file should be saved as `migrationcheck.ini`. The `[First Connection]` and `[Second Connection]` sections should be updated with the appropriate values, with the `[First Connection]` section pointing to the source database and the `[Second Connection]` section pointing to the target database. The `[Output Connection]` section defines a database where a `livecompare` schema will be created to store the comparison results. +Save this configuration file as `migrationcheck.ini`. Update the `[First Connection]` and `[Second Connection]` sections with the appropriate values, with the `[First Connection]` section pointing to the source database and the `[Second Connection]` section pointing to the target database. The `[Output Connection]` section defines a database where a `livecompare` schema will be created to store the comparison results. Run LiveCompare using the configuration file you created: @@ -468,7 +468,7 @@ Run LiveCompare using the configuration file you created: livecompare migrationcheck.ini --compare ``` -LiveCompare will compare the source and target databases and generate a report of the differences. 
+LiveCompare compares the source and target databases and generates a report of the differences. Review the report to ensure that the data migration was successful. Refer to the [LiveCompare](/livecompare/latest/) documentation for more information on using LiveCompare. From de4ea61dc4fbc4f4458623f4688d1960fe95ab32 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 16 Apr 2024 13:00:29 -0400 Subject: [PATCH 46/51] fixed a couple of typos --- .../docs/biganimal/release/migration/dha_bulk_migration.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx index f59184bed93..768f191ff94 100644 --- a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx +++ b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx @@ -31,7 +31,7 @@ The following instructions give examples for a cluster named `ab-cluster` with a On BigAnimal, a cluster is configured, by default, with an `edb_admin` user (the target-user) that can be used for the bulk upload. The target-password for the target-user is available from the BigAnimal dashboard for the cluster. -A database named `bdrdb` (the target-dbname) will also have been created. +A database named `bdrdb` (the target-dbname) was also created. ## Identify your data source @@ -173,7 +173,7 @@ See also [Installing Migration Toolkit](/migration_toolkit/latest/installing/) #### Installing LiveCompare -EDB LiveCompare is an application you can use to compare two databases and generate a report of the differences. You'll use later in this process to verify the data migration. +EDB LiveCompare is an application you can use to compare two databases and generate a report of the differences. You'll use it later in this process to verify the data migration. * Ubuntu ``` From 82b4af12344eb23192c61e86058a4f7f6dd9a25b Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 9 Apr 2024 11:24:05 -0400 Subject: [PATCH 47/51] Edits to BigAnimal PR5372 --- .../using_cluster/05c_upgrading_log_rep.mdx | 47 +++++++++---------- 1 file changed, 22 insertions(+), 25 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index c44e4bf99f3..9fe6b6a4f8b 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -7,38 +7,38 @@ deepToC: true ## Using logical replication !!! Note -This procedure does not work with distributed high-availability BigAnimal instances. +This procedure doesn't work with distributed high-availability BigAnimal instances. !!! Logical replication is a common method for upgrading the Postgres major version on BigAnimal instances, enabling a transition with minimal downtime. -By replicating changes in real-time from an older version (source instance) to a newer one (target instance), this method provides a reliable upgrade path while maintaining database availability. +By replicating changes in real time from an older version (source instance) to a newer one (target instance), this method provides a reliable upgrade path while maintaining database availability. !!! 
Important -Depending on where your older and newer versioned BigAnimal instances are located, this procedure may accrue ingress and egress costs from your cloud service provider (CSP) for the migrated data. Please consult your CSP's pricing documentation to see how ingress and egress fees are calculated to determine any extra costs. +Depending on where your older and newer versioned BigAnimal instances are located, this procedure can accrue ingress and egress costs from your cloud service provider (CSP) for the migrated data. Consult your CSP's pricing documentation to see how ingress and egress fees are calculated to determine any extra costs. !!! ### Overview of upgrading -To perform a major version upgrade, use the following steps, explained in further detail below: +To perform a major version upgrade: -1. [Create a BigAnimal instance](#create-a-biganimal-instance) -1. [Gather instance information](#gather-instance-information) -1. [Confirm the Postgres versions before migration](#confirm-the-postgres-versions-before-migration) -1. [Migrate the database schema](#migrate-the-database-schema) -1. [Create a publication](#create-a-publication) -1. [Create a logical replication slot](#create-the-logical-replication-slot) -1. [Create a subscription](#create-a-subscription) -1. [Validate the migration](#validate-the-migration) +1. [Create a BigAnimal instance.](#create-a-biganimal-instance) +1. [Gather instance information.](#gather-instance-information) +1. [Confirm the Postgres versions before migration.](#confirm-the-postgres-versions-before-migration) +1. [Migrate the database schema.](#migrate-the-database-schema) +1. [Create a publication.](#create-a-publication) +1. [Create a logical replication slot.](#create-the-logical-replication-slot) +1. [Create a subscription.](#create-a-subscription) +1. [Validate the migration.](#validate-the-migration) ### Create a BigAnimal instance -To perform a major version upgrade, create a BigAnimal instance with your desired version of Postgres. This will be your target instance. +To perform a major version upgrade, create a BigAnimal instance with your desired version of Postgres. This is your target instance. Ensure your target instance is provisioned with a storage size equal to or greater than your source instance. -For detailed steps on creating a BigAnimal instance, see [this guide](../getting_started/creating_a_cluster.mdx). +For details on creating a BigAnimal instance, see [Creating a cluster](../getting_started/creating_a_cluster.mdx). ### Gather instance information @@ -46,14 +46,14 @@ Use the BigAnimal console to obtain the following information for your source an - Read/write URI - Database name -- Username +- Username - Read/write host Using the BigAnimal console: 1. Select the **Clusters** tab. 1. Select your source instance. -1. From the Connect tab, obtain the information from **Connection Info**. +1. From the **Connect** tab, obtain the information from **Connection Info**. ### Confirm the Postgres versions before migration @@ -80,7 +80,7 @@ On your source instance, use the `dt` command to view the details of the schema /dt+; ``` -Here is a sample database schema for this example: +Here's a sample database schema for this example: ``` List of relations @@ -92,7 +92,7 @@ Here is a sample database schema for this example: public | pgbench_tellers | table | edb_admin | permanent | heap | 120 kB | ``` -Use pg_dump with the `--schema-only` flag to copy the schema from your source to your target instance. 
For more information on using `pg_dump`, [see the Postgres documentation](https://www.postgresql.org/docs/current/app-pgdump.html). +Use pg_dump with the `--schema-only` flag to copy the schema from your source to your target instance. For more information on using pg_dump, [see the Postgres documentation](https://www.postgresql.org/docs/current/app-pgdump.html). ``` pg_dump --schema-only -h -U -d | psql -h -U -d @@ -148,7 +148,7 @@ The expected output is: `ALTER PUBLICATION`. ### Create the logical replication slot -Then, on the source instance, create a replication slot using the `pgoutput` plugin: +On the source instance, create a replication slot using the `pgoutput` plugin: ```sql SELECT pg_create_logical_replication_slot('','pgoutput'); @@ -178,7 +178,7 @@ Use the `CREATE SUBSCRIPTION` command to create a subscription on your target in CREATE SUBSCRIPTION CONNECTION 'user= host= sslmode=require port= dbname= password=' PUBLICATION WITH (enabled=true, copy_data = true, create_slot = false, slot_name=); ``` -Creating a subscription on a Postgres 16 instance to a publication on a Postgres 12 instance: +This example creates a subscription on a Postgres 16 instance to a publication on a Postgres 12 instance: ```sql CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biganimal.io sslmode=require port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); @@ -186,7 +186,7 @@ CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biga The expected output is: `CREATE SUBSCRIPTION`. -In this example, the subscription uses a connection string to specify the source database and includes options to copy existing data and to follow the publication identified by 'v12_pub'. +In this example, the subscription uses a connection string to specify the source database and includes options to copy existing data and to follow the publication identified by `v12_pub`. The subscriber pulls schema changes (with some exceptions, as noted in the PostgreSQL [documentation on Limitations of Logical Replication](https://www.postgresql.org/docs/current/logical-replication-restrictions.html)) and data from the source to the target database, effectively replicating the data. @@ -214,7 +214,7 @@ To validate the progress of the data migration, use `dt+` from the source and ta public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | ``` -If logical replication is running correctly, each time you run `\dt+;` you see that more data has been migrated: +If logical replication is running correctly, each time you run `\dt+;` you see that more data was migrated: ``` List of relations @@ -229,6 +229,3 @@ If logical replication is running correctly, each time you run `\dt+;` you see t !!! Note You can optionally use [LiveCompare](https://www.enterprisedb.com/docs/livecompare/latest/) to generate a comparison report of the source and target databases to validate that all database objects and data are consistent. !!! 
- - - From b452dc73de2e953849160c3e261e73f24c1407e4 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 26 Mar 2024 14:35:23 -0400 Subject: [PATCH 48/51] Edits to BigAnimal PR5422 --- .../fault_injection_testing/index.mdx | 29 ++++++++++--------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index a74b2446f18..0fc58c2f5b3 100644 --- a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -5,17 +5,17 @@ navigation: - Fault injection testing --- -You can test the fault tolerance of your cluster by deleting a VM in order to inject a fault. Once a VM is deleted, you can monitor +You can test the fault tolerance of your cluster by deleting a VM to inject a fault. Once a VM is deleted, you can monitor the availability and recovery of the cluster. ## Requirements -Ensure you meet the following requirements before using fault injection testing: +Before using fault injection testing, ensure you meet the following requirements: -+ You have connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. ++ You've connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. + You have permissions in your Azure subscription to view and delete VMs and also the ability to view Kubernetes pods via Azure Kubernetes Service RBAC Reader. + You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing_cli/#) for more information. -+ You have created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. ++ You've created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. ## Fault injection testing steps @@ -26,9 +26,9 @@ Fault injection testing consists of the following steps: 3. Deleting a write leader node from your cluster 4. Monitoring cluster health -### Verifying Cluster Health +### Verifying cluster health -Use the following commands to monitor your cluster health, node info, raft, replication lag, and write leads. +Use the following commands to monitor your cluster health, node info, raft, replication lag, and write leads: ```shell pgd check-health -f pgd-cli-config.yml @@ -57,6 +57,7 @@ pgd help show-nodes ### Determining the write leader node for your cluster +This example shows the command for determining the write leader node for a cluster: ```shell pgd show-groups -f pgd-cli-config.yml @@ -68,7 +69,7 @@ p-x67kjp3fsq-a 2456382099 data world p-x67kjp3fsq-a-1 p-x67kjp3fsq-c 4147262499 data world p-x67kjp3fsq-d 3176957154 data world p-x67kjp3fsq-d-1 ``` -In this example, the write leader node is **p-x67kjp3fsq-a-1**. +In this example, the write leader node is `p-x67kjp3fsq-a-1`. ## Deleting a write leader node from your cluster @@ -76,19 +77,20 @@ In this example, the write leader node is **p-x67kjp3fsq-a-1**. To delete a write lead node from the cluster: 1. Log into BigAnimal. 2. 
-3. In the left navigation of BigAnimal portal, choose **Clusters**.
-4. Choose the cluster to test fault injection with and copy the string value from the URL. The string value is located after the underscore.
+3. In the left navigation of BigAnimal portal, select **Clusters**.
+4. Select the cluster to test fault injection with and copy the string value from the URL. The string value is located after the underscore.

![Delete a write lead](images/biganimal_faultinjectiontest_1.png)

-5. In your Azure subscription, paste the string into the search and prefix it with **dp-** to search for the data plane.
-   * From the results, choose the Kubernetes service from the Azure Region that your cluster is deployed in.
+5. To search for the data plane, in your Azure subscription, paste the string into the search and prefix it with `dp-`.
+
+6. From the results, select the Kubernetes service from the Azure region that your cluster is deployed in.

![Delete a write lead 2](images/biganimal_faultinjectiontest_2.png)

-6. Identify the Kubernetes service for your cluster.
+7. Identify the Kubernetes service for your cluster.

![Delete a write lead](images/biganimal_faultinjectiontest_4.png)

@@ -97,11 +99,10 @@ To delete a write lead node from the cluster:
Don't delete the Azure Kubernetes VMSS here or sub resources directly.
!!!

-7. Browse to the Data Plane, choose Workloads, and locate the Kubernetes resources for your cluster to delete a chosen node.
+8. To delete a chosen node, browse to the data plane, select **Workloads**, and locate the Kubernetes resources for your cluster.

![Delete a write lead 3](images/biganimal_faultinjectiontest_3.png)

### Monitoring cluster health

After deleting a cluster node, you can monitor the health of the cluster using the same PGD CLI commands that you used to verify cluster health.
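For example, one way to watch the recovery is a simple loop over those same commands. A sketch, reusing the `pgd-cli-config.yml` file from the requirements above (the 10-second interval is arbitrary):

```shell
# Recheck cluster health and write-leader placement every 10 seconds
# while the cluster absorbs the fault and elects a new write leader.
while true; do
  pgd check-health -f pgd-cli-config.yml
  pgd show-groups -f pgd-cli-config.yml
  sleep 10
done
```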
- From a446b6ae2de670658b4a6962d10a1b816717daa1 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Mon, 3 Jun 2024 21:27:57 -0700 Subject: [PATCH 49/51] least-privilege: use normal GH_TOKEN for PR creation/update This allows a fine-grained PAT for source repo retrieval --- .github/workflows/sync-and-process-files.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/sync-and-process-files.yml b/.github/workflows/sync-and-process-files.yml index be2a682bdc2..0dd8d74ae84 100644 --- a/.github/workflows/sync-and-process-files.yml +++ b/.github/workflows/sync-and-process-files.yml @@ -53,4 +53,4 @@ jobs: path: destination/ reviewers: ${{ env.REVIEWERS }} title: ${{ env.TITLE }} - token: ${{ secrets.SYNC_FILES_TOKEN }} + token: ${{ secrets.GH_TOKEN }} From 2d13c807e75ca106c453dd116dcf17b0a2ec8310 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 4 Jun 2024 10:21:20 +0530 Subject: [PATCH 50/51] Update product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx --- .../docs/biganimal/release/reference/cli/managing_clusters.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index 0a179dd2665..15cf3952d58 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -307,7 +307,7 @@ biganimal cluster pause {--id | --provider --region --name} ``` Where: -- `id` is a valid cluster ID. The `id` is mandatory. +- `id` is a valid cluster ID. - `provider` is a cloud provider of the cluster. - `region` is the cluster region. - `name` is the name of the cluster. From 98a0839273fac25f7b5e6da146ab50da41eb1a94 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 4 Jun 2024 10:21:48 +0530 Subject: [PATCH 51/51] Update product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx --- .../docs/biganimal/release/reference/cli/managing_clusters.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx index 15cf3952d58..169e7a5d852 100644 --- a/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx +++ b/product_docs/docs/biganimal/release/reference/cli/managing_clusters.mdx @@ -351,7 +351,7 @@ biganimal cluster resume {--id | --provider --region --name} ``` Where: -- `id` is a valid cluster ID. The `id` is mandatory. +- `id` is a valid cluster ID. - `provider` is a cloud provider of the cluster. - `region` is the cluster region. - `name` is the name of the cluster.
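To illustrate the least-privilege split that the workflow change in PATCH 49 enables, here's a hypothetical sketch. The step, secret, and repository names are placeholders, not the repository's actual configuration:

```yaml
# Hypothetical sketch only: all names below are assumptions.
# Retrieval of the source repo can use a read-only fine-grained PAT...
- name: Fetch source repository
  uses: actions/checkout@v4
  with:
    repository: example-org/source-docs    # placeholder repo
    token: ${{ secrets.SOURCE_REPO_PAT }}  # assumed fine-grained PAT, contents: read only
    path: source/

# ...while the PR creation/update step keeps the broader repo-scoped
# GH_TOKEN, exactly as in the diff above.
```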