From c220510154812b06a204fae8605d30eee4808135 Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Thu, 17 Aug 2023 20:18:46 +0300 Subject: [PATCH 01/13] guides: Add postgres migration guidelines --- source/guides/postgres-migration.rst | 204 +++++++++++++++++++++++++++ 1 file changed, 204 insertions(+) create mode 100644 source/guides/postgres-migration.rst diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst new file mode 100644 index 00000000000..4e9115314d2 --- /dev/null +++ b/source/guides/postgres-migration.rst @@ -0,0 +1,204 @@ +Migration Guidelines from MySQL to PostgreSQL +============================================= + +.. include:: ../_static/badges/allplans-selfhosted.rst + :start-after: :nosearch: + +As of version 8.0, a significant decision has been made to establish PostgreSQL as the default database for Mattermost, a step taken to enhance the platform’s performance and capabilities. Recognizing the importance of supporting the community members who are interested in migrating from a MySQL database, we have taken proactive measures to provide them with some assistance. To streamline the migration process and alleviate any potential challenges, we have prepared a comprehensive set of basic guidelines to facilitate a smooth transition. Additionally, we want to offer recommendations for various tools that have proven to be highly effective in simplifying the migration efforts. + +Note that this guideline is still in development and we are working to streamline the migration process. We are planning to improve this guide by periodically updating it. Please use this guide as a starting point and always backup your database before starting the migration. + +Table of Contents +----------------- + +- `Required tools <#required-tools>`__ +- `Before the migration <#before-the-migration>`__ +- `Prepare target database <#prepare-target-database>`__ +- `Schema Differences <#schema-diffs>`__ +- `Migrate the data <#migrate-the-data>`__ +- `Compare the data <#compare-the-data>`__ +- `Notes <#notes>`__ + +Required tools +-------------- + +- Install ``pgLoader``. See the official `installation + guide `__. +- Install morph CLI via running the following command: + + - ``go install github.com/mattermost/morph/cmd/morph@v1`` + +- Optinally install ``dbcmp`` to compare the data after a migration: + + - ``go install github.com/mattermost/dbcmp/cmd/dbcmp@latest`` + +Before the migration +-------------------- + +- Backup your MySQL data. +- Find your mattermost version. You can look to the about modal from the web app. +- Determine migration window the process requires application to stop. +- See the `schema-diffs <#schema-diffs>`__ section to ensure data compatibility between schemas. +- Prepare your PostgreSQL environment by creating a database and user. See more info `here `__ + +Prepare target database +----------------------- + +- Clone mattermost repository for your specific version: + ``git clone -b git@github.com:mattermost/mattermost.git --depth=1`` +- ``cd`` into ``mattermost`` project*. +- Create a postgres database using morph CLI with the following command: + +.. code:: bash + + morph apply up --driver postgres --dsn "postgres://user:pass@localhost:5432/?sslmode=disable" --path ./db/migrations/postgres --number -1 + +\* After ``v8`` due to project re-organization, the migrations directory has been changed to ``./server/channels/db/migrations/postgres/`` relative to project root. Therefore ``cd`` into ``mattermost/server/channels``. 
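Putting the steps above together, a minimal sketch of preparing the target database might look like the following shell session. The release tag ``v8.0.1``, the ``mmuser``/``mostest`` credentials, and the ``mattermost`` database name are placeholders only — substitute the values that match your own deployment:

.. code:: bash

    # Clone only the tag that matches your Mattermost Server version (v8.0.1 is a placeholder).
    git clone -b v8.0.1 git@github.com:mattermost/mattermost.git --depth=1

    # For v8 and later, the migration files live under server/channels.
    cd mattermost/server/channels

    # Create the empty PostgreSQL schema with morph; adjust user, password, host, and database name.
    morph apply up --driver postgres \
        --dsn "postgres://mmuser:mostest@localhost:5432/mattermost?sslmode=disable" \
        --path ./db/migrations/postgres --number -1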
+ +Schema Diffs +------------ + +Before the migration, due to differences between two schemas some manual steps may required to have an error-free migration. + +Text to Character Varying +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Since our MySQL schema uses ``text`` column type in the various tables instead of ``varchar`` represantation in the PostgreSQL schema, we encourage to check if the sizes are consistent within the Postgres schema limits. + +================ ================ ===================== +Table Column Data Type Casting +================ ================ ===================== +Audits Action text -> varchar(512) +Audits ExtraInfo text -> varchar(1024) +ClusterDiscovery HostName text -> varchar(512) +Commands IconURL text -> varchar(1024) +Commands AutoCompleteDesc text -> varchar(1024) +Commands AutoCompleteHint text -> varchar(1024) +Compliances Keywords text -> varchar(512) +Compliances Emails text -> varchar(1024) +FileInfo Path text -> varchar(512) +FileInfo ThumbnailPath text -> varchar(512) +FileInfo PreviewPath text -> varchar(512) +FileInfo Name text -> varchar(256) +FileInfo MimeType text -> varchar(256) +LinkMetadata URL text -> varchar(2048) +RemoteClusters SiteURL text -> varchar(512) +RemoteClusters Topics text -> varchar(512) +Sessions DeviceId text -> varchar(512) +Systems Value text -> varchar(1024) +UploadSessions FileName text -> varchar(256) +UploadSessions Path text -> varchar(512) +================ ================ ===================== + +As you can see there are several occurrences where schema can differ and data size constaints within the Postgres schema can result in errors. Several reports have been received from our community members that ``LinkMetadata`` and ``FileInfo`` tables indeed had some overflows so we recommend checking these tables in particular. Please do check if your data in MySQL schema exceed these limitations. You can check if there are any required deletions. For example to do so in the Audits table/Action column; run: + +.. code:: sql + + DELETE FROM mattermost.Audits where LENGTH(Action) > 512; + +Full-text indexes +~~~~~~~~~~~~~~~~~ + +There is a possibility where some words in the ``Posts`` ans ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. In that case we recommend dropping ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema and creating these indexes after the migration by running following queries: + +Tp drop indexes run these before the migration: + +.. code:: sql + + DROP INDEX IF EXISTS idx_posts_message_txt; + DROP INDEX IF EXISTS idx_fileinfo_content_txt; + +To re-create indexes, run these once the migration is completed: + +.. code:: sql + + CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON posts USING gin(to_tsvector('english', message)); + CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON fileinfo USING gin(to_tsvector('english', content)); + +Migrate the data +---------------- + +Now we set the schema to desired state and we can start migrating the **data** by running ``pgLoader`` \*\* + +\*\* Use the following configuration for the baseline of the data migration: + +.. 
code:: sql + + LOAD DATABASE + FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} + INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} + + WITH data only, + workers = 8, concurrency = 1, + multiple readers per thread, rows per range = 50000, + create no tables, + create no indexes, + preserve index names + + SET PostgreSQL PARAMETERS + maintenance_work_mem to '128MB', + work_mem to '12MB' + + SET MySQL PARAMETERS + net_read_timeout = '120', + net_write_timeout = '120' + + CAST column Channels.Type to channel_type drop typemod, + column Teams.Type to team_type drop typemod, + column UploadSessions.Type to upload_session_type drop typemod, + column Drafts.Priority to text, + type int when (= precision 11) to integer drop typemod, + type bigint when (= precision 20) to bigint drop typemod, + type text to varchar drop typemod, + type tinyint when (<= precision 4) to boolean using tinyint-to-boolean, + type json to jsonb drop typemod + + MATERIALIZE VIEWS exclude_products + excluding table names matching ~, ~ + + BEFORE LOAD DO + $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ + + AFTER LOAD DO + $$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$, + $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$; + +Once you save this configuration file eg. ``migration.load``, you can run the ``pgLoader`` with following command: + +.. code:: bash + + pgLoader migration.load > migration.log + +Feel free to contribute and/or report your findings through the migration. + +Compare the data +---------------- + +We internally developed a tool to simplify the process of comparing contents of two databases. The ``dbcmp`` tool compares every table and reports whether if there is a diversity between two schemas. + +The tool has a few flags needs to be supplied to run a comparison: + +.. code:: sh + + Usage: + dbcmp [flags] + + Flags: + --exclude strings exclude tables from comparison, takes comma-separated values. + -h, --help help for dbcmp + --source string source database dsn + --target string target database dsn + -v, --version version for dbcmp + +For our case we can simply run the following command: + +.. code:: sh + + dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations","ir_","focalboard","systems" + +Note that the migration guide only covers the tables for Mattermost channels, the support for other plugins such as Boards and Playbooks will be added in the future. Another exlusion we are making is in the ``db_migrations`` table which has a small difference (a typo in a single migration name) creates a diff. Since we created the Postgres schema with morph and the official mattermost source, we can consider to skip it safely. On the other hand, ``systems`` table may contain additional diffs if there was extra keys added during some of the migrations. Consider excluding ``systems`` table if you run into issues and do a manual comparison as the data in the ``systems`` table is relatively smaller in size. + +Notes +----- + +Keep in mind that this migration guide primarily focuses on providing step-by-step instructions for the migration; however, it is essential to note that it does not encompass migration configurations for any plugins, such as Focalboard and Playbooks. 
If your system utilizes these plugins, we highly advise exercising patience until we incorporate the necessary configurations specifically tailored to ensure a smooth transition for those plugins as well. From 3ebe2c3d1f55404af36fddd8811e327e4a0051ea Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Sat, 19 Aug 2023 08:49:23 +0300 Subject: [PATCH 02/13] Apply suggestions from code review Co-authored-by: Carrie Warner (Mattermost) <74422101+cwarnermm@users.noreply.github.com> --- source/guides/postgres-migration.rst | 48 +++++++++++++++------------- 1 file changed, 26 insertions(+), 22 deletions(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 4e9115314d2..77517a59006 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -1,12 +1,14 @@ -Migration Guidelines from MySQL to PostgreSQL +Migration guidelines from MySQL to PostgreSQL ============================================= .. include:: ../_static/badges/allplans-selfhosted.rst :start-after: :nosearch: -As of version 8.0, a significant decision has been made to establish PostgreSQL as the default database for Mattermost, a step taken to enhance the platform’s performance and capabilities. Recognizing the importance of supporting the community members who are interested in migrating from a MySQL database, we have taken proactive measures to provide them with some assistance. To streamline the migration process and alleviate any potential challenges, we have prepared a comprehensive set of basic guidelines to facilitate a smooth transition. Additionally, we want to offer recommendations for various tools that have proven to be highly effective in simplifying the migration efforts. +From Mattermost v8.0, PostgreSQL is our database of choice for Mattermost to enhance the platform’s performance and capabilities. Recognizing the importance of supporting the community members who are interested in migrating from a MySQL database, we have taken proactive measures to provide guidance and best practices. -Note that this guideline is still in development and we are working to streamline the migration process. We are planning to improve this guide by periodically updating it. Please use this guide as a starting point and always backup your database before starting the migration. +To streamline the migration process and alleviate any potential challenges, we have prepared a comprehensive set of guidelines to facilitate a smooth transition. Additionally, we want to offer recommendations for various tools that have proven to be highly effective in simplifying your migration efforts. + +Note that these guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. Please use this guide as a starting point and always backup your database before starting a migration. Table of Contents ----------------- @@ -36,18 +38,18 @@ Before the migration -------------------- - Backup your MySQL data. -- Find your mattermost version. You can look to the about modal from the web app. -- Determine migration window the process requires application to stop. +- Confirm your Mattermost version. See the **About** modal for details. +- Determine the migration window needed. This process requires you to stop the Mattermost Server during the migration. - See the `schema-diffs <#schema-diffs>`__ section to ensure data compatibility between schemas. 
-- Prepare your PostgreSQL environment by creating a database and user. See more info `here `__ +- Prepare your PostgreSQL environment by creating a database and user. See the `database `__ documentation for details. Prepare target database ----------------------- -- Clone mattermost repository for your specific version: +- Clone the ``mattermost`` repository for your specific version: ``git clone -b git@github.com:mattermost/mattermost.git --depth=1`` - ``cd`` into ``mattermost`` project*. -- Create a postgres database using morph CLI with the following command: +- Create a PostgreSQL database using morph CLI with the following command: .. code:: bash @@ -55,18 +57,18 @@ Prepare target database \* After ``v8`` due to project re-organization, the migrations directory has been changed to ``./server/channels/db/migrations/postgres/`` relative to project root. Therefore ``cd`` into ``mattermost/server/channels``. -Schema Diffs +Schema diffs ------------ -Before the migration, due to differences between two schemas some manual steps may required to have an error-free migration. +Before the migration, due to differences between two schemas, some manual steps may required to have an error-free migration. -Text to Character Varying +Text to character varying ~~~~~~~~~~~~~~~~~~~~~~~~~ -Since our MySQL schema uses ``text`` column type in the various tables instead of ``varchar`` represantation in the PostgreSQL schema, we encourage to check if the sizes are consistent within the Postgres schema limits. +Since the Mattermost MySQL schema uses the ``text`` column type in the various tables instead of ``varchar`` representation in the PostgreSQL schema, we encourage you to check if the sizes are consistent within the PostgreSQL schema limits. ================ ================ ===================== -Table Column Data Type Casting +Table Column Data type casting ================ ================ ===================== Audits Action text -> varchar(512) Audits ExtraInfo text -> varchar(1024) @@ -90,7 +92,7 @@ UploadSessions FileName text -> varchar(256) UploadSessions Path text -> varchar(512) ================ ================ ===================== -As you can see there are several occurrences where schema can differ and data size constaints within the Postgres schema can result in errors. Several reports have been received from our community members that ``LinkMetadata`` and ``FileInfo`` tables indeed had some overflows so we recommend checking these tables in particular. Please do check if your data in MySQL schema exceed these limitations. You can check if there are any required deletions. For example to do so in the Audits table/Action column; run: +As you can see, there are several occurrences where the schema can differ and data size constraints within the PostgreSQL schema can result in errors. Several reports have been received from our community that ``LinkMetadata`` and ``FileInfo`` tables had some overflows, so we recommend checking these tables in particular. Please do check if your data in the MySQL schema exceeds these limitations. You can check if there are any required deletions. For example, to do so in the ``Audits`` table/``Action`` column; run: .. code:: sql @@ -99,16 +101,16 @@ As you can see there are several occurrences where schema can differ and data si Full-text indexes ~~~~~~~~~~~~~~~~~ -There is a possibility where some words in the ``Posts`` ans ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. 
In that case we recommend dropping ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema and creating these indexes after the migration by running following queries: +It's possible that some words in the ``Posts`` ans ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. In these cases, we recommend dropping the ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema, and creating these indexes after the migration by running following queries: -Tp drop indexes run these before the migration: +To drop indexes, run the following commands before the migration: .. code:: sql DROP INDEX IF EXISTS idx_posts_message_txt; DROP INDEX IF EXISTS idx_fileinfo_content_txt; -To re-create indexes, run these once the migration is completed: +To re-create indexes, run the following once the migration is completed: .. code:: sql @@ -118,7 +120,7 @@ To re-create indexes, run these once the migration is completed: Migrate the data ---------------- -Now we set the schema to desired state and we can start migrating the **data** by running ``pgLoader`` \*\* +Once we set the schema to desired state, we can start migrating the **data** by running ``pgLoader`` \*\* \*\* Use the following configuration for the baseline of the data migration: @@ -163,20 +165,20 @@ Now we set the schema to desired state and we can start migrating the **data** b $$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$, $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$; -Once you save this configuration file eg. ``migration.load``, you can run the ``pgLoader`` with following command: +Once you save this configuration file, eg. ``migration.load``, you can run the ``pgLoader`` with following command: .. code:: bash pgLoader migration.load > migration.log -Feel free to contribute and/or report your findings through the migration. +Feel free to contribute to and/or report your findings through your migration to us. Compare the data ---------------- We internally developed a tool to simplify the process of comparing contents of two databases. The ``dbcmp`` tool compares every table and reports whether if there is a diversity between two schemas. -The tool has a few flags needs to be supplied to run a comparison: +The tool includes a few flags to run a comparison: .. code:: sh @@ -196,7 +198,9 @@ For our case we can simply run the following command: dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations","ir_","focalboard","systems" -Note that the migration guide only covers the tables for Mattermost channels, the support for other plugins such as Boards and Playbooks will be added in the future. Another exlusion we are making is in the ``db_migrations`` table which has a small difference (a typo in a single migration name) creates a diff. Since we created the Postgres schema with morph and the official mattermost source, we can consider to skip it safely. On the other hand, ``systems`` table may contain additional diffs if there was extra keys added during some of the migrations. Consider excluding ``systems`` table if you run into issues and do a manual comparison as the data in the ``systems`` table is relatively smaller in size. +Note that this migration guide only covers the tables for Mattermost channels. Support for other plugins, such as Playbooks, will be added in the future. 
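As a purely illustrative sketch, the two DSN environment variables used above could be set along the following lines. The hostnames, credentials, and database names are hypothetical, and the exact DSN syntax ``dbcmp`` expects may differ, so treat this only as an illustration:

.. code:: sh

    # Hypothetical connection strings for the comparison; replace every value with your own.
    export MYSQL_DSN="mmuser:mostest@tcp(localhost:3306)/mattermost"
    export POSTGRES_DSN="postgres://mmuser:mostest@localhost:5432/mattermost?sslmode=disable"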
+ +Another exclusion we are making is in the ``db_migrations`` table which has a small difference (a typo in a single migration name) creates a diff. Since we created the PostgreSQL schema with morph, and the official ``mattermost`` source, we can skip it safely without concerns. On the other hand, ``systems`` table may contain additional diffs if there were extra keys added during some of the migrations. Consider excluding the ``systems`` table if you run into issues, and perform a manual comparison as the data in the ``systems`` table is relatively smaller in size. Notes ----- From 50229b85f291528312f56ed9aeabc63db14bcb54 Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Tue, 22 Aug 2023 10:03:27 +0300 Subject: [PATCH 03/13] reflect review comments --- source/guides/postgres-migration.rst | 29 ++++++++++------------------ 1 file changed, 10 insertions(+), 19 deletions(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 77517a59006..07832e605a6 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -8,18 +8,14 @@ From Mattermost v8.0, PostgreSQL is our database of choice for Mattermost to enh To streamline the migration process and alleviate any potential challenges, we have prepared a comprehensive set of guidelines to facilitate a smooth transition. Additionally, we want to offer recommendations for various tools that have proven to be highly effective in simplifying your migration efforts. -Note that these guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. Please use this guide as a starting point and always backup your database before starting a migration. +.. note:: -Table of Contents ------------------ + These guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. It is essential to note that it does not encompass migration configurations for any plugins, such as Focalboard and Playbooks. If your system utilizes these plugins, we highly advise exercising patience until we incorporate the necessary configurations specifically tailored to ensure a smooth transition for those plugins as well. Please use this guide as a starting point and always backup your database before starting a migration. -- `Required tools <#required-tools>`__ -- `Before the migration <#before-the-migration>`__ -- `Prepare target database <#prepare-target-database>`__ -- `Schema Differences <#schema-diffs>`__ -- `Migrate the data <#migrate-the-data>`__ -- `Compare the data <#compare-the-data>`__ -- `Notes <#notes>`__ +.. contents:: On this page: + :backlinks: top + :local: + :depth: 1 Required tools -------------- @@ -101,7 +97,7 @@ As you can see, there are several occurrences where the schema can differ and da Full-text indexes ~~~~~~~~~~~~~~~~~ -It's possible that some words in the ``Posts`` ans ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. In these cases, we recommend dropping the ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema, and creating these indexes after the migration by running following queries: +It's possible that some words in the ``Posts`` and ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. 
In these cases, we recommend dropping the ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema, and creating these indexes after the migration by running following queries: To drop indexes, run the following commands before the migration: @@ -124,7 +120,7 @@ Once we set the schema to desired state, we can start migrating the **data** by \*\* Use the following configuration for the baseline of the data migration: -.. code:: sql +.. code:: LOAD DATABASE FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} @@ -176,7 +172,7 @@ Feel free to contribute to and/or report your findings through your migration to Compare the data ---------------- -We internally developed a tool to simplify the process of comparing contents of two databases. The ``dbcmp`` tool compares every table and reports whether if there is a diversity between two schemas. +We internally developed a tool to simplify the process of comparing contents of two databases. The ``dbcmp`` tool compares every table and reports whether if there is a diversion between two schemas. The tool includes a few flags to run a comparison: @@ -196,13 +192,8 @@ For our case we can simply run the following command: .. code:: sh - dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations","ir_","focalboard","systems" + dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations,ir_,focalboard,systems" Note that this migration guide only covers the tables for Mattermost channels. Support for other plugins, such as Playbooks, will be added in the future. Another exclusion we are making is in the ``db_migrations`` table which has a small difference (a typo in a single migration name) creates a diff. Since we created the PostgreSQL schema with morph, and the official ``mattermost`` source, we can skip it safely without concerns. On the other hand, ``systems`` table may contain additional diffs if there were extra keys added during some of the migrations. Consider excluding the ``systems`` table if you run into issues, and perform a manual comparison as the data in the ``systems`` table is relatively smaller in size. - -Notes ------ - -Keep in mind that this migration guide primarily focuses on providing step-by-step instructions for the migration; however, it is essential to note that it does not encompass migration configurations for any plugins, such as Focalboard and Playbooks. If your system utilizes these plugins, we highly advise exercising patience until we incorporate the necessary configurations specifically tailored to ensure a smooth transition for those plugins as well. From cfe431f7b91175ed536128f81ece232a97cd74e5 Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Tue, 22 Aug 2023 12:42:08 +0300 Subject: [PATCH 04/13] reflect review comments --- source/guides/postgres-migration.rst | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 07832e605a6..6687fb12aeb 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -33,6 +33,9 @@ Required tools Before the migration -------------------- +.. note:: + This guide requires at least a schema of v6.4. So, if you have an earlier version and planning to migrate, please update your Mattermost Server to v6.4 at least. + - Backup your MySQL data. - Confirm your Mattermost version. See the **About** modal for details. 
- Determine the migration window needed. This process requires you to stop the Mattermost Server during the migration. @@ -106,13 +109,6 @@ To drop indexes, run the following commands before the migration: DROP INDEX IF EXISTS idx_posts_message_txt; DROP INDEX IF EXISTS idx_fileinfo_content_txt; -To re-create indexes, run the following once the migration is completed: - -.. code:: sql - - CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON posts USING gin(to_tsvector('english', message)); - CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON fileinfo USING gin(to_tsvector('english', content)); - Migrate the data ---------------- @@ -167,6 +163,13 @@ Once you save this configuration file, eg. ``migration.load``, you can run the ` pgLoader migration.load > migration.log +To re-create indexes that has been removed before the migration, run the following once the migration is completed: + +.. code:: sql + + CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON posts USING gin(to_tsvector('english', message)); + CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON fileinfo USING gin(to_tsvector('english', content)); + Feel free to contribute to and/or report your findings through your migration to us. Compare the data From 5c8261402d54c51c0e01173602674b0b1cefd3f4 Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Tue, 22 Aug 2023 18:34:28 +0300 Subject: [PATCH 05/13] Apply suggestions from code review Co-authored-by: Carrie Warner (Mattermost) <74422101+cwarnermm@users.noreply.github.com> --- source/guides/postgres-migration.rst | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 6687fb12aeb..3b5ea241a21 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -22,7 +22,7 @@ Required tools - Install ``pgLoader``. See the official `installation guide `__. -- Install morph CLI via running the following command: +- Install morph CLI by running the following command: - ``go install github.com/mattermost/morph/cmd/morph@v1`` @@ -34,9 +34,9 @@ Before the migration -------------------- .. note:: - This guide requires at least a schema of v6.4. So, if you have an earlier version and planning to migrate, please update your Mattermost Server to v6.4 at least. + This guide requires a schema of v6.4 or later. So, if you have an earlier version and planning to migrate, please update your Mattermost Server to v6.4 at a minimum. -- Backup your MySQL data. +- Back up your MySQL data. - Confirm your Mattermost version. See the **About** modal for details. - Determine the migration window needed. This process requires you to stop the Mattermost Server during the migration. - See the `schema-diffs <#schema-diffs>`__ section to ensure data compatibility between schemas. @@ -59,7 +59,7 @@ Prepare target database Schema diffs ------------ -Before the migration, due to differences between two schemas, some manual steps may required to have an error-free migration. +Before the migration, due to differences between two schemas, some manual steps may be required for an error-free migration. Text to character varying ~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -100,7 +100,7 @@ As you can see, there are several occurrences where the schema can differ and da Full-text indexes ~~~~~~~~~~~~~~~~~ -It's possible that some words in the ``Posts`` and ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. 
In these cases, we recommend dropping the ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema, and creating these indexes after the migration by running following queries: +It's possible that some words in the ``Posts`` and ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. In these cases, we recommend dropping the ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema, and creating these indexes after the migration by running the following queries: To drop indexes, run the following commands before the migration: @@ -112,7 +112,7 @@ To drop indexes, run the following commands before the migration: Migrate the data ---------------- -Once we set the schema to desired state, we can start migrating the **data** by running ``pgLoader`` \*\* +Once we set the schema to a desired state, we can start migrating the **data** by running ``pgLoader`` \*\* \*\* Use the following configuration for the baseline of the data migration: @@ -157,7 +157,7 @@ Once we set the schema to desired state, we can start migrating the **data** by $$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$, $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$; -Once you save this configuration file, eg. ``migration.load``, you can run the ``pgLoader`` with following command: +Once you save this configuration file, e.g. ``migration.load``, you can run the ``pgLoader`` with the following command: .. code:: bash From da9980019e789d36c0001732d8e96d2981403f20 Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Tue, 29 Aug 2023 11:43:54 +0300 Subject: [PATCH 06/13] fine tune the script --- source/guides/postgres-migration.rst | 27 +++++++++++---------------- 1 file changed, 11 insertions(+), 16 deletions(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 3b5ea241a21..ad49211c225 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -102,7 +102,7 @@ Full-text indexes It's possible that some words in the ``Posts`` and ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. In these cases, we recommend dropping the ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema, and creating these indexes after the migration by running the following queries: -To drop indexes, run the following commands before the migration: +To drop indexes, run the following commands before the migration (These are included in the script, so you may not need to run these manually): .. 
code:: sql @@ -125,8 +125,7 @@ Once we set the schema to a desired state, we can start migrating the **data** b WITH data only, workers = 8, concurrency = 1, multiple readers per thread, rows per range = 50000, - create no tables, - create no indexes, + create no tables, create no indexes, preserve index names SET PostgreSQL PARAMETERS @@ -137,9 +136,9 @@ Once we set the schema to a desired state, we can start migrating the **data** b net_read_timeout = '120', net_write_timeout = '120' - CAST column Channels.Type to channel_type drop typemod, - column Teams.Type to team_type drop typemod, - column UploadSessions.Type to upload_session_type drop typemod, + CAST column Channels.Type to "channel_type" drop typemod, + column Teams.Type to "team_type" drop typemod, + column UploadSessions.Type to "upload_session_type" drop typemod, column Drafts.Priority to text, type int when (= precision 11) to integer drop typemod, type bigint when (= precision 20) to bigint drop typemod, @@ -147,14 +146,17 @@ Once we set the schema to a desired state, we can start migrating the **data** b type tinyint when (<= precision 4) to boolean using tinyint-to-boolean, type json to jsonb drop typemod - MATERIALIZE VIEWS exclude_products - excluding table names matching ~, ~ + EXCLUDING TABLE NAMES MATCHING ~, ~ BEFORE LOAD DO - $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ + $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$, + $$ DROP INDEX IF EXISTS idx_posts_message_txt; $$, + $$ DROP INDEX IF EXISTS idx_fileinfo_content_txt; $$ AFTER LOAD DO $$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$, + $$ CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON {{ .source_schema }}.posts USING gin(to_tsvector('english', message)); $$, + $$ CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON {{ .source_schema }}.fileinfo USING gin(to_tsvector('english', content)); $$, $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$; Once you save this configuration file, e.g. ``migration.load``, you can run the ``pgLoader`` with the following command: @@ -163,13 +165,6 @@ Once you save this configuration file, e.g. ``migration.load``, you can run the pgLoader migration.load > migration.log -To re-create indexes that has been removed before the migration, run the following once the migration is completed: - -.. code:: sql - - CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON posts USING gin(to_tsvector('english', message)); - CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON fileinfo USING gin(to_tsvector('english', content)); - Feel free to contribute to and/or report your findings through your migration to us. Compare the data From 29c24e29ba09ddc4896320b4afd89a3a55637f1a Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Tue, 29 Aug 2023 12:35:31 +0300 Subject: [PATCH 07/13] add migration guide for products --- source/guides/postgres-migration.rst | 172 ++++++++++++++++++++++++++- 1 file changed, 170 insertions(+), 2 deletions(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index ad49211c225..9a0556ca046 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -10,7 +10,7 @@ To streamline the migration process and alleviate any potential challenges, we h .. note:: - These guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. 
It is essential to note that it does not encompass migration configurations for any plugins, such as Focalboard and Playbooks. If your system utilizes these plugins, we highly advise exercising patience until we incorporate the necessary configurations specifically tailored to ensure a smooth transition for those plugins as well. Please use this guide as a starting point and always backup your database before starting a migration. + These guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. Please use this guide as a starting point and always backup your database before starting a migration. .. contents:: On this page: :backlinks: top @@ -192,6 +192,174 @@ For our case we can simply run the following command: dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations,ir_,focalboard,systems" -Note that this migration guide only covers the tables for Mattermost channels. Support for other plugins, such as Playbooks, will be added in the future. +Note that this migration guide only covers the tables for Mattermost products. Another exclusion we are making is in the ``db_migrations`` table which has a small difference (a typo in a single migration name) creates a diff. Since we created the PostgreSQL schema with morph, and the official ``mattermost`` source, we can skip it safely without concerns. On the other hand, ``systems`` table may contain additional diffs if there were extra keys added during some of the migrations. Consider excluding the ``systems`` table if you run into issues, and perform a manual comparison as the data in the ``systems`` table is relatively smaller in size. + +Plugin migrations +----------------- + +On the plugin side, we are going to take a different approach from what we have done above. We are not going to use ``morph`` tool to create tables and indexes this time. We are going to utilize ``pgloader`` to create the tables on behalf of us. The reason for doing so is Boards and Playbooks are leveraging application logic to facilitate SQL queries. But we don't want to use any level of application at this point. + +Playbooks +~~~~~~~~~ + +Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* + +\*\* Use the following configuration for the baseline of the data migration: + +.. 
code:: + + LOAD DATABASE + FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} + INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} + + WITH include drop, create tables, create indexes, no foreign keys, + workers = 8, concurrency = 1, + multiple readers per thread, rows per range = 50000, + preserve index names + + SET PostgreSQL PARAMETERS + maintenance_work_mem to '128MB', + work_mem to '12MB' + + SET MySQL PARAMETERS + net_read_timeout = '120', + net_write_timeout = '120' + + CAST column IR_ChannelAction.ActionType to text drop typemod, + column IR_ChannelAction.TriggerType to text drop typemod, + column IR_Incident.ChecklistsJSON to "json" drop typemod + + INCLUDING ONLY TABLE NAMES MATCHING + ~/IR_/ + + BEFORE LOAD DO + $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ + + AFTER LOAD DO + $$ ALTER TABLE {{ .source_schema }}.IR_ChannelAction ALTER COLUMN ActionType TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_ChannelAction ALTER COLUMN TriggerType TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ReminderMessageTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ReminderMessageTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedUserIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedUserIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnCreationURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnCreationURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedGroupIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedGroupIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN Retrospective TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN Retrospective SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN MessageOnJoin TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN MessageOnJoin SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN CategoryName TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN CategoryName SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedBroadcastChannelIds TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedBroadcastChannelIds SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ChannelIDToRootID TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ChannelIDToRootID SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ReminderMessageTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ReminderMessageTemplate SET DEFAULT 
''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedUserIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedUserIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnCreationURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnCreationURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedGroupIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedGroupIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN MessageOnJoin TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN MessageOnJoin SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RetrospectiveTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RetrospectiveTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedSignalAnyKeywords TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedSignalAnyKeywords SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN CategoryName TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN CategoryName SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedBroadcastChannelIds TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedBroadcastChannelIds SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RunSummaryTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RunSummaryTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ChannelNameTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ChannelNameTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookMember ALTER COLUMN Roles TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Category_Item ADD CONSTRAINT ir_category_item_categoryid FOREIGN KEY (CategoryId) REFERENCES {{ .source_schema }}.IR_Category(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Metric ADD CONSTRAINT ir_metric_metricconfigid FOREIGN KEY (MetricConfigId) REFERENCES {{ .source_schema }}.IR_MetricConfig(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Metric ADD CONSTRAINT ir_metric_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_MetricConfig ADD CONSTRAINT ir_metricconfig_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookAutoFollow ADD CONSTRAINT ir_playbookautofollow_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookMember ADD 
CONSTRAINT ir_playbookmember_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Run_Participants ADD CONSTRAINT ir_run_participants_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_StatusPosts ADD CONSTRAINT ir_statusposts_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_TimelineEvent ADD CONSTRAINT ir_timelineevent_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ CREATE UNIQUE INDEX IF NOT EXISTS ir_playbookmember_playbookid_memberid_key on {{ .source_schema }}.IR_PlaybookMember(PlaybookId,MemberId); $$, + $$ CREATE INDEX IF NOT EXISTS ir_statusposts_incidentid_postid_key on {{ .source_schema }}.IR_StatusPosts(IncidentId,PostId); $$, + $$ CREATE INDEX IF NOT EXISTS ir_playbookmember_playbookid on {{ .source_schema }}.IR_PlaybookMember(PlaybookId); $$, + $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, + $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, + $$ ALTER USER mmuser SET SEARCH_PATH TO 'public'; $$; + +.. code:: bash + + pgLoader playbooks.load > playbooks_migration.log + +Boards +~~~~~~ + +As of ``v9.0`` Boards will transition to being fully community supported. Hence this guide covers only the version ``v7.10.x`` of the schema. `Official announcement `__. + +Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* + +\*\* Use the following configuration for the baseline of the data migration: + +.. code:: + + LOAD DATABASE + FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} + INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} + + WITH include drop, create tables, create indexes, reset sequences, + workers = 8, concurrency = 1, + multiple readers per thread, rows per range = 50000, + preserve index names + + SET PostgreSQL PARAMETERS + maintenance_work_mem to '128MB', + work_mem to '12MB' + + SET MySQL PARAMETERS + net_read_timeout = '120', + net_write_timeout = '120' + + CAST column focalboard_blocks.fields to "json" drop typemod, + column focalboard_blocks_history.fields to "json" drop typemod, + column focalboard_schema_migrations.name to "varchar" drop typemod, + column focalboard_sessions.props to "json" drop typemod, + column focalboard_teams.settings to "json" drop typemod, + column focalboard_users.props to "json" drop typemod, + type int when (= precision 11) to int4 drop typemod, + type json to jsonb drop typemod + + INCLUDING ONLY TABLE NAMES MATCHING + ~/focalboard/ + + BEFORE LOAD DO + $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ + + AFTER LOAD DO + $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, + $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, + $$ ALTER USER mmuser SET SEARCH_PATH TO 'public'; $$; + +.. code:: bash + + pgLoader focalboard.load > focalboard_migration.log + +Compare the plugin data +~~~~~~~~~~~~~~~~~~~~~~~ + +.. 
code:: sh + + dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations,systems" From a19e44d7050863ca9118b54ece114f61c511ad49 Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Tue, 29 Aug 2023 14:14:31 +0300 Subject: [PATCH 08/13] add playbooks version --- source/guides/postgres-migration.rst | 2 ++ 1 file changed, 2 insertions(+) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 9a0556ca046..4273f6c972c 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -204,6 +204,8 @@ On the plugin side, we are going to take a different approach from what we have Playbooks ~~~~~~~~~ +The ``pgloader`` configuration provided for Playbooks is based on ``v1.38.1`` and the plugin should be at least ``v1.36.0`` to perform migration. + Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* \*\* Use the following configuration for the baseline of the data migration: From cd72089a6af6ffe948a4f0ddcb4aaffc606c57eb Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Thu, 31 Aug 2023 13:59:51 +0300 Subject: [PATCH 09/13] add a json fix for focalboard --- source/guides/postgres-migration.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 4273f6c972c..10e48abd2a1 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -351,6 +351,11 @@ Once we are ready to migrate, we can start migrating the **schema** and the **da $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ AFTER LOAD DO + $$ UPDATE {{ .source_schema }}.focalboard_blocks SET `fields` = "{}" WHERE `fields` = ""; $$, + $$ UPDATE {{ .source_schema }}.focalboard_blocks_history SET `fields` = "{}" WHERE `fields` = ""; $$, + $$ UPDATE {{ .source_schema }}.focalboard_sessions SET `props` = "{}" WHERE `fields` = ""; $$, + $$ UPDATE {{ .source_schema }}.focalboard_teams SET `settings` = "{}" WHERE `fields` = ""; $$, + $$ UPDATE {{ .source_schema }}.focalboard_users SET `props` = "{}" WHERE `fields` = ""; $$, $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, $$ ALTER USER mmuser SET SEARCH_PATH TO 'public'; $$; From ecde1da3cf6c933e06d5f46e0292bb22a6d15d50 Mon Sep 17 00:00:00 2001 From: Ibrahim Serdar Acikgoz Date: Thu, 31 Aug 2023 14:47:05 +0300 Subject: [PATCH 10/13] a fix on search path for the base migration script --- source/guides/postgres-migration.rst | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 10e48abd2a1..e600c56bb96 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -157,7 +157,9 @@ Once we set the schema to a desired state, we can start migrating the **data** b $$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$, $$ CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON {{ .source_schema }}.posts USING gin(to_tsvector('english', message)); $$, $$ CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON {{ .source_schema }}.fileinfo USING gin(to_tsvector('english', content)); $$, - $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$; + $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, + $$ SELECT pg_catalog.set_config('search_path', 
'"$user", public', false); $$, + $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; Once you save this configuration file, e.g. ``migration.load``, you can run the ``pgLoader`` with the following command: @@ -301,7 +303,7 @@ Once we are ready to migrate, we can start migrating the **schema** and the **da $$ CREATE INDEX IF NOT EXISTS ir_playbookmember_playbookid on {{ .source_schema }}.IR_PlaybookMember(PlaybookId); $$, $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, - $$ ALTER USER mmuser SET SEARCH_PATH TO 'public'; $$; + $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; .. code:: bash @@ -358,7 +360,7 @@ Once we are ready to migrate, we can start migrating the **schema** and the **da $$ UPDATE {{ .source_schema }}.focalboard_users SET `props` = "{}" WHERE `fields` = ""; $$, $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, - $$ ALTER USER mmuser SET SEARCH_PATH TO 'public'; $$; + $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; .. code:: bash From fcca85a894701c5a70817394d2244a5327a5e929 Mon Sep 17 00:00:00 2001 From: "Carrie Warner (Mattermost)" <74422101+cwarnermm@users.noreply.github.com> Date: Wed, 6 Sep 2023 13:04:17 -0400 Subject: [PATCH 11/13] Update source/guides/postgres-migration.rst --- source/guides/postgres-migration.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index e600c56bb96..47a2d42100a 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -309,7 +309,7 @@ Once we are ready to migrate, we can start migrating the **schema** and the **da pgLoader playbooks.load > playbooks_migration.log -Boards +Focalboard ~~~~~~ As of ``v9.0`` Boards will transition to being fully community supported. Hence this guide covers only the version ``v7.10.x`` of the schema. `Official announcement `__. From 139c574bb77cc45825478b83eaa8c79a8c4a5974 Mon Sep 17 00:00:00 2001 From: "Carrie Warner (Mattermost)" <74422101+cwarnermm@users.noreply.github.com> Date: Wed, 6 Sep 2023 13:04:23 -0400 Subject: [PATCH 12/13] Update source/guides/postgres-migration.rst --- source/guides/postgres-migration.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst index 47a2d42100a..44031537feb 100644 --- a/source/guides/postgres-migration.rst +++ b/source/guides/postgres-migration.rst @@ -312,7 +312,7 @@ Once we are ready to migrate, we can start migrating the **schema** and the **da Focalboard ~~~~~~ -As of ``v9.0`` Boards will transition to being fully community supported. Hence this guide covers only the version ``v7.10.x`` of the schema. `Official announcement `__. +As of ``v9.0`` Boards will transition to being fully community supported as the Focalboard plugin. Hence this guide covers only the version ``v7.10.x`` of the schema. `Official announcement `__. 
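Before running the Focalboard migration, it can be worth confirming that the Boards tables actually exist in your MySQL schema — if the plugin was never enabled, there is nothing for ``pgLoader`` to copy. A quick check, shown here only as a sketch, could look like this:

.. code:: sql

    -- Count the Boards tables in the current MySQL schema;
    -- a result of 0 means there is nothing to migrate for Focalboard.
    SELECT COUNT(*) AS focalboard_tables
    FROM information_schema.tables
    WHERE table_schema = DATABASE()
      AND table_name LIKE 'focalboard%';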
Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* From 86698014cc13c8a18e3fbb83cab0a1c5379d3f79 Mon Sep 17 00:00:00 2001 From: "Carrie Warner (Mattermost)" <74422101+cwarnermm@users.noreply.github.com> Date: Wed, 6 Sep 2023 13:11:32 -0400 Subject: [PATCH 13/13] Incorporated Engineering updates & moved page loc --- source/deploy/postgres-migration.rst | 183 ++++++++++++- source/guides/postgres-migration.rst | 374 --------------------------- 2 files changed, 180 insertions(+), 377 deletions(-) delete mode 100644 source/guides/postgres-migration.rst diff --git a/source/deploy/postgres-migration.rst b/source/deploy/postgres-migration.rst index ad49211c225..dcc20c6d2e5 100644 --- a/source/deploy/postgres-migration.rst +++ b/source/deploy/postgres-migration.rst @@ -10,7 +10,7 @@ To streamline the migration process and alleviate any potential challenges, we h .. note:: - These guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. It is essential to note that it does not encompass migration configurations for any plugins, such as Focalboard and Playbooks. If your system utilizes these plugins, we highly advise exercising patience until we incorporate the necessary configurations specifically tailored to ensure a smooth transition for those plugins as well. Please use this guide as a starting point and always backup your database before starting a migration. + These guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. Please use this guide as a starting point and always backup your database before starting a migration. .. contents:: On this page: :backlinks: top @@ -157,7 +157,9 @@ Once we set the schema to a desired state, we can start migrating the **data** b $$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$, $$ CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON {{ .source_schema }}.posts USING gin(to_tsvector('english', message)); $$, $$ CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON {{ .source_schema }}.fileinfo USING gin(to_tsvector('english', content)); $$, - $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$; + $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, + $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, + $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; @@ -192,6 +194,181 @@ For our case we can simply run the following command: dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations,ir_,focalboard,systems" -Note that this migration guide only covers the tables for Mattermost channels. Support for other plugins, such as Playbooks, will be added in the future. +Note that this migration guide only covers the tables for Mattermost products. Another exclusion we are making is the ``db_migrations`` table, where a small difference (a typo in a single migration name) creates a diff. Since we created the PostgreSQL schema with morph and the official ``mattermost`` source, we can safely skip it without concerns.
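If you would rather verify that single entry than exclude it on faith, a quick manual check is enough. The following sketch assumes the default ``db_migrations`` table name on both databases; run it against MySQL and PostgreSQL, and the two names should match once the ``AFTER LOAD`` step above has been applied:

.. code:: sql

   -- Compare the renamed migration entry on both databases.
   SELECT version, name FROM db_migrations WHERE version = 92;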
On the other hand, the ``systems`` table may contain additional diffs if extra keys were added during some of the migrations. Consider excluding the ``systems`` table if you run into issues, and perform a manual comparison instead, as the data in the ``systems`` table is relatively small in size. + +Plugin migrations +----------------- + +On the plugin side, we are going to take a different approach from what we have done above. This time we are not going to use the ``morph`` tool to create tables and indexes; instead, we will let ``pgloader`` create the tables on our behalf. The reason for doing so is that Boards and Playbooks rely on application logic to manage their SQL schemas, and we don't want to involve any application layer at this point. + +Playbooks +~~~~~~~~~ + +The ``pgloader`` configuration provided for Playbooks is based on ``v1.38.1`` and the plugin should be at least ``v1.36.0`` to perform the migration. + +Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* + +\*\* Use the following configuration for the baseline of the data migration: + +.. code:: + + LOAD DATABASE + FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} + INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} + + WITH include drop, create tables, create indexes, no foreign keys, + workers = 8, concurrency = 1, + multiple readers per thread, rows per range = 50000, + preserve index names + + SET PostgreSQL PARAMETERS + maintenance_work_mem to '128MB', + work_mem to '12MB' + + SET MySQL PARAMETERS + net_read_timeout = '120', + net_write_timeout = '120' + + CAST column IR_ChannelAction.ActionType to text drop typemod, + column IR_ChannelAction.TriggerType to text drop typemod, + column IR_Incident.ChecklistsJSON to "json" drop typemod + + INCLUDING ONLY TABLE NAMES MATCHING + ~/IR_/ + + BEFORE LOAD DO + $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ + + AFTER LOAD DO + $$ ALTER TABLE {{ .source_schema }}.IR_ChannelAction ALTER COLUMN ActionType TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_ChannelAction ALTER COLUMN TriggerType TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ReminderMessageTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ReminderMessageTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedUserIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedUserIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnCreationURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnCreationURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedGroupIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedGroupIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN Retrospective TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN Retrospective SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN MessageOnJoin TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN MessageOnJoin SET DEFAULT ''::text; 
$$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN CategoryName TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN CategoryName SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedBroadcastChannelIds TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedBroadcastChannelIds SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ChannelIDToRootID TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ChannelIDToRootID SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ReminderMessageTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ReminderMessageTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedUserIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedUserIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnCreationURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnCreationURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedGroupIDs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedGroupIDs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN MessageOnJoin TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN MessageOnJoin SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RetrospectiveTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RetrospectiveTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedSignalAnyKeywords TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedSignalAnyKeywords SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN CategoryName TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN CategoryName SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedBroadcastChannelIds TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedBroadcastChannelIds SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RunSummaryTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RunSummaryTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ChannelNameTemplate TYPE varchar(65536); $$, + $$ ALTER TABLE {{ 
.source_schema }}.IR_Playbook ALTER COLUMN ChannelNameTemplate SET DEFAULT ''::text; $$, + $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookMember ALTER COLUMN Roles TYPE varchar(65536); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Category_Item ADD CONSTRAINT ir_category_item_categoryid FOREIGN KEY (CategoryId) REFERENCES {{ .source_schema }}.IR_Category(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Metric ADD CONSTRAINT ir_metric_metricconfigid FOREIGN KEY (MetricConfigId) REFERENCES {{ .source_schema }}.IR_MetricConfig(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Metric ADD CONSTRAINT ir_metric_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_MetricConfig ADD CONSTRAINT ir_metricconfig_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookAutoFollow ADD CONSTRAINT ir_playbookautofollow_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookMember ADD CONSTRAINT ir_playbookmember_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_Run_Participants ADD CONSTRAINT ir_run_participants_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_StatusPosts ADD CONSTRAINT ir_statusposts_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ ALTER TABLE {{ .source_schema }}.IR_TimelineEvent ADD CONSTRAINT ir_timelineevent_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, + $$ CREATE UNIQUE INDEX IF NOT EXISTS ir_playbookmember_playbookid_memberid_key on {{ .source_schema }}.IR_PlaybookMember(PlaybookId,MemberId); $$, + $$ CREATE INDEX IF NOT EXISTS ir_statusposts_incidentid_postid_key on {{ .source_schema }}.IR_StatusPosts(IncidentId,PostId); $$, + $$ CREATE INDEX IF NOT EXISTS ir_playbookmember_playbookid on {{ .source_schema }}.IR_PlaybookMember(PlaybookId); $$, + $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, + $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, + $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; + +.. code:: bash + + pgLoader playbooks.load > playbooks_migration.log + +Focalboard +~~~~~~~~~~ + +As of ``v9.0`` Boards will transition to being fully community supported as the Focalboard plugin. Hence this guide covers only the version ``v7.10.x`` of the schema. `Official announcement `__. + +Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* + +\*\* Use the following configuration for the baseline of the data migration: + +.. 
code:: + + LOAD DATABASE + FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} + INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} + + WITH include drop, create tables, create indexes, reset sequences, + workers = 8, concurrency = 1, + multiple readers per thread, rows per range = 50000, + preserve index names + + SET PostgreSQL PARAMETERS + maintenance_work_mem to '128MB', + work_mem to '12MB' + + SET MySQL PARAMETERS + net_read_timeout = '120', + net_write_timeout = '120' + + CAST column focalboard_blocks.fields to "json" drop typemod, + column focalboard_blocks_history.fields to "json" drop typemod, + column focalboard_schema_migrations.name to "varchar" drop typemod, + column focalboard_sessions.props to "json" drop typemod, + column focalboard_teams.settings to "json" drop typemod, + column focalboard_users.props to "json" drop typemod, + type int when (= precision 11) to int4 drop typemod, + type json to jsonb drop typemod + + INCLUDING ONLY TABLE NAMES MATCHING + ~/focalboard/ + + BEFORE LOAD DO + $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ + + AFTER LOAD DO + $$ UPDATE {{ .source_schema }}.focalboard_blocks SET fields = '{}' WHERE fields = ''; $$, + $$ UPDATE {{ .source_schema }}.focalboard_blocks_history SET fields = '{}' WHERE fields = ''; $$, + $$ UPDATE {{ .source_schema }}.focalboard_sessions SET props = '{}' WHERE props = ''; $$, + $$ UPDATE {{ .source_schema }}.focalboard_teams SET settings = '{}' WHERE settings = ''; $$, + $$ UPDATE {{ .source_schema }}.focalboard_users SET props = '{}' WHERE props = ''; $$, + $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, + $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, + $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; + +.. code:: bash + + pgLoader focalboard.load > focalboard_migration.log + +Compare the plugin data +~~~~~~~~~~~~~~~~~~~~~~~ + +.. code:: sh + + dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations,systems" diff --git a/source/guides/postgres-migration.rst b/source/guides/postgres-migration.rst deleted file mode 100644 index 44031537feb..00000000000 --- a/source/guides/postgres-migration.rst +++ /dev/null @@ -1,374 +0,0 @@ -Migration guidelines from MySQL to PostgreSQL -============================================= - -.. include:: ../_static/badges/allplans-selfhosted.rst - :start-after: :nosearch: - -From Mattermost v8.0, PostgreSQL is our database of choice for Mattermost to enhance the platform’s performance and capabilities. Recognizing the importance of supporting the community members who are interested in migrating from a MySQL database, we have taken proactive measures to provide guidance and best practices. - -To streamline the migration process and alleviate any potential challenges, we have prepared a comprehensive set of guidelines to facilitate a smooth transition. Additionally, we want to offer recommendations for various tools that have proven to be highly effective in simplifying your migration efforts. - -.. note:: - - These guidelines are in development and we are working to streamline the migration process. We plan to improve this guide by updating it as new information becomes available. Please use this guide as a starting point and always backup your database before starting a migration. - -.. contents:: On this page: - :backlinks: top - :local: - :depth: 1 - -Required tools --------------- - -- Install ``pgLoader``. 
See the official `installation - guide `__. -- Install morph CLI by running the following command: - - - ``go install github.com/mattermost/morph/cmd/morph@v1`` - -- Optinally install ``dbcmp`` to compare the data after a migration: - - - ``go install github.com/mattermost/dbcmp/cmd/dbcmp@latest`` - -Before the migration --------------------- - -.. note:: - This guide requires a schema of v6.4 or later. So, if you have an earlier version and planning to migrate, please update your Mattermost Server to v6.4 at a minimum. - -- Back up your MySQL data. -- Confirm your Mattermost version. See the **About** modal for details. -- Determine the migration window needed. This process requires you to stop the Mattermost Server during the migration. -- See the `schema-diffs <#schema-diffs>`__ section to ensure data compatibility between schemas. -- Prepare your PostgreSQL environment by creating a database and user. See the `database `__ documentation for details. - -Prepare target database ------------------------ - -- Clone the ``mattermost`` repository for your specific version: - ``git clone -b git@github.com:mattermost/mattermost.git --depth=1`` -- ``cd`` into ``mattermost`` project*. -- Create a PostgreSQL database using morph CLI with the following command: - -.. code:: bash - - morph apply up --driver postgres --dsn "postgres://user:pass@localhost:5432/?sslmode=disable" --path ./db/migrations/postgres --number -1 - -\* After ``v8`` due to project re-organization, the migrations directory has been changed to ``./server/channels/db/migrations/postgres/`` relative to project root. Therefore ``cd`` into ``mattermost/server/channels``. - -Schema diffs ------------- - -Before the migration, due to differences between two schemas, some manual steps may be required for an error-free migration. - -Text to character varying -~~~~~~~~~~~~~~~~~~~~~~~~~ - -Since the Mattermost MySQL schema uses the ``text`` column type in the various tables instead of ``varchar`` representation in the PostgreSQL schema, we encourage you to check if the sizes are consistent within the PostgreSQL schema limits. - -================ ================ ===================== -Table Column Data type casting -================ ================ ===================== -Audits Action text -> varchar(512) -Audits ExtraInfo text -> varchar(1024) -ClusterDiscovery HostName text -> varchar(512) -Commands IconURL text -> varchar(1024) -Commands AutoCompleteDesc text -> varchar(1024) -Commands AutoCompleteHint text -> varchar(1024) -Compliances Keywords text -> varchar(512) -Compliances Emails text -> varchar(1024) -FileInfo Path text -> varchar(512) -FileInfo ThumbnailPath text -> varchar(512) -FileInfo PreviewPath text -> varchar(512) -FileInfo Name text -> varchar(256) -FileInfo MimeType text -> varchar(256) -LinkMetadata URL text -> varchar(2048) -RemoteClusters SiteURL text -> varchar(512) -RemoteClusters Topics text -> varchar(512) -Sessions DeviceId text -> varchar(512) -Systems Value text -> varchar(1024) -UploadSessions FileName text -> varchar(256) -UploadSessions Path text -> varchar(512) -================ ================ ===================== - -As you can see, there are several occurrences where the schema can differ and data size constraints within the PostgreSQL schema can result in errors. Several reports have been received from our community that ``LinkMetadata`` and ``FileInfo`` tables had some overflows, so we recommend checking these tables in particular. 
Please do check if your data in the MySQL schema exceeds these limitations. You can check if there are any required deletions. For example, to do so in the ``Audits`` table/``Action`` column; run: - -.. code:: sql - - DELETE FROM mattermost.Audits where LENGTH(Action) > 512; - -Full-text indexes -~~~~~~~~~~~~~~~~~ - -It's possible that some words in the ``Posts`` and ``FileInfo`` tables can exceed the `limits of the maximum token length `__ for full text search indexing. In these cases, we recommend dropping the ``idx_posts_message_txt`` and ``idx_fileinfo_content_txt`` indexes from the PostgreSQL schema, and creating these indexes after the migration by running the following queries: - -To drop indexes, run the following commands before the migration (These are included in the script, so you may not need to run these manually): - -.. code:: sql - - DROP INDEX IF EXISTS idx_posts_message_txt; - DROP INDEX IF EXISTS idx_fileinfo_content_txt; - -Migrate the data ----------------- - -Once we set the schema to a desired state, we can start migrating the **data** by running ``pgLoader`` \*\* - -\*\* Use the following configuration for the baseline of the data migration: - -.. code:: - - LOAD DATABASE - FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} - INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} - - WITH data only, - workers = 8, concurrency = 1, - multiple readers per thread, rows per range = 50000, - create no tables, create no indexes, - preserve index names - - SET PostgreSQL PARAMETERS - maintenance_work_mem to '128MB', - work_mem to '12MB' - - SET MySQL PARAMETERS - net_read_timeout = '120', - net_write_timeout = '120' - - CAST column Channels.Type to "channel_type" drop typemod, - column Teams.Type to "team_type" drop typemod, - column UploadSessions.Type to "upload_session_type" drop typemod, - column Drafts.Priority to text, - type int when (= precision 11) to integer drop typemod, - type bigint when (= precision 20) to bigint drop typemod, - type text to varchar drop typemod, - type tinyint when (<= precision 4) to boolean using tinyint-to-boolean, - type json to jsonb drop typemod - - EXCLUDING TABLE NAMES MATCHING ~, ~ - - BEFORE LOAD DO - $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$, - $$ DROP INDEX IF EXISTS idx_posts_message_txt; $$, - $$ DROP INDEX IF EXISTS idx_fileinfo_content_txt; $$ - - AFTER LOAD DO - $$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$, - $$ CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON {{ .source_schema }}.posts USING gin(to_tsvector('english', message)); $$, - $$ CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON {{ .source_schema }}.fileinfo USING gin(to_tsvector('english', content)); $$, - $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, - $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, - $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; - -Once you save this configuration file, e.g. ``migration.load``, you can run the ``pgLoader`` with the following command: - -.. code:: bash - - pgLoader migration.load > migration.log - -Feel free to contribute to and/or report your findings through your migration to us. - -Compare the data ----------------- - -We internally developed a tool to simplify the process of comparing contents of two databases. The ``dbcmp`` tool compares every table and reports whether if there is a diversion between two schemas. 
- -The tool includes a few flags to run a comparison: - -.. code:: sh - - Usage: - dbcmp [flags] - - Flags: - --exclude strings exclude tables from comparison, takes comma-separated values. - -h, --help help for dbcmp - --source string source database dsn - --target string target database dsn - -v, --version version for dbcmp - -For our case we can simply run the following command: - -.. code:: sh - - dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations,ir_,focalboard,systems" - -Note that this migration guide only covers the tables for Mattermost products. - -Another exclusion we are making is in the ``db_migrations`` table which has a small difference (a typo in a single migration name) creates a diff. Since we created the PostgreSQL schema with morph, and the official ``mattermost`` source, we can skip it safely without concerns. On the other hand, ``systems`` table may contain additional diffs if there were extra keys added during some of the migrations. Consider excluding the ``systems`` table if you run into issues, and perform a manual comparison as the data in the ``systems`` table is relatively smaller in size. - -Plugin migrations ------------------ - -On the plugin side, we are going to take a different approach from what we have done above. We are not going to use ``morph`` tool to create tables and indexes this time. We are going to utilize ``pgloader`` to create the tables on behalf of us. The reason for doing so is Boards and Playbooks are leveraging application logic to facilitate SQL queries. But we don't want to use any level of application at this point. - -Playbooks -~~~~~~~~~ - -The ``pgloader`` configuration provided for Playbooks is based on ``v1.38.1`` and the plugin should be at least ``v1.36.0`` to perform migration. - -Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* - -\*\* Use the following configuration for the baseline of the data migration: - -.. 
code:: - - LOAD DATABASE - FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} - INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} - - WITH include drop, create tables, create indexes, no foreign keys, - workers = 8, concurrency = 1, - multiple readers per thread, rows per range = 50000, - preserve index names - - SET PostgreSQL PARAMETERS - maintenance_work_mem to '128MB', - work_mem to '12MB' - - SET MySQL PARAMETERS - net_read_timeout = '120', - net_write_timeout = '120' - - CAST column IR_ChannelAction.ActionType to text drop typemod, - column IR_ChannelAction.TriggerType to text drop typemod, - column IR_Incident.ChecklistsJSON to "json" drop typemod - - INCLUDING ONLY TABLE NAMES MATCHING - ~/IR_/ - - BEFORE LOAD DO - $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ - - AFTER LOAD DO - $$ ALTER TABLE {{ .source_schema }}.IR_ChannelAction ALTER COLUMN ActionType TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_ChannelAction ALTER COLUMN TriggerType TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ReminderMessageTemplate TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ReminderMessageTemplate SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedUserIDs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedUserIDs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnCreationURLs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnCreationURLs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedGroupIDs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedInvitedGroupIDs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN Retrospective TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN Retrospective SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN MessageOnJoin TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN MessageOnJoin SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN CategoryName TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN CategoryName SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedBroadcastChannelIds TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ConcatenatedBroadcastChannelIds SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ChannelIDToRootID TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Incident ALTER COLUMN ChannelIDToRootID SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ReminderMessageTemplate TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ReminderMessageTemplate SET DEFAULT 
''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedUserIDs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedUserIDs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnCreationURLs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnCreationURLs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedGroupIDs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedInvitedGroupIDs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN MessageOnJoin TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN MessageOnJoin SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RetrospectiveTemplate TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RetrospectiveTemplate SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedWebhookOnStatusUpdateURLs SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedSignalAnyKeywords TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedSignalAnyKeywords SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN CategoryName TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN CategoryName SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedBroadcastChannelIds TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ConcatenatedBroadcastChannelIds SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RunSummaryTemplate TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN RunSummaryTemplate SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ChannelNameTemplate TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Playbook ALTER COLUMN ChannelNameTemplate SET DEFAULT ''::text; $$, - $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookMember ALTER COLUMN Roles TYPE varchar(65536); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Category_Item ADD CONSTRAINT ir_category_item_categoryid FOREIGN KEY (CategoryId) REFERENCES {{ .source_schema }}.IR_Category(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Metric ADD CONSTRAINT ir_metric_metricconfigid FOREIGN KEY (MetricConfigId) REFERENCES {{ .source_schema }}.IR_MetricConfig(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Metric ADD CONSTRAINT ir_metric_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_MetricConfig ADD CONSTRAINT ir_metricconfig_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookAutoFollow ADD CONSTRAINT ir_playbookautofollow_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_PlaybookMember ADD 
CONSTRAINT ir_playbookmember_playbookid FOREIGN KEY (PlaybookId) REFERENCES {{ .source_schema }}.IR_Playbook(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_Run_Participants ADD CONSTRAINT ir_run_participants_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_StatusPosts ADD CONSTRAINT ir_statusposts_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, - $$ ALTER TABLE {{ .source_schema }}.IR_TimelineEvent ADD CONSTRAINT ir_timelineevent_incidentid FOREIGN KEY (IncidentId) REFERENCES {{ .source_schema }}.IR_Incident(Id); $$, - $$ CREATE UNIQUE INDEX IF NOT EXISTS ir_playbookmember_playbookid_memberid_key on {{ .source_schema }}.IR_PlaybookMember(PlaybookId,MemberId); $$, - $$ CREATE INDEX IF NOT EXISTS ir_statusposts_incidentid_postid_key on {{ .source_schema }}.IR_StatusPosts(IncidentId,PostId); $$, - $$ CREATE INDEX IF NOT EXISTS ir_playbookmember_playbookid on {{ .source_schema }}.IR_PlaybookMember(PlaybookId); $$, - $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, - $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, - $$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; - -.. code:: bash - - pgLoader playbooks.load > playbooks_migration.log - -Focalboard -~~~~~~ - -As of ``v9.0`` Boards will transition to being fully community supported as the Focalboard plugin. Hence this guide covers only the version ``v7.10.x`` of the schema. `Official announcement `__. - -Once we are ready to migrate, we can start migrating the **schema** and the **data** by running ``pgLoader`` \*\* - -\*\* Use the following configuration for the baseline of the data migration: - -.. code:: - - LOAD DATABASE - FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }} - INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }} - - WITH include drop, create tables, create indexes, reset sequences, - workers = 8, concurrency = 1, - multiple readers per thread, rows per range = 50000, - preserve index names - - SET PostgreSQL PARAMETERS - maintenance_work_mem to '128MB', - work_mem to '12MB' - - SET MySQL PARAMETERS - net_read_timeout = '120', - net_write_timeout = '120' - - CAST column focalboard_blocks.fields to "json" drop typemod, - column focalboard_blocks_history.fields to "json" drop typemod, - column focalboard_schema_migrations.name to "varchar" drop typemod, - column focalboard_sessions.props to "json" drop typemod, - column focalboard_teams.settings to "json" drop typemod, - column focalboard_users.props to "json" drop typemod, - type int when (= precision 11) to int4 drop typemod, - type json to jsonb drop typemod - - INCLUDING ONLY TABLE NAMES MATCHING - ~/focalboard/ - - BEFORE LOAD DO - $$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$ - - AFTER LOAD DO - $$ UPDATE {{ .source_schema }}.focalboard_blocks SET `fields` = "{}" WHERE `fields` = ""; $$, - $$ UPDATE {{ .source_schema }}.focalboard_blocks_history SET `fields` = "{}" WHERE `fields` = ""; $$, - $$ UPDATE {{ .source_schema }}.focalboard_sessions SET `props` = "{}" WHERE `fields` = ""; $$, - $$ UPDATE {{ .source_schema }}.focalboard_teams SET `settings` = "{}" WHERE `fields` = ""; $$, - $$ UPDATE {{ .source_schema }}.focalboard_users SET `props` = "{}" WHERE `fields` = ""; $$, - $$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$, - $$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$, - $$ ALTER USER 
{{ .pg_user }} SET SEARCH_PATH TO 'public'; $$; - -.. code:: bash - - pgLoader focalboard.load > focalboard_migration.log - -Compare the plugin data -~~~~~~~~~~~~~~~~~~~~~~~ - -.. code:: sh - - dbcmp --source "${MYSQL_DSN}" --target "${POSTGRES_DSN}" --exclude="db_migrations,systems"