diff --git a/br/br-checkpoint-restore.md b/br/br-checkpoint-restore.md
index 8a734e98c46c7..7c0e0ebc381c6 100644
--- a/br/br-checkpoint-restore.md
+++ b/br/br-checkpoint-restore.md
@@ -89,7 +89,11 @@ During the initial restore, `br` first enters the snapshot restore phase. BR rec
 
 When entering the log restore phase during the initial restore, `br` creates a `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database in the target cluster. This database records checkpoint data, the upstream cluster ID, and the restore time range (`start-ts` and `restored-ts`). If restore fails during this phase, you need to specify the same `start-ts` and `restored-ts` as recorded in the checkpoint database when retrying. Otherwise, `br` will report an error and prompt that the current specified restore time range or upstream cluster ID is different from the checkpoint record. If the restore cluster has been cleaned, you can manually delete the `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database and retry with a different backup.
 
-Before entering the log restore phase during the initial restore, `br` constructs a mapping of upstream and downstream cluster database and table IDs at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. Deleting data from `mysql.tidb_pitr_id_map` might lead to inconsistent PITR restore data.
+Before entering the log restore phase during the initial restore, `br` constructs a mapping of upstream and downstream cluster database and table IDs at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. **Deleting data from `mysql.tidb_pitr_id_map` might lead to inconsistent PITR restore data.**
+
+> **Note:**
+>
+> To maintain compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the cluster being restored, `br` writes the `pitr_id_map` data to the log backup directory as a file named `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
 
 ## Implementation details: store checkpoint data in the external storage
 
@@ -151,4 +155,4 @@ During the initial restore, `br` first enters the snapshot restore phase. BR rec
 
 When entering the log restore phase during the initial restore, `br` creates a `restore-{downstream-cluster-ID}/log` path in the target cluster. This path records checkpoint data, the upstream cluster ID, and the restore time range (`start-ts` and `restored-ts`). If restore fails during this phase, you need to specify the same `start-ts` and `restored-ts` as recorded in the checkpoint database when retrying. Otherwise, `br` will report an error and prompt that the current specified restore time range or upstream cluster ID is different from the checkpoint record. If the restore cluster has been cleaned, you can manually clean up the checkpoint data in the external storage or specify another external storage path to store checkpoint data, and retry with a different backup.
 
-Before entering the log restore phase during the initial restore, `br` constructs a mapping of the database and table IDs in the upstream and downstream clusters at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. Deleting data from `mysql.tidb_pitr_id_map` might lead to inconsistent PITR restore data.
+Before entering the log restore phase during the initial restore, `br` constructs a mapping of the database and table IDs in the upstream and downstream clusters at the `restored-ts` time point. This mapping is persisted in the checkpoint storage as a file named `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}` to prevent duplicate allocation of database and table IDs. **Deleting files from the `pitr_id_maps` directory might lead to inconsistent PITR restore data.**
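+
+For example, assuming the checkpoint data is stored in Amazon S3, you can confirm that the ID map file has been written by listing the `pitr_id_maps` prefix. The bucket name and path prefix in the following command are only placeholders; replace them with your own checkpoint storage path:
+
+```shell
+# List the persisted PITR ID map files. The bucket and prefix are placeholders.
+aws s3 ls "s3://backup-bucket/checkpoint-prefix/pitr_id_maps/"
+```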
diff --git a/br/br-snapshot-guide.md b/br/br-snapshot-guide.md
index 170df2f4602ee..691d539319e31 100644
--- a/br/br-snapshot-guide.md
+++ b/br/br-snapshot-guide.md
@@ -151,6 +151,7 @@ When you perform a snapshot backup, BR backs up system tables as tables with the
 - Starting from BR v5.1.0, when you back up snapshots, BR automatically backs up the **system tables** in the `mysql` schema, but does not restore these system tables by default.
 - Starting from v6.2.0, BR lets you specify `--with-sys-table` to restore **data in some system tables**.
 - Starting from v7.6.0, BR enables `--with-sys-table` by default, which means that BR restores **data in some system tables** by default.
+- Starting from v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore system tables physically. In this mode, BR uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the corresponding system tables in the `mysql` database. Unlike the logical restore of system tables, which writes data using `REPLACE INTO` statements, the physical restore completely overwrites the existing data in the system tables.
 
 **BR can restore data in the following system tables:**
 
diff --git a/br/br-snapshot-manual.md b/br/br-snapshot-manual.md
index 05b77255f9a9e..d3383f26cd51e 100644
--- a/br/br-snapshot-manual.md
+++ b/br/br-snapshot-manual.md
@@ -127,8 +127,21 @@ tiup br restore full \
 --storage local:///br_data/ --pd "${PD_IP}:2379" --log-file restore.log
 ```
 
+> **Note:**
+>
+> Starting from v9.0.0, when the `--load-stats` parameter is set to `false`, `br` does not update the metadata of the restored tables in the `mysql.stats_meta` table. In this case, you can manually execute the `ANALYZE TABLE` statement after the restore is complete to update the statistics.
+
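+For example, the following statement refreshes the statistics of one restored table. The connection parameters and the table name `test.usertable` are only placeholders; replace them with your own values:
+
+```shell
+# Refresh the statistics of a restored table after running the restore with `--load-stats=false`.
+mysql -h "${TIDB_IP}" -P 4000 -u root -e "ANALYZE TABLE test.usertable;"
+```
+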
 When the backup and restore feature backs up data, it stores statistics in JSON format within the `backupmeta` file. When restoring data, it loads statistics in JSON format into the cluster. For more information, see [LOAD STATS](/sql-statements/sql-statement-load-stats.md).
 
+Starting from v9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. With this parameter enabled, when `br` restores data into a new cluster and the IDs of the tables and partitions can be reused, `br` uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the corresponding system tables in the `mysql` database. Otherwise, `br` automatically falls back to loading the statistics data logically.
+
+The following is an example:
+
+```shell
+tiup br restore full \
+--storage local:///br_data/ --pd "${PD_IP}:2379" --log-file restore.log --load-stats --fast-load-sys-tables
+```
+
 ## Encrypt the backup data
 
 BR supports encrypting backup data at the backup side and [at the storage side when backing up to Amazon S3](/br/backup-and-restore-storages.md#amazon-s3-server-side-encryption). You can choose either encryption method as required.
 
@@ -181,6 +194,22 @@ Download&Ingest SST <-----------------------------------------------------------
 Restore Pipeline <-------------------------/...............................................> 17.12%
 ```
 
+Starting from TiDB v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore statistics data physically in a new cluster:
+
+```shell
+tiup br restore full \
+    --pd "${PD_IP}:2379" \
+    --with-sys-table \
+    --fast-load-sys-tables \
+    --storage "s3://${backup_collection_addr}/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}" \
+    --ratelimit 128 \
+    --log-file restorefull.log
+```
+
+> **Note:**
+>
+> Unlike the logical restore of system tables, which writes data using `REPLACE INTO` statements, the physical restore completely overwrites the existing data in the system tables.
+
 ## Restore a database or a table
 
 You can use `br` to restore partial data of a specified database or table from backup data. This feature allows you to filter out data that you do not need during the restore.
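+
+For example, the following command restores a single table from the backup data. The database name `test`, the table name `usertable`, and the storage path are only placeholders; replace them with your own values:
+
+```shell
+tiup br restore table \
+    --pd "${PD_IP}:2379" \
+    --db test \
+    --table usertable \
+    --storage "s3://${backup_collection_addr}/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}" \
+    --log-file restore_table.log
+```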