RBD volume with parent cannot be used after restore #77

Open
div8cn opened this issue Nov 14, 2020 · 4 comments

Comments

div8cn commented Nov 14, 2020

If the RBD image has a parent (i.e., it is a new RBD created by rbd clone), it cannot be restored normally after being backed up with backy2. Comparing the rbd info output before and after shows that the restored image has lost its parent attribute.
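(For context, an image with a parent like this is the result of cloning a protected snapshot; a minimal sketch with hypothetical names rbd/base and rbd/child:)

rbd snap create rbd/base@base-snap       # snapshot the base image
rbd snap protect rbd/base@base-snap      # clones require a protected snapshot
rbd clone rbd/base@base-snap rbd/child   # rbd/child now has rbd/base@base-snap as its parent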

RBD volume info before backup:
rbd image 'c1bb66e1-030f-48a0-94b5-74910333cd49.bak':
size 100GiB in 25600 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.1b0fcd96fe70f
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Fri Nov 13 14:04:31 2020
parent: rbd/0bafb558-18f6-46e4-82d8-7e6d09980618@cloudstack-base-snap
overlap: 100GiB

Restore to a new RBD volume.
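(The restore command itself isn't shown in the report; it was presumably done with backy2's restore command, roughly like this, where the version UID placeholder would come from the listing:)

backy2 ls c1bb66e1-030f-48a0-94b5-74910333cd49                    # look up the version UID of the backup
backy2 restore <version_uid> rbd://rbd/c1bb66e1-030f-48a0-94b5-74910333cd49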

New RBD volume info:
rbd image 'c1bb66e1-030f-48a0-94b5-74910333cd49':
size 100GiB in 25600 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.2548e6b8b4567
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Sat Nov 14 04:30:35 2020

The most significant difference is that the restored image is missing:
parent: rbd/0bafb558-18f6-46e4-82d8-7e6d09980618@cloudstack-base-snap

wamdam (Owner) commented Nov 19, 2020

backy2 does not care about any relationships between storage volumes. It knows nothing about them; that is a design choice. It just stores blocks.
How did you create the backups?

div8cn (Author) commented Nov 20, 2020

I do the backup as follows:

rbd snap create rbd/c1bb66e1-030f-48a0-94b5-74910333cd49@backup1
rbd diff --whole-object rbd/c1bb66e1-030f-48a0-94b5-74910333cd49@backup1 --format=json > /root/backup1.diff
backy2 backup -s backup1 -r /root/backup1.diff rbd://rbd/c1bb66e1-030f-48a0-94b5-74910333cd49@backup1 c1bb66e1-030f-48a0-94b5-74910333cd49

I use this backup to restore to a new RBD image [root-101].
The restored image [root-101] cannot be used normally and its data is incomplete.

If I first execute the following command:
rbd flatten rbd/c1bb66e1-030f-48a0-94b5-74910333cd49

and then take the snapshot, create the diff, and run the backup as above, the resulting backup can be restored and used normally.
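(Put together, the flatten-first workaround would presumably look like this, reusing the commands above; the snapshot name backup2 is hypothetical:)

rbd flatten rbd/c1bb66e1-030f-48a0-94b5-74910333cd49              # copy all parent data into the child, detaching it from the parent
rbd snap create rbd/c1bb66e1-030f-48a0-94b5-74910333cd49@backup2
rbd diff --whole-object rbd/c1bb66e1-030f-48a0-94b5-74910333cd49@backup2 --format=json > /root/backup2.diff
backy2 backup -s backup2 -r /root/backup2.diff rbd://rbd/c1bb66e1-030f-48a0-94b5-74910333cd49@backup2 c1bb66e1-030f-48a0-94b5-74910333cd49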


anomaly256 commented Sep 4, 2022

I wish I had known about this bug before today. I've lost a whole pool and now find that any VM images that were cloned from a base image cannot be recovered. It's not a production environment, but I still lost 20 VMs from my home lab.

@elemental-lf

This is most likely a bug in Ceph, probably this one: https://tracker.ceph.com/issues/54970. If you're stuck on an older version, try leaving off --whole-object from the rbd diff call as a workaround, though this will slow down the rbd diff.
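(Applied to the backup procedure above, the suggested workaround just drops the flag from the diff step; everything else stays the same:)

rbd diff rbd/c1bb66e1-030f-48a0-94b5-74910333cd49@backup1 --format=json > /root/backup1.diff   # no --whole-object: slower diff, but avoids the Ceph bug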
