lvm: Enhancements for LVM2 RAID support #969

Merged · 1 commit · Nov 15, 2022

Conversation

@mvollmer (Contributor) opened this pull request on Apr 8, 2022

@blivetghoul commented:

Can one of the admins verify this patch?

@tbzatek (Member) left a review:

I know this is pretty much WIP, just a couple of notes that caught my eye.

Generally it looks good so far!

Review threads on modules/lvm2/data/org.freedesktop.UDisks2.lvm2.xml (outdated, resolved), including this one on the new CopyRatio property:
resynching. A value of 1.0 corresponds to fully synched and
indicates that no synch operation is in progress.
-->
<property name="CopyRatio" type="d" access="read"/>
@tbzatek (Member) commented:

I'm wondering whether it's worth adding a note that this concerns LVM RAID scenarios only. It might be confusing to newcomers trying to find their way through the interface with so many puzzling properties.

@mvollmer (Contributor, Author) replied:

Is this true, however? Maybe other operations like pvmove or lvconvert also use this. AFAIK, "Copy%" predates RAID support in LVM2. But, shrug, I'll document it as applying to RAID only, and make sure in the code that the others always report 1.0.

@tbzatek (Member) commented on May 13, 2022:

I have no idea honestly :-) @vojtechtrefny?

@vojtechtrefny (Member) replied:

Yes, the copy_sync/sync_percent LV property is also used for pvmove.
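
(For illustration, not from the thread: that same percentage can be watched during a pvmove via the copy_percent lvs field, which is what backs the Cpy%Sync column in the transcripts below. The device and VG names here are placeholders.)

# pvmove --background /dev/sdd
# lvs -a -o name,copy_percent vgroup0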

Further review threads: two on modules/lvm2/udiskslinuxlogicalvolume.c (outdated, resolved) and one on modules/lvm2/udiskslinuxvolumegroup.c (resolved).
@mvollmer (Contributor, Author) commented:

> I know this is pretty much WIP, just a couple of notes that caught my eye.

Thanks a lot for having a look! Yes, there is lots of refactoring to be done. There are now three methods that take lists of PVs, and they should all share code for turning them into a strv for libblockdev.

@mvollmer force-pushed the lvm2-plain-layouts branch 2 times, most recently from 32f98fb to 39a00f3, on May 13, 2022.
@mvollmer (Contributor, Author) commented:

> • What about CreateThinPoolVolume and CreateVDOVolume? They can very well be some RAID, but because they already use --type for themselves,

Oh, they don't. I have to experiment with this.

@mvollmer (Contributor, Author) commented:

> > • What about CreateThinPoolVolume and CreateVDOVolume? They can very well be some RAID, but because they already use --type for themselves,
>
> Oh, they don't. I have to experiment with this.

Yes, they do. lvcreate --thinpool foo bar creates a new thin pool, as expected, but lvcreate --thinpool foo bar --type raid1 just fails: "Command does not accept option: --thinpool thin."

Maybe we should ask lvm2 to make this work before we support RAID thin pools.

@tbzatek (Member) left a second review:

A couple more minor things spotted, nothing major. Still looks good otherwise.

Review threads: modules/lvm2/udiskslinuxvolumegroup.c (one resolved, one outdated and resolved) and modules/lvm2/udiskslinuxlogicalvolume.c (outdated, resolved).
@vojtechtrefny (Member) commented:

> > > • What about CreateThinPoolVolume and CreateVDOVolume? They can very well be some RAID, but because they already use --type for themselves,
> >
> > Oh, they don't. I have to experiment with this.
>
> Yes, they do. lvcreate --thinpool foo bar creates a new thin pool, as expected, but lvcreate --thinpool foo bar --type raid1 just fails: "Command does not accept option: --thinpool thin."
>
> Maybe we should ask lvm2 to make this work before we support RAID thin pools.

For RAID thin pools we need to manually create the two RAID LVs for metadata and data and then use lvconvert --type thin-pool to create the thin pool from them. --thinpool and --type can't be used together because --thinpool is just an alias for --type thin-pool.
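
(A minimal sketch of that manual route, not from the thread — the LV names and sizes are made up, and real code would size the metadata LV properly:)

# lvcreate --type raid1 -m 1 -L 400m -n pool0 vgroup0
# lvcreate --type raid1 -m 1 -L 8m -n pool0meta vgroup0
# lvconvert --type thin-pool --poolmetadata vgroup0/pool0meta vgroup0/pool0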

@mvollmer force-pushed the lvm2-plain-layouts branch 4 times, most recently from 0a389d2 to 35aa780, on July 5, 2022.
@mvollmer (Contributor, Author) commented on Aug 11, 2022:

> • The API probably also needs to allow specifying the size of a new LV as "%PVS". That could be another option that requires a Features flag, so maybe that's already reason enough to use options instead of additional functions.

I no longer think this will happen. The experience when using "%PVS" for anything but "100%PVS" is not good: "50%PVS" does not give you something that is half the size of "100%PVS":

# lsblk /dev/sdd /dev/sde /dev/sdf
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdd    8:48   0  500M  0 disk 
sde    8:64   0  500M  0 disk 
sdf    8:80   0  500M  0 disk 

# vgcreate vgroup0 /dev/sdd /dev/sde /dev/sdf
 Volume group "vgroup0" successfully created

# lvcreate vgroup0 -n lvol0 --type raid5 -l "100%PVS"
  Using default stripesize 64.00 KiB.
  Logical volume "lvol0" created.

# lvs vgroup0
  LV    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0 vgroup0 rwi-a-r--- 984.00m                                    100.00          

# lvremove vgroup0/lvol0
  Logical volume "lvol0" successfully removed

# lvcreate vgroup0 -n lvol0 --type raid5 -l "50%PVS"
  Using default stripesize 64.00 KiB.
  Logical volume "lvol0" created.

# lvs vgroup0
  LV    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0 vgroup0 rwi-a-r--- 744.00m                                    100.00          

A raid5 on "100%PVS" is 984 megs, but with "50%PVS" it is 744 megs instead of the expected 490ish megs.

This is because "%PVS" doesn't work well with layouts that consume much more space on the PVs than they provide in the resulting LV. "100%PVS" in the case above asks for an LV of 1500ish megs, but lvcreate flips on a special mode and makes one that is as large as it can be, which turns out to be 984 megs. "50%PVS" asks for half of 1500 = 750 megs, which is possible, so we get that.
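
(My rough arithmetic behind those numbers, not from the thread, assuming raid5 over 3 equal PVs and ignoring metadata and extent rounding:

  100%PVS: requested LV size = 3 × 500 = 1500 megs; too big for raid5, so lvcreate falls back to the maximum, about (3−1)/3 × 1500 = 1000 megs, i.e. the 984 megs above.
   50%PVS: requested LV size = 750 megs; that consumes about 750 × 3/2 = 1125 megs of PV space, which fits in 1500, so the request is honored as asked, i.e. the 744 megs above.)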

So exposing a "%PVS" slider with its current semantics doesn't make sense. And if we have to change LVM2 anyway, we should change it to report the maximum possible size in a dry-run or test mode.

@mvollmer force-pushed the lvm2-plain-layouts branch 2 times, most recently from 0e338e9 to 503f910, on August 15, 2022.
@mvollmer (Contributor, Author) commented:

> • Should we have *CreateFooWithLayout functions?

I'd say "yes". I at least don't know how this all will work for pools, with all their metadata etc. Maybe we'll expose lvconvert in the API? So I wouldn't worry about that yet at all.

@mvollmer marked this pull request as ready for review on September 27, 2022.
@tbzatek (Member) commented on Nov 3, 2022:

Jenkins, ok to test.

@mvollmer force-pushed the lvm2-plain-layouts branch 2 times, most recently from e8ce55a to b01d743, on November 8, 2022.
@tbzatek (Member) left a review:

Thanks for the fixes. This looks good to me in its current form.

@mvollmer: do you plan any further changes to this? There are some design questions from you above; I don't have much more to say to that.

As we're close to the udisks-2.10.0 release, the API in this pull request will become public and frozen. If you think it might still need another iteration, we may postpone this to 2.11.0.

@mvollmer (Contributor, Author) replied:

> As we're close to the udisks-2.10.0 release, the API in this pull request will become public and frozen.

I am happy with that.

Thanks a lot!

@tbzatek merged commit 18d8873 into storaged-project:master on Nov 15, 2022.