
Qubes storage pools of type LVM issues #3438

Closed
7 tasks done
zander opened this issue Jan 1, 2018 · 10 comments · Fixed by QubesOS/qubes-core-admin#238

Comments


zander commented Jan 1, 2018

This is a list of issues found while using pools in Qubes 4 RC3. As per private email, I'm registering them here to allow everyone to keep track of them:

  • creating an lvm_thin pool via qvm-pool with arguments that do not point to an actual LVM thin pool should fail. (example)
    This is the purpose of VmCreationManager, but that covers only VM creation...
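
    A minimal sketch of the kind of invocation meant, using hypothetical names (vg0 is a
    volume group and plain-lv an ordinary LV in it, not a thin pool):

      qvm-pool --add badpool lvm_thin -o volume_group=vg0,thin_pool=plain-lv
      # expected: the command refuses, because vg0/plain-lv is not an LVM thin pool
      # observed: the pool is registered anyway and the mistake only surfaces later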

Per design, creating a pool creates a corresponding dir under /usr/lib/qubes/appvms
I’ve had two issues with that.

  • creating a VM with '-p root=myPool --template=foo' will fail if the
    template is not in the same pool, AND it will leave the directory behind.
    I expect failures not to leave a dir behind: either check first or clean up on failure.
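
    A minimal sketch of that failure mode, reusing the placeholder names from the item above
    (myPool and foo are hypothetical, testvm is just an example VM name):

      qvm-create -p root=myPool -t foo -l red testvm
      # fails, because foo's root volume lives in a different pool than myPool...
      ls /usr/lib/qubes/appvms/
      # ...but the testvm directory has already been created and is left behind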

Which naturally leads to:

  • creating a new VM will cause an internal exception if the dir in /usr/lib/qubes
    already exists. I expect this to end up with a user-visible error from the API.

  • Creating a pool via qvm-pool with the argument revisions_to_keep=0 will cause a later
    qvm-create to exit because it is unable to make a backup or something along those lines.
    Error detection should happen in the qvm-pool command.
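
    For reference, the kind of pool creation meant, with hypothetical names (vg0/pool00 is an
    existing thin pool); the questionable revisions_to_keep value is accepted here and only
    produces an error later, at qvm-create time:

      qvm-pool --add testpool lvm_thin -o volume_group=vg0,thin_pool=pool00,revisions_to_keep=0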

  • Calling qvm-create with the root-pool being different from the pool the template is in gives you:
    app: Error creating VM: Can't clone a snapshot volume root to pool qubes_ssd

I think what it meant to say is that the root has to be in the same pool as
the template it is based on.
This is probably just a case of improving the error message.

  • Requesting qvm-pool -i POOL lists a size of zero when the pool has just been created.

  • Qubes creates a snapshot at every VM start, which turns into a backup volume when the VM exits. The problem is that Qubes never garbage-collects old backups: they keep being created, and sudo lvs gets quite lengthy after some time.
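
    A quick way to see that accumulation, assuming the leftover revision volumes carry the
    usual -back suffix in their LV names:

      sudo lvs --noheadings -o lv_name | grep -- '-back'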


zander commented Jan 7, 2018

@andrewdavidwong can you please remove the 'installer' tag? This is about the qvm-* commands only. Thanks!


marmarek commented Jan 7, 2018

> Requesting qvm-pool -i POOL lists the size of zero when just created. It is a number much larger than the physical size after some usage. Similarly, the 'usage' shows a much higher number than it should.

I can't reproduce the latter, it shows me the right number, the same as "LSize" column in lvs output. Anything special about your pool?


marmarek commented Jan 7, 2018

> Storing a VM (template) on a LVM based pool causes qvm-start to show a bug in timing.
> Calling qvm-start normally will not return until the VM fully started and is operational. This is not the case on a lvm based pool. The qvm-start command returns immediately and as a side-effect the qemu bios window is shown.
> This is an important functional issue as this means disposable VMs can't be stored on such a pool due to the timing issue causing them to exit before really having been started.

This isn't related to LVM. The template has a set of "features" (see qvm-features). If qrexec is not set there, qvm-start does not wait for qrexec startup. Similarly, if gui is not set there, the gui-agent is assumed to be missing and emulated VGA is used.
Those features should be set during template installation, but if they weren't, you can set them manually.
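
For example, a minimal sketch of setting them by hand (the template name is hypothetical):

    qvm-features my-template             # list the features currently set on the template
    qvm-features my-template qrexec 1    # declare that the template has a qrexec agent
    qvm-features my-template gui 1       # declare that the template has a GUI agent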


marmarek commented Jan 7, 2018

> Qubes creates a snapshot at every start which turns into a backup partition when it exits. The problem is that qubes never garbage collects old backups. They continue getting created and sudo lvs gets quite lengthy after some time.

What is the revisions_to_keep value in that pool?


zander commented Jan 7, 2018

> Requesting qvm-pool -i POOL lists the size of zero when just created. It is a number much larger
> than the physical size after some usage. Similarly, the 'usage' shows a much higher number than it
> should.

> I can't reproduce the latter, it shows me the right number, the same as "LSize" column in lvs output.

You are right, this is due to lvs using GiB where I expected GB. The only real issue here seems to be that the values at the beginning were zero (I know that for a fact; I copy-pasted it into an email).
Maybe it's just a matter of not calculating the values until a VM actually uses the pool.
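
(For reference: 1 GiB = 1024³ bytes ≈ 1.074 GB, so a 100 GiB LV as reported by lvs corresponds to roughly 107.4 GB.)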

I'll update the issue above to avoid confusing people coming in later.


zander commented Jan 7, 2018

> What is the revisions_to_keep value in that pool?

Confusing :)

qvm-pool doesn't list it. In the <pools> section of qubes.xml there is no mention of it either.

On each individual domain/volume that uses this pool, the XML attribute is set to 1 (except for volatile, which is zero, for obvious reasons).

Oh, and I set it to two initially.
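
A quick way to check what actually ended up in the config, since the value only appears per volume (the path is the standard dom0 location; the grep pattern assumes the attribute form mentioned above):

    grep -o 'revisions_to_keep="[0-9]*"' /var/lib/qubes/qubes.xml | sort | uniq -c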

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Jan 11, 2018
Force cache refresh after registering new pool - it might be just
created.

QubesOS/qubes-issues#3438
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Jan 12, 2018
marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Oct 21, 2018
It can be leftover from previous failed attempt. Don't crash on it, and
replace it instead.

QubesOS/qubes-issues#3438
@qubesos-bot

Automated announcement from builder-github

The package qubes-core-dom0-4.0.33-1.fc25 has been pushed to the r4.0 testing repository for dom0.
To test this update, please install it with the following command:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing

Changes included in this update


dadigi commented Nov 16, 2018

* Calling qvm-create with the root-pool being different from the pool the template is in gives you;
  `app: Error creating VM: Can't clone a snapshot volume root to pool qubes_ssd`

I get the same error even if the template VM lies in the same LVM thin pool:

[root@dom0 ~]# qvm-pool -l
NAME          DRIVER
nvme          lvm_thin
linux-kernel  linux-kernel
ssd           lvm_thin
varlibqubes   file

[root@dom0 ~]# qvm-volume ls arch-base
POOL:VOLUME                     VMNAME     VOLUME_NAME  REVERT_POSSIBLE
linux-kernel:4.14.74-1          arch-base  kernel       No
nvme:vg0/vm-arch-base-root      arch-base  root         Yes
nvme:vg0/vm-arch-base-volatile  arch-base  volatile     No
ssd:vg1/vm-arch-base-private    arch-base  private      No

[root@dom0 ~]# qvm-create -p private=ssd -p root=nvme -p volatile=nvme -t arch-base -l yellow me-personal
app: Error creating VM: Can't clone a snapshot volume root to pool nvme


dadigi commented Nov 16, 2018

[root@dom0 ~]# qvm-create -p private=ssd -p root=nvme -p volatile=nvme -t arch-base -l yellow me-personal
app: Error creating VM: Can't clone a snapshot volume root to pool nvme

Ok, it works if I do not specify the root volume:
[root@dom0 ~]# qvm-create -p private=ssd -p volatile=nvme -t arch-base -l yellow me-test01
[root@dom0 ~]# qvm-volume ls me-test01
POOL:VOLUME                     VMNAME     VOLUME_NAME  REVERT_POSSIBLE
linux-kernel:4.14.74-1          me-test01  kernel       No
nvme:vg0/vm-me-test01-root      me-test01  root         No
nvme:vg0/vm-me-test01-volatile  me-test01  volatile     No
ssd:vg1/vm-me-test01-private    me-test01  private      No

@qubesos-bot

Automated announcement from builder-github

The package qubes-core-dom0-4.0.37-1.fc25 has been pushed to the r4.0 stable repository for dom0.
To install this update, please use the standard update command:

sudo qubes-dom0-update

Or update dom0 via Qubes Manager.

Changes included in this update
