
[BUG] VM image on NFS datastore error #26

Open
gurubert opened this issue May 7, 2020 · 7 comments


gurubert commented May 7, 2020

Describe the bug
I am trying to create a VM with the Ubuntu 18.04 cloud image on an NFS datastore.

Terraform cannot continue because of this error:

Successfully imported disk as 'unused0:maestro:120/vm-120-disk-0.qcow2'
unable to parse directory volume name 'vm-120-disk-0'

The disk image has been created and resized, it just was not attached to the VM.
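For context, the failure pattern is consistent with a volume-ID parser that expects directory-style names of the form `<vmid>/<filename>` (as in the successful import message `120/vm-120-disk-0.qcow2`) but receives only the bare file name. A minimal Go sketch of that parse, assuming this format; `parseDirectoryVolumeID` is a hypothetical helper, not the provider's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDirectoryVolumeID splits a directory-style volume name such as
// "120/vm-120-disk-0.qcow2" into its VM ID and file name. A bare name
// without the "<vmid>/" prefix fails, matching the reported error.
func parseDirectoryVolumeID(volume string) (vmID, fileName string, err error) {
	parts := strings.SplitN(volume, "/", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unable to parse directory volume name '%s'", volume)
	}
	return parts[0], parts[1], nil
}

func main() {
	// Reproduces the error string from the issue.
	if _, _, err := parseDirectoryVolumeID("vm-120-disk-0"); err != nil {
		fmt.Println(err)
	}
	vmID, fileName, _ := parseDirectoryVolumeID("120/vm-120-disk-0.qcow2")
	fmt.Println(vmID, fileName) // 120 vm-120-disk-0.qcow2
}
```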

Expected behavior
The disk image should be attached and terraform should continue.

When choosing a different datastore (e.g. thin-provisioned LVM), the creation works.

@danitso-dp
Collaborator

@gurubert It's most likely because the code doesn't detect it as a directory-based datastore. Can you post the sanitized contents of /etc/pve/storage.cfg? I can then try to reproduce the issue.


gurubert commented May 11, 2020

This is /etc/pve/storage.cfg:

dir: local
	path /var/lib/vz
	content backup,snippets
	maxfiles 2
	shared 0

lvmthin: local-lvm
	disable
	thinpool data
	vgname pve
	content rootdir,images

lvmthin: local-lvm2
	thinpool data
	vgname pvedata
	content rootdir,images

nfs: maestro
	export /export/proxmox
	path /mnt/pve/maestro
	server maestro
	content images,vztmpl,backup,snippets,iso
	maxfiles 2
	options vers=3
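For reference, both `dir` and `nfs` storages expose their images as files under a mount path (here `/mnt/pve/maestro`), while `lvmthin` is block storage, so a provider needs to classify `nfs` as directory-based to parse its volume names. A sketch of such a classification, where the type list is an assumption drawn from the Proxmox storage types above, not the provider's actual logic:

```go
package main

import "fmt"

// isPathBased reports whether a Proxmox storage type stores disk images
// as files under a directory path. The reported bug suggests "nfs" was
// not treated as directory-based even though it exposes a path
// (/mnt/pve/<name>) just like "dir".
func isPathBased(storageType string) bool {
	switch storageType {
	case "dir", "nfs", "cifs", "glusterfs":
		return true
	default: // block storage such as lvm, lvmthin, zfspool, rbd
		return false
	}
}

func main() {
	for _, t := range []string{"dir", "nfs", "lvmthin"} {
		fmt.Printf("%s path-based: %v\n", t, isPathBased(t))
	}
}
```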

@danitso-dp
Collaborator

@gurubert check out #28 and see if it fixes the issue.

@jescarri

Hey @danitso-dp, thanks for putting together this new provider!

The fix in #28 did not work :( Let me know how I can help to get this fixed.

proxmox_virtual_environment_vm.example: Creating...
  acpi:                                        "" => "true"
  bios:                                        "" => "seabios"
  clone.#:                                     "" => "1"
  clone.0.vm_id:                               "" => "9001"
  disk.#:                                      "" => "1"
  disk.0.datastore_id:                         "" => "proxmox-nfs"
  disk.0.file_format:                          "" => "qcow2"
  disk.0.size:                                 "" => "20"
  initialization.#:                            "" => "1"
  initialization.0.datastore_id:               "" => "proxmox-nfs"
  initialization.0.dns.#:                      "" => "1"
  initialization.0.dns.0.domain:               "" => "identitylabs.mx"
  initialization.0.dns.0.server:               "" => "1.1.1.1"
  initialization.0.ip_config.#:                "" => "1"
  initialization.0.ip_config.0.ipv4.#:         "" => "1"
  initialization.0.ip_config.0.ipv4.0.address: "" => "dhcp"
  ipv4_addresses.#:                            "" => "<computed>"
  ipv6_addresses.#:                            "" => "<computed>"
  keyboard_layout:                             "" => "en-us"
  mac_addresses.#:                             "" => "<computed>"
  memory.#:                                    "" => "1"
  memory.0.dedicated:                          "" => "2000"
  memory.0.floating:                           "" => "0"
  memory.0.shared:                             "" => "0"
  name:                                        "" => "test1"
  network_device.#:                            "" => "1"
  network_device.0.bridge:                     "" => "vmbr0"
  network_device.0.enabled:                    "" => "true"
  network_device.0.model:                      "" => "virtio"
  network_device.0.rate_limit:                 "" => "0"
  network_device.0.vlan_id:                    "" => "0"
  network_interface_names.#:                   "" => "<computed>"
  node_name:                                   "" => "pve01"
  started:                                     "" => "true"
  tablet_device:                               "" => "true"
  template:                                    "" => "false"
  vm_id:                                       "" => "2044"
2020/10/14 17:36:02 [TRACE] root: eval: *terraform.EvalApply
2020/10/14 17:36:02 [DEBUG] apply: proxmox_virtual_environment_vm.example: executing Apply
2020-10-14T17:36:02.488Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:02 [DEBUG] Performing HTTP POST request (path: nodes/pve01/qemu/9001/clone)
2020-10-14T17:36:02.488Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:02 [DEBUG] Added request body to HTTP POST request (path: nodes/pve01/qemu/9001/clone) - Body: full=1&name=test1&newid=2044
2020-10-14T17:36:02.537Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:02 [DEBUG] WARNING: Unhandled HTTP response body: {"data":"UPID:pve01:00001DD7:00724250:5F873702:qmclone:9001:root@pam:"}
2020-10-14T17:36:02.537Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:02 [DEBUG] Performing HTTP GET request (path: nodes/pve01/qemu/2044/status/current)
2020/10/14 17:36:05 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "proxmox_virtual_environment_vm.example"
2020/10/14 17:36:05 [TRACE] dag/walk: vertex "root", waiting for: "meta.count-boundary (count boundary fixup)"
2020/10/14 17:36:05 [TRACE] dag/walk: vertex "provider.proxmox (close)", waiting for: "proxmox_virtual_environment_vm.example"
2020-10-14T17:36:07.574Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:07 [DEBUG] Performing HTTP GET request (path: nodes/pve01/qemu/2044/status/current)
2020-10-14T17:36:07.605Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:07 [DEBUG] Performing HTTP PUT request (path: nodes/pve01/qemu/2044/config)
2020-10-14T17:36:07.606Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:07 [DEBUG] Added request body to HTTP PUT request (path: nodes/pve01/qemu/2044/config) - Body: balloon=0&delete=net1%2Cnet2%2Cnet3%2Cnet4%2Cnet5%2Cnet6%2Cnet7&ide2=file%3Dproxmox-nfs%3Acloudinit%2Cmedia%3Dcdrom&ipconfig0=ip%3Ddhcp&memory=2000&nameserver=1.1.1.1&net0=model%3Dvirtio%2Cbridge%3Dvmbr0&searchdomain=identitylabs.mx
2020-10-14T17:36:07.723Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:07 [DEBUG] WARNING: Received an HTTP 500 response - Reason: error during cfs-locked 'storage-proxmox-nfs' operation: disk image '/mnt/pve/proxmox-nfs/images/2044/vm-2044-cloudinit.qcow2' already exists
2020/10/14 17:36:07 [TRACE] root: eval: *terraform.EvalWriteState
2020/10/14 17:36:07 [TRACE] root: eval: *terraform.EvalApplyProvisioners
2020/10/14 17:36:07 [TRACE] root: eval: *terraform.EvalIf
2020/10/14 17:36:07 [TRACE] root: eval: *terraform.EvalWriteState
2020/10/14 17:36:07 [TRACE] root: eval: *terraform.EvalWriteDiff
2020/10/14 17:36:07 [TRACE] root: eval: *terraform.EvalApplyPost
2020/10/14 17:36:07 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error(s) occurred:

* proxmox_virtual_environment_vm.example: Received an HTTP 500 response - Reason: error during cfs-locked 'storage-proxmox-nfs' operation: disk image '/mnt/pve/proxmox-nfs/images/2044/vm-2044-cloudinit.qcow2' already exists
2020/10/14 17:36:07 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

* proxmox_virtual_environment_vm.example: Received an HTTP 500 response - Reason: error during cfs-locked 'storage-proxmox-nfs' operation: disk image '/mnt/pve/proxmox-nfs/images/2044/vm-2044-cloudinit.qcow2' already exists
2020/10/14 17:36:07 [TRACE] [walkApply] Exiting eval tree: proxmox_virtual_environment_vm.example
2020/10/14 17:36:07 [TRACE] dag/walk: upstream errored, not walking "meta.count-boundary (count boundary fixup)"
2020/10/14 17:36:07 [TRACE] dag/walk: upstream errored, not walking "provider.proxmox (close)"
2020/10/14 17:36:07 [TRACE] dag/walk: upstream errored, not walking "root"
2020/10/14 17:36:07 [TRACE] Preserving existing state lineage "8d45367d-26ab-55aa-e035-cb8f70cd7617"
2020/10/14 17:36:07 [TRACE] Preserving existing state lineage "8d45367d-26ab-55aa-e035-cb8f70cd7617"
2020/10/14 17:36:07 [TRACE] Preserving existing state lineage "8d45367d-26ab-55aa-e035-cb8f70cd7617"

Error: Error applying plan:

1 error(s) occurred:

* proxmox_virtual_environment_vm.example: 1 error(s) occurred:

2020/10/14 17:36:07 [DEBUG] plugin: waiting for all plugin processes to complete...
2020-10-14T17:36:07.732Z [DEBUG] plugin.terraform-provider-proxmox: 2020/10/14 17:36:07 [ERR] plugin: plugin server: accept unix /tmp/plugin777117039: use of closed network connection
* proxmox_virtual_environment_vm.example: Received an HTTP 500 response - Reason: error during cfs-locked 'storage-proxmox-nfs' operation: disk image '/mnt/pve/proxmox-nfs/images/2044/vm-2044-cloudinit.qcow2' already exists

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

2020-10-14T17:36:07.734Z [DEBUG] plugin: plugin process exited: path=/home/jescarri/terraform-providers/terraform-provider-proxmox

@jescarri

Terraform config:

resource "proxmox_virtual_environment_vm" "example" {
  name      = "test1"
  node_name = "pve01"

  #pool_id   = "${proxmox_virtual_environment_pool.example.id}"
  vm_id = 2044

  clone {
    vm_id = 9001
  }

  memory {
    dedicated = 2000
  }

  disk {
    datastore_id = "proxmox-nfs"
    size         = 20
  }

  network_device {
    bridge = "vmbr0"
    model  = "virtio"
  }

  initialization {
    datastore_id = "proxmox-nfs"

    dns = {
      domain = "identitylabs.mx"
      server = "1.1.1.1"
    }

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }
}

@danitso-dp
Collaborator

@jescarri do you still experience this issue using the latest version?

@jescarri

Hey @danitso-dp, I will test it in the next few days and let you know.

Thanks for the help!
