
[3.0.1-rc6] GRPCProvider error after upgrading PVE from 7.4 to 8.3 #1237

Open
iazamat16 opened this issue Jan 31, 2025 · 6 comments
Labels
resource/qemu Issue or PR related to Qemu resource type/question Issue needs no code to be fixed, only a description on how to fix it yourself

Comments

@iazamat16

Previously we used Proxmox 7.4.19 with Telmate 2.19.10.
Then we upgraded the Proxmox hosts to PVE 8.3.2.
We had problems with the old provider version, so we updated the provider to the latest version (3.0.1-rc6), fixed misconfigurations caused by version differences, and removed some breaking properties mentioned in the 3.0.1-rc2 notes.

Now terraform plan produces this kind of error:

  # module.proxmox_vm.proxmox_vm_qemu.cloudinit-test["k4m2"] will be updated in-place
  ~ resource "proxmox_vm_qemu" "cloudinit-test" {
      + ciupgrade              = false
        id                     = "prox62/qemu/116"
        name                   = "k4m2"
      + skip_ipv4              = false
      + skip_ipv6              = false
        tags                   = null
        # (54 unchanged attributes hidden)

      ~ disk {
          + cache                = "writeback"
            id                   = 0
          + size                 = "20G"
            # (27 unchanged attributes hidden)
        }
      - disk {
          - backup               = true -> null
          - cache                = "writeback" -> null
          - discard              = true -> null
          - emulatessd           = false -> null
          - format               = "raw" -> null
          - id                   = 0 -> null
          - iops_r_burst         = 0 -> null
          - iops_r_burst_length  = 0 -> null
          - iops_r_concurrent    = 0 -> null
          - iops_wr_burst        = 0 -> null
          - iops_wr_burst_length = 0 -> null
          - iops_wr_concurrent   = 0 -> null
          - iothread             = false -> null
          - linked_disk_id       = -1 -> null
          - mbps_r_burst         = 0 -> null
          - mbps_r_concurrent    = 0 -> null
          - mbps_wr_burst        = 0 -> null
          - mbps_wr_concurrent   = 0 -> null
          - passthrough          = false -> null
          - readonly             = false -> null
          - replicate            = true -> null
          - size                 = "20G" -> null
          - slot                 = "scsi0" -> null
          - storage              = "ceph3" -> null
          - type                 = "disk" -> null
            # (5 unchanged attributes hidden)
        }

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 13 to change, 0 to destroy.
╷
│ Error: Request cancelled
│ 
│   with module.proxmox_vm.proxmox_vm_qemu.cloudinit-test["k4m3"],
│   on ../modules/proxmox-vm/main.tf line 2, in resource "proxmox_vm_qemu" "cloudinit-test":
│    2: resource "proxmox_vm_qemu" "cloudinit-test" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
╷
│ Error: Request cancelled
modules/proxmox-vm/main.tf
resource "proxmox_vm_qemu" "cloudinit-test" {
  for_each    = var.hosts
  name        = var.hosts[each.key].name
  desc        = var.hosts[each.key].description
  target_node = var.hosts[each.key].target_node
  clone       = var.hosts[each.key].clone_template

  # The destination resource pool for the new VM
  pool    = var.hosts[each.key].pool
  cores   = var.hosts[each.key].cores
  sockets = var.hosts[each.key].sockets
  memory  = var.hosts[each.key].memory
  onboot  = var.hosts[each.key].onboot
  agent   = var.hosts[each.key].agent
  hastate = var.hosts[each.key].hastate

  disk {
    slot         = var.hosts[each.key].disk_slot
#    discard = var.hosts[each.key].disk_discard
#    backup  = var.hosts[each.key].disk_backup
    cache        = var.hosts[each.key].cache
    size         = var.hosts[each.key].disk_size
    storage      = var.hosts[each.key].storage
    # Optional values
    type         = var.hosts[each.key].disk_type
    iothread     = var.hosts[each.key].disk_iothread
    replicate    = var.hosts[each.key].disk_replicate
    emulatessd   = var.hosts[each.key].disk_emulatessd
    disk_file    = var.hosts[each.key].disk_file
  }

  network {
    id        = var.hosts[each.key].network_id
    bridge    = var.hosts[each.key].vm_bridge_name
    firewall  = var.hosts[each.key].network_firewall
    # Optional values
    model     = var.hosts[each.key].network_model
    macaddr   = var.hosts[each.key].network_macaddr
    tag       = var.hosts[each.key].network_tag
    rate      = var.hosts[each.key].network_rate
    queues    = var.hosts[each.key].network_queues
    link_down = var.hosts[each.key].network_link_down
  }

  os_type    = var.hosts[each.key].os_type
  ciuser     = var.hosts[each.key].ciuser
  cipassword = var.hosts[each.key].cipassword
  nameserver = coalesce(var.hosts[each.key].nameserver, var.hosts[each.key].ip_gateway)
  ipconfig0  = "ip=${var.hosts[each.key].ip_address}/24,gw=${var.hosts[each.key].ip_gateway}"

  # Optional values
  boot                        = var.hosts[each.key].boot
  vmid                        = var.hosts[each.key].vmid
  define_connection_info      = var.hosts[each.key].define_connection_info
  bios                        = var.hosts[each.key].bios
  tablet                      = var.hosts[each.key].tablet
  bootdisk                    = var.hosts[each.key].bootdisk
  full_clone                  = var.hosts[each.key].full_clone
  hagroup                     = var.hosts[each.key].hagroup
  qemu_os                     = var.hosts[each.key].qemu_os
  balloon                     = var.hosts[each.key].balloon
  vcpus                       = var.hosts[each.key].vcpus
  numa                        = var.hosts[each.key].numa
  hotplug                     = var.hosts[each.key].hotplug
  scsihw                      = var.hosts[each.key].scsihw
  tags                        = var.hosts[each.key].tags
  force_create                = var.hosts[each.key].force_create
  force_recreate_on_change_of = var.hosts[each.key].force_recreate_on_change_of
  os_network_config           = var.hosts[each.key].os_network_config
  ssh_forward_ip              = var.hosts[each.key].ssh_forward_ip
  ssh_user                    = var.hosts[each.key].ssh_user
  ssh_private_key             = var.hosts[each.key].ssh_private_key
  ci_wait                     = var.hosts[each.key].ci_wait
  cicustom                    = var.hosts[each.key].cicustom
  searchdomain                = var.hosts[each.key].searchdomain
  sshkeys                     = var.hosts[each.key].sshkeys
  automatic_reboot            = var.hosts[each.key].automatic_reboot


  lifecycle {
    ignore_changes = [
      pool, hastate, sshkeys, clone, automatic_reboot, cipassword, os_type, qemu_os, timeouts, additional_wait, clone_wait
    ]
  }
}
A piece of the state file:
    {
      "module": "module.proxmox_vm",
      "mode": "managed",
      "type": "proxmox_vm_qemu",
      "name": "cloudinit-test",
      "provider": "provider[\"registry.terraform.io/telmate/proxmox\"]",
      "instances": [
        {
          "index_key": "k4m3",
          "schema_version": 0,
          "attributes": {
            "additional_wait": 0,
            "agent": 0,
            "args": "",
            "automatic_reboot": true,
            "balloon": 0,
            "bios": "seabios",
            "boot": "order=scsi0",
            "bootdisk": "",
            "bridge": "",
            "ci_wait": null,
            "cicustom": "",
            "cipassword": "somepassword",
            "ciuser": "root",
            "clone": "Ubuntu22-Cloud-Init",
            "clone_wait": 0,
            "cores": 4,
            "cpu_type": "host",
            "default_ipv4_address": null,
            "define_connection_info": true,
            "desc": "k4m3",
            "disk": [
              {
                "aio": "",
                "cache": "writeback",
                "file": "vm-114-disk-0",
                "format": "raw",
                "iothread": false,
                "mbps": 0,
                "mbps_rd": 0,
                "mbps_rd_max": 0,
                "mbps_wr": 0,
                "mbps_wr_max": 0,
                "media": "",
                "replicate": false,
                "size": "20G",
                "slot": "ide2",
                "emulatessd": false,
                "storage": "ceph3",
                "storage_type": "rbd",
                "type": "cloudinit",
                "volume": "ceph3:vm-114-disk-0"
              }
            ],
            "disk_gb": 0,
            "force_create": false,
            "force_recreate_on_change_of": null,
            "full_clone": true,
            "guest_agent_ready_timeout": 100,
            "hagroup": "",
            "hastate": "",
            "hotplug": "network,disk,usb",
            "id": "prox64/qemu/114",
            "ipconfig0": "ip=10.10.6.127/24,gw=10.10.6.1",
            "ipconfig1": "",
            "ipconfig2": "",
            "ipconfig3": "",
            "ipconfig4": "",
            "ipconfig5": "",
            "kvm": true,
            "mac": "",
            "memory": 6144,
            "name": "k4m3",
            "nameserver": "10.10.101.20",
            "network": [
              {
                "bridge": "vmbr6",
                "all": true,
                "link_down": false,
                "macaddr": "mac:ad:dr",
                "model": "virtio",
                "mtu": 0,
                "queues": 0,
                "rate": 0,
                "tag": -1
              }
            ],
            "nic": "",
            "numa": false,
            "onboot": true,
            "oncreate": true,
            "os_network_config": null,
            "os_type": "cloud-init",
            "pool": "",
            "preprovision": true,
            "pxe": null,
            "qemu_os": "other",
            "reboot_required": false,
            "scsihw": "virtio-scsi-pci",
            "searchdomain": "",
            "serial": [],
            "sockets": 1,
            "ssh_forward_ip": null,
            "ssh_host": null,
            "ssh_port": null,
            "ssh_private_key": null,
            "ssh_user": null,
            "sshkeys": "ssh public keys",
            "storage": "",
            "storage_type": "",
            "tablet": true,
            "tags": "",
            "target_node": "prox64",
            "timeouts": null,
            "unused_disk": [],
            "usb": [],
            "vcpus": 0,
            "vga": [],
            "vlan": -1,
            "vmid": null
          },
          "sensitive_attributes": [],
          "private": "PrivateKey=="
        },

How can I achieve Plan: 0 to add, 0 to change, 0 to destroy, so that my resources won't be recreated?

thanks in advance

@Tinyblargon
Collaborator

@iazamat16 The scsi0 disk is not in the state file. Is it in your Terraform configuration? If not, add the disk configuration; otherwise, Terraform will think it should be removed.
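For illustration, a minimal disk block matching the scsi0 disk from the plan output might look like the following sketch (values mirror the state shown above; attribute support varies between the 3.0.1 release candidates, so verify against the provider docs):

```terraform
# Sketch of a disk block for the existing scsi0 disk.
# Values are copied from the state shown above; adjust for your environment.
disk {
  slot    = "scsi0"
  type    = "disk"
  storage = "ceph3"
  size    = "20G"
  cache   = "writeback"
}
```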

@Tinyblargon Tinyblargon added type/question Issue needs no code to be fixed, only a description on how to fix it yourself resource/qemu Issue or PR related to Qemu resource labels Jan 31, 2025
@iazamat16
Author

I did provide "slot": "scsi0" in the state file:

tfstate
            "cores": 8,
            "cpu_type": "host",
            "default_ipv4_address": null,
            "define_connection_info": true,
            "desc": "k4w1",
            "disk": [
              {

                "cache": "writeback",
                "discard": true,
                "file": "vm-119-disk-0",
                "format": "raw",
                "iothread": false,
                "mbps": 0,
                "mbps_rd": 0,
                "mbps_rd_max": 0,
                "mbps_wr": 0,
                "mbps_wr_max": 0,
                "media": "",
                "replicate": false,
                "size": "100G",
                "slot": "scsi0",
                "emulatessd": false,
                "storage": "ceph3",
                "storage_type": "rbd",
                "type": "cloudinit",
                "volume": "ceph3:vm-119-disk-0"
              }
            ],
            "disk_gb": 0,
            "force_create": false,
            "force_recreate_on_change_of": null,
            "full_clone": true,
            "guest_agent_ready_timeout": 100,
            "hagroup": "",
            "hastate": "",
            "hotplug": "network,disk,usb",
            "id": "prox61/qemu/119",
terraform plan
  # module.proxmox_vm.proxmox_vm_qemu.cloudinit-test["k4w1"] will be updated in-place
  ~ resource "proxmox_vm_qemu" "cloudinit-test" {
      + ciupgrade              = false
        id                     = "prox52/qemu/119"
        name                   = "k4w1"
      + skip_ipv4              = false
      + skip_ipv6              = false
        tags                   = null
        # (54 unchanged attributes hidden)

      ~ disk {
          + cache                = "writeback"
            id                   = 0
          + size                 = "100G"
          ~ slot                 = "ide2" -> "scsi0"
            # (26 unchanged attributes hidden)
        }
      - disk {
          - backup               = true -> null
          - cache                = "writeback" -> null
          - discard              = true -> null
          - emulatessd           = false -> null
          - format               = "raw" -> null
          - id                   = 0 -> null
          - iops_r_burst         = 0 -> null
          - iops_r_burst_length  = 0 -> null
          - iops_r_concurrent    = 0 -> null
          - iops_wr_burst        = 0 -> null
          - iops_wr_burst_length = 0 -> null
          - iops_wr_concurrent   = 0 -> null
          - iothread             = false -> null
          - linked_disk_id       = -1 -> null
          - mbps_r_burst         = 0 -> null
          - mbps_r_concurrent    = 0 -> null
          - mbps_wr_burst        = 0 -> null
          - mbps_wr_concurrent   = 0 -> null
          - passthrough          = false -> null
          - readonly             = false -> null
          - replicate            = true -> null
          - size                 = "100G" -> null
          - slot                 = "scsi0" -> null
          - storage              = "ceph3" -> null
          - type                 = "disk" -> null
            # (5 unchanged attributes hidden)
        }

Plan: 0 to add, 3 to change, 0 to destroy.
╷
│ Warning: Failed to decode resource from state
│ 
│ Error decoding "module.proxmox_vm.proxmox_vm_qemu.cloudinit-test[\"k4w1\"]" from prior state: unsupported attribute
│ "file"

Why did I get two conflicting configurations for the same disk?

          ~ slot                 = "ide2" -> "scsi0"
          - slot                 = "scsi0" -> null

But when I declare the disk block in the module like this, I get this kind of error:

│ Error: Missing required argument
│ 
│   on ../modules/proxmox-vm/main.tf line 38, in resource "proxmox_vm_qemu" "cloudinit-test":
│   38:   disk {
│ 
│ The argument "slot" is required, but no definition was found.
╵
╷
│ Error: Unsupported block type
│ 
│   on ../modules/proxmox-vm/main.tf line 46, in resource "proxmox_vm_qemu" "cloudinit-test":
│   46:     scsi {
│ 
│ Blocks of type "scsi" are not expected here.

How do I properly add the disk block to the state file?

@Tinyblargon
Collaborator

@iazamat16 The reason for the error is that we have both disk and disks. The singular disk block can appear in the config multiple times, since each block configures only a single item; in the plural disks block, all the items are nested as sub-blocks. disk is mostly used for dynamic configuration, while disks is meant for static configuration. Because disks has a more pronounced schema, it gets better input validation.
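As a sketch of the two forms (attribute and block names are taken from the plan output and the config snippets in this thread; the exact schema depends on the provider version):

```terraform
# Singular, repeatable form: the slot is an attribute,
# and the block may appear multiple times.
disk {
  slot    = "scsi0"
  type    = "disk"
  storage = "ceph3"
  size    = "20G"
}

# Plural, static form: the slot is expressed through nesting,
# and the whole disks block appears once.
disks {
  scsi {
    scsi0 {
      disk {
        storage = "ceph3"
        size    = "20G"
      }
    }
  }
}
```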

@iazamat16
Author

@Tinyblargon thank you for your help.

But I still don't get how I need to convert the disk block in the Terraform state so that the resources are not recreated.

I changed the disk block declaration in the module as follows:

  disks {
    scsi {
      scsi0 {
        disk {
          backup     = var.disks[each.key].disk_backup
          emulatessd = var.disks[each.key].disk_emulatessd
          iothread   = var.disks[each.key].disk_iothread
          replicate  = var.disks[each.key].disk_replicate
          storage    = var.disks[each.key].disk_storage
        }
      }
    }
  }
I tried to edit the disk block in the state file like this:
            "disks": {
              "scsi": {
                "scsi0": {
                  "disk": {
                    "cache": "none",
                    "file": "vm-103-disk-0",
                    "format": "raw",
                    "iothread": false,
                    "mbps": 0,
                    "mbps_rd": 0,
                    "mbps_rd_max": 0,
                    "mbps_wr": 0,
                    "mbps_wr_max": 0,
                    "media": "",
                    "replicate": false,
                    "size": "200G",
                    "slot": "scsi0",
                    "emulatessd": false,
                    "storage": "local-lvm",
                    "storage_type": "lvmthin",
                    "type": "cloudinit",
                    "volume": "local-lvm:vm-103-disk-0"
                  }
                }
              }
            },

Can you share an example snippet of the Terraform state file?

@Tinyblargon
Collaborator

@iazamat16 Do consider that it will always say it's going to delete the disks. There is a protection flag in both the provider and PVE; setting it to true disallows deletion of disks. Just to be safe, I'd set it in both Terraform and PVE. Please test this on a backup of the VM first.
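On the Terraform side, the flag could be set like this (a sketch; `protection` is the attribute name I'd expect from the PVE VM option of the same name, so confirm it in the provider docs for your version, and enable the matching "Protection" option on the VM in PVE as well):

```terraform
resource "proxmox_vm_qemu" "cloudinit-test" {
  # ... existing arguments ...

  # Assumed attribute name mirroring PVE's "Protection" VM option;
  # when true, the VM and its disks cannot be deleted.
  protection = true
}
```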

@iazamat16
Copy link
Author

Thanks, I will add the protection flag before running terraform apply against the remote state.

But could you still show or explain how to correctly declare resources under the new schema, especially the disk block?
