cluster destroy fails with instanceconfiguration in use #939

Open
atimgraves opened this issue Jul 3, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@atimgraves

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version and Provider Version

Terraform v1.5.7
on linux_amd64

Affected Resource(s)

oci_core_instance_configuration (created by the module for the self-managed instance pool)

Terraform Configuration Files

module "oke" {
  source  = "oracle-terraform-modules/oke/oci"
  version = "5.1.7"
  compartment_id="ocid1.compartment.oc1..<ocid>"
  tenancy_id="ocid1.tenancy.oc1..<ocid>"
  home_region="eu-frankfurt-1"
  region="uk-london-1"
  cluster_name="alstom"
  kubernetes_version = "v1.29.1"
  output_detail = true
  create_bastion = false
  create_operator = false
  worker_pool_size= 1
  worker_image_id   = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaa643udgvi33hqaxkfx2petd5bmew76jaxvbwhdrjzksyrlwzatwea"
  worker_image_type = "custom"
  worker_pools = {
    oke-vm-standard = {
       size             = 1,
       shape            = "VM.Standard.E4.Flex",
       ocpus            = 1,
       memory           = 4,
     },
    oke-vm-instance-pool = {
      description = "Self-managed Instance Pool",
      mode        = "instance-pool",
      size        = 1,
      shape            = "VM.Standard.E4.Flex",
      ocpus            = 1,
      memory           = 4,
      image = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaa643udgvi33hqaxkfx2petd5bmew76jaxvbwhdrjzksyrlwzatwea"
      node_labels = {
        "np" = "self",
        "managaed" = "self"
      } 
    }
  }
providers = {
    oci      = oci.frankfurt
    oci.home = oci.home
  }
}
terraform {
  required_providers {
    oci = {
      source = "oracle/oci"
    }
  }
  required_version = ">= 1.0.0"
}

Debug Output

Panic Output

Expected Behavior

It should have deleted the pool (and its instance configuration) and then the cluster.

Actual Behavior

I suspect either a sequencing error, where the instance pool and instance configuration deletions are done in the wrong order, or a timing issue, where the node pool delete should be given a delay before the instance configuration delete is attempted:
Error: 409-Conflict, The Instance Configuration ocid1.instanceconfiguration.oc1.eu-frankfurt-1.aaaaaaaanibtpp2u2fzvvhsuzpelcooagvqd652orbaieb4vj3k2m736aiqa is associated to one or more Instance Pools.
│ Suggestion: The resource is in a conflicted state. Please retry again or contact support for help with service: Core Instance Configuration
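
If the missing piece really is a grace period, one possible shape for a fix is a time_sleep with destroy_duration placed between the instance pool and its instance configuration, so that on destroy Terraform removes the pool, waits, and only then attempts to delete the instance configuration. The sketch below is illustrative only: the pool and configuration are actually created inside the module, so this pattern would have to be applied to the module's own resources, and the resource names, variables, and the 120s value are all placeholders.

terraform {
  required_providers {
    oci  = { source = "oracle/oci" }
    time = { source = "hashicorp/time" }
  }
}

# Illustrative stand-in for the instance configuration the module creates;
# required instance_details are omitted because only the destroy ordering
# matters for this sketch.
resource "oci_core_instance_configuration" "pool" {
  compartment_id = var.compartment_id
  display_name   = "pool-instance-configuration"
}

# Destroyed after the instance pool but before the instance configuration,
# so destroy_duration becomes a grace period for OCI to release the
# pool/configuration association.
resource "time_sleep" "pool_teardown_delay" {
  destroy_duration = "120s"
  depends_on       = [oci_core_instance_configuration.pool]
}

# Illustrative stand-in for the module's self-managed instance pool.
resource "oci_core_instance_pool" "workers" {
  compartment_id            = var.compartment_id
  instance_configuration_id = oci_core_instance_configuration.pool.id
  size                      = 1

  placement_configurations {
    availability_domain = var.availability_domain # hypothetical variable
    primary_subnet_id   = var.worker_subnet_id    # hypothetical variable
  }

  # Because the pool depends on the sleep, the destroy order is:
  # pool -> (wait destroy_duration) -> instance configuration.
  depends_on = [time_sleep.pool_teardown_delay]
}

As a stopgap, simply re-running terraform destroy after the 409 may also succeed once the pool deletion has fully completed on the OCI side.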

Steps to Reproduce

Ran terraform destroy.

Important Factoids

Running in the OCI Cloud Shell (on the OCI services network).

References

@atimgraves atimgraves added the bug Something isn't working label Jul 3, 2024