This repository has been archived by the owner on Dec 13, 2023. It is now read-only.

Abnormality observed during update of workers in the kubify cluster #66

Open
pyogesh2 opened this issue Feb 7, 2019 · 1 comment
Labels
lifecycle/rotten Nobody worked on this for 12 months (final aging stage)

Comments


pyogesh2 commented Feb 7, 2019

I have set up the cluster using Kubify. Initially, the worker VM type was m4.large by default, with 3 master nodes and 10 worker nodes.
We want the VM type to be m4.4xlarge. I updated the file https://github.com/gardener/kubify/blob/master/modules/vms/versions.tf with the VM type m4.4xlarge and increased the worker nodes to 12.
I then executed terraform init variant, terraform plan variant and terraform apply variant. Checking afterwards, 2 master nodes and 2 worker nodes were of type m4.4xlarge, while the remaining 1 master node and 10 worker nodes were still of type m4.large.
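
For reference, the exact commands run were:

terraform init variant
terraform plan variant
terraform apply variant

The plan output contained entries like the following: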

~ module.instance.module.worker.module.vms.aws_instance.nodes[4]
      instance_type:                             "m4.large" => "m4.4xlarge"

  + module.instance.module.worker.module.vms.aws_instance.nodes[6]
      id:                                        <computed>
      ami:                                       "ami-b5742acf"
      arn:                                       <computed>
      associate_public_ip_address:               <computed>
      availability_zone:                         "us-east-1b"
      cpu_core_count:                            <computed>
      cpu_threads_per_core:                      <computed>
      disable_api_termination:                   "false"
      ebs_block_device.#:                        <computed>
      ephemeral_block_device.#:                  <computed>
      get_password_data:                         "false"
      host_id:                                   <computed>
      iam_instance_profile:                      "perfgardener-eval-worker"
      instance_state:                            <computed>
      instance_type:                             "m4.large"
      ipv6_address_count:                        <computed>
      ipv6_addresses.#:                          <computed>
      key_name:                                  "perfgardener-eval"
      network_interface.#:                       <computed>
      network_interface_id:                      <computed>
      password_data:                             <computed>
      placement_group:                           <computed>

Ideally, all the master and worker nodes should be of type m4.4xlarge as per the configuration, but that is not happening. Kindly check. Thanks!

Contributor

afritzler commented Feb 9, 2019

I would recommend updating the worker/master flavors via the terraform.tfvars file:

master = {
  count=3
  flavor_name="medium_4_8"
}

worker = {
  count=3
  flavor_name="medium_2_4"
}
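
For the AWS setup described above, the same structure should carry the desired instance type; this is only a sketch and assumes the flavor_name key is also used to select the EC2 instance type in your variant (please double-check against your variant's variables):

master = {
  count=3
  flavor_name="m4.4xlarge"
}

worker = {
  count=12
  flavor_name="m4.4xlarge"
}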

We also have a rolling mechanism in place, so on the first terraform apply only the first master and the first worker are updated. You then need to keep running terraform apply until all the machines have been replaced.
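
A minimal shell sketch of that loop, assuming the variant directory is the Terraform target as in your report (terraform plan -detailed-exitcode exits with 2 while changes are still pending and with 0 once everything has converged):

terraform apply variant
while terraform plan -detailed-exitcode variant > /dev/null; test $? -eq 2; do
  terraform apply variant   # replaces the next master/worker in line
done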
