This module allows creating a managed instance group supporting one or more application versions via instance templates. Optionally, a health check and an autoscaler can be created, and the managed instance group can be configured to be stateful.
This module can be coupled with the compute-vm module, which can manage instance templates, and with the net-lb-int module, to assign the MIG to a backend wired to an Internal Load Balancer. The first use case is shown in the examples below. Stateful disks can be created directly, as shown in the last example below.
This example shows how to manage a simple MIG that leverages the compute-vm module to manage the underlying instance template. The following sub-examples only show how to enable specific features of this module, and don't replicate the full combined setup.
```hcl
module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "${var.region}-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test"
  target_size       = 2
  instance_template = module.nginx-template.template.self_link
}
# tftest modules=2 resources=2 inventory=simple.yaml e2e
```
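For a regional MIG, `location` can be set to a region instead of a zone, and the optional `distribution_policy` variable can pin the zones instances are spread across. A minimal sketch; the `target_shape` and `zones` attribute names are assumptions mirroring the underlying regional instance group manager resource, so check the variables table for the exact schema:

```hcl
# Hypothetical regional variant of the MIG above; distribution_policy
# attribute names are assumptions, verify against the variables table.
module "nginx-mig-regional" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = var.region
  name              = "mig-test-regional"
  target_size       = 2
  instance_template = module.nginx-template.template.self_link
  distribution_policy = {
    target_shape = "EVEN"
    zones        = ["${var.region}-b", "${var.region}-c"]
  }
}
```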
If multiple versions are desired, define additional compute-vm module instances for the extra templates used by each version (not shown here), and reference them like this:
```hcl
module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "${var.region}-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test"
  target_size       = 3
  instance_template = module.nginx-template.template.self_link
  versions = {
    canary = {
      instance_template = module.nginx-template.template.self_link
      target_size = {
        fixed = 1
      }
    }
  }
}
# tftest modules=2 resources=2 inventory=multiple.yaml e2e
```
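In a real canary setup the extra version would point at its own template rather than reusing the default one. A sketch of a second compute-vm module instance; the module name and the `cos-beta` image family are illustrative choices, not part of the example above:

```hcl
# Hypothetical second template for the canary version, tracking a
# different image family; reference its self link from the canary
# version's instance_template attribute in the MIG module.
module "nginx-template-canary" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template-canary"
  zone       = "${var.region}-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-beta"
    }
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}
```

The canary version would then set `instance_template = module.nginx-template-canary.template.self_link`.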
Autohealing policies can use an externally defined health check, or have this module auto-create one:
```hcl
module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "${var.region}-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test"
  target_size       = 3
  instance_template = module.nginx-template.template.self_link
  auto_healing_policies = {
    initial_delay_sec = 30
  }
  health_check_config = {
    enable_logging = true
    http = {
      port = 80
    }
  }
}
# tftest modules=2 resources=3 inventory=health-check.yaml e2e
```
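To use an externally defined health check instead, skip `health_check_config` and point the auto healing policy at the existing check's self link. A sketch, assuming `auto_healing_policies` accepts a `health_check` attribute (as in the underlying `google_compute_instance_group_manager` resource); the `hc` resource and module names are illustrative:

```hcl
# Hypothetical externally managed health check; resource and module
# names are illustrative.
resource "google_compute_health_check" "hc" {
  project = var.project_id
  name    = "nginx-hc"
  http_health_check {
    port = 80
  }
}

module "nginx-mig-ext-hc" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test"
  target_size       = 3
  instance_template = module.nginx-template.template.self_link
  auto_healing_policies = {
    health_check      = google_compute_health_check.hc.self_link
    initial_delay_sec = 30
  }
}
```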
The module can create and manage an autoscaler associated with the MIG. When using autoscaling, do not set the `target_size` variable, or set it to `null`. The example below shows a CPU utilization autoscaler; the other available modes are load balancing utilization and custom metric, mirroring the underlying autoscaler resource.
```hcl
module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "${var.region}-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test"
  target_size       = 3
  instance_template = module.nginx-template.template.self_link
  autoscaler_config = {
    max_replicas    = 3
    min_replicas    = 1
    cooldown_period = 30
    scaling_signals = {
      cpu_utilization = {
        target = 0.65
      }
    }
  }
}
# tftest modules=2 resources=3 inventory=autoscaling.yaml e2e
```
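The other scaling signals follow the same shape. For example, a load balancing utilization target might look like the fragment below; the `load_balancing_utilization` attribute name is an assumption based on the underlying `google_compute_autoscaler` resource, so verify it against the variables reference:

```hcl
# fragment: drop-in replacement for autoscaler_config in the example
# above; the load_balancing_utilization attribute name is an assumption.
autoscaler_config = {
  max_replicas    = 3
  min_replicas    = 1
  cooldown_period = 30
  scaling_signals = {
    load_balancing_utilization = {
      target = 0.8
    }
  }
}
```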
The module also supports configuring the MIG's update policy, which controls how instances are replaced when the template changes:

```hcl
module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "${var.region}-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test"
  target_size       = 3
  instance_template = module.nginx-template.template.self_link
  update_policy = {
    minimal_action = "REPLACE"
    type           = "PROACTIVE"
    min_ready_sec  = 30
    max_surge = {
      fixed = 1
    }
  }
}
# tftest modules=2 resources=2 inventory=policy.yaml e2e
```
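Surge-based rollouts can alternatively be bounded by an unavailability budget. The fragment below assumes the `update_policy` variable also exposes a `max_unavailable` attribute with the same `fixed`/`percent` shape as `max_surge`, mirroring the underlying instance group manager resource; check the variables reference before relying on it:

```hcl
# fragment: alternative update_policy for the example above; the
# max_unavailable attribute and its percent form are assumptions.
update_policy = {
  minimal_action = "REPLACE"
  type           = "PROACTIVE"
  min_ready_sec  = 30
  max_unavailable = {
    percent = 30
  }
}
```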
Stateful MIGs have some limitations, documented in the official stateful MIG documentation. Enforcing these requirements is the responsibility of the users of this module.

You can mark a disk defined in the instance template as stateful for all instances in the MIG through the group's stateful policy, using the `stateful_disks` variable. Alternatively, you can configure stateful persistent disks individually per instance of the MIG by setting the `stateful_config` variable. A discussion of these scenarios can be found in the docs.

An example using only the configuration at the MIG level can be seen below. Note that the stateful disk is referenced by `device_name`, not `disk_name`. Specifying an existing disk in the template (and in the stateful config) only allows a single instance to be managed by the MIG, typically coupled with an autohealing policy (shown in the examples above).
```hcl
module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source        = "./fabric/modules/compute-vm"
  project_id    = var.project_id
  name          = "nginx-template"
  zone          = "${var.region}-b"
  tags          = ["http-server", "ssh"]
  instance_type = "e2-small"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  attached_disks = [{
    source_type = "attach"
    name        = "data-1"
    size        = 10
    source      = google_compute_disk.test-disk.name
  }]
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test-2"
  target_size       = 1
  instance_template = module.nginx-template.template.self_link
  stateful_disks = {
    data-1 = false
  }
}
# tftest modules=2 resources=3 fixtures=fixtures/attached-disks.tf inventory=mig-config.yaml e2e
```
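The `google_compute_disk.test-disk` referenced above is defined in the test fixture (`fixtures/attached-disks.tf`). A minimal sketch of what such a resource might look like; the exact attributes in the fixture are assumptions:

```hcl
# Illustrative stand-in for the fixtures/attached-disks.tf fixture;
# name, zone, and sizing are assumptions.
resource "google_compute_disk" "test-disk" {
  project = var.project_id
  name    = "data-1"
  zone    = "${var.region}-b"
  size    = 10
}
```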
Here is an example defining the stateful config at the instance level. As above, specifying an existing disk in the template (and in the stateful config) only allows a single instance to be managed by the MIG, typically coupled with an autohealing policy.
```hcl
module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source        = "./fabric/modules/compute-vm"
  project_id    = var.project_id
  name          = "nginx-template"
  zone          = "${var.region}-b"
  tags          = ["http-server", "ssh"]
  instance_type = "e2-small"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  attached_disks = [{
    source_type = "attach"
    name        = "data-1"
    size        = 10
    source      = google_compute_disk.test-disk.name
  }]
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-b"
  name              = "mig-test"
  instance_template = module.nginx-template.template.self_link
  stateful_config = {
    instance-1 = {
      minimal_action                 = "NONE"
      most_disruptive_allowed_action = "REPLACE"
      preserved_state = {
        disks = {
          data-1 = {
            source = google_compute_disk.test-disk.id
          }
        }
        metadata = {
          foo = "bar"
        }
      }
    }
  }
}
# tftest modules=2 resources=4 fixtures=fixtures/attached-disks.tf inventory=stateful.yaml e2e
```
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| instance_template | Instance template for the default version. | `string` | ✓ |  |
| location | Compute zone or region. | `string` | ✓ |  |
| name | Managed group name. | `string` | ✓ |  |
| project_id | Project id. | `string` | ✓ |  |
| all_instances_config | Metadata and labels set to all instances in the group. | `object({…})` |  | `null` |
| auto_healing_policies | Auto-healing policies for this group. | `object({…})` |  | `null` |
| autoscaler_config | Optional autoscaler configuration. | `object({…})` |  | `null` |
| default_version_name | Name used for the default version. | `string` |  | `"default"` |
| description | Optional description used for all resources managed by this module. | `string` |  | `"Terraform managed."` |
| distribution_policy | Distribution policy for regional MIG. | `object({…})` |  | `null` |
| health_check_config | Optional auto-created health check configuration. Use the output self link to set it in the auto healing policy. Refer to examples for usage. | `object({…})` |  | `null` |
| named_ports | Named ports. | `map(number)` |  | `null` |
| stateful_config | Stateful configuration for individual instances. | `map(object({…}))` |  | `{}` |
| stateful_disks | Stateful disk configuration applied at the MIG level to all instances, in device name => on permanent instance delete rule as boolean. | `map(bool)` |  | `{}` |
| target_pools | Optional list of URLs for target pools to which new instances in the group are added. | `list(string)` |  | `[]` |
| target_size | Group target size, leave null when using an autoscaler. | `number` |  | `null` |
| update_policy | Update policy. Minimal action and type are required. | `object({…})` |  | `null` |
| versions | Additional application versions, target_size is optional. | `map(object({…}))` |  | `{}` |
| wait_for_instances | Wait for all instances to be created/updated before returning. | `object({…})` |  | `null` |
| name | description | sensitive |
|---|---|:---:|
| autoscaler | Auto-created autoscaler resource. |  |
| group_manager | Instance group resource. |  |
| health_check | Auto-created health-check resource. |  |
| id | Fully qualified group manager id. |  |