```hcl
source_file = "${path.cwd}/main.py"    # Path to the current working directory
source_file = "${path.root}/main.py"   # Path to the root module
source_file = "${path.module}/main.py" # Path to the current module
```
- https://www.terraform.io/docs/configuration/interpolation.html#element-list-index-
- If `index` is greater than `len(list)`, `index` modulo `len(list)` is used

```hcl
vars {
  ip = "${element(aws_instance.cluster.*.private_ip, count.index)}"
}
```
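A quick sketch of the wrap-around behavior (the `output` name is made up for illustration; `element` and `list` are real interpolation functions):

```hcl
# 4 mod len(["a", "b", "c"]) = 4 mod 3 = 1, so index 4 wraps to "b"
output "wrapped" {
  value = "${element(list("a", "b", "c"), 4)}"
}
```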
There was a problem when I defined multiple `aws_eip`s associated with `aws_instance`s.
```hcl
resource "aws_instance" "foo" {
  count = 10
  # ...
}

resource "aws_eip" "bar" {
  count    = 10
  instance = "${element(aws_instance.foo.*.id, count.index)}"
}
```
Terraform plans to change all of the associations when I only change the `count`.

To work around this, use `ignore_changes`:
```hcl
resource "aws_eip" "bar" {
  count    = 10
  instance = "${element(aws_instance.foo.*.id, count.index)}"

  lifecycle {
    ignore_changes = ["instance"]
  }
}
```
- https://www.terraform.io/docs/provisioners/null_resource.html
- Allows running provisioners that are not directly associated with a single existing resource
```hcl
provisioner "local-exec" {
  command = "run.sh ${var.args}"
}
```
```hcl
connection {
  type        = "ssh"
  user        = "ubuntu"
  private_key = "${file(var.key_path)}"
}

provisioner "remote-exec" {
  inline = [
    "curl -sSL https://get.docker.com/ | sh",
  ]
}
```
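Provisioners like these can be attached to a `null_resource` itself; a minimal sketch, assuming a hypothetical `aws_instance.cluster` with a `count`:

```hcl
resource "null_resource" "cluster_setup" {
  # re-run the provisioner whenever the cluster membership changes
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
  }

  provisioner "local-exec" {
    command = "echo '${join(",", aws_instance.cluster.*.private_ip)}' > ips.txt"
  }
}
```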
- https://www.terraform.io/docs/providers/archive/d/archive_file.html
- Useful for provisioning resources that require zip files.
```hcl
data "archive_file" "code" {
  type        = "zip"
  source_file = "${path.module}/main.py"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "main" {
  function_name    = "foo"
  filename         = "${data.archive_file.code.output_path}"
  source_code_hash = "${data.archive_file.code.output_base64sha256}"
  # ...
}
```
- https://www.terraform.io/docs/providers/template/index.html
- Use `$$` in `template` to escape `$`
```hcl
data "template_file" "curl" {
  count    = "${var.count}"
  template = "curl http://$${ip}"

  vars {
    ip = "${element(aws_instance.cluster.*.private_ip, count.index)}"
  }
}
```
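To actually consume the result, reference the `rendered` attribute; a sketch (the `output` name is made up):

```hcl
output "curl_commands" {
  # one rendered "curl http://<ip>" command per instance
  value = "${data.template_file.curl.*.rendered}"
}
```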
terraform plan
terraform plan -var 'access_key=foo' -var 'secret_key=bar'
terraform plan -var 'amis={us-east-1 = "foo", us-west-2 = "bar"}'
terraform plan -out=my.plan
terraform apply
terraform apply 'my.plan'
terraform import aws_instance.main i-abcd1234
# from ./terraform.tfstate:aws_instance.main
# to new/terraform.tfstate:aws_instance.server
terraform state mv -state-out new/terraform.tfstate \
aws_instance.main \
aws_instance.server
terraform taint aws_instance.main
terraform taint -module=my_module aws_instance.main
- All `.tf` files are loaded
- `.tf` files are declarative, so the order of loading files doesn't matter, except for override files
- Override files are `.tf` files named `override.tf` or `{name}_override.tf`
- Override files are loaded last, in alphabetical order
- Configurations in override files are merged into the existing configuration, not appended.
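A minimal sketch of the merge behavior, with hypothetical file and resource names:

```hcl
# main.tf
resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"
}

# override.tf -- merged into aws_instance.web: only 'ami' is overridden,
# 'instance_type' is kept from main.tf
resource "aws_instance" "web" {
  ami = "ami-87654321" # placeholder AMI
}
```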
- Resources are infrastructure managed by `terraform`
- Data sources are not managed by `terraform`
The use cases of these are as follows:
You can provision servers by defining them as resources.
For specifying server configurations, you can reference existing security groups, VPCs, and the like by defining them as data sources.
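For example, an existing VPC can be read without being managed; a sketch with a placeholder ID:

```hcl
# read-only lookup of a VPC that terraform does not manage
data "aws_vpc" "existing" {
  id = "vpc-12345678" # placeholder ID
}

output "vpc_cidr" {
  value = "${data.aws_vpc.existing.cidr_block}"
}
```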
- State describes the real managed infrastructure
- Stored in `terraform.tfstate` by default
- Formatted in `json`
- While Terraform files describe the infrastructure *to be*, the state file describes it *as is*
- State is refreshed before performing most operations, like `terraform plan` and `terraform apply`
- Basic modifications can be done through `terraform state [sub]` commands
- Importing existing infrastructure can be done using `terraform import`
- Importing is related to `resources`, not `data sources`
  - Which means `terraform` can destroy the existing infrastructure once it is imported
- A file named `terraform.tfvars` is automatically loaded
- Use the `-var-file` flag to specify other `.tfvars` files
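A sketch of how the two mechanisms fit together (file names and values are made up):

```hcl
# terraform.tfvars -- loaded automatically
access_key = "foo"
secret_key = "bar"
```

Any other `.tfvars` file must be passed explicitly, e.g. `terraform plan -var-file=prod.tfvars`.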
```
[module path][resource spec]

module.A.module.B.module.C...
resource_type.resource_name[N]
```
```hcl
resource "aws_instance" "web" {
  # ...
  count = 4
}
```

`aws_instance.web[3]` addresses the fourth instance; `aws_instance.web` addresses all of them.
```hcl
${self.private_ip_address} # attributes of their own
${aws_instance.web.id}
${aws_instance.web.0.id}   # a specific one when the resource is plural ('count' attribute exists)
${aws_instance.web.*.id}   # this is a list
${module.foo.bar}          # outputs from a module
# ... and many more, including some functions
```
- https://www.terraform.io/docs/modules/create.html
- When you run `terraform apply`, the current working directory holding the Terraform files is called the root module.
- With local file paths, Terraform will create a symbolic link to the original directory. Therefore, any changes are automatically available.
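A minimal sketch of referencing a local module (the path and names are assumptions):

```hcl
# root module: references a local module by relative path;
# terraform symlinks it, so edits under ./modules/foo show up immediately
module "foo" {
  source = "./modules/foo" # hypothetical local path
}

# consume one of the module's outputs
output "bar" {
  value = "${module.foo.bar}"
}
```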
For now, you can't use interpolation that references other resources to specify `count`, because of the way that Terraform handles `count`.
```hcl
variable "my_count" {
  default = 10
}

resource "something" "foo" {
  count = "${var.my_count}" # ok
}

resource "something" "bar" {
  count = "${something.foo.count}" # error
}
```
> We should definitely do this, the tricky part comes from the fact that count expansion is currently done statically, before the primary graph walk, which means we can’t support “computed” counts right now. (A “computed” value in TF is one that’s flagged as not known until all its dependencies are calculated.)
- hashicorp/terraform#7705
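A common workaround is to thread the same variable through both resources, so every `count` is known statically; a sketch reusing the names above:

```hcl
# workaround: both counts reference the same variable, which is
# known before the graph walk -- no computed value involved
resource "something" "bar" {
  count = "${var.my_count}"
}
```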
- The type of most mapping arguments is actually a list of maps
```hcl
variable "cluster_config" {
  type = "map"
}

resource "aws_elasticsearch_domain" "main" {
  cluster_config = "${var.cluster_config}" # Not supported
}
```
Because the actual schema is:

```go
"cluster_config": {
    Type:     schema.TypeList,
    Optional: true,
    Computed: true,
    Elem: &schema.Resource{
        Schema: map[string]*schema.Schema{
```
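Because the schema is a list of maps, the supported way to set it is the block syntax, which HCL turns into a single-element list of maps (values are placeholders):

```hcl
resource "aws_elasticsearch_domain" "main" {
  domain_name = "foo"

  # block syntax => a list containing one map, matching schema.TypeList
  cluster_config {
    instance_type  = "t2.small.elasticsearch" # placeholder
    instance_count = 1
  }
}
```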
Terraform currently doesn't support changing the instance type without destroying and recreating the instance.

To change the instance type without destroying the instance, we should change it manually and then sync the change with the Terraform state.

For syncing, you have to remove its state from the `tfstate` and `import` it again.
Follow the steps below:
- Stop the target instance and change its instance type to what you desire.
- Update your `.tf` file to match what you changed:

```hcl
resource "aws_instance" "my_instance" {
  # (...)
  instance_type = "t2.micro" # as you changed at step 1
  # (...)
}
```
- Verify it with `terraform plan`:

```
$ terraform plan
(...)
No changes. Infrastructure is up-to-date. This means that Terraform
could not detect any differences between your configuration and the
real physical resources that exist. As a result, Terraform doesn't
need to do anything.
```
If `terraform plan` shows some unexpected changes, you can just remove the instance from the tfstate and re-import it.
- Remove the instance from `terraform.tfstate`:

```
$ terraform state rm aws_instance.my_instance
```
- Import your instance:

```
$ terraform import aws_instance.my_instance i-abcdefg012345678
```