aws_ecs_task_definition and continuous delivery to ecs #632
I have run into this issue as well. I think the solution I am going to go with is to not have the task definition be managed by Terraform. CircleCI has a blog post about how to push a new task definition via a script they provide. I agree that the ability to set the revision would be useful.
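That kind of deploy script boils down to a handful of AWS CLI calls. A minimal sketch, not the CircleCI script itself; family, cluster, and service names are placeholders, and it assumes a single-container task:

```bash
#!/bin/bash
set -euo pipefail
IMAGE="$1"  # e.g. <account>.dkr.ecr.<region>.amazonaws.com/my-app:abc123

# Take the current container definitions and swap in the new image.
# NB: this carries over only the container definitions; roles, volumes,
# etc. would need copying too.
containers=$(aws ecs describe-task-definition --task-definition my-app \
  | jq --arg img "$IMAGE" '.taskDefinition.containerDefinitions | .[0].image = $img')

revision=$(aws ecs register-task-definition --family my-app \
  --container-definitions "$containers" \
  | jq -r '.taskDefinition.revision')

aws ecs update-service --cluster my-cluster --service my-app-service \
  --task-definition "my-app:$revision"
```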
I've gotten around this by using …
@JDiPierro While using the `taint` approach, am I right in assuming the old tasks will be killed before the new ones are launched, causing a break in service?
@naveenb29 Nope. I believe that would be the case if you were tainting the ECS Service. Since just the task-def is being recreated, the ECS service is updated, causing the new tasks to deploy. ECS waits for them to become healthy and then kills the old containers.
Our CI process is tagging the image as it's pushed to ECR and then passing that tag to the task definition. This automatically leads to the task definition changing, so Terraform knows to recreate it; that's linked to the ECS service, causing it to update the service with the new task definition. I think I'm missing the issue that others are having here.

That said, I'd like for Terraform to (optionally) wait for the deployment to complete: the new task definition has running tasks equal to the desired count and, potentially, the old task definitions have been deregistered so no new traffic will reach them. I can't see a nice way to get at that information using the API, though, as the events don't really expose enough information: you'd probably have to describe the old task definition, find the running tasks using it, find the ports they were running on, and check that the ALB they are registered to has all of those ports set as draining.

For now I'm simply shelling out and waiting until the PRIMARY service deployment has a running count equal to the desired count (which doesn't catch the short period between PRIMARY tasks being registered and old tasks being deregistered), or waiting until the deployment list has a length of 1 (all old task definitions have been completely drained, which is overkill, as new connections won't arrive there, so the deployment can be considered complete before this).
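A sketch of that wait loop, with cluster and service names as placeholders:

```bash
# Poll until the PRIMARY deployment's runningCount matches its desiredCount.
until aws ecs describe-services --cluster my-cluster --services my-service \
  | jq -e '.services[0].deployments[]
           | select(.status == "PRIMARY")
           | .runningCount == .desiredCount' > /dev/null
do
  sleep 10
done
```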
Hi folks, labelling appropriately...
I was able to solve the inactive task definition issue with the example in the ECS task definition data source. You set up the ECS service resource to use the max revision of either what your Terraform resource has created or what the data source retrieves from the AWS console. The one downside to this is that if someone changes the task definition out-of-band, Terraform will not realign it to what's defined in code.
What do you guys think about having a remote backend (it's S3 in my case), and having your CI pipeline create the new task definition and change the .tfstate file directly to match it? For example, mine looks like this:

```
"aws_ecs_task_definition.backend_service": {
"type": "aws_ecs_task_definition",
"depends_on": [
"data.template_file.task_definition"
],
"primary": {
"id": "backend-service",
"attributes": {
"arn": "arn:aws:ecs:eu-west-1:REDACTED:task-definition/backend-service:8", // This could be changed manually
"container_definitions": "REDACTED",
"family": "backend-service",
"id": "backend-service",
"network_mode": "",
"placement_constraints.#": "0",
"revision": "8", // This could be increased manually
"task_role_arn": ""
},
"meta": {},
"tainted": false
},
"deposed": [],
"provider": ""
},
```

Couldn't we just change the arn and the revision, so that the next time terraform runs, it still thinks it has the "latest version" of the task definition in its state?
I'm not sure I understand the problem y'all are trying to solve here. Why not just use terraform to create the new task definition in the first place, and then your tf state is always consistent? Our setup is similar to what @tomelliff describes.
@chriswhelix Well, in my particular case, I have two separate repositories. One holds the terraform project, and it creates my ECS cluster, my services, and the initial task definition. The other one is for a specific service, and I'd like to have some CI/continuous delivery flow in place (using GitLab pipelines in my case) to "containerize" the project, push it to ECR, and trigger a service update on my ECS cluster. (Edit: as a reminder, currently, if we use the aws cli to do this as part of our CI workflow, then the next terraform run will overwrite the task def.)

So, when you say "use terraform to create the new task definition in the first place", are you implying that on our CI system, when pushing our service's code, we should also clone our terraform repo, change the variable that holds the image tag for that service, do a terraform apply, and commit + push to the TF repository?

tl;dr: Need a way to trigger service updates from any of our projects' build pipelines, without any user interaction with terraform.
@Esya what we do is that each project has in its build config the version of the terraform repo it is expecting to be deployed with. When the CI pipeline is ready to deploy, it pulls down the terraform repo using the git tag specified in the project build config, then runs terraform against that, providing the image tag it just wrote to the ECR repo as an input variable.

We don't write down the ECR image tag in the terraform repo; it must be provided each time terraform is run. That avoids simple code updates to projects requiring any change to the terraform repo.
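A sketch of what that CI step could look like; the repo URL and variable names are illustrative:

```bash
# TF_REPO_TAG comes from the project's build config; IMAGE_TAG was just
# pushed to ECR by an earlier pipeline step.
git clone --branch "$TF_REPO_TAG" https://example.com/org/terraform-repo.git infra
cd infra
terraform init
terraform apply -auto-approve -var "image_tag=$IMAGE_TAG"
```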
I'm using ecs-deploy in my deployment pipeline, and a terraform config that looks something like this:

```hcl
# Gets the CURRENT task definition from AWS, reflecting anything that's been deployed
# outside of Terraform (ie. CI builds).
data "aws_ecs_task_definition" "task" {
task_definition = "${aws_ecs_task_definition.main.family}"
}
# "Dummy" application for initial deployment
data "aws_ecr_repository" "sample" {
name = "sample"
}
# ECR Repo for our actual app
data "aws_ecr_repository" "main" {
name = "${var.ecr_name}"
}
resource "aws_ecs_task_definition" "main" {
family = "${var.name}"
task_role_arn = "${module.iam_roles.ecs_service_deployment_role_arn}"
container_definitions = <<DEFINITION
[
{
"name": "${var.name}",
"image": "${data.aws_ecr_repository.sample.repository_url}:latest",
"essential": true,
"portMappings": [{
"containerPort": ${var.container_port},
"hostPort": 0
}]
}
]
DEFINITION
}
resource "aws_ecs_service" "main" {
name = "${var.name}"
cluster = "${var.cluster}"
desired_count = 2
task_definition = "${aws_ecs_task_definition.main.family}:${max("${aws_ecs_task_definition.main.revision}", "${data.aws_ecs_task_definition.task.revision}")}"
iam_role = "${module.iam_roles.ecs_service_deployment_role_arn}"
}
```

During the initial deployment, Terraform deploys an "empty" container. When the CI pipeline runs, ecs-deploy creates a new task definition revision with the newly-built image/tag, and updates the service accordingly. Terraform recognizes these new deployments via the `aws_ecs_task_definition` data source and the `max()` of the two revisions. HOWEVER, if other parts of the task definition change, Terraform will redeploy the sample application, as it'll try to create a new revision of the task definition (using the config containing the sample application).

Hypothetically, something like this would avoid that:

```hcl
data "aws_ecs_task_definition" "task" {
task_definition = "${aws_ecs_task_definition.main.family}"
}
data "aws_ecs_container_definition" "task" {
task_definition = "${data.aws_ecs_task_definition.task.id}"
container_name = "${var.name}"
}
resource "aws_ecs_task_definition" "main" {
family = "${var.name}"
task_role_arn = "${module.iam_roles.ecs_service_deployment_role_arn}"
container_definitions = <<DEFINITION
[
{
"name": "${var.name}",
"image": "${data.aws_ecs_container_definition.task.image}"
}
]
DEFINITION
}
```

This creates a cycle, and won't work during the initial deployment. But it's very close to my ideal setup. If Terraform somehow supported a "get or create" data/resource hybrid, I'd be able to do almost exactly what I'm looking for.
@schmod you could possibly use a var to create a special bootstrapping mode, i.e. `count = var.bootstrapping ? 0 : 1` to turn on/off the data sources, and `coalesce(data.aws_ecs_container_definition.task.*.image, "fake_image")` on the task def.

I feel like if you're going to manage a particular resource with terraform, it's really best to make all modifications to it using terraform, though. If you solve this issue for container images, you're just going to have it again for service scaling, and again for environment changes, and again for anything else ecs-deploy does behind terraform's back. What we really need is good deployment tools that work with terraform instead of around it.
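A minimal, untested sketch of that toggle in the 0.11-era syntax used above (the splat is wrapped in `join` since `coalesce` takes plain strings; the fallback image name is an assumption):

```hcl
variable "bootstrapping" {
  default = false
}

# Both data sources are switched off for the very first apply.
data "aws_ecs_task_definition" "task" {
  count           = "${var.bootstrapping ? 0 : 1}"
  task_definition = "${var.name}"
}

data "aws_ecs_container_definition" "task" {
  count           = "${var.bootstrapping ? 0 : 1}"
  task_definition = "${join("", data.aws_ecs_task_definition.task.*.id)}"
  container_name  = "${var.name}"
}

resource "aws_ecs_task_definition" "main" {
  family = "${var.name}"

  container_definitions = <<DEFINITION
[
  {
    "name": "${var.name}",
    "image": "${coalesce(join("", data.aws_ecs_container_definition.task.*.image), "sample:latest")}",
    "essential": true
  }
]
DEFINITION
}
```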
Given the scope of what Terraform is allowed to do to my AWS resources, I'm rather apprehensive about running it in an automated/unmonitored environment. On the other hand, I can control exactly what ecs-deploy is going to do.

Infrastructure deployments and application deployments are very different in my mind. There's a fairly large and mature ecosystem around the latter, and I don't think that Terraform should need to reinvent that wheel. It should merely provide a configuration interface to tell it the exact set of changes that I expect those external tools to make. We already have a version of that in the form of the `ignore_changes` lifecycle block.
@schmod isn't the real issue what your build agent is permissioned to do? If your build agent has least privileges for the changes you actually want it to make, it shouldn't matter which tool makes them.

I agree that the interface between terraform and existing deployment tools seems like a generally awkward area. We've dealt with that mostly by just writing our own deployment scripts, in conjunction with what ECS provides out of the box. I'm not sure it's a problem that's solvable solely by changes to terraform, though; in this case, the fundamental problem is that there's no clean divide between the "infrastructure" part of ECS and the "application" part of ECS. That's really Amazon's fault, not terraform's.

There is a clean boundary at the cluster level -- i.e. it would be easy to have terraform manage all the backing instances for an ECS cluster, and another tool manage all the services and tasks running on the cluster. But if your basic philosophy is a strong divide between "infrastructure" and "applications", drawing that line right through the middle of a task definition creates much too complicated a boundary to easily manage.
Right. The problem is that (in my use-case, and probably most others) an application deployment should change exactly one parameter on the task definition (the `image`). It's difficult to draw a line around the task definition, however, because it contains a lot of other configuration that I'd really prefer to remain static (and managed by Terraform). This makes it unattractive to draw a clean boundary at the cluster level (and also leaves both your Service and Task Definition completely unmanaged by Terraform). As I mentioned earlier, the …
We share the same use case as most people are reporting here: our deployments are uniquely tagged, which requires a new task definition to update the ECS service on each deployment. This happens outside of the control of Terraform due to a variety of reasons which are not important to the issue at hand. Seems like we need the ability to ignore changes on the aws_ecs_service resource; we can't do that right now because TF doesn't support interpolations in lifecycle blocks and this resource is part of a shared module.
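For reference, outside a shared module the hardcoded version of that looks something like this (a sketch in the 0.11-era syntax used above; `lifecycle` accepts only literals, which is exactly the limitation described):

```hcl
resource "aws_ecs_service" "main" {
  name            = "${var.name}"
  cluster         = "${var.cluster}"
  desired_count   = 2
  task_definition = "${aws_ecs_task_definition.main.arn}"

  lifecycle {
    # Must be a literal list; Terraform does not interpolate lifecycle blocks.
    ignore_changes = ["task_definition"]
  }
}
```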
I worked around this by using a bash script in an External Data Source to return the current image for the container definition. If the script gets an error looking up the task definition, it assumes this is the initial infrastructure deployment and uses a default value.

```hcl
resource "aws_ecs_task_definition" "task" {
family = "${var.app}-${var.env}"
task_role_arn = "${aws_iam_role.app_role.arn}"
container_definitions = <<JSON
[
{
"name": "${var.app}",
"image": "${aws_ecr_repository.app_repo.repository_url}:${data.external.current_image.result["image_tag"]}"
}
]
JSON
}
data "external" "current_image" {
program = ["bash", "${path.module}/ecs-get-image.sh"]
query = {
app = "${var.app}"
cluster = "${var.cluster_id}"
}
}
```

ecs-get-image.sh:

```bash
#!/bin/bash
# This script retrieves the container image running in the current <app>-<env>
# If it can't get the image tag from AWS, assume this is the initial
# infrastructure deployment and default to "latest"
# Exit if any of the intermediate steps fail
set -e
# Get parameters from stdin
eval "$(jq -r '@sh "app=\(.app) cluster=\(.cluster)"')"
taskDefinitionID="$(aws ecs describe-services --services "$app" --cluster "$cluster" | jq -r .services[0].taskDefinition)"
# Default to "latest" if taskDefinition doesn't exist
if [[ ! -z "$taskDefinitionID" && "$taskDefinitionID" != "null" ]]; then {
taskDefinition="$(aws ecs describe-task-definition --task-definition $taskDefinitionID)"
containerImage="$(echo "$taskDefinition" | jq -r .taskDefinition.containerDefinitions[0].image)"
imageTag="$(echo "$containerImage" | awk -F':' '{print $2}')"
} else {
imageTag="latest"
}
fi
# Generate a JSON object containing the image tag
jq -n --arg imageTag "$imageTag" '{"image_tag":$imageTag}'
exit 0
```

It triggers a new task definition in Terraform when anything in the container_definition besides the image is changed, so we can still manage memory, cpu, etc. from Terraform, and it plays nicely with our CI (Jenkins), which pushes new images to ECR and creates new task definitions to point to those images. It may need some reworking to support running multiple containers in a single task.

Edit: If you are using the same image tag for every deployment (e.g. "latest", "stage") then this will revert to whatever task definition is in the state file. It doesn't break anything, but it is confusing. A workaround for this can be done by creating an external data source similar to this one that returns the current task definition running in AWS to the aws_ecs_service if the image tag hasn't changed.

Edit 2: This still does not support multiple containers in a single task definition. I also want to say thanks to @endofcake; I looked at your Python version and took a stab at rewriting my code in Python. I learned a lot, but ultimately stuck with bash because it's less likely to introduce dependency issues.
I've also used an external data source as a workaround. The main difference is that it's written in Python, supports multiple containers in the task definition, and does not fall back to `latest`. The script is here: … Here's a snippet of Terraform configuration that uses it: …
This solved the problem with Terraform trying to reset the image in the task definition to the one it knew about. However, after an app deployment which happens outside of Terraform, it still detects changes in the task definition the next time it runs. It then creates a new task revision, which triggers a bounce of the ECS service, essentially a redeployment of the same image version. I could find no way to prevent it from doing this so far.
After some investigation, it looks to me like the problem is caused by Terraform not knowing about the new task revision. Say, the last revision it knows about is …

This lack of clear separation between infrastructure and application deployments turns out to be rather problematic, and I'm not sure how we can work around it.
How has this not been resolved yet? ECS has been around for a while, and CI deployments outside of terraform seem like the standard operating procedure, and yet here I am still trying to get this new deployment working...
See also this approach, which looks more promising: |
Hi everyone, sorry but I am struggling to understand the problem most people are having, namely: why do you want to avoid a new task definition revision being created when the image has changed? Isn't that the standard way of deploying a new image to ECS? Or how are you doing it otherwise?
@codergolem, it's not about avoiding the new task definition; it's about making Terraform play nicely with changes which happen outside of Terraform, namely application deployments in ECS. Terraform is an infrastructure management tool, and it just doesn't cut it when it comes to application deployments, even if we bolt wait-conditions onto it. This really looks more like a problem with the AWS API than with Terraform, but so far I see no way to resolve this impedance mismatch cleanly.
@codergolem To put @endofcake's reply into context, let me provide our example: …

We do it this way for several reasons: …

Because of these reasons, it's very difficult to use terraform with a CI server when you want to specify the task definition structure within terraform, which I would argue is needed, since it needs references to the role created for the task and any other references being used for the infrastructure.
@blytheaw I'm wondering if you could use a null_resource with a provisioner to trigger the changes somehow? I have a similar problem and am just trying to find the best solution. We use GitLab, so I'm going to see if I can get the null_resource to trigger a GitLab pipeline when something changes by using a provisioner with curl.
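A sketch of that idea using GitLab's pipeline-trigger API; the project ID and trigger token variables are assumptions:

```hcl
resource "null_resource" "trigger_pipeline" {
  # Re-run whenever the task definition revision changes.
  triggers = {
    revision = "${aws_ecs_task_definition.main.revision}"
  }

  provisioner "local-exec" {
    command = "curl -s -X POST -F token=${var.gitlab_trigger_token} -F ref=master https://gitlab.com/api/v4/projects/${var.gitlab_project_id}/trigger/pipeline"
  }
}
```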
We also hit this problem. I agree with what others have said: ECS could make it easier to draw a line between infrastructure and application deployments. It has led to us doing something quite idiosyncratic where we'd prefer to follow convention. I think HashiCorp's answer to this may be the recently announced Waypoint. Right now Waypoint doesn't integrate enough with Terraform to meet our needs (i.e. task definitions reference secrets and resources created by Terraform).

There are a few ways we explored to fix this: a) deploy with Terraform, use …; b) …; c) …

We've gone for option C for now. This gives us a single source of truth for config (apart from docker …).

From an operator perspective the workflow looks like this: …
To add another benefit to deploying apps separately from infrastructure updates: deploy hooks. Sometimes you want to run a task, using the same task definition you use for apps, but for a different service (e.g. db migrations; pseudocommand sketched below).
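A hypothetical version of that pseudocommand using the AWS CLI; the cluster, family, container name, and command are all assumptions:

```bash
# Reuse the app's task definition, but override the command for a one-off
# migration task.
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-app \
  --overrides '{"containerOverrides": [{"name": "my-app", "command": ["rake", "db:migrate"]}]}'
```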
@bilbof so, when you have 40 services in this cluster, do you do this manually for every service? That is a bit of nonsense.
@sp3c1 thanks for your feedback. The procedure I described above is automated using a Concourse pipeline; releases are continuously deployed without manual intervention. Since you mention it… our design has changed to the following procedure, which gives us more control over release deployments (this is automated): a) ECS Services and other resources are managed by Terraform; b) …; c) …

I'd be keen to improve on this. If there is a generally accepted pattern for continuously deploying to ~100 ECS Services managed with Terraform, I'd like to adopt it.
After 4 years of following this issue, the consensus seems to be that no, there is not. You're (more or less) following the same compromise that most folks have landed on.
For a little while I managed one of the larger Terraform modules for ECS. Since EKS came, many issues stayed unresolved, and it was clear that AWS was leaving ECS for what it is. A big shame, because ECS is the 'exoskeleton' way of managing docker services, and from the beginning it was very close to perfection. As with many AWS products (Elastic Beanstalk, Amplify), they get to a level where they work well enough, but AWS never gives them the final paint job.

I haven't touched ECS in recent years, but my belief now is to really integrate the creation and updating of services within the CI/CD itself. SSM could be used with Terraform to centrally orchestrate settings like memory consumption, which can later be used by CI/CD, et cetera. This, or completely use Terraform to build up CodePipeline/CodeBuild and have control over the ECS services' configuration by managing its CI/CD layer.
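A sketch of that split, with assumed parameter names: Terraform owns the knob centrally, and the pipeline reads it at deploy time.

```hcl
# Terraform-managed setting that the CI/CD pipeline consumes when it
# renders the task definition.
resource "aws_ssm_parameter" "app_memory" {
  name  = "/ecs/my-app/memory"
  type  = "String"
  value = "512"
}
```

The CI/CD job would then read it with something like `aws ssm get-parameter --name /ecs/my-app/memory --query Parameter.Value --output text` when building the new task definition revision.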
After reading through the thread and much thinking, I decided to take the following approach.
```hcl
resource "null_resource" "task_definition_generator" {
triggers = {
family = var.family
command = var.generator_command
}
provisioner "local-exec" {
command = var.generator_command
working_dir = local.root_dir
}
}
data "aws_ecs_task_definition" "task_def" {
depends_on = [null_resource.task_definition_generator]
# This pulls in the latest ACTIVE revision. It might or might not be
# the one created by the generator_command, but that's generally ok.
# We're just assuming the latest version is always working.
task_definition = var.family
}
```
@zen0wu How do you apply updates to task definitions? E.g. changing the …
@WhyNotHugo All the actual task definitions belong to our TS code, so if we want to update those, we'll just trigger a deploy: first calling the same generator command (to create the new task definition) and then calling UpdateService. TF only manages limited info: which load balancer to use, how many containers in total, essential things that belong only to the service.
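In other words, the deploy step amounts to something like this (names assumed; note that an unqualified family name resolves to its latest ACTIVE revision):

```bash
./generate-task-definition.sh   # registers a new revision of the family
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --task-definition my-family   # picks up the latest ACTIVE revision
```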
Task definitions have a few Terraform-managed resources in my case (log group, environment variables, SSM ARNs, and IAM ARNs). How do you get those values from Terraform into the deploy process?
@WhyNotHugo Good question. Ideally we can pass those as arguments into the generator command when Terraform calls it, since TF has those values. But for the continuous delivery part (TypeScript in our case, running alone for the deploy), they'll either have to (a) pull the existing values out of the task definition, (b) get them from Terraform (by running terraform console), or (c) hard-code them (this is what we did :p, since we only need taskRoleArn).
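An alternative to `terraform console` or hard-coding, assuming the values are exposed as Terraform outputs: read them in the deploy job with `terraform output`.

```bash
# Requires e.g. `output "task_role_arn" { value = aws_iam_role.task.arn }`
# in the Terraform config; the -raw flag needs Terraform 0.15 or later.
task_role_arn=$(terraform output -raw task_role_arn)
log_group=$(terraform output -raw log_group_name)
```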
First make an API call to describe the existing task definition, which contains those fields. Then modify just the image (or any other fields you want) from that response, and pass it into a task definition registration call. How I handled this:

```bash
task_def_family_name=${1:-}
image=${2:-}
container_name=${3:-}
if [[ -z "$task_def_family_name" ]] ; then
echo 1>&2 "error: A task definition family name is required. May also include a revision (familyName:revision)"
exit 1
fi
if [[ -z "$image" ]] ; then
echo 1>&2 "error: An image is required"
exit 1
fi
# Format the response in a way that is easily usable by register-task-definition
latest_task_definition=$( \
aws ecs describe-task-definition \
--include TAGS \
--task-definition "$task_def_family_name" \
--query '{ containerDefinitions: taskDefinition.containerDefinitions,
family: taskDefinition.family,
taskRoleArn: taskDefinition.taskRoleArn,
executionRoleArn: taskDefinition.executionRoleArn,
networkMode: taskDefinition.networkMode,
volumes: taskDefinition.volumes,
placementConstraints: taskDefinition.placementConstraints,
requiresCompatibilities: taskDefinition.requiresCompatibilities,
cpu: taskDefinition.cpu,
memory: taskDefinition.memory,
tags: tags}' \
)
container_count=$(jq -r '.containerDefinitions | length' <<< "$latest_task_definition")
if [[ "$container_count" -gt 1 ]] && [[ -z "$container_name" ]] ; then
echo 1>&2 "error: The task definition has more than one container definition, you must choose one."
exit 1
fi
# If there's only one container in the task definition, update its image, otherwise look by container name
# We should never make it to the `else` here, but we just create a duplicate revision if we do.
new_task_definition=$(echo "$latest_task_definition" \
| jq -rc --arg containerName "$container_name" --arg newImage "$image" \
'.containerDefinitions |= (
if ( . | length ) == 1 then
.[].image = $newImage
elif ($containerName | length) > 0 then
map(select(.name == $containerName).image = $newImage)
else
.
end
)'\
)
registration_response=$(aws ecs register-task-definition --cli-input-json "$new_task_definition")
new_revision=$(jq '.taskDefinition.revision' <<< "$registration_response")
old_revision=$((new_revision-1))
deregistration_response=$(aws ecs deregister-task-definition --task-definition "$task_def_family_name:$old_revision")
```
Hi @dennari and all those following this issue 👋. Thank you again for submitting/providing feedback on this issue. As noted by others in the comments above, because Terraform expects full management of the ECS Task Definition resource and the upstream ECS API does not support methods to appropriately manage individual revisions without replacement, we cannot provide any patches to the current state of the resource's behavior in the provider, and thus will be closing this issue. Patches that suggest setting or manipulating the state difference of an ECS Task Definition's revision would imply having one resource responsible for multiple AWS resources, and this would prove problematic to the traditional practitioner experience with the rest of the Terraform provider ecosystem. With that said, we want to note that the ECS API does not allow for a great user experience when using Terraform and separate tooling like `ecs-deploy`.
@bholzer My issue with that approach is that the next time terraform runs, it detects changes in the task definitions and tries to recreate them. I used to have a hack to work around this: a …
I finally found a solution that really works from all angles. I create a "template" task definition in terraform, which is fully terraform-managed and never altered outside of terraform:

```hcl
resource "aws_ecs_task_definition" "django_template" {
for_each = local.full_websites
family = "django-${each.key}-template"
container_definitions = jsonencode([{
name = "django"
command = ["/app/scripts/run-django"]
essential = true
image = "whynothugo/sleep"
memoryReservation = 600
portMappings = [{ containerPort = 8000, protocol = "tcp" }]
user = "django"
# Zeep's cache fails with this on:
# readonlyRootFilesystem = true
linuxParameters = { tmpfs = [{ containerPath = "/tmp", size = 100 }] }
environmentFiles = [
{
value = local.envfile_arns[each.key]
type = "s3"
}
]
healthCheck = {
command = ["/app/scripts/healthcheck-django"]
interval = 30
retries = 3
timeout = 5
}
logConfiguration = {
logDriver = "awslogs"
options = {
awslogs-group = aws_cloudwatch_log_group.django[each.key].name
awslogs-region = "us-west-2"
awslogs-stream-prefix = "ecs"
}
}
}])
task_role_arn = aws_iam_role.production_task_role.arn
execution_role_arn = aws_iam_role.production_task_execution_role.arn
network_mode = "bridge"
placement_constraints {
type = "memberOf"
expression = "attribute:Role == Web"
}
requires_compatibilities = ["EC2"]
tags = {
Component = "Django",
Environment = "Production"
BaseImageUrl = aws_ecr_repository.production.repository_url
}
}
```

These don't have the right image, though. My deployment pipeline will find the task definition (the one with the `-template` suffix in its family). The base image URL for each service is specified in the tags, so the deployment script merely appends the desired tag to it. This also implies that tags change on each deployment, allowing automatic rollbacks to work.

Finally, my services initially point to the "template" task definition, but include:

```hcl
  lifecycle {
ignore_changes = [task_definition, load_balancer]
  }
```

This means that after the first deploy, ECS replaces the task definition for the service, and terraform never touches that again. Ever.

I've been using this setup and found it works really well. Deployments never result in any noise in terraform plans, and terraform itself FULLY manages the template, while deployments operate on a separate TD.
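The deploy side isn't shown above; a hedged sketch of what it could look like (family, cluster, and service names assumed), mirroring the describe/register dance from earlier comments but reading the image from the template's BaseImageUrl tag:

```bash
template=$(aws ecs describe-task-definition --include TAGS \
  --task-definition django-mysite-template)
base_image=$(jq -r '.tags[] | select(.key == "BaseImageUrl").value' <<< "$template")

# Copy the template into the real family, swapping in the freshly pushed tag.
new_def=$(jq --arg img "$base_image:$IMAGE_TAG" '
  .taskDefinition
  | {family: "django-mysite", containerDefinitions, taskRoleArn, executionRoleArn,
     networkMode, placementConstraints, requiresCompatibilities}
  | .containerDefinitions[0].image = $img' <<< "$template")

aws ecs register-task-definition --cli-input-json "$new_def"
aws ecs update-service --cluster my-cluster --service django-mysite \
  --task-definition django-mysite
```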
@anGie44 I understand that terraform won't make changes to support this odd case. Do you mind not locking this issue, so workarounds can continue to be discussed in the comments here (including, probably, by others in the same situation in the future)?
@anGie44 Terraform could still provide some help to allow for a better user experience. Looking at what I am doing with GCP Cloud Run, I can quite easily use the provider to change the memory size without changing the image that is used, since that is managed by the CI/CD (like ecs-deploy). It is just a matter of treating the task definition as a whole state (modified using revisions) without pinning it to one revision, so you can ignore changes in, say, image and still detect changes in memory/cpu.
@anGie44 thank you for clearly explaining the team's position on this issue. I agree, ECS doesn't do a great job of assisting the developer experience here, due to the fact that task definitions are immutable and versioned. I'd had it in the back of my mind for a while now to roll my own provider while #11506 was being ignored, but other work took priority and I was able to get by with manual hacks. Since this basically confirms the PR won't be accepted, it seems the community is left with little choice but to use a custom provider in order to make ECS work properly. To echo @WhyNotHugo's request: please don't lock this issue, so the user community can continue to discuss workarounds.
Also @anGie44: …

Isn't that exactly what the …
The "template" task definition approach works, and doesn't violate any of the principles that terraform or ECS follow. The biggest downside is that listing task definitions yields more results, so if you interact manually with ECS a lot, that might be annoying.
I'm kind of surprised that you're viewing task revisions as distinct resources. Surely there's precedent in the Terraform ecosystem for managing resources that maintain immutable version-histories?
Lambdas.
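For example, Lambda already models an immutable version history inside a single Terraform resource, with an alias as the movable pointer (a sketch; names are illustrative):

```hcl
# Each apply that changes the package publishes an immutable new version;
# the alias is the movable pointer, much like an ECS service's revision.
resource "aws_lambda_function" "app" {
  function_name = "my-app"
  role          = aws_iam_role.lambda.arn
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = "app.zip"
  publish       = true
}

resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = aws_lambda_function.app.function_name
  function_version = aws_lambda_function.app.version
}
```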
There are two "dimensions" in which I make changes to task definitions: …
Having this concept of a "task definition template" and a "task definition" means that Terraform owns one of these, and CodeDeploy owns the other.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
This issue was originally opened by @dennari as hashicorp/terraform#13005. It was migrated here as part of the provider split. The original body of the issue is below.
With the task and container definition data sources I'm almost able to get our continuous delivery setup to play nicely with Terraform. We rebuild the docker image with a unique tag at every deployment. This means that after the CI service redeploys a service, the corresponding task definition's revision is incremented and the image field in a container definition changes.
I don't seem to be able to create a setup where the task definition could be managed by Terraform in this scenario.
Terraform Version
v0.9.1
Affected Resource(s)
aws_ecs_task_definition
Terraform Configuration Files
…
The problem is then that after a CI deployment, terraform would like to create a new task definition. The task definition resource here points to an earlier revision and the image field is considered changed.
With the deprecated template resources, I was able to ignore changes to variables, which solved this issue. One solution that comes to mind would be the ability to set the revision of the aws_ecs_task_definition resource. I'd be grateful for any and all insights.