
Terraform cycle - Error: Cycle: aws_api_gateway_resource.hello (destroy), aws_api_gateway_rest_api.kotless_app (destroy) #99

Open

dariopellegrini opened this issue Mar 9, 2021 · 5 comments
dariopellegrini commented Mar 9, 2021

I'm experiencing this error when using Kotless with Ktor during a Gradle deploy.

```
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
> Task :initialize
aws_iam_role.say_hello: Refreshing state... [id=say-hello]
aws_iam_role_policy.say_hello: Refreshing state... [id=say-hello:terraform-20210309090955288800000001]
aws_lambda_permission.autowarm_say_hello: Refreshing state... [id=autowarm-say-hello]
aws_api_gateway_method.hello_get: Refreshing state... [id=agm-yman4njgzc-lyluof-GET]
aws_cloudwatch_event_rule.autowarm_say_hello: Refreshing state... [id=autowarm-say-hello]
aws_s3_bucket_object.say_hello: Refreshing state... [id=kotless-lambdas/say-hello.jar]
aws_api_gateway_rest_api.kotless_app: Refreshing state... [id=yman4njgzc]
aws_lambda_permission.hello_get: Refreshing state... [id=hello-get]
aws_api_gateway_resource.hello: Refreshing state... [id=lyluof]
aws_api_gateway_integration.hello_get: Refreshing state... [id=agi-yman4njgzc-lyluof-GET]
aws_lambda_function.say_hello: Refreshing state... [id=say-hello]
data.aws_region.current: Refreshing state...
aws_cloudwatch_event_target.autowarm_say_hello: Refreshing state... [id=autowarm-say-hello-terraform-20210309091106001500000003]
data.aws_iam_policy_document.good_get_assume: Refreshing state...
data.aws_s3_bucket.kotless_bucket: Refreshing state...
data.aws_caller_identity.current: Refreshing state...
data.aws_iam_policy_document.kotless_static_assume: Refreshing state...
aws_iam_role.kotless_static_role: Refreshing state... [id=kotless-static-role]
aws_api_gateway_deployment.root: Refreshing state... [id=2dy1bq]
data.aws_iam_policy_document.good_get: Refreshing state...
data.aws_iam_policy_document.kotless_static_policy: Refreshing state...
aws_iam_role_policy.kotless_static_policy: Refreshing state... [id=kotless-static-role:terraform-20210309090955291500000002]

Error: Cycle: aws_api_gateway_resource.hello (destroy), aws_api_gateway_rest_api.kotless_app (destroy), aws_api_gateway_deployment.root, aws_api_gateway_deployment.root (destroy deposed ce7d4b13), aws_api_gateway_integration.hello_get (destroy)
```
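For context: Terraform reports a `Cycle` error when the planned destroy/create ordering forms a loop in its dependency graph. One way to investigate is from inside the Terraform working directory that Kotless generates; the path below is an assumption (look under the Gradle build directory), and the commands are a hedged sketch, not a fix:

```shell
# Hedged sketch: inspect the dependency cycle Terraform complains about.
# The working-directory path is hypothetical; Kotless generates its
# Terraform files somewhere under the Gradle build directory.
cd build/kotless-gen/terraform

# Render the dependency graph with cycles highlighted
# (requires Graphviz's `dot` for the SVG step).
terraform graph -draw-cycles | dot -Tsvg > graph.svg

# List what is tracked in state, to see which of the cycling
# resources Terraform believes still exist remotely.
terraform state list
```

If the cycle only involves stale resources, `terraform state rm <address>` drops them from state without touching AWS, but that desynchronizes state and Terraform's view of the world, so it should be used with care.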

My Gradle configuration:

```kotlin
// build.gradle.kts
import io.kotless.plugin.gradle.dsl.kotless
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    kotlin("jvm") version "1.3.61"
    id("io.kotless") version "0.1.6" apply true
}

group = "com.dariopellegrini.ktorkotless"
version = "0.1"

repositories {
    mavenCentral()
    jcenter()
    maven {
        url = uri("https://plugins.gradle.org/m2/")
    }
}

dependencies {
    testImplementation(kotlin("test-junit"))
    implementation(kotlin("stdlib"))
    implementation("io.kotless", "ktor-lang", "0.1.6")
}

tasks.test {
    useJUnit()
}

tasks.withType<KotlinCompile>() {
    kotlinOptions {
        jvmTarget = "1.8"
        languageVersion = "1.3"
        apiVersion = "1.3"
    }
}

kotless {
    config {
        bucket = "my.bucket"

        terraform {
            profile = "dario.pellegrini.dev"
            region = "eu-west-1"
        }
    }
    webapp {
        lambda {
            memoryMb = 128
            kotless {
                packages = setOf("com.dariopellegrini.ktorkotless")
            }
        }
    }
}
```
```kotlin
// settings.gradle.kts
rootProject.name = "KtorKotless"

pluginManagement {
    repositories {
        gradlePluginPortal()
    }
}
```

Before this, I deployed successfully using the Kotless DSL with the same bucket, so it seems there is some kind of conflict with the earlier deployment.
Everything works well when running Ktor locally through Gradle.

Thank you.

@djohnsson commented:

I don't know what the underlying issue is, but manually deleting the API resources under "Amazon API Gateway" seemed to allow me to continue deploying, at least.
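For anyone reproducing this workaround, the same manual deletion can be scripted with the AWS CLI. This is a hedged sketch, not Kotless tooling; the API id and profile are placeholders, and note that deleting resources behind Terraform's back leaves stale entries in its state:

```shell
# Hedged sketch of the manual workaround described above.
# List the REST APIs to find the one Kotless created (placeholder
# profile/region; substitute your own).
aws apigateway get-rest-apis --profile my-profile --region eu-west-1

# Delete the offending REST API by id. This removes all of its
# resources, methods and integrations in one call.
aws apigateway delete-rest-api --rest-api-id abc123 \
    --profile my-profile --region eu-west-1
```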

TanVD (Member) commented Mar 24, 2021

Did you have any previous deployments? Or maybe Kotless deployments previously failed for some reason?

djohnsson commented Mar 24, 2021

Yes, I did have a previous deployment. I've been trying to figure out what is causing my existing Ktor application deployment to misbehave, so I can't guarantee I haven't been hammering away a little bit too hard ;)

My first attempt at getting back to a working state was to tear everything down (using destroy) and set it back up, which worked but took a long time to get back up and running (mostly because of DNS TTLs, I think). The second time the error appeared I did the manual removal instead.

My method now is to remove everything ktor-related in my existing application and add it back piece-by-piece and so far I haven't seen the error pop up again.

djohnsson commented Mar 25, 2021

@TanVD I found one situation where the Cycle error appeared: when I changed the mergeLambda optimization parameter. When I changed it back to the previous value the error disappeared. So:

  1. None -> All (Cycle-error)
  2. All -> None (Error is gone again)

As one might expect, there was a large difference between what the plan stages (None vs All) reported before I attempted the deploys. Hope that helps.

Edit:
Also, the error seems to appear when I remove a route from `app.routing { }`.
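For reference, the mergeLambda switch mentioned above lives in the Kotless Gradle DSL. A sketch of where it would sit in the build.gradle.kts from this issue; the exact property path and enum name are assumptions and may differ between Kotless versions:

```kotlin
// build.gradle.kts — hypothetical placement of the mergeLambda switch
kotless {
    config {
        optimization {
            // Toggling this between None and All is what triggered
            // the cycle error described in the comment above.
            mergeLambda = MergeLambda.All
        }
        // ...bucket and terraform settings as in the issue...
    }
}
```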

dariopellegrini (Author) commented Mar 26, 2021

As mentioned by @djohnsson, the problem disappears after deleting the lambdas that Kotless created on S3. I have to do that every time I redeploy, which is not a real solution in my opinion.
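The per-redeploy cleanup described here can at least be scripted. A hedged sketch using the AWS CLI, with the bucket and profile taken from the issue's configuration and the `kotless-lambdas/` prefix taken from the deploy log above:

```shell
# Hedged sketch: remove the lambda jars Kotless uploaded to S3 before
# redeploying. Bucket, prefix and profile come from the issue above;
# substitute your own values.
aws s3 ls s3://my.bucket/kotless-lambdas/ --profile dario.pellegrini.dev

aws s3 rm s3://my.bucket/kotless-lambdas/ --recursive \
    --profile dario.pellegrini.dev
```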

3 participants