
Bug: Timeout when destroying aws_ecs_capacity_provider from dependency on aws_ecs_cluster_capacity_providers #253

Open
oleg-glushak opened this issue Jan 26, 2025 · 2 comments


oleg-glushak commented Jan 26, 2025

Description

The module is affected by the bug described here.

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 5.12.0

  • Terraform version: 1.5

  • Provider version(s): 5.8.4

Reproduction Code [Required]

Steps to reproduce the behavior:

  1. Create an ECS cluster with two capacity providers, similar to how it's done in this example (a minimal sketch follows this list).
  2. Remove one capacity provider from the definition and try to apply the changes.
  3. The process hangs on destroying the aws_ecs_capacity_provider resource and eventually times out.
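
A minimal sketch of the step-1 configuration, loosely modeled on the module's EC2/autoscaling example. The inputs follow the terraform-aws-modules/ecs v5.x docs; the cluster name, provider keys, and strategy values mirror the plan output below, while the ASG ARNs are placeholders you would point at real autoscaling groups:

```hcl
module "ecs_cluster" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "5.12.0"

  cluster_name = "ex-aws_ecs_cluster"

  # Two ASG-backed capacity providers. Deleting "ex_2" from this map in
  # step 2 is what triggers the destroy timeout.
  autoscaling_capacity_providers = {
    ex_1 = {
      # Placeholder ARN; replace with a real autoscaling group.
      auto_scaling_group_arn         = "arn:aws:autoscaling:eu-central-1:111111111111:autoScalingGroup:uuid:autoScalingGroupName/ex_1"
      managed_termination_protection = "ENABLED"

      managed_scaling = {
        maximum_scaling_step_size = 15
        minimum_scaling_step_size = 5
        status                    = "ENABLED"
        target_capacity           = 90
      }

      default_capacity_provider_strategy = {
        base   = 0
        weight = 60
      }
    }

    ex_2 = {
      # Placeholder ARN; replace with a real autoscaling group.
      auto_scaling_group_arn         = "arn:aws:autoscaling:eu-central-1:111111111111:autoScalingGroup:uuid:autoScalingGroupName/ex_2"
      managed_termination_protection = "ENABLED"

      managed_scaling = {
        maximum_scaling_step_size = 15
        minimum_scaling_step_size = 5
        status                    = "ENABLED"
        target_capacity           = 90
      }

      default_capacity_provider_strategy = {
        base   = 0
        weight = 40
      }
    }
  }
}
```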

Expected behavior

Terraform apply succeeds.

Actual behavior

Terraform apply fails.

Terminal Output Screenshot(s)

```
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
  - destroy
Terraform will perform the following actions:
  # module.ecs_cluster.module.cluster.aws_ecs_capacity_provider.this["ex_2"] will be destroyed
  # (because key ["ex_2"] is not in for_each map)
  - resource "aws_ecs_capacity_provider" "this" {
      - arn      = "arn:aws:ecs:eu-central-1:xxx:capacity-provider/ex_2" -> null
      - id       = "arn:aws:ecs:eu-central-1:xxx:capacity-provider/ex_2" -> null
      - name     = "ex_2" -> null
      - tags     = {
          - "Example" = "ex-aws_ecs_cluster"
          - "Name"    = "ex-aws_ecs_cluster"
        } -> null
      - tags_all = {
          - "Example"                 = "ex-aws_ecs_cluster"
          - "Name"                    = "ex-aws_ecs_cluster"
          - "aws_account_environment" = "non-production"
          - "aws_account_id"          = "xxx"
          - "aws_account_name"        = "xxx"
          - "aws_region"              = "eu-central-1"
          - "customer_name"           = "xxx"
        } -> null
      - auto_scaling_group_provider {
          - auto_scaling_group_arn         = "arn:aws:autoscaling:eu-central-1:xxx:autoScalingGroup:xxx:autoScalingGroupName/ex-aws_ecs_cluster-ex_2-xxx" -> null
          - managed_draining               = "ENABLED" -> null
          - managed_termination_protection = "ENABLED" -> null
          - managed_scaling {
              - instance_warmup_period    = 0 -> null
              - maximum_scaling_step_size = 15 -> null
              - minimum_scaling_step_size = 5 -> null
              - status                    = "ENABLED" -> null
              - target_capacity           = 90 -> null
            }
        }
    }
  # module.ecs_cluster.module.cluster.aws_ecs_cluster_capacity_providers.this[0] will be updated in-place
  ~ resource "aws_ecs_cluster_capacity_providers" "this" {
      ~ capacity_providers = [
          - "ex_2",
            # (1 unchanged element hidden)
        ]
        id                 = "ex-aws_ecs_cluster"
        # (1 unchanged attribute hidden)
      - default_capacity_provider_strategy {
          - base              = 0 -> null
          - capacity_provider = "ex_2" -> null
          - weight            = 40 -> null
        }
        # (1 unchanged block hidden)
    }
Plan: 0 to add, 1 to change, 1 to destroy.
module.ecs_cluster.module.cluster.aws_ecs_capacity_provider.this["ex_2"]: Destroying... [id=arn:aws:ecs:eu-central-1:xxx:capacity-provider/ex_2]
...
│ Error: waiting for ECS Capacity Provider (arn:aws:ecs:eu-central-1:xxx:capacity-provider/ex_2) delete: timeout while waiting for resource to be gone (last state: 'ACTIVE', timeout: 20m0s)
```

Additional context

I wonder if there's a workaround we could introduce for the time being.

None of my attempts to work around this with depends_on have succeeded.
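
One possible two-step workaround (an untested sketch; the resource address is copied from the plan output above): ECS appears to keep a capacity provider ACTIVE, and thus undeletable, while it is still associated with a cluster, so detach it first with a targeted apply and only then run the full apply:

```sh
# Step 1: after removing "ex_2" from the configuration, update only the
# cluster-to-provider association so ECS disassociates the provider.
terraform apply -target='module.ecs_cluster.module.cluster.aws_ecs_cluster_capacity_providers.this[0]'

# Step 2: full apply. aws_ecs_capacity_provider.this["ex_2"] is no longer
# associated with the cluster and should delete without the 20m timeout.
terraform apply
```

This just enforces the ordering by hand, standing in for the destroy-time dependency that is not expressed between the two resources.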


This issue has been automatically marked as stale because it has been open 30 days
with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

github-actions bot added the stale label Feb 26, 2025
@sagraut12 commented:

Any update on this? I'm having the same issue

github-actions bot removed the stale label Mar 7, 2025