Allow specifying an archival policy for AWS buckets #3563

Merged · 2 commits · Feb 14, 2024
9 changes: 9 additions & 0 deletions docs/howto/features/buckets.md
@@ -15,6 +15,9 @@ on why users want this!
},
"bucket2": {
"delete_after": null
},
"bucket3": {
"archival_storageclass_after": 3
}
}
```
@@ -28,6 +31,12 @@ on why users want this!
very helpful for 'scratch' buckets that are temporary. Set to
`null` to disable this cleanup, e.g., if users want a persistent bucket.

`archival_storageclass_after` (currently available only for AWS) transitions objects
created in this bucket to a cheaper, slower archival storage class after the number of
days specified in this variable. This is helpful for archiving user home directories or
similar use cases, where data needs to be kept for a long time but is rarely accessed.
It should not be used for frequently accessed or publicly accessible data.
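The two settings can be combined per bucket. As a rough sketch of the resulting per-object behaviour (illustrative Python, not part of this PR — in S3 the rules are applied by the lifecycle service, not by client code):

```python
def lifecycle_actions(age_days, delete_after=None, archival_storageclass_after=None):
    """Sketch what the bucket lifecycle does to an object of a given age.

    Both thresholds count days since *creation*; None disables a behaviour.
    """
    actions = []
    if archival_storageclass_after is not None and age_days >= archival_storageclass_after:
        actions.append("transition")  # moved to the archival storage class
    if delete_after is not None and age_days >= delete_after:
        actions.append("expire")  # object is deleted
    return actions or ["keep"]  # untouched, in standard storage
```

For example, with `delete_after = 7` and `archival_storageclass_after = 3`, an object is archived from day 3 onward and deleted on day 7.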

2. Enable access to these buckets from the hub or make them publicly accessible from outside
by [editing `hub_cloud_permissions`](howto:features:cloud-access:access-perms)
in the same `.tfvars` file. Follow all the steps listed there - this
18 changes: 18 additions & 0 deletions terraform/aws/buckets.tf

@@ -16,6 +16,24 @@ resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry" {
days = each.value.delete_after
}
}

dynamic "rule" {
# Only set up this rule if it will be enabled. Prevents unnecessary
# churn in terraform
for_each = each.value.archival_storageclass_after != null ? [1] : []

content {
id = "archival-storageclass"
status = "Enabled"

transition {
# Transition this to much cheaper object storage after a few days
days = each.value.archival_storageclass_after
# Glacier Instant is fast enough while also being pretty cheap
storage_class = "GLACIER_IR"
}
}
}
}
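Outside Terraform, the rule above corresponds to the lifecycle payload shape accepted by boto3's `put_bucket_lifecycle_configuration` (a sketch for illustration; the bucket name is hypothetical):

```python
def archival_lifecycle_rule(days):
    """Build a lifecycle rule equivalent to the dynamic "rule" block above:
    transition every object to Glacier Instant Retrieval `days` days after
    creation."""
    return {
        "ID": "archival-storageclass",
        "Status": "Enabled",
        "Filter": {},  # no prefix/tag filter: applies to the whole bucket
        "Transitions": [{"Days": days, "StorageClass": "GLACIER_IR"}],
    }

# Applying it would look like this (requires boto3 and AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",  # illustrative name
#     LifecycleConfiguration={"Rules": [archival_lifecycle_rule(3)]},
# )
```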

locals {
3 changes: 3 additions & 0 deletions terraform/aws/projects/openscapes.tfvars

@@ -11,6 +11,9 @@ user_buckets = {
"scratch" : {
"delete_after" : 7
},
"prod-homedirs-archive" : {
"archival_storageclass_after" : 3
}
}


17 changes: 13 additions & 4 deletions terraform/aws/variables.tf

@@ -20,17 +20,26 @@ variable "cluster_nodes_location" {
}

variable "user_buckets" {
-  type = map(object({ delete_after : number }))
+  type = map(
+    object({
+      delete_after : optional(number, null),
+      archival_storageclass_after : optional(number, null)
+    })
+  )
default = {}
description = <<-EOT
S3 Buckets to be created.

The key for each entry will be prefixed with {var.prefix}- to form
the name of the bucket.

-    The value is a map, with 'delete_after' the only accepted key in that
-    map - it lists the number of days after which any content in the
-    bucket will be deleted. Set to null to not delete data.
+    The value is a map, with the following accepted keys:
+
+    1. `delete_after` - number of days after *creation* an object in this
+       bucket will be automatically deleted. Set to null to not delete data.
+    2. `archival_storageclass_after` - number of days after *creation* an
+       object in this bucket will be automatically transitioned to a cheaper,
+       slower storageclass for cost savings. Set to null to not transition.
EOT
}
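The `optional(number, null)` constraints mean callers may omit either key and Terraform fills in `null`. The same defaulting, sketched in Python for illustration (not part of this PR):

```python
_DEFAULTS = {"delete_after": None, "archival_storageclass_after": None}

def normalize_user_buckets(user_buckets):
    """Mimic Terraform's optional(number, null): missing keys become None."""
    return {name: {**_DEFAULTS, **config} for name, config in user_buckets.items()}
```

So an entry like `{"scratch": {"delete_after": 7}}` normalizes with `archival_storageclass_after` set to `None`, i.e. no archival transition.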
