imagePullSecrets for GCR - gcp secrets engine #123
So far, I am running Vault Agent and dumping the gitlab-runner-sa.json (key.json) file out, then using the following command to "create" the secret. The `.dockerconfigjson` is a PITA: embedding that entire mess of JSON into the "password" field seems really wrong to me.

Relevant part of the agent config template:
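For illustration, a rough sketch of the kind of "create" command mentioned above (the secret name and email are placeholders, not values from this setup); GCR accepts the literal username `_json_key` with the entire key file as the password:

```sh
# Hypothetical example: create a docker-registry secret from the dumped key.json.
# "gcr-pull-secret" and the email address are placeholders.
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=someone@example.com
```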
Hi @TJM, I think adding support for the Google Cloud secrets engine would be the best way to support your use case. Unfortunately I do not have that much experience with GCP and Vault. If you or someone else wants to add support for the Google Cloud secrets engine, I think it could be done in a similar way to how it was done for Azure: #114
That PR looks like an authentication mechanism rather than a secrets engine. The GCP secrets engine would replace KV or KVv2, which hold manual/static secrets, with dynamic secrets (like AWS, it creates the service account and returns the credentials). I have a Vault Agent based solution (hack?) working, but I would consider it to be very alpha: https://github.com/TJM/vault-gcr-secrets I think it would be better to tie it into VSO at some point. I just need to wrap my head around the secrets engine interfaces and see how hard it would be to add GCP. :)

~tommy
Ah ok, sorry for the misunderstanding, hopefully I got it now:

Did I get it right? If this is correct I think we can support this. We already have a similar mechanism, so if this is the case we can check for the secretEngine in the VaultSecret CR. Does this make sense?
Yes, sir! This sounds right. To make things a little more complicated, we would have to do some trickery with the data returned, because of the way that Kubernetes formats the "docker-registry" type secrets, or rather the fact that Google wants the entire key.json as the password.

~tommy
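For illustration, this is roughly what ends up inside the decoded `.dockerconfigjson` of a docker-registry secret for GCR (all values below are placeholders); the whole key.json has to be embedded as the password string, which is the trickery referred to above:

```json
{
  "auths": {
    "gcr.io": {
      "username": "_json_key",
      "password": "<entire key.json, serialized as a single string>",
      "email": "someone@example.com",
      "auth": "<base64 of \"_json_key:<entire key.json>\">"
    }
  }
}
```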
Can you maybe share the output of the following command?

Edit: Maybe also the output of the second one?
@ricoberger The output is pretty boring... I can tell you that the part that we care about is base64 encoded in the JSON key `private_key_data`. We use the following Vault template:
Here is the censored version:

```json
{
  "request_id": "2b0f5057-411d-aa4f-8df7-8b6b91b849de",
  "lease_id": "path/to/gcp/key/gitlab-runner-sa/(CENSORED-SOME-ID)",
  "lease_duration": 2592000,
  "renewable": true,
  "data": {
    "key_algorithm": "KEY_ALG_RSA_2048",
    "key_type": "TYPE_GOOGLE_CREDENTIALS_FILE",
    "private_key_data": "(CENSORED-BASE64-ENCODED-keyfile)="
  },
  "warnings": null
}
```

The last one is pretty straightforward too, as it doesn't really connect to Vault...
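For reference, a sketch of the kind of commands that produce and decode output like the above, assuming the GCP secrets engine key endpoint shown in the lease_id (`path/to/gcp/key/gitlab-runner-sa`):

```sh
# Generate a key for the roleset and show the full response (censored version above)
vault read -format=json path/to/gcp/key/gitlab-runner-sa

# private_key_data is the base64-encoded key file; decode it to get a usable key.json
vault read -field=private_key_data path/to/gcp/key/gitlab-runner-sa | base64 -d > key.json
```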
@ricoberger THANKS! ... I saw this yesterday, but was in meetings all day ;( ... I need to get some project work completed today, but will jump on testing this next week. Sorry for the delay.
Looks like this was released a couple of days ago? I will hopefully get a chance to try it out :) I was running into an issue finding an environment where I could put a development version of VSO... now that it is released, maybe I can talk them into it :-/
Hi, this is only available via the dev tag.
This is looking OK so far...

NOTE: We already had the Kubernetes auth set up. Created the prerequisites: vault_gcp_secret_roleset, kubernetes_service_account, vault_kubernetes_auth_backend_role, vault_policy ... Then the Helm chart:

```hcl
resource "helm_release" "a0000_vault_secrets_operator" {
  name       = "vault-secrets-operator"
  namespace  = kubernetes_namespace.a0000_anthos_namespace.metadata.0.name
  repository = "https://ricoberger.github.io/helm-charts"
  chart      = "vault-secrets-operator"

  values = [
    templatefile("${path.module}/files/vault_secrets_operator.yaml.tpl", {
      KUBERNETES_PATH = "auth/${vault_auth_backend.kubernetes.path}"
      KUBERNETES_ROLE = vault_kubernetes_auth_backend_role.a0000_vault_secrets_operator.role_name
      KUBERNETES_SA   = kubernetes_service_account.a0000_vault_secrets_operator.metadata.0.name
      RBAC_NAMESPACED = false # Limit VSO to its own namespace
      NAMESPACES      = [kubernetes_namespace.a0000_anthos_namespace.metadata.0.name]
    })
  ]

  depends_on = [
    vault_kubernetes_auth_backend_role.a0000_vault_secrets_operator,
    vault_policy.a0000,
    kubernetes_service_account.a0000_vault_secrets_operator,
  ]

  // FIXME: Create VaultSecrets for each of the three service account keys
}
```

with the following YAML template (vault_secrets_operator.yaml.tpl) mentioned above:

```yaml
### WARNING! Changing this file changes *all* VaultSecretOperators
### Use an instance variable whenever possible
image:
  repository: docker-remote.artifactory.company.com/ricoberger/vault-secrets-operator
  tag: dev
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
vault:
  address: https://vault.company.com
  authMethod: kubernetes
  kubernetesPath: ${KUBERNETES_PATH}
  kubernetesRole: ${KUBERNETES_ROLE}
  namespaces: ${join(",", NAMESPACES)}
rbac:
  namespaced: ${RBAC_NAMESPACED}
serviceAccount:
  create: false
  name: ${KUBERNETES_SA}
```

Then, the secret looks like:

```yaml
apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: a0000-bq-secret
spec:
  isBinary: true
  keys:
    - private_key_data
  path: ai/np/gcp/key/a0000-bq
  secretEngine: gcp
  type: Opaque
```

... which I had to apply manually, because apparently raw YAML (CRDs) is not well liked by Terraform. I have created a Helm chart in the past to get around this, and I am still thinking about how to manage these in Terraform (one option is sketched after this comment). Logs:

... though I am a little concerned that it appeared to update the secret twice. Every time the secret is "read", a new key is created (max 10), so it can't "read" the secret to reconcile; it needs to pay attention to the lease time. I have not had a chance to look at the code yet to see if it does that... AND NOW, other than the secret having the "wrong" key name (they were expecting ...)
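One possible way to manage the CR from Terraform (a sketch only, not what was used here): the hashicorp/kubernetes provider's kubernetes_manifest resource can apply arbitrary custom resources, with the caveat that the VaultSecret CRD must already be installed when Terraform plans:

```hcl
# Sketch: apply the VaultSecret shown above via the kubernetes provider (v2+).
# The resource/namespace references mirror the earlier Terraform but are illustrative.
resource "kubernetes_manifest" "a0000_bq_secret" {
  manifest = {
    apiVersion = "ricoberger.de/v1alpha1"
    kind       = "VaultSecret"
    metadata = {
      name      = "a0000-bq-secret"
      namespace = kubernetes_namespace.a0000_anthos_namespace.metadata.0.name
    }
    spec = {
      isBinary     = true
      keys         = ["private_key_data"]
      path         = "ai/np/gcp/key/a0000-bq"
      secretEngine = "gcp"
      type         = "Opaque"
    }
  }
}
```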
One thing we found is that it is a little too aggressive on retries... I messed up the path and it looked like it tried ~45 times in just a few minutes. I can send the logs if you want, but I think it needs to back off when it gets permission denied (403). I don't think this is "gcp" specific?
FYI: We are using https://github.com/TJM/vault-secret-helmchart to create our VaultSecret objects through Terraform ;)
Hi @TJM, thanks for testing.
That would be best if it was only "read" once, since each time it is "read" it creates a new key (lease).
Oh, Vault leases :)
I think with both types of leases, you can "request" a certain length, but vault policies may limit that.
Yep, I am keeping templating in my back pocket, in case there is pushback on using the name that Vault/Google call the secret. I think it should be OK, or even "better", to use a "common" name like ...
Some sort of backoff does seem like a good idea, to reduce the load against Vault. Or I could stop making mistakes? Nope. That doesn't sound like it will happen. :)
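On the lease point above, a few illustrative Vault CLI commands (the mount path and lease ID are placeholders); renewing the existing lease avoids minting a brand-new GCP key, and the mount's TTL settings cap whatever length a client requests:

```sh
# Inspect the lease returned with the key (lease_id from the earlier output)
vault lease lookup path/to/gcp/key/gitlab-runner-sa/<lease-id>

# Renew the lease instead of generating a new key
vault lease renew path/to/gcp/key/gitlab-runner-sa/<lease-id>

# Default/max lease TTLs are capped per mount
vault secrets tune -default-lease-ttl=720h -max-lease-ttl=768h path/to/gcp
```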
Hi all,
I was looking at #54 to create imagePullSecrets, and that looks like it might work, but the secret that I am trying to access is not a "kv" type. The credential comes from the gcp secrets engine. So, as my goal is to get a secret (imagePullSecrets) to access GCR, would it be better to try to hack at this code to use the GCP secrets engine in Vault, or to hack at something else like the Vault Agent to create a Kubernetes secret?