
Refactor method of applying K8s resource files #27

Open

tonglil opened this issue Apr 13, 2017 · 7 comments
@tonglil (Member) commented Apr 13, 2017

As more complex K8s workloads are created, teams would like to split their K8s resource manifests into separate files to keep them maintainable.

cc @yunzhu-li.

Idea

Currently, template: points to a single file whose vars: variables are filled in.

The suggestion is to allow template: to point to a directory that could contain 0-n K8s manifests whose vars: variables would be filled in.

The resulting rendered templates would be written to some output: directory (defaulting to workspace.Path + /drone-gke-output/*.yml).

Then kubectl apply -f would be performed on the entire output: directory.
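
A rough sketch of what the proposed configuration could look like, reusing the existing parameter names (the directory values and vars shown are placeholders, and the exact defaults are an assumption, not a final spec):

# proposed: template: points at a directory containing 0-n manifest templates
template: deploy/
# proposed: rendered manifests are written here, then kubectl apply -f runs against this directory
output: drone-gke-output/
vars:
  app: my-app
  env: production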

Implications

  1. The same pattern needs to be followed for K8s Secrets manifests (for filling vars:*, secrets:, and secrets_base64 variables), which would also be output to the same output: directory.

*note: vars: is not filled into .kube.sec.yml today, but it is a desired feature (#8).

  2. Debug output of these files will not be accessible via the verbose: parameter, as it would be difficult to filter out the "rendered" .kube.sec.yml file(s) in the output: directory. We want to avoid leaking secrets.

  3. A workaround for 2 is that since all files are written to the output: directory, those files can be uploaded to GCS or S3 for viewing/redeployment/replication at a later time (see the sketch after this list). This is detailed in Refactor where templated manifest files are written to #29.

*note: the output: directory may need to be a new temp directory created under workspace.Path (/drone/src/github.com/org/repo/drone-gke-output) instead of something under workspace.Root (/drone/drone-gke/).

It may be problematic if the temp directory contains files that are not K8s manifests, but the plugin cannot treat that as an error (see the next point for a use case).

This is to support restrictions other plugins have on the directory tree that can be accessed (see https://github.com/drone-plugins/drone-google-cloudstorage/issues/10).

There are further implications of this approach; please see #29.

  4. An output: directory will also enable other plugins to speak Kubernetes. One use case is for a vault-k8s plugin to write a Vault-specific K8s resource into the output: folder for drone-gke to kubectl apply -f dir/.

  5. Following Refactor method of creating namespaces #28, this would also allow the same pattern to be followed for creating and deploying into namespaces whose labels and names are templated with vars: variables. However, ensuring that the Namespace is created first (before other resources) is still TBD.

  6. For those who want to wait until the rollout of a resource completes using kubectl rollout status (wait until deploy completes #26), that feature can be implemented with the -f and -R flags to manage all resources in the output: directory, rather than by specifying the exact resource. Specifics of that feature should be continued in that issue.
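
To make items 3 and 4 concrete: once the rendered manifests live in a known output: directory, a follow-up plugin step could consume that directory directly. A rough sketch of such a step's parameters follows; the image name and parameter names are assumptions for illustration, not a confirmed spec.

# hypothetical upload step placed after drone-gke in the same pipeline
image: plugins/gcs                    # assumed image for drone-google-cloudstorage
source: drone-gke-output/             # the rendered output: directory produced by drone-gke
target: my-bucket/rendered-manifests/ # hypothetical destination bucket/path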

Releasing

There may be a way to implement these changes without breaking changes (i.e., transparently for plugin consumers).

If not, we may have to 1) create a new tag for the Docker container, 2) update the existing container to add a very clear DEPRECATION message, and 3) announce a limited support period.

Next steps

This is an RFC, soliciting comments and additional implications.

@msuterski (Contributor) commented Apr 19, 2017

Why does it need to take a full directory? Can it take a list of files (comma separated?) instead?

@tonglil (Member, Author) commented Apr 25, 2017

@msuterski I took a few days to think about it, and here's what I have to say:

  • Providing a list interface would certainly work; however, the final manifest files will be output into a directory anyway (because I want to avoid manipulating the template files in place), so a directory input keeps the same UX.
  • I don't see any use cases or benefits in referring to files to apply one by one (vs. having them in a directory). The spec for a list would look like this, for example:
templates:
  - deploy/app.yml
  - deploy/redis.yml
  - deploy/ingress.yml
  - somedir/xyz.yml
output: drone-gke-output/

vs

template: deploy/
output: drone-gke-output/

@tonglil (Member, Author) commented Apr 25, 2017

Something else I failed to mention in the original RFC comment is that template: currently defaults to .kube.yml, so it is safe to assume most people don't specify it.

With the changes in this RFC, we can still default to .kube.yml unless something else is specified. This would avoid a breaking change in this regard.
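
For illustration, the backwards-compatible case would simply be a config that omits template: entirely (the vars shown are placeholders):

# unchanged behavior: with no template: specified, .kube.yml remains the default single-file template
vars:
  app: my-app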

@msuterski (Contributor) commented

I thought that maybe having them as a list would not necessitate having the output be a directory (and would not leak secrets), but I guess those are unrelated.

Maybe I would ask a different question, then: why does the output need to be a directory and not a single file (one that would not include .kube.sec.yml)?

@tonglil (Member, Author) commented Apr 26, 2017

Good question! I want the output to match the input so that if the files are uploaded/reviewed/verbosely printed, they match the organization of the original input manifest templates.

@stephansnyt (Contributor) commented Apr 28, 2017

Re 1 and 2:

Does the same pattern really need to be followed for secrets? Could that be left as one file to make it easier to filter out secret values from verbose output?

Code that renders templates is aware of vars and secrets. Secrets could be selectively filtered out and replaced with something like a hash of the original value before being printed as verbose output. Very crudely:

from hashlib import sha256
for secret in secrets:
    # replace each secret value with its hash so it cannot be read from the log
    verbose_output = verbose_output.replace(secret, sha256(secret.encode()).hexdigest())

This may even be a good idea in general to prevent developers from accidentally printing secrets in their build logs.

What about specifying template_dir and secret_template_dir, which get rendered out into specific subdirectories, and then running kubectl apply -R -f $parent on a directory that contains both subdirectories?
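
A rough sketch of what that suggestion could look like as configuration, where template_dir, secret_template_dir, and the directory layout are hypothetical names taken from the comment above rather than a final spec:

# hypothetical parameters, following the suggestion above
template_dir: deploy/templates/       # rendered into <output>/templates/
secret_template_dir: deploy/secrets/  # rendered into <output>/secrets/, filtered from verbose output
output: drone-gke-output/             # then: kubectl apply -R -f drone-gke-output/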

@msuterski (Contributor) commented

It would be great to come back to this idea/requirement. Our deployments are getting more and more complex, and a single YAML file might become difficult to read and browse through.
