Workload patching #25

Open · maxstepanov opened this issue Aug 26, 2024 · 14 comments
Labels: discussion needed, enhancement (New feature or request)

@maxstepanov

I'm trying to implement an HPA provisioner and I'm running into an issue with the Deployment's spec.replicas being set to 1. The documentation mentions this but doesn't explain why.
I'd like to unset it, and the only supported option seems to be patching via CLI arguments.

I've toyed with a similar tool and ended up implementing JSON patches in the provisioner definitions.
These are returned alongside the manifests and run at the end of the rendering pipeline.
This way a provisioner functions similarly to kustomize's components.
It makes provisioners more self-contained and removes the need to monkey-patch with CLI arguments or a kustomize step later.
What do you think? Has this been discussed somewhere already?

@mathieu-benoit
Contributor

mathieu-benoit commented Aug 27, 2024

Hi @maxstepanov, just to make sure, what do you mean by unset the spec.replicas field?

Also, what you want is to generate/deploy a HorizontalPodAutoscaler resource, right? If that's the case, why not have your hpa type/provisioner generate this manifest, which the Dev could then ask for in their Score file?
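
For context, the way a Dev would ask for that in their Score file might look roughly like this (a sketch; the hpa-deployment type name and the params mirror the provisioner discussed below and are illustrative):

apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  example:
    image: my-image:latest
resources:
  scaler:
    type: hpa-deployment
    params:
      minReplicas: 2
      maxReplicas: 10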

@maxstepanov
Author

maxstepanov commented Aug 28, 2024

Hi @mathieu-benoit, when using an HPA it is recommended to remove the spec.replicas field from the Deployment, to avoid conflicts between manual scaling and HPA-controlled scaling. When kubectl applies the Deployment, the replica count is reset to the spec.replicas value, so some pods might start shutting down before the HPA can react and adjust the ReplicaSet back to the desired value.

The default value for spec.replicas is 1, so I'm not sure why it's set explicitly by score-k8s.
That's what brought me to suggest JSON patches in provisioners.
The Deployment can then be patched by the provisioner itself, so I don't need to patch the value from the command line and leak this aspect outside. I've found this approach very useful in my implementations.

Like so:

- uri: template://default/hpa-deployment
  type: hpa-deployment
  manifests: |
    - apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: {{ .SourceWorkload }}
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: {{ .SourceWorkload }}
        minReplicas: {{ .Params.minReplicas }}
        maxReplicas: {{ .Params.maxReplicas }}
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 80
  patches:
  - target:
      kind: Deployment
      name: "{{ .SourceWorkload }}"
    patch: |-
      - op: remove
        path: /spec/replicas

PS: Not sure if the Workload Kind value is available to provisioners, but it would be nice to have.

@mathieu-benoit
Contributor

mathieu-benoit commented Aug 28, 2024

Makes sense, thanks for the info.

That's what brought me to suggest the JSON patches in provisioners.

I totally agree with that; it would be very beneficial to be able to inject or patch a Deployment's fields. For example: "I want to inject a securityContext", etc. Note: I have filed the same ask/approach in score-compose: score-spec/score-compose#142. CC: @astromechza

The default value for spec.replicas is 1 so i'm not sure why it's set explicitly by score-k8s.

That's a good point: https://github.com/score-spec/score-k8s/blob/main/internal/convert/workloads.go#L212. @astromechza, should we not set/generate this field/value by default?

@astromechza
Member

Fix PR for replicas=1 is up!

@astromechza
Member

I totally understand the need for modifying the converted output of score-k8s, and we have a few places where I've seen yq or kustomize used to modify the manifests prior to deployment.

I'm not really clear why a "resource" is the answer here, though. It might need to be some other syntax and semantics that allow users to modify the workload output, similar to how we might allow Score workloads to be converted to Jobs/CronJobs or custom CRDs.
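
For reference, the kind of post-generation tweak described here can be sketched with a yq one-liner like the following (an illustration only, assuming mikefarah's yq v4 and that the generated output is a single multi-document manifests.yaml; this is not part of score-k8s):

# drop spec.replicas from every Deployment document in the generated manifests
yq -i 'del(select(.kind == "Deployment").spec.replicas)' manifests.yaml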

@maxstepanov
Author

maxstepanov commented Sep 4, 2024

@astromechza There are many cases that require workload patching alongside the added manifests.
If the patches are defined somewhere else, the implementation of the resource is incomplete in the template provisioner.

I have a case where I need to create a Secret and mount it as a file.
So I created a template provisioner that adds the Secret to the output manifests. Next I need to patch the volumes into the workload. I don't see any way to accomplish that in the template provisioner definition. After all, I'd like to have the code in one place.

I can run yq after score-k8s generates, detect whether a resource of this type is used, read it and patch it with kustomize, but this makes the implementation leak outside and spreads it over several components.
At this point it's easier not to implement the template resource at all and to patch with a kustomize Component later.
That's what we have been doing so far.
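
For illustration, the out-of-band Component we end up maintaining looks roughly like this (a sketch; the workload and secret names are hypothetical):

# components/mounted-secret/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
  - secret.yaml   # the Secret the template provisioner could have emitted instead

patches:
  - target:
      kind: Deployment
      name: my-workload
    patch: |-
      - op: add
        path: /spec/template/spec/volumes
        value:
          - name: my-workload-secret
            secret:
              secretName: my-workload-secret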

If I can specify JSON patches and the template together, then the provisioner implementation is self-contained and easy to understand. That's why kustomize eventually added Components, and we use them heavily.

This approach would require adding WorkloadKind to the provisioner's input, so a provisioner can support different kinds and error out if one isn't supported.

Here is one example. The same thing applies to serviceAccounts, PDBs, etc.

- uri: template://default/mounted-secret
  type: mounted-secret
  manifests: |
    {{ if not (eq .WorkloadKind "Deployment") }}{{ fail "Kind not supported" }}{{ end }}
    - apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .State.service }}
        annotations:
          k8s.score.dev/source-workload: {{ .SourceWorkload }}
          k8s.score.dev/resource-uid: {{ .Uid }}
          k8s.score.dev/resource-guid: {{ .Guid }}
        labels:
          app.kubernetes.io/managed-by: score-k8s
          app.kubernetes.io/name: {{ .State.service }}
          app.kubernetes.io/instance: {{ .State.service }}
      data:
        password: {{ .State.password | b64enc }}
  patches: |
    - target:
        kind: Deployment
        name: "{{ .SourceWorkload }}"
      patch: |-
        - op: add
          path: /spec/template/spec/volumes
          value:
            - name: "{{ .SourceWorkload }}"
              secret:
                secretName: "{{ .SourceWorkload }}"

The second route is to not use score-k8s templating facilities at all and to implement custom CMD provisioners that leave the patches aside for a later kustomize patching stage. This approach is cumbersome and devalues template provisioners.

I can work on this if there is interest.

@astromechza
Member

I have a case where I need to create a Secret and mount it as a file. So I created a template provisioner that adds the Secret to the output manifests. Next I need to patch the volumes into the workload. I don't see any way to accomplish that in the template provisioner definition. After all, I'd like to have the code in one place.

Ok cool, this is the use case.

We do already have a solution to this particular thing, although it's not documented particularly well. https://github.com/score-spec/score-k8s/blob/26e22116f2a7562d669256bc1d64e9013a6be346/internal/provisioners/default/zz-default.provisioners.yaml#L39C5-L39C5

The template provisioner returns an output key that looks like secret-thing: {{ encodeSecretRef "my-secret" "my-key" }}; this is a way of encoding the Secret name and the key within that Secret.

Then, when you mount this as a file in the workload with containers.x.files[n].content: ${resources.foo.secret-thing}, it will identify that this is a secret and will mount it as a volume in the pod.

So your example above can look like this:

Provisioner:

- uri: template://default/mounted-secret
  type: mounted-secret
  outputs:
    reference: {{ encodeSecretRef .State.service "password" }}
  manifests: |
    {{ if not (eq .WorkloadKind "Deployment") }}{{ fail "Kind not supported" }}{{ end }}
    - apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .State.service }}
        annotations:
          k8s.score.dev/source-workload: {{ .SourceWorkload }}
          k8s.score.dev/resource-uid: {{ .Uid }}
          k8s.score.dev/resource-guid: {{ .Guid }}
        labels:
          app.kubernetes.io/managed-by: score-k8s
          app.kubernetes.io/name: {{ .State.service }}
          app.kubernetes.io/instance: {{ .State.service }}
      data:
        password: {{ .State.password | b64enc }}

And your workload could look like

...
containers:
  example:
    ...
    files:
    - target: /mnt/password
      content: ${resources.secret.reference}
resources:
  secret:
    type: mounted-secret

@maxstepanov
Author

maxstepanov commented Sep 4, 2024 via email

@astromechza
Member

Yeah I agree that it doesn't handle other kinds of links between resources and workloads. In general we should look for ways to have the workload consume a resource, rather than the resource submitting arbitrary patches to workloads.

@astromechza
Member

As seen on #29.

@astromechza The same thing we do every day: add objects and patch references. If the reference lives in the workload, like service accounts, pod disruption budgets, or topology spread, I must be able to patch them. In order to generate the patch I need to know the object Kind, to provide the appropriate patch.

Our developers must be able to specify the service account, topology spread, pod disruption budget, etc. in score-k8s via a resource!
score-k8s doesn't support this pattern! Only for secrets and volumes!

I'm trying to avoid forking this; that's why I'm begging to add the value, so I can re-implement templating in a CMD provisioner that also leaves patches out of band (Score is not aware of them).

Then I can use the wrap-and-dump pattern, where I wrap the score-k8s execution in a script that picks up the patches left by my CMD provisioners and applies them to manifests.yaml.

This is such a common DevOps pattern that I'm really surprised to get so much push back on this.

There's a suggestion that we should be able to access workload annotations or context describing the output "shape" of the workload, since without this the patch may be invalid or unnecessary.
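
For illustration, the wrap-and-dump wrapper described in the quote above might look roughly like this (a sketch; it assumes the default manifests.yaml output and that the CMD provisioners drop a kustomize Component into ./patches/, both of which are hypothetical conventions):

#!/usr/bin/env bash
set -euo pipefail

# 1. Run score-k8s as usual; this writes manifests.yaml.
score-k8s generate score.yaml

# 2. The CMD provisioners have left their patches out of band in ./patches/
#    (a kustomize Component directory that score-k8s knows nothing about).

# 3. Apply those patches on top of the generated manifests.
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - manifests.yaml
components:
  - patches/
EOF
kustomize build . > patched-manifests.yaml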

astromechza self-assigned this Sep 7, 2024
@astromechza
Member

I've been looking at other Score implementations and similar issues and conversations, and I think I have a suggestion here.

Similar prior art:

  1. Some score-compose users add a compose-mixins.yaml which gets applied after their main compose spec via docker compose -f compose.yaml -f compose-mixins.yaml. This mixin gets applied directly on top of the workload (see the sketch after this list).

  2. Humanitec has a resource type called a workload which can provide update outputs which are applied as json patches to the generated Humanitec "Deployment Set". See https://developer.humanitec.com/platform-orchestrator/reference/resource-types/#workload.
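
For reference, the mixin in (1) relies on plain Compose file merging; a minimal sketch (the service name and fields are illustrative):

# compose-mixins.yaml -- merged on top of the generated compose.yaml
services:
  my-workload:
    environment:
      LOG_LEVEL: debug
    read_only: true

# applied with: docker compose -f compose.yaml -f compose-mixins.yaml up -d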

So I think we can probably do a similar thing here: currently the provisioners have manifests as part of their execution output; these are new manifests to add to the output, but they do not get applied to the workload's manifest. We could add a patches output, which gets applied using RFC 6902 to the workload output itself after it has been converted.

After converting the workload, we search all listed resources and apply any patch outputs in lexicographic order based on the resource name.

This mechanism could be re-used by most of our other Score implementations in order to apply meaningful mixins.

We currently pass the workload name through to the provisioner, but the provisioner will need to understand more about the structure of the output kubernetes spec in order to generate a meaningful patch. This could easily be done through the params or annotations on the resource for now while we investigate this further.

Example provisioners:

As an example, here's a workload which requests a k8s-service-account resource type. The application running inside the workload will expect this to provide the URLs and Kubernetes token needed to access a Kubernetes API, even if the workload is running on Docker Compose via score-compose or some other Score implementation.

score-k8s implementation:

- uri: template://example-k8s-service-account
  type: k8s-service-account
  manifests:
  - <service account manifest here if it needs to be generated>
  patches:
  - op: add
    path: /spec/serviceAccountName
    value: my-service-account

score-compose implementation:

- uri: template://example-k8s-service-account
  type: k8s-service-account
  patches:
  - op: add
    path: /services/my-workload/environment/KUBERNETES_SERVICE_HOST
    value: localhost
  - op: add
    path: /services/my-workload/environment/KUBERNETES_SERVICE_PORT_HTTPS
    value: 6443
  - op: add
    path: /services/my-workload/volumes/...
    value: <a kubernetes token or something here>

Or something similar to that. This is also generic enough to work for other workload kinds in the future too.

Thoughts @mathieu-benoit ?

@astromechza
Member

I'll see if I can put up a PR for this soon

@mathieu-benoit
Contributor

mathieu-benoit commented Sep 8, 2024

Yes, I think patches is a really good approach. I could add/inject a securityContext, etc., or patch any field.

Now I'm wondering: with your example, would we need to add a k8s-service-account resource dependency in the Score file? If so, I don't think that's convenient. Because what if I want to patch arbitrary fields, for example inject a securityContext, how would I do that? Create a workload-security-context type? Not very user friendly... Could we have, like in Humanitec, a workload type which is always resolved and that someone can implement/override?

Note: same note/remark for both score-k8s and score-compose.

@astromechza
Member

@mathieu-benoit Or just create a resource with type workload and add your custom provisioner for that.

resources:
  workload-patch:
    type: workload

I don't think we should add automatic workload resources that surprise users.
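
For illustration, such a custom provisioner could combine the proposed patches output with the securityContext example discussed above (a hypothetical sketch; the patches output does not exist yet and the values are placeholders):

- uri: template://custom/workload-defaults
  type: workload
  patches:
  - op: add
    path: /spec/template/spec/securityContext
    value:
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault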
