Replies: 4 comments 4 replies
-
Original reply by @from-nibly in cuelang/cue#806 (comment)

I think helper functions would be a lot more useful than abstractions. Sometimes it seems like Kubernetes resources allow you to specify way too many properties that you don't care about, but you always end up needing to break out and configure one thing or another. I've been doing Kubernetes work since 1.4, and you definitely want full access to the resource properties. Using helper functions instead of full-blown abstractions can help you reduce boilerplate even when you are customizing one thing or another. If abstractions are wanted, I think they should be stored in the repo for each org's Kubernetes setup, so that each org can customize those abstractions to be what they need.
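The helper-function approach described above can be sketched in CUE. Everything below is hypothetical (package and field names are not from any real org's setup); it only illustrates the idea of filling in boilerplate while keeping the struct open so that every Kubernetes property remains reachable:

```cue
package example

// Hypothetical helper: fills in common Deployment boilerplate while
// leaving the struct open ("..."), so users retain full access to
// every Kubernetes property and can still override anything.
#DeploymentDefaults: {
	#name:      string
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: {
		name: #name
		labels: app: #name
	}
	spec: {
		replicas: int | *1
		selector: matchLabels: app: #name
		...
	}
	...
}

// The user only writes what differs from the defaults.
deployment: #DeploymentDefaults & {
	#name: "echo"
	spec: template: spec: serviceAccountName: "pod-default"
}
```

Because the definition is open, "breaking out and configuring one thing or another" is just unification, with no escape hatch needed.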
-
It's been some months since I shared the original approach, and the project has continued to evolve. Most of the time has been spent improving the original implementation. We've used this newer implementation for a couple of months now and believe it to be an improvement. Other teams have also adopted CUE and are offering features using this project.
A stripped-down version can be seen here. Some of what is described here is missing, but it should give some idea and a place to start.

Packages

We have packages for:
A snapshot of our packages looks something like the following (excluding the many not shown here):
All of these packages can be used by users to create complete configurations.

Abstractions

As mentioned several times previously, multiple abstractions are provided. The workload resources are the most interesting and the most commonly used. For example, consider this simple service:

package kubernetes
import (
"github.com/acme/microservices-kubernetes/kit/pkg/kit"
)
Metadata: kit.#Metadata & {
serviceID: "acme-echo-jp"
}
App: Echo: kit.#Application & {
metadata: Metadata & {
name: "echo"
}
spec: {
envFrom: [{configMapRef: name: ConfigMap.Echo.metadata.name}]
scaling: horizontal: maxReplicas: 5
}
patch: deployment: {
spec: template: spec: {
serviceAccountName: "pod-default"
volumes: [{secret: optional: true}]
}
}
}
Batch: Loadtester: kit.#Batch & {
metadata: Metadata & {
name: "loadtester"
}
}
ConfigMap: Echo: kit.#ConfigMap & {
metadata: Metadata & {
name: "echo"
}
data: """
// ...
"""
}
Pipeline: Echo: kit.#CanaryPipeline & {
metadata: Metadata & {
name: "echo"
}
spec: {
notifications: [{channel: "acme-echo-jp-log"}]
progression: [5, 10, 20]
}
}
Pipeline: Loadtester: kit.#RunJobPipeline & {
metadata: Metadata & {
name: "loadtester"
}
}
Delivery: {
echo: kit.#Delivery & {
pipeline: Pipeline.Echo.pipeline
resources: App.Echo.resources +
ConfigMap.Echo.resources
}
loadtester: kit.#Delivery & {
pipeline: Pipeline.Loadtester.pipeline
resources: Batch.Loadtester.resources
}
}

It contains two deliverables: the echo application (along with its config map), and a loadtester batch job.

Two Spinnaker pipelines are also specified, which are used by the delivery process. Using the workload abstractions provided, users see a considerable reduction in boilerplate.
Reduction in line count is definitely not the only important metric, but it is a useful indicator. Note: line counts exclude package and import statements.

Deployment Methods

In the original approach, all resources were deployed in a uniform way. There was also a tight coupling between the generation of resources and how those resources were deployed. This is fine for most, as things are (mostly) done uniformly across the organization. But there were also issues with this approach when:
To solve this problem, we split the generation of deployable resources from how they are deployed. We provide packages for delivery via:
This offers flexibility to the user about how they want to deploy their resources. The delivery interface looks like:

package delivery
import "pkg/k8s"
#Method: {
resources: [...k8s.#Resource]
plan: [...#Task]
apply: [...#Task]
}

With kubectl:

package kubectl
import "pkg/delivery"
#Delivery: delivery.#Method & {
context: string
prune: bool | *false
apply: [for a in _apply {a}]
plan: [for p in _plan {p}]
// ...
}
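The excerpt above references #Task without showing it. A minimal sketch of what such a task definition might look like — every field name here is an assumption, not the real schema:

```cue
package delivery

// Hypothetical: one executable step in a plan or apply sequence.
#Task: {
	name: string
	// Command and arguments, e.g. ["kubectl", "diff", "-f", "-"].
	cmd: [string, ...string]
	// Optional environment variables for the command.
	env: [string]: string
}
```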
}

And users can use it like:

package foo
import "pkg/delivery/kubectl"
Delivery: app: kubectl.#Delivery & {
resources: App.resources
}

For users who want to deploy in the "traditional" way, we abstracted this so they can write:

package foo
import "pkg/delivery/kit"
Delivery: app: kit.#Delivery & {
pipeline: Pipeline.pipeline
resources: App.resources
}

This will:
The reason the "delivery" interface has both plan and apply is that it works nicely with our CI system: when a PR is created and CI runs, a plan can be produced for review before anything is applied.

Another benefit of this split is that directory structure no longer matters. This affords even greater flexibility, allowing users to structure their configuration however suits them.

Constrained & Unconstrained Resources

The original implementation only allowed abstractions to be used. While abstractions suit most users, we can provide a better experience configuring Kubernetes with CUE than with plain YAML even for raw resources. We can constrain them with our own constraints very nicely, but again there are trade-offs. We currently deploy all Sourcegraph resources (of which there are quite a few) this way. We are looking to invest some time in making migration easier.

Tooling

We have a plethora of custom CUE scripts to perform various actions. Pretty much everything is written in pure CUE.

Documentation

The one thing we do use the Go API for is to automatically generate interface documentation. The tool parses CUE definitions and either displays them in the terminal (like go doc) or renders them as documentation pages.

Friction

There are a few sources of friction we currently have.

Performance

We've run into performance issues a couple of times now.

The first occurred when trying to sort resources according to an apply order. Generating a (small) set of Kubernetes manifests went from ~2 seconds to ~8 seconds. This causes a little bit of friction for users who want to quickly check their output, and it becomes an even greater source of friction when somebody has a much larger set of manifests. There is already an open issue for this.

The second issue was with generating Datadog dashboards. It would probably be best to provide details in an issue along with a way of reproducing the problem.

Errors

Error messages can be very verbose, and sometimes do not offer a clear indication of the root cause. For example, a typo in a field name generates a >500 line error message.
Future

Increasing Adoption

Until recently we have not been particularly active in approaching users, but we believe we are at a point where we can start doing this, and we are actively doing so. Though dialogue with our users is very open, we have also conducted an anonymous survey. We will continue to support other teams who have already adopted CUE, or are interested in doing so.

Tooling

As mentioned previously, we will invest more in custom tooling which utilizes the Go API. The current scripts work, but some of it feels like a bit of a hack.
Versioning

Ideally we want to be able to version individual definitions. Though there is not yet an official way to deal with versioning, I foresee it looking something like how Kubernetes resources are versioned. It is not enough that we might be able to provide backwards-compatible interfaces; a recent migration to Workload Identity is one example.

Dagger

We are also looking forward to spending time evaluating Dagger. Other than lurking in the Discord and making a few trivial contributions, I've not yet spent much time with it. Obviously the project's scope is different, but maybe we can also incorporate some of its ideas.
-
@slewiskelly the repo link does not appear to work
-
I have a feeling I already commented on something similar, but I've been using CUE to manage Kubernetes manifests for over a year and it's been great. At its core, it's just building a massive JSON file and applying it: https://github.com/uhthomas/automata/tree/c7f29d497a4f85a3f35e52ab860f6f230355f425
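The "massive JSON file" step can be sketched with CUE's standard CLI; the package path and output filename below are placeholders, not taken from the linked repo:

```shell
# Evaluate the CUE package and emit a single JSON document; the result
# can then be piped into whatever tool applies it to a cluster.
cue export ./k8s --out json > manifests.json
```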
-
Originally opened by @slewiskelly in cuelang/cue#806
The below discusses a CUE-based approach to generating Kubernetes manifests from one or more abstractions provided to users.
The intention of this is to receive feedback and suggestions from the CUE community.
Goals
The goals of the abstraction framework are:
Future goals of adopting CUE, which are not immediate, may be:
Non-Goals
The abstraction framework does not, at least yet:
Design
Abstractions
Note
Application
The reason this is expressed as an application and not a Deployment is because the result is a set of Kubernetes resources, rather than just a single Deployment. The comment in the example below shows which resources are generated.

See the appendix for complete examples of how a user would declare an application, and what the result would be.

It is noted that some of the syntax here may be intimidating to users who are used to configuring Kubernetes in pure YAML. For example, how would a user, just by reading this definition, know how to declare environment variables (especially when using envSpec), ports, or volumes?

In practice, the configuration a user would write is actually quite simple, but users will rely on good documentation and examples to get started.
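As a rough illustration of that point, a hypothetical user-side declaration might look like the following. The field names (env, expose, volumes) are assumptions for the sketch, not the real schema:

```cue
// Hypothetical user-side configuration; actual field names may differ.
application: echo: {
	image: "acme/echo:1.0.0"
	env: LOG_LEVEL: "info"
	expose: ports: http: 8080
	volumes: config: mountPath: "/etc/echo"
}
```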
The same concerns can also be applied to Jobs and CronJobs, mentioned below.

Job
Similarly to an application, more than just a Job resource is generated, but the naming is general enough that this distinction does not matter.

See the appendix for complete examples of how a user would declare a job, and what the result would be.
Cron

Similarly to a Job, more than just a CronJob resource is generated. Again, the naming is general enough that this distinction does not matter.

A CronJob is a superset of a Job, adding only two additional fields.

See the appendix for complete examples of how a user would declare a cronJob, and what the result would be.

Config Map
The abstraction of a ConfigMap exposed to the user is as simple as:

A user would then express a configMap like so:

As a ConfigMap can be used by multiple applications within a namespace, there's no simple way of binding it to a particular application, unless it is bound to all of them. There are also different ways in which a ConfigMap can be used (e.g. as environment variables or as a volume mount).

As such, the user will have to reference it by the application(s) that require it, and how it is intended to be used:
Service

Services are not an abstraction by themselves. They are generated according to the ports listed under an application's expose field. This way, the user does not need to bind the service to the Pod, or assign ports in the container.

A user would express which ports they want to expose the application on:
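A sketch of such a declaration, assuming an expose field keyed by port name (the exact schema is not shown in this excerpt):

```cue
// Hypothetical: each named port here yields a Service port, and the
// matching containerPort is wired up automatically.
application: echo: expose: {
	http:    8080
	metrics: 9090
}
```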
Which will generate the following:
Beyond Abstractions
While the immediate goal is to provide abstractions, in the future it would be possible to provide CUE as the configuration language for all Kubernetes manifests.
Users would be able to choose from three ways to create their manifests:
While services can access raw Kubernetes directly, some resource types should be protected, encouraging users to use the resources which have constraints applied, or more preferably, the provided abstractions. The reason for this is convenience (sensible defaults can be set), and to ensure platform requirements are satisfied.
Providing access to raw Kubernetes manifests would be particularly useful when deploying third-party applications, where not all platform constraints can be applied.
Users may even, in the future, be able to create their own abstraction on top of the provided schemas.
Hierarchy
Our environment is multi-cluster, multi-environment, multi-tenant and our configuration repository reflects this.
It is important to developers to not have to copy all configuration that can be applied across environments and/or clusters. This is currently achieved by patching via Kustomize, where a general base is defined and more specific patches are applied following a standard directory structure.
CUE is able to accommodate the same, leading to a structure similar to the following:
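One way this can look in CUE — a sketch under an assumed file layout, with a shared base package unified with environment-specific files (paths and names are illustrative only):

```cue
// base/echo.cue — defaults shared by every cluster/environment.
package echo

deployment: spec: replicas: int | *1

// overlays/production/echo.cue — same package, unified on top of the
// base, so only the fields that differ are written here.
package echo

deployment: spec: replicas: 3
```

Because unification replaces patching, the "overlay" is just another file in the same package rather than a Kustomize-style patch document.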
Appendix
Complete Example
Note
Minimal
Consider the following CUE configuration which declares an application, job and cronJob.

This is the absolute minimum configuration required for each resource, and would generate this set of resources.
Customized
Now consider the following CUE configuration which declares an application, job and a configMap; the cronJob is a superset of a job, so there is no need to show a customized version of it.

This is not exhaustive of all configuration options, but would generate this set of resources.