diff --git a/docs/book/src/getting-started.md b/docs/book/src/getting-started.md
index 29c39226571..e330cee6c4c 100644
--- a/docs/book/src/getting-started.md
+++ b/docs/book/src/getting-started.md
@@ -1,22 +1,32 @@
# Getting Started
-## Overview
+We will create a sample project to show you how it works. This sample will:
+
+- Reconcile a Memcached CR - which represents an instance of Memcached deployed/managed on the cluster
+- Create a Deployment with the Memcached image
+- Not allow more instances than the size defined in the CR that is applied
+- Update the Memcached CR status
+
+
+<aside class="note">
+<h1>Why Operators?</h1>
+
By following the [Operator Pattern][k8s-operator-pattern], it’s possible not only to provide all expected resources
but also to manage them dynamically, programmatically, and at execution time. To illustrate this idea, imagine if
someone accidentally changed a configuration or removed a resource by mistake; in this case, the operator could fix it
without any human intervention.
-## Sample Project
+
+</aside>
+
-We will create a sample project to let you know how it works. This sample will:
+
+<aside class="note">
+<h1>Following Along vs Jumping Ahead</h1>
+
-- Reconcile a Memcached CR - which represents an instance of a Memcached deployed/managed on cluster
-- Create a Deployment with the Memcached image
-- Not allow more instances than the size defined in the CR which will be applied
-- Update the Memcached CR status
+Note that most of this tutorial is generated from literate Go files that
+form a runnable project, and live in the book source directory:
+[docs/book/src/getting-started/testdata/project][tutorial-source].
+
+[tutorial-source]: https://github.com/kubernetes-sigs/kubebuilder/tree/master/docs/book/src/getting-started/testdata/project
-Use the following steps.
+
+</aside>
+
## Create a project
@@ -28,106 +38,109 @@ cd $GOPATH/memcached-operator
kubebuilder init --domain=example.com
```
+
+<aside class="note">
+<h1>Developing in $GOPATH</h1>
+
+If your project is initialized within [`GOPATH`][GOPATH-golang-docs], the implicitly called `go mod init` will derive the module path for you.
+Otherwise, the `--repo=<module path>` flag must be set.
+
+Read the [Go modules blogpost][go-modules-blogpost] if unfamiliar with the module system.
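+
+For example, outside of `GOPATH` this tutorial's project (whose module path is `example.com/memcached`) would be initialized with:
+
+```shell
+kubebuilder init --domain=example.com --repo=example.com/memcached
+```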
+
+
+</aside>
+
## Create the Memcached API (CRD):
-Next, we'll create a new API responsible for deploying and managing our Memcached solution. In this instance, we will utilize the [Deploy Image Plugin][deploy-image] to get a comprehensive code implementation for our solution.
+Next, we'll create the API which will be responsible for deploying and
+managing Memcached instances on the cluster.
-```
-kubebuilder create api --group cache --version v1alpha1 --kind Memcached --image=memcached:1.4.36-alpine --image-container-command="memcached,-m=64,-o,modern,-v" --image-container-port="11211" --run-as-user="1001" --plugins="deploy-image/v1-alpha" --make=false
+```shell
+kubebuilder create api --group cache --version v1alpha1 --kind Memcached
```
### Understanding APIs
-This command's primary aim is to produce the Custom Resource (CR) and Custom Resource Definition (CRD) for the Memcached Kind. It creates the API with the group `cache.example.com` and version `v1alpha1`, uniquely identifying the new CRD of the Memcached Kind. By leveraging the Kubebuilder tool, we can define our APIs and objects representing our solutions for these platforms. While we've added only one Kind of resource in this example, you can have as many `Groups` and `Kinds` as necessary. Simply put, think of CRDs as the definition of our custom Objects, while CRs are instances of them.
-
-
-Getting a better idea
+This command's primary aim is to produce the Custom Resource (CR) and Custom Resource Definition (CRD) for the Memcached Kind.
+It creates the API with the group `cache.example.com` and version `v1alpha1`, uniquely identifying the new CRD of the Memcached Kind.
+By leveraging the Kubebuilder tool, we can define our own APIs and objects that represent our solutions on these platforms.
-Consider a typical scenario where the objective is to run an application and its database on a Kubernetes platform. In this context, one object might represent the Frontend App, while another denotes the backend Data Base. If we define one CRD for the App and another for the DB, we uphold essential concepts like encapsulation, the single responsibility principle, and cohesion. Breaching these principles might lead to complications, making extension, reuse, or maintenance challenging.
+While we've added only one Kind of resource in this example, we can have as many `Groups` and `Kinds` as necessary.
+To make it easier to understand, think of CRDs as the definition of our custom Objects, while CRs are instances of them.
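+
+For instance, applying a manifest like the following minimal sketch creates a CR of the `Memcached` Kind defined by our CRD (the full scaffolded sample under `config/samples` is shown later in this tutorial; the name is illustrative):
+
+```yaml
+apiVersion: cache.example.com/v1alpha1
+kind: Memcached
+metadata:
+  name: memcached-sample # illustrative name
+spec:
+  size: 1
+```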
-In essence, the App CRD and the DB CRD will each have their own controller. Let's say, for instance, that the application requires a Deployment and Service to run. In this example, the App’s Controller will cater to these needs. Similarly, the DB’s controller will manage the business logic of its items.
+
+<aside class="note">
+<h1>Groups, Versions, and Kinds</h1>
+
+Please ensure that you check
-Therefore, for each CRD, there should be one distinct controller, adhering to the design outlined by the [controller-runtime][controller-runtime]. For further information see [Groups and Versions and Kinds, oh my!][group-kind-oh-my].
+[Groups and Versions and Kinds, oh my!][group-kind-oh-my].
+
+</aside>
+
-### Define your API
-
-In this example, observe that the Memcached Kind (CRD) possesses certain specifications. These were scaffolded by the Deploy Image plugin, building upon the default scaffold for management purposes:
+### Defining our API
-#### Status and Specs
+#### Defining the Specs
-The `MemcachedSpec` section is where we encapsulate all the available specifications and configurations for our Custom Resource (CR). Furthermore, it's worth noting that we employ Status Conditions. This ensures proficient management of the Memcached CR. When any change transpires, these conditions equip us with the necessary data to discern the current status of this resource within the Kubernetes cluster. This is akin to the status insights we obtain for a Deployment resource.
-
-From: `api/v1alpha1/memcached_types.go`
+Now, we will define the values that each instance of our Memcached resource can assume on the cluster. In this example,
+we will allow configuring the number of instances with the following:
```go
-// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
- // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
- // Important: Run "make" to regenerate code after modifying this file
-
- // Size defines the number of Memcached instances
- // The following markers will use OpenAPI v3 schema to validate the value
- // More info: https://book.kubebuilder.io/reference/markers/crd-validation.html
- // +kubebuilder:validation:Minimum=1
- // +kubebuilder:validation:Maximum=3
- // +kubebuilder:validation:ExclusiveMaximum=false
+ ...
Size int32 `json:"size,omitempty"`
-
- // Port defines the port that will be used to init the container with the image
- ContainerPort int32 `json:"containerPort,omitempty"`
}
+```
+
+#### Creating Status definitions
+
+We also want to track the status of the operations performed to manage the Memcached CR(s).
+This allows us to inspect our own Custom Resource and determine whether everything
+completed successfully or whether any errors were encountered,
+similar to how we do with any resource from the Kubernetes API.
+```go
// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
- // Represents the observations of a Memcached's current state.
- // Memcached.status.conditions.type are: "Available", "Progressing", and "Degraded"
- // Memcached.status.conditions.status are one of True, False, Unknown.
- // Memcached.status.conditions.reason the value should be a CamelCase string and producers of specific
- // condition types may define expected values and meanings for this field, and whether the values
- // are considered a guaranteed API.
- // Memcached.status.conditions.Message is a human readable message indicating details about the transition.
- // For further information see: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties
-
Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`
}
```
-Thus, when we introduce new specifications to this file and execute the `make generate` command, we utilize [controller-gen][controller-gen] to generate the CRD manifest, which is located under the `config/crd/bases` directory.
+
+<aside class="note">
+<h1>Status Conditions</h1>
+
+Kubernetes has established conventions, and because of this, we use
+Status Conditions here. We want our custom APIs and controllers to behave
+like Kubernetes resources and their controllers, following these standards
+to ensure a consistent and intuitive experience.
+
+Please ensure that you review: [Kubernetes API Conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties)
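+
+For example, this is (in essence) how the controller shown later in this tutorial records a condition and persists it via the status sub-resource:
+
+```go
+meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{
+	Type:    typeAvailableMemcached,
+	Status:  metav1.ConditionUnknown,
+	Reason:  "Reconciling",
+	Message: "Starting reconciliation",
+})
+if err := r.Status().Update(ctx, memcached); err != nil {
+	log.Error(err, "Failed to update Memcached status")
+	return ctrl.Result{}, err
+}
+```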
+
+</aside>
+
#### Markers and validations
-Moreover, it's important to note that we're employing `markers`, such as `+kubebuilder:validation:Minimum=1`. These markers help in defining validations and criteria, ensuring that data provided by users — when they create or edit a Custom Resource for the Memcached Kind — is properly validated. For a comprehensive list and details of available markers, refer [the Markers documentation][markers].
-Observe the validation schema within the CRD; this schema ensures that the Kubernetes API properly validates the Custom Resources (CRs) that are applied:
+Furthermore, we want to validate the values set in our Custom Resource
+to ensure that they are valid. To do this, we will use [markers][markers],
+such as `+kubebuilder:validation:Minimum=1`.
-From: [config/crd/bases/cache.example.com_memcacheds.yaml](https://github.com/kubernetes-sigs/kubebuilder/tree/master/docs/book/src/getting-started/testdata/project/config/crd/bases/cache.example.com_memcacheds.yaml)
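+
+For example, the `Size` field of our `MemcachedSpec` carries the following markers, constraining the accepted values to the range 1–3:
+
+```go
+	// +kubebuilder:validation:Minimum=1
+	// +kubebuilder:validation:Maximum=3
+	// +kubebuilder:validation:ExclusiveMaximum=false
+	Size int32 `json:"size,omitempty"`
+```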
+Now, see our fully completed example:
+
+{{#literatego ./getting-started/testdata/project/api/v1alpha1/memcached_types.go}}
+
+#### Generating manifests with the specs and validations
+
+To generate the required CRD manifests, we run the `make manifests` command, which calls [controller-gen][controller-gen]
+to produce the CRD manifest under the `config/crd/bases` directory.
+
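+Run it from the project root:
+
+```shell
+make manifests
+```
+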
+config/crd/bases/cache.example.com_memcacheds.yaml
: Our Memcached CRD
```yaml
-description: MemcachedSpec defines the desired state of Memcached
-properties:
- containerPort:
- description: Port defines the port that will be used to init the container
- with the image
- format: int32
- type: integer
- size:
- description: 'Size defines the number of Memcached instances The following
- markers will use OpenAPI v3 schema to validate the value More info:
- https://book.kubebuilder.io/reference/markers/crd-validation.html'
- format: int32
- maximum: 3 ## Generated from the marker +kubebuilder:validation:Maximum=3
- minimum: 1 ## Generated from the marker +kubebuilder:validation:Minimum=1
- type: integer
-type: object
+{{#include ./getting-started/testdata/project/config/crd/bases/cache.example.com_memcacheds.yaml}}
```
-#### Sample of Custom Resources
+
-The manifests located under the "config/samples" directory serve as examples of Custom Resources that can be applied to the cluster.
-In this particular example, by applying the given resource to the cluster, we would generate a Deployment with a single instance size (see `size: 1`).
+#### Sample of Custom Resources
-From: [config/samples/cache_v1alpha1_memcached.yaml](https://github.com/kubernetes-sigs/kubebuilder/tree/master/docs/book/src/getting-started/testdata/project/config/samples/cache_v1alpha1_memcached.yaml)
+The manifests located under the `config/samples` directory serve as examples of Custom Resources that can be applied to the cluster.
+In this particular example, by applying the given resource to the cluster, we would generate
+a Deployment with a single instance size (see `size: 1`).
```yaml
{{#include ./getting-started/testdata/project/config/samples/cache_v1alpha1_memcached.yaml}}
@@ -135,7 +148,13 @@ From: [config/samples/cache_v1alpha1_memcached.yaml](https://github.com/kubernet
### Reconciliation Process
-The reconciliation function plays a pivotal role in ensuring synchronization between resources and their specifications based on the business logic embedded within them. Essentially, it operates like a loop, continuously checking conditions and performing actions until all conditions align with its implementation. Here's pseudo-code to illustrate this:
+In a simplified way, Kubernetes works by allowing us to declare the desired state of our system, and then its controllers continuously observe the cluster and take actions to ensure that the actual state matches the desired state. For our custom APIs and controllers, the process is similar. Remember, we are extending Kubernetes' behaviors and its APIs to fit our specific needs.
+
+In our controller, we will implement a reconciliation process.
+
+Essentially, the reconciliation process functions as a loop, continuously checking conditions and performing necessary actions until the desired state is achieved. This process will keep running until all conditions in the system align with the desired state defined in our implementation.
+
+Here's a pseudo-code example to illustrate this:
```go
reconcile App {
@@ -168,7 +187,8 @@ reconcile App {
}
```
-#### Return Options
+
+<aside class="note">
+<h1>Return Options</h1>
+
The following are a few possible return options to restart the Reconcile:
@@ -195,254 +215,157 @@ return ctrl.Result{}, nil
return ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())}, nil
```
+
+</aside>
+
#### In the context of our example
-When a Custom Resource is applied to the cluster, there's a designated controller to manage the Memcached Kind. You can check how its reconciliation is implemented:
+When our sample Custom Resource (CR) is applied to the cluster (i.e. `kubectl apply -f config/samples/cache_v1alpha1_memcached.yaml`),
+we want to ensure that a Deployment is created for our Memcached image and that it matches the number of replicas defined in the CR.
-From: [internal/controller/memcached_controller.go](https://github.com/kubernetes-sigs/kubebuilder/tree/master/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller.go)
+To achieve this, we first need to implement an operation that checks whether the Deployment for our Memcached instance already exists on the cluster.
+If it does not, the controller will create the Deployment accordingly. Therefore, our reconciliation process must include an operation to ensure that
+this desired state is consistently maintained. This operation would involve:
```go
-func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
- log := log.FromContext(ctx)
-
- // Fetch the Memcached instance
- // The purpose is check if the Custom Resource for the Kind Memcached
- // is applied on the cluster if not we return nil to stop the reconciliation
- memcached := &examplecomv1alpha1.Memcached{}
- err := r.Get(ctx, req.NamespacedName, memcached)
- if err != nil {
- if apierrors.IsNotFound(err) {
- // If the custom resource is not found then it usually means that it was deleted or not created
- // In this way, we will stop the reconciliation
- log.Info("memcached resource not found. Ignoring since object must be deleted")
- return ctrl.Result{}, nil
- }
- // Error reading the object - requeue the request.
- log.Error(err, "Failed to get memcached")
- return ctrl.Result{}, err
- }
-
- // Let's just set the status as Unknown when no status is available
- if memcached.Status.Conditions == nil || len(memcached.Status.Conditions) == 0 {
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
- if err = r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
-
- // Let's re-fetch the memcached Custom Resource after updating the status
- // so that we have the latest state of the resource on the cluster and we will avoid
- // raising the error "the object has been modified, please apply
- // your changes to the latest version and try again" which would re-trigger the reconciliation
- // if we try to update it again in the following operations
- if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
- log.Error(err, "Failed to re-fetch memcached")
- return ctrl.Result{}, err
- }
- }
-
- // Let's add a finalizer. Then, we can define some operations which should
- // occur before the custom resource to be deleted.
- // More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
- if !controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
- log.Info("Adding Finalizer for Memcached")
- if ok := controllerutil.AddFinalizer(memcached, memcachedFinalizer); !ok {
- log.Error(err, "Failed to add finalizer into the custom resource")
- return ctrl.Result{Requeue: true}, nil
- }
-
- if err = r.Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update custom resource to add finalizer")
- return ctrl.Result{}, err
- }
- }
-
- // Check if the Memcached instance is marked to be deleted, which is
- // indicated by the deletion timestamp being set.
- isMemcachedMarkedToBeDeleted := memcached.GetDeletionTimestamp() != nil
- if isMemcachedMarkedToBeDeleted {
- if controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
- log.Info("Performing Finalizer Operations for Memcached before delete CR")
-
- // Let's add here a status "Downgrade" to reflect that this resource began its process to be terminated.
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeDegradedMemcached,
- Status: metav1.ConditionUnknown, Reason: "Finalizing",
- Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", memcached.Name)})
-
- if err := r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
-
- // Perform all operations required before removing the finalizer and allow
- // the Kubernetes API to remove the custom resource.
- r.doFinalizerOperationsForMemcached(memcached)
-
- // TODO(user): If you add operations to the doFinalizerOperationsForMemcached method
- // then you need to ensure that all worked fine before deleting and updating the Downgrade status
- // otherwise, you should requeue here.
-
- // Re-fetch the memcached Custom Resource before updating the status
- // so that we have the latest state of the resource on the cluster and we will avoid
- // raising the error "the object has been modified, please apply
- // your changes to the latest version and try again" which would re-trigger the reconciliation
- if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
- log.Error(err, "Failed to re-fetch memcached")
- return ctrl.Result{}, err
- }
-
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeDegradedMemcached,
- Status: metav1.ConditionTrue, Reason: "Finalizing",
- Message: fmt.Sprintf("Finalizer operations for custom resource %s name were successfully accomplished", memcached.Name)})
-
- if err := r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
-
- log.Info("Removing Finalizer for Memcached after successfully perform the operations")
- if ok := controllerutil.RemoveFinalizer(memcached, memcachedFinalizer); !ok {
- log.Error(err, "Failed to remove finalizer for Memcached")
- return ctrl.Result{Requeue: true}, nil
- }
-
- if err := r.Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to remove finalizer for Memcached")
- return ctrl.Result{}, err
- }
- }
- return ctrl.Result{}, nil
- }
-
// Check if the deployment already exists, if not create a new one
found := &appsv1.Deployment{}
err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
if err != nil && apierrors.IsNotFound(err) {
// Define a new deployment
- dep, err := r.deploymentForMemcached(memcached)
- if err != nil {
- log.Error(err, "Failed to define new Deployment resource for Memcached")
-
- // The following implementation will update the status
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached,
- Status: metav1.ConditionFalse, Reason: "Reconciling",
- Message: fmt.Sprintf("Failed to create Deployment for the custom resource (%s): (%s)", memcached.Name, err)})
-
- if err := r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
+ dep := r.deploymentForMemcached()
+ // Create the Deployment on the cluster
+ if err = r.Create(ctx, dep); err != nil {
+ log.Error(err, "Failed to create new Deployment",
+ "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
+ return ctrl.Result{}, err
+ }
+ ...
+ }
+```
- return ctrl.Result{}, err
- }
+Next, note that the `deploymentForMemcached()` function will need to define and return the Deployment that should be
+created on the cluster. This function should construct the Deployment object with the necessary
+specifications, as demonstrated in the following example:
- log.Info("Creating a new Deployment",
- "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
- if err = r.Create(ctx, dep); err != nil {
- log.Error(err, "Failed to create new Deployment",
- "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
- return ctrl.Result{}, err
- }
-
- // Deployment created successfully
- // We will requeue the reconciliation so that we can ensure the state
- // and move forward for the next operations
- return ctrl.Result{RequeueAfter: time.Minute}, nil
- } else if err != nil {
- log.Error(err, "Failed to get Deployment")
- // Let's return the error for the reconciliation be re-trigged again
- return ctrl.Result{}, err
+```go
+ dep := &appsv1.Deployment{
+ Spec: appsv1.DeploymentSpec{
+ Replicas: &replicas,
+ Template: corev1.PodTemplateSpec{
+ Spec: corev1.PodSpec{
+ Containers: []corev1.Container{{
+ Image: "memcached:1.6.26-alpine3.19",
+ Name: "memcached",
+ ImagePullPolicy: corev1.PullIfNotPresent,
+ Ports: []corev1.ContainerPort{{
+ ContainerPort: 11211,
+ Name: "memcached",
+ }},
+ Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
+ }},
+ },
+ },
+ },
}
+```
+
+Additionally, we need to implement a mechanism to verify that the number of Memcached replicas
+on the cluster matches the desired count specified in the Custom Resource (CR). If there is a
+discrepancy, the reconciliation must update the cluster to ensure consistency. This means that
+whenever a CR of the Memcached Kind is created or updated on the cluster, the controller will
+continuously reconcile the state until the actual number of replicas matches the desired count.
+The following example illustrates this process:
- // The CRD API is defining that the Memcached type, have a MemcachedSpec.Size field
- // to set the quantity of Deployment instances is the desired state on the cluster.
- // Therefore, the following code will ensure the Deployment size is the same as defined
- // via the Size spec of the Custom Resource which we are reconciling.
+```go
+ ...
size := memcached.Spec.Size
if *found.Spec.Replicas != size {
found.Spec.Replicas = &size
if err = r.Update(ctx, found); err != nil {
log.Error(err, "Failed to update Deployment",
"Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
+ return ctrl.Result{}, err
+ }
+ ...
+```
- // Re-fetch the memcached Custom Resource before updating the status
- // so that we have the latest state of the resource on the cluster and we will avoid
- // raising the error "the object has been modified, please apply
- // your changes to the latest version and try again" which would re-trigger the reconciliation
- if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
- log.Error(err, "Failed to re-fetch memcached")
- return ctrl.Result{}, err
- }
-
- // The following implementation will update the status
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached,
- Status: metav1.ConditionFalse, Reason: "Resizing",
- Message: fmt.Sprintf("Failed to update the size for the custom resource (%s): (%s)", memcached.Name, err)})
-
- if err := r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
-
- return ctrl.Result{}, err
- }
-
- // Now, that we update the size we want to requeue the reconciliation
- // so that we can ensure that we have the latest state of the resource before
- // update. Also, it will help ensure the desired state on the cluster
- return ctrl.Result{Requeue: true}, nil
- }
-
- // The following implementation will update the status
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached,
- Status: metav1.ConditionTrue, Reason: "Reconciling",
- Message: fmt.Sprintf("Deployment for custom resource (%s) with %d replicas created successfully", memcached.Name, size)})
+Now, you can review the complete controller responsible for managing Custom Resources of the
+Memcached Kind. This controller ensures that the desired state is maintained in the cluster,
+making sure that our Memcached instance continues running with the number of replicas specified
+by the users.
- if err := r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
+internal/controller/memcached_controller.go
: Our Controller Implementation
- return ctrl.Result{}, nil
-}
+```go
+{{#include ./getting-started/testdata/project/internal/controller/memcached_controller.go}}
```
+
+
+### Diving Into the Controller Implementation
-#### Observing changes on cluster
+#### Setting Up the Manager to Watch Resources
-This controller is persistently observant, monitoring any events associated with this Kind. As a result, pertinent changes
-instantly set off the controller's reconciliation process. It's worth noting that we have implemented the `watches` feature. [(More info)][watches].
-This allows us to monitor events related to creating, updating, or deleting a Custom Resource of the Memcached kind, as well as the Deployment
-which is orchestrated and owned by its respective controller. Observe:
+The whole idea is to [watch][watching-resources] the resources that matter to the controller.
+When a resource that the controller is interested in changes, the Watch triggers the controller's
+reconciliation loop, ensuring that the actual state of the resource matches the desired state
+as defined in the controller's logic.
+
+Notice how we configured the Manager to monitor events such as the creation, update, or deletion of a Custom Resource (CR) of the Memcached kind,
+as well as any changes to the Deployment that the controller manages and owns:
```go
// SetupWithManager sets up the controller with the Manager.
-// Note that the Deployment will be also watched in order to ensure its
-// desirable state on the cluster
+// The Deployment is also watched to ensure its
+// desired state in the cluster.
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
- For(&examplecomv1alpha1.Memcached{}). ## Create watches for the Memcached Kind
- Owns(&appsv1.Deployment{}). ## Create watches for the Deployment which has its controller owned reference
- Complete(r)
-}
+		// Watch the Memcached Custom Resource and trigger reconciliation whenever it
+		// is created, updated, or deleted.
+ For(&cachev1alpha1.Memcached{}).
+ // Watch the Deployment managed by the Memcached controller. If any changes occur to the Deployment
+ // owned and managed by this controller, it will trigger reconciliation, ensuring that the cluster
+ // state aligns with the desired state.
+ Owns(&appsv1.Deployment{}).
+ Complete(r)
+}
```
-
-Set the ownerRef for the Deployment
+#### How Does the Manager Know Which Resources Are Owned by It?
+
+We do not want our Controller to watch every Deployment on the cluster and trigger our
+reconciliation loop. Instead, we only want to trigger reconciliation when the specific
+Deployment running our Memcached instance is changed. For example,
+if someone accidentally deletes our Deployment or changes the number of replicas, we want
+to trigger the reconciliation to ensure that it returns to the desired state.
-See that when we create the Deployment to run the Memcached image we are setting the reference:
+The Manager knows which Deployment to observe because we set the `ownerRef` (Owner Reference):
```go
-// Set the ownerRef for the Deployment
-// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
if err := ctrl.SetControllerReference(memcached, dep, r.Scheme); err != nil {
return nil, err
}
-
```
+
+
+<aside class="note">
+<h1>`ownerRef` and cascading events</h1>
+
+
+The ownerRef is crucial not only for allowing us to observe changes on the specific resource but also because,
+if we delete the Memcached Custom Resource (CR) from the cluster, we want all resources owned by it to be automatically
+deleted as well, in a cascading event.
+
+This ensures that when the parent resource (Memcached CR) is removed, all associated resources
+(like Deployments, Services, etc.) are also cleaned up, maintaining
+a tidy and consistent cluster state.
+
+For more information, see the Kubernetes documentation on [Owners and Dependents](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/).
+
+</aside>
+
-### Setting the RBAC permissions
+### Granting Permissions
+
+It's important to ensure that the Controller has the necessary permissions (i.e., to create, get, update, and list)
+for the resources it manages.
The [RBAC permissions][k8s-rbac] are now configured via [RBAC markers][rbac-markers], which are used to generate and update the
manifest files present in `config/rbac/`. These markers can be found (and should be defined) on the `Reconcile()` method of each controller, see
@@ -457,77 +380,32 @@ how it is implemented in our example:
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
```
-It's important to highlight that if you wish to add or modify RBAC rules, you can do so by updating or adding the respective markers in the controller.
-After making the necessary changes, run the `make generate` command. This will prompt [controller-gen][controller-gen] to refresh the files located under `config/rbac`.
+After making changes to the controller's RBAC markers, run the `make manifests` command. This will prompt [controller-gen][controller-gen]
+to refresh the files located under `config/rbac`.
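+
+For example, if your controller also needed to read Secrets (a hypothetical addition, not part of this tutorial's project), you would add a marker such as the following above the `Reconcile()` method:
+
+```go
+// +kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch
+```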
-
-RBAC generate under config/rbac
+config/rbac/role.yaml
: Our RBAC Role generated
-For each Kind, Kubebuilder will generate scaffold rules with view and edit permissions. (i.e. `memcached_editor_role.yaml` and `memcached_viewer_role.yaml`)
-Those rules are aimed to help system admins know what to allow when granting permissions to a group of users.
-
-
+```yaml
+{{#include ./getting-started/testdata/project/config/rbac/role.yaml}}
+```
+
### Manager (main.go)
-The [Manager][manager] plays a crucial role in overseeing Controllers, which in turn enable operations on the cluster side.
-If you inspect the `cmd/main.go` file, you'll come across the following:
+The [Manager][manager] in the `cmd/main.go` file is responsible for managing the controllers in your application.
+
+cmd/main.go
: Our main.go
```go
-...
- mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
- Scheme: scheme,
- Metrics: metricsserver.Options{BindAddress: metricsAddr},
- HealthProbeBindAddress: probeAddr,
- LeaderElection: enableLeaderElection,
- LeaderElectionID: "1836d577.testproject.org",
- // LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
- // when the Manager ends. This requires the binary to immediately end when the
- // Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
- // speeds up voluntary leader transitions as the new leader doesn't have to wait
- // the LeaseDuration time first.
- //
- // In the default scaffold provided, the program ends immediately after
- // the manager stops, so it would be fine to enable this option. However,
- // if you are doing, or are intending to do, any operation such as perform cleanups
- // after the manager stops then its usage might be unsafe.
- // LeaderElectionReleaseOnCancel: true,
- })
- if err != nil {
- setupLog.Error(err, "unable to start manager")
- os.Exit(1)
- }
+{{#include ./getting-started/testdata/project/cmd/main.go}}
```
-
-The code snippet above outlines the configuration [options][options-manager] for the Manager. While we won't be altering this in our current example,
-it's crucial to understand its location and the initialization process of your operator-based image. The Manager is responsible for overseeing the controllers
-that are produced for your operator's APIs.
+
### Checking the Project running in the cluster
-At this point, you can execute the commands highlighted in the [quick-start][quick-start].
-By executing `make build IMG=myregistry/example:1.0.0`, you'll build the image for your project. For testing purposes, it's recommended to publish this image to a
-public registry. This ensures easy accessibility, eliminating the need for additional configurations. Once that's done, you can deploy the image
-to the cluster using the `make deploy IMG=myregistry/example:1.0.0` command.
-
-
-Consider use Kind
-
-This image ought to be published in the personal registry you specified. And it is required to have access to pull the image
-from the working environment. Make sure you have the proper permission
-to the registry if the above commands don't work.
-
-Consider incorporating Kind into your workflow for a faster, more efficient local development and CI experience.
-Note that, if you're using a Kind cluster, there's no need to push your image to a remote container registry.
-You can directly load your local image into your specified Kind cluster:
-
-```bash
-kind load docker-image :tag --name
-```
-
-For further information, see: [Using Kind For Development Purposes and CI](./reference/kind.md)
-
-
+At this point, you can validate the project on a cluster by following
+the steps defined in the Quick Start;
+see: [Run It On the Cluster](./quick-start#run-it-on-the-cluster)
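+
+As a quick sketch (assuming a cluster reachable via `kubectl` and a registry you can push images to), the flow looks like this:
+
+```shell
+# Build and push the manager image
+make docker-build docker-push IMG=<some-registry>/memcached-operator:tag
+
+# Install the CRDs and deploy the manager to the cluster
+make install
+make deploy IMG=<some-registry>/memcached-operator:tag
+
+# Apply the sample Memcached Custom Resource
+kubectl apply -k config/samples/
+```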
## Next Steps
@@ -557,4 +435,7 @@ implemented for your controller.
[quick-start]: ./quick-start.md
[best-practices]: ./reference/good-practices.md
[cronjob-tutorial]: https://book.kubebuilder.io/cronjob-tutorial/cronjob-tutorial.html
-[deploy-image]: ./plugins/deploy-image-plugin-v1-alpha.md
\ No newline at end of file
+[deploy-image]: ./plugins/deploy-image-plugin-v1-alpha.md
+[GOPATH-golang-docs]: https://golang.org/doc/code.html#GOPATH
+[go-modules-blogpost]: https://blog.golang.org/using-go-modules
+[watching-resources]: ./reference/watching-resources.md
\ No newline at end of file
diff --git a/docs/book/src/getting-started/testdata/project/PROJECT b/docs/book/src/getting-started/testdata/project/PROJECT
index 1867160fcb2..628fed927e7 100644
--- a/docs/book/src/getting-started/testdata/project/PROJECT
+++ b/docs/book/src/getting-started/testdata/project/PROJECT
@@ -5,18 +5,6 @@
domain: example.com
layout:
- go.kubebuilder.io/v4
-plugins:
- deploy-image.go.kubebuilder.io/v1-alpha:
- resources:
- - domain: example.com
- group: cache
- kind: Memcached
- options:
- containerCommand: memcached,-m=64,-o,modern,-v
- containerPort: "11211"
- image: memcached:1.4.36-alpine
- runAsUser: "1001"
- version: v1alpha1
projectName: project
repo: example.com/memcached
resources:
diff --git a/docs/book/src/getting-started/testdata/project/api/v1alpha1/memcached_types.go b/docs/book/src/getting-started/testdata/project/api/v1alpha1/memcached_types.go
index 7c4116e82a5..4dc7dac3b4b 100644
--- a/docs/book/src/getting-started/testdata/project/api/v1alpha1/memcached_types.go
+++ b/docs/book/src/getting-started/testdata/project/api/v1alpha1/memcached_types.go
@@ -13,6 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
+// +kubebuilder:docs-gen:collapse=Apache License
package v1alpha1
@@ -23,6 +24,8 @@ import (
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
+// +kubebuilder:docs-gen:collapse=Imports
+
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
@@ -35,9 +38,6 @@ type MemcachedSpec struct {
// +kubebuilder:validation:Maximum=3
// +kubebuilder:validation:ExclusiveMaximum=false
Size int32 `json:"size,omitempty"`
-
- // Port defines the port that will be used to init the container with the image
- ContainerPort int32 `json:"containerPort,omitempty"`
}
// MemcachedStatus defines the observed state of Memcached
diff --git a/docs/book/src/getting-started/testdata/project/cmd/main.go b/docs/book/src/getting-started/testdata/project/cmd/main.go
index a0f5203184d..b40205924db 100644
--- a/docs/book/src/getting-started/testdata/project/cmd/main.go
+++ b/docs/book/src/getting-started/testdata/project/cmd/main.go
@@ -145,9 +145,8 @@ func main() {
}
if err = (&controller.MemcachedReconciler{
- Client: mgr.GetClient(),
- Scheme: mgr.GetScheme(),
- Recorder: mgr.GetEventRecorderFor("memcached-controller"),
+ Client: mgr.GetClient(),
+ Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Memcached")
os.Exit(1)
diff --git a/docs/book/src/getting-started/testdata/project/config/crd/bases/cache.example.com_memcacheds.yaml b/docs/book/src/getting-started/testdata/project/config/crd/bases/cache.example.com_memcacheds.yaml
index 6c155f5d33c..776b097795e 100644
--- a/docs/book/src/getting-started/testdata/project/config/crd/bases/cache.example.com_memcacheds.yaml
+++ b/docs/book/src/getting-started/testdata/project/config/crd/bases/cache.example.com_memcacheds.yaml
@@ -39,11 +39,6 @@ spec:
spec:
description: MemcachedSpec defines the desired state of Memcached
properties:
- containerPort:
- description: Port defines the port that will be used to init the container
- with the image
- format: int32
- type: integer
size:
description: |-
Size defines the number of Memcached instances
diff --git a/docs/book/src/getting-started/testdata/project/config/manager/manager.yaml b/docs/book/src/getting-started/testdata/project/config/manager/manager.yaml
index 6f7b81dd6eb..1bb9d5a6485 100644
--- a/docs/book/src/getting-started/testdata/project/config/manager/manager.yaml
+++ b/docs/book/src/getting-started/testdata/project/config/manager/manager.yaml
@@ -65,9 +65,6 @@ spec:
- --health-probe-bind-address=:8081
image: controller:latest
name: manager
- env:
- - name: MEMCACHED_IMAGE
- value: memcached:1.4.36-alpine
securityContext:
allowPrivilegeEscalation: false
capabilities:
diff --git a/docs/book/src/getting-started/testdata/project/config/samples/cache_v1alpha1_memcached.yaml b/docs/book/src/getting-started/testdata/project/config/samples/cache_v1alpha1_memcached.yaml
index 486bc341215..26614892b46 100644
--- a/docs/book/src/getting-started/testdata/project/config/samples/cache_v1alpha1_memcached.yaml
+++ b/docs/book/src/getting-started/testdata/project/config/samples/cache_v1alpha1_memcached.yaml
@@ -9,6 +9,3 @@ spec:
# TODO(user): edit the following value to ensure the number
# of Pods/Instances your Operand must have on cluster
size: 1
-
- # TODO(user): edit the following value to ensure the container has the right port to be initialized
- containerPort: 11211
diff --git a/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller.go b/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller.go
index ffb7560aaf1..e99689dd409 100644
--- a/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller.go
+++ b/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller.go
@@ -19,28 +19,22 @@ package controller
import (
"context"
"fmt"
- "os"
- "strings"
- "time"
-
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
- "k8s.io/client-go/tools/record"
+ "time"
+
+ "k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
- "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/log"
cachev1alpha1 "example.com/memcached/api/v1alpha1"
)
-const memcachedFinalizer = "cache.example.com/finalizer"
-
// Definitions to manage status conditions
const (
// typeAvailableMemcached represents the status of the Deployment reconciliation
@@ -52,14 +46,9 @@ const (
// MemcachedReconciler reconciles a Memcached object
type MemcachedReconciler struct {
client.Client
- Scheme *runtime.Scheme
- Recorder record.EventRecorder
+ Scheme *runtime.Scheme
}
-// The following markers are used to generate the rules permissions (RBAC) on config/rbac using controller-gen
-// when the command is executed.
-// To know more about markers see: https://book.kubebuilder.io/reference/markers.html
-
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
@@ -77,6 +66,8 @@ type MemcachedReconciler struct {
// For further info:
// - About Operator Pattern: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
// - About Controllers: https://kubernetes.io/docs/concepts/architecture/controller/
+//
+// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.0/pkg/reconcile
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
@@ -117,79 +108,6 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
}
}
- // Let's add a finalizer. Then, we can define some operations which should
- // occur before the custom resource is deleted.
- // More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers
- if !controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
- log.Info("Adding Finalizer for Memcached")
- if ok := controllerutil.AddFinalizer(memcached, memcachedFinalizer); !ok {
- log.Error(err, "Failed to add finalizer into the custom resource")
- return ctrl.Result{Requeue: true}, nil
- }
-
- if err = r.Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update custom resource to add finalizer")
- return ctrl.Result{}, err
- }
- }
-
- // Check if the Memcached instance is marked to be deleted, which is
- // indicated by the deletion timestamp being set.
- isMemcachedMarkedToBeDeleted := memcached.GetDeletionTimestamp() != nil
- if isMemcachedMarkedToBeDeleted {
- if controllerutil.ContainsFinalizer(memcached, memcachedFinalizer) {
- log.Info("Performing Finalizer Operations for Memcached before delete CR")
-
- // Let's add here a status "Downgrade" to reflect that this resource began its process to be terminated.
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeDegradedMemcached,
- Status: metav1.ConditionUnknown, Reason: "Finalizing",
- Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", memcached.Name)})
-
- if err := r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
-
- // Perform all operations required before removing the finalizer and allow
- // the Kubernetes API to remove the custom resource.
- r.doFinalizerOperationsForMemcached(memcached)
-
- // TODO(user): If you add operations to the doFinalizerOperationsForMemcached method
- // then you need to ensure that all worked fine before deleting and updating the Downgrade status
- // otherwise, you should requeue here.
-
- // Re-fetch the memcached Custom Resource before updating the status
- // so that we have the latest state of the resource on the cluster and we will avoid
- // raising the error "the object has been modified, please apply
- // your changes to the latest version and try again" which would re-trigger the reconciliation
- if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
- log.Error(err, "Failed to re-fetch memcached")
- return ctrl.Result{}, err
- }
-
- meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeDegradedMemcached,
- Status: metav1.ConditionTrue, Reason: "Finalizing",
- Message: fmt.Sprintf("Finalizer operations for custom resource %s name were successfully accomplished", memcached.Name)})
-
- if err := r.Status().Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to update Memcached status")
- return ctrl.Result{}, err
- }
-
- log.Info("Removing Finalizer for Memcached after successfully perform the operations")
- if ok := controllerutil.RemoveFinalizer(memcached, memcachedFinalizer); !ok {
- log.Error(err, "Failed to remove finalizer for Memcached")
- return ctrl.Result{Requeue: true}, nil
- }
-
- if err := r.Update(ctx, memcached); err != nil {
- log.Error(err, "Failed to remove finalizer for Memcached")
- return ctrl.Result{}, err
- }
- }
- return ctrl.Result{}, nil
- }
-
// Check if the deployment already exists, if not create a new one
found := &appsv1.Deployment{}
err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
@@ -282,37 +200,19 @@ func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, nil
}
-// finalizeMemcached will perform the required operations before delete the CR.
-func (r *MemcachedReconciler) doFinalizerOperationsForMemcached(cr *cachev1alpha1.Memcached) {
- // TODO(user): Add the cleanup steps that the operator
- // needs to do before the CR can be deleted. Examples
- // of finalizers include performing backups and deleting
- // resources that are not owned by this CR, like a PVC.
-
- // Note: It is not recommended to use finalizers with the purpose of deleting resources which are
- // created and managed in the reconciliation. These ones, such as the Deployment created on this reconcile,
- // are defined as dependent of the custom resource. See that we use the method ctrl.SetControllerReference.
- // to set the ownerRef which means that the Deployment will be deleted by the Kubernetes API.
- // More info: https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/
-
- // The following implementation will raise an event
- r.Recorder.Event(cr, "Warning", "Deleting",
- fmt.Sprintf("Custom Resource %s is being deleted from the namespace %s",
- cr.Name,
- cr.Namespace))
+// SetupWithManager sets up the controller with the Manager.
+func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
+ return ctrl.NewControllerManagedBy(mgr).
+ For(&cachev1alpha1.Memcached{}).
+ Owns(&appsv1.Deployment{}).
+ Complete(r)
}
// deploymentForMemcached returns a Memcached Deployment object
func (r *MemcachedReconciler) deploymentForMemcached(
memcached *cachev1alpha1.Memcached) (*appsv1.Deployment, error) {
- ls := labelsForMemcached()
replicas := memcached.Spec.Size
-
- // Get the Operand image
- image, err := imageForMemcached()
- if err != nil {
- return nil, err
- }
+ image := "memcached:1.6.26-alpine3.19"
dep := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
@@ -322,46 +222,15 @@ func (r *MemcachedReconciler) deploymentForMemcached(
Spec: appsv1.DeploymentSpec{
Replicas: &replicas,
Selector: &metav1.LabelSelector{
- MatchLabels: ls,
+ MatchLabels: map[string]string{"app.kubernetes.io/name": "project"},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
- Labels: ls,
+ Labels: map[string]string{"app.kubernetes.io/name": "project"},
},
Spec: corev1.PodSpec{
- // TODO(user): Uncomment the following code to configure the nodeAffinity expression
- // according to the platforms which are supported by your solution. It is considered
- // best practice to support multiple architectures. build your manager image using the
- // makefile target docker-buildx. Also, you can use docker manifest inspect
- // to check what are the platforms supported.
- // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
- // Affinity: &corev1.Affinity{
- // NodeAffinity: &corev1.NodeAffinity{
- // RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
- // NodeSelectorTerms: []corev1.NodeSelectorTerm{
- // {
- // MatchExpressions: []corev1.NodeSelectorRequirement{
- // {
- // Key: "kubernetes.io/arch",
- // Operator: "In",
- // Values: []string{"amd64", "arm64", "ppc64le", "s390x"},
- // },
- // {
- // Key: "kubernetes.io/os",
- // Operator: "In",
- // Values: []string{"linux"},
- // },
- // },
- // },
- // },
- // },
- // },
- // },
SecurityContext: &corev1.PodSecurityContext{
RunAsNonRoot: &[]bool{true}[0],
- // IMPORTANT: seccomProfile was introduced with Kubernetes 1.19
- // If you are looking for to produce solutions to be supported
- // on lower versions you must remove this option.
SeccompProfile: &corev1.SeccompProfile{
Type: corev1.SeccompProfileTypeRuntimeDefault,
},
@@ -383,7 +252,7 @@ func (r *MemcachedReconciler) deploymentForMemcached(
},
},
Ports: []corev1.ContainerPort{{
- ContainerPort: memcached.Spec.ContainerPort,
+ ContainerPort: 11211,
Name: "memcached",
}},
Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
@@ -400,38 +269,3 @@ func (r *MemcachedReconciler) deploymentForMemcached(
}
return dep, nil
}
-
-// labelsForMemcached returns the labels for selecting the resources
-// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
-func labelsForMemcached() map[string]string {
- var imageTag string
- image, err := imageForMemcached()
- if err == nil {
- imageTag = strings.Split(image, ":")[1]
- }
- return map[string]string{"app.kubernetes.io/name": "project",
- "app.kubernetes.io/version": imageTag,
- "app.kubernetes.io/managed-by": "MemcachedController",
- }
-}
-
-// imageForMemcached gets the Operand image which is managed by this controller
-// from the MEMCACHED_IMAGE environment variable defined in the config/manager/manager.yaml
-func imageForMemcached() (string, error) {
- var imageEnvVar = "MEMCACHED_IMAGE"
- image, found := os.LookupEnv(imageEnvVar)
- if !found {
- return "", fmt.Errorf("Unable to find %s environment variable with the image", imageEnvVar)
- }
- return image, nil
-}
-
-// SetupWithManager sets up the controller with the Manager.
-// Note that the Deployment will be also watched in order to ensure its
-// desirable state on the cluster
-func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
- return ctrl.NewControllerManagedBy(mgr).
- For(&cachev1alpha1.Memcached{}).
- Owns(&appsv1.Deployment{}).
- Complete(r)
-}
diff --git a/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller_test.go b/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller_test.go
index 906ed74619e..cdb72087958 100644
--- a/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller_test.go
+++ b/docs/book/src/getting-started/testdata/project/internal/controller/memcached_controller_test.go
@@ -19,110 +19,69 @@ package controller
import (
"context"
"fmt"
- "os"
"time"
- //nolint:golint
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
appsv1 "k8s.io/api/apps/v1"
- corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+
cachev1alpha1 "example.com/memcached/api/v1alpha1"
)
-var _ = Describe("Memcached controller", func() {
- Context("Memcached controller test", func() {
-
- const MemcachedName = "test-memcached"
+var _ = Describe("Memcached Controller", func() {
+ Context("When reconciling a resource", func() {
+ const resourceName = "test-resource"
ctx := context.Background()
- namespace := &corev1.Namespace{
- ObjectMeta: metav1.ObjectMeta{
- Name: MemcachedName,
- Namespace: MemcachedName,
- },
- }
-
typeNamespacedName := types.NamespacedName{
- Name: MemcachedName,
- Namespace: MemcachedName,
+ Name: resourceName,
+			Namespace: "default", // TODO(user): Modify as needed
}
memcached := &cachev1alpha1.Memcached{}
BeforeEach(func() {
- By("Creating the Namespace to perform the tests")
- err := k8sClient.Create(ctx, namespace)
- Expect(err).To(Not(HaveOccurred()))
-
- By("Setting the Image ENV VAR which stores the Operand image")
- err = os.Setenv("MEMCACHED_IMAGE", "example.com/image:test")
- Expect(err).To(Not(HaveOccurred()))
-
By("creating the custom resource for the Kind Memcached")
- err = k8sClient.Get(ctx, typeNamespacedName, memcached)
+ err := k8sClient.Get(ctx, typeNamespacedName, memcached)
if err != nil && errors.IsNotFound(err) {
- // Let's mock our custom resource at the same way that we would
- // apply on the cluster the manifest under config/samples
- memcached := &cachev1alpha1.Memcached{
+ resource := &cachev1alpha1.Memcached{
ObjectMeta: metav1.ObjectMeta{
- Name: MemcachedName,
- Namespace: namespace.Name,
+ Name: resourceName,
+ Namespace: "default",
},
Spec: cachev1alpha1.MemcachedSpec{
- Size: 1,
- ContainerPort: 11211,
+ Size: 1,
},
}
-
- err = k8sClient.Create(ctx, memcached)
- Expect(err).To(Not(HaveOccurred()))
+ Expect(k8sClient.Create(ctx, resource)).To(Succeed())
}
})
AfterEach(func() {
- By("removing the custom resource for the Kind Memcached")
- found := &cachev1alpha1.Memcached{}
- err := k8sClient.Get(ctx, typeNamespacedName, found)
- Expect(err).To(Not(HaveOccurred()))
+ // TODO(user): Cleanup logic after each test, like removing the resource instance.
+ resource := &cachev1alpha1.Memcached{}
+ err := k8sClient.Get(ctx, typeNamespacedName, resource)
+ Expect(err).NotTo(HaveOccurred())
- Eventually(func() error {
- return k8sClient.Delete(context.TODO(), found)
- }, 2*time.Minute, time.Second).Should(Succeed())
-
- // TODO(user): Attention if you improve this code by adding other context test you MUST
- // be aware of the current delete namespace limitations.
- // More info: https://book.kubebuilder.io/reference/envtest.html#testing-considerations
- By("Deleting the Namespace to perform the tests")
- _ = k8sClient.Delete(ctx, namespace)
-
- By("Removing the Image ENV VAR which stores the Operand image")
- _ = os.Unsetenv("MEMCACHED_IMAGE")
+ By("Cleanup the specific resource instance Memcached")
+ Expect(k8sClient.Delete(ctx, resource)).To(Succeed())
})
-
- It("should successfully reconcile a custom resource for Memcached", func() {
- By("Checking if the custom resource was successfully created")
- Eventually(func() error {
- found := &cachev1alpha1.Memcached{}
- return k8sClient.Get(ctx, typeNamespacedName, found)
- }, time.Minute, time.Second).Should(Succeed())
-
- By("Reconciling the custom resource created")
- memcachedReconciler := &MemcachedReconciler{
+ It("should successfully reconcile the resource", func() {
+ By("Reconciling the created resource")
+ controllerReconciler := &MemcachedReconciler{
Client: k8sClient,
Scheme: k8sClient.Scheme(),
}
- _, err := memcachedReconciler.Reconcile(ctx, reconcile.Request{
+ _, err := controllerReconciler.Reconcile(ctx, reconcile.Request{
NamespacedName: typeNamespacedName,
})
- Expect(err).To(Not(HaveOccurred()))
-
+ Expect(err).NotTo(HaveOccurred())
By("Checking if Deployment was successfully created in the reconciliation")
Eventually(func() error {
found := &appsv1.Deployment{}
diff --git a/hack/docs/internal/getting-started/generate_getting_started.go b/hack/docs/internal/getting-started/generate_getting_started.go
index fb327405579..6fe6012dfd5 100644
--- a/hack/docs/internal/getting-started/generate_getting_started.go
+++ b/hack/docs/internal/getting-started/generate_getting_started.go
@@ -18,6 +18,9 @@ package gettingstarted
import (
"os/exec"
+ "path/filepath"
+
+ pluginutil "sigs.k8s.io/kubebuilder/v4/pkg/plugin/util"
hackutils "sigs.k8s.io/kubebuilder/v4/hack/docs/utils"
@@ -36,7 +39,169 @@ func NewSample(binaryPath, samplePath string) Sample {
}
func (sp *Sample) UpdateTutorial() {
- log.Println("TODO: update tutorial")
+ sp.updateApi()
+ sp.updateSample()
+ sp.updateController()
+ sp.updateControllerTest()
+}
+
+func (sp *Sample) updateControllerTest() {
+ file := "internal/controller/memcached_controller_test.go"
+ err := pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, file),
+ "\"context\"",
+ `"context"
+ "fmt"
+ "time"`,
+ )
+ hackutils.CheckError("add imports", err)
+
+ err = pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, file),
+ ". \"github.com/onsi/gomega\"",
+ `. "github.com/onsi/gomega"
+ appsv1 "k8s.io/api/apps/v1"`,
+ )
+ hackutils.CheckError("add imports apis", err)
+
+ err = pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, file),
+ "// TODO(user): Specify other spec details if needed.",
+ `Spec: cachev1alpha1.MemcachedSpec{
+ Size: 1,
+ },`,
+ )
+ hackutils.CheckError("add spec apis", err)
+
+ err = pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, file),
+ `// TODO(user): Add more specific assertions depending on your controller's reconciliation logic.
+ // Example: If you expect a certain status condition after reconciliation, verify it here.`,
+ `By("Checking if Deployment was successfully created in the reconciliation")
+ Eventually(func() error {
+ found := &appsv1.Deployment{}
+ return k8sClient.Get(ctx, typeNamespacedName, found)
+ }, time.Minute, time.Second).Should(Succeed())
+
+ By("Checking the latest Status Condition added to the Memcached instance")
+ Eventually(func() error {
+ if len(memcached.Status.Conditions) != 0 {
+ latestStatusCondition := memcached.Status.Conditions[len(memcached.Status.Conditions)-1]
+ expectedLatestStatusCondition := metav1.Condition{
+ Type: typeAvailableMemcached,
+ Status: metav1.ConditionTrue,
+ Reason: "Reconciling",
+ Message: fmt.Sprintf(
+ "Deployment for custom resource (%s) with %d replicas created successfully",
+ memcached.Name,
+ memcached.Spec.Size),
+ }
+ // Zero out LastTransitionTime, which is stamped by meta.SetStatusCondition,
+ // before comparing against the expected condition.
+ latestStatusCondition.LastTransitionTime = metav1.Time{}
+ if latestStatusCondition != expectedLatestStatusCondition {
+ return fmt.Errorf("the latest status condition added to the Memcached instance is not as expected")
+ }
+ }
+ return nil
+ }, time.Minute, time.Second).Should(Succeed())`,
+ )
+ hackutils.CheckError("add spec apis", err)
+}
+
+func (sp *Sample) updateApi() {
+ var err error
+ path := "api/v1alpha1/memcached_types.go"
+ err = pluginutil.InsertCode(
+ filepath.Join(sp.ctx.Dir, path),
+ `limitations under the License.
+*/`,
+ `
+// +kubebuilder:docs-gen:collapse=Apache License
+
+`)
+ hackutils.CheckError("collapse license in memcached api", err)
+
+ err = pluginutil.InsertCode(
+ filepath.Join(sp.ctx.Dir, path),
+ `Any new fields you add must have json tags for the fields to be serialized.
+`,
+ `
+// +kubebuilder:docs-gen:collapse=Imports
+`)
+ hackutils.CheckError("collapse imports in memcached api", err)
+
+ err = pluginutil.ReplaceInFile(filepath.Join(sp.ctx.Dir, path), oldSpecApi, newSpecApi)
+ hackutils.CheckError("replace spec api", err)
+
+ err = pluginutil.ReplaceInFile(filepath.Join(sp.ctx.Dir, path), oldStatusApi, newStatusApi)
+ hackutils.CheckError("replace status api", err)
+}
+
+func (sp *Sample) updateSample() {
+ file := filepath.Join(sp.ctx.Dir, "config/samples/cache_v1alpha1_memcached.yaml")
+ err := pluginutil.ReplaceInFile(file, "# TODO(user): Add fields here", sampleSizeFragment)
+ hackutils.CheckError("update sample to add size", err)
+}
+
+func (sp *Sample) updateController() {
+ pathFile := "internal/controller/memcached_controller.go"
+ err := pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ "\"context\"",
+ controllerImports,
+ )
+ hackutils.CheckError("add imports", err)
+
+ err = pluginutil.InsertCode(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ "cachev1alpha1 \"example.com/memcached/api/v1alpha1\"\n)",
+ controllerStatusTypes,
+ )
+ hackutils.CheckError("add status types", err)
+
+ err = pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ controllerInfoReconcileOld,
+ controllerInfoReconcileNew,
+ )
+ hackutils.CheckError("add status types", err)
+
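+ // The controller manages Deployments, emits Events, and reads Pods, so
+ // grant the corresponding RBAC permissions alongside the scaffolded markers.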
+ err = pluginutil.InsertCode(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ "// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update",
+ `
+// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
+// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch`,
+ )
+ hackutils.CheckError("add markers", err)
+
+ err = pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ "_ = log.FromContext(ctx)",
+ "log := log.FromContext(ctx)",
+ )
+ hackutils.CheckError("add log var", err)
+
+ err = pluginutil.ReplaceInFile(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ "// TODO(user): your logic here",
+ controllerReconcileImplementation,
+ )
+ hackutils.CheckError("add reconcile implementation", err)
+
+ err = pluginutil.AppendCodeIfNotExist(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ controllerDeploymentFunc,
+ )
+ hackutils.CheckError("add func to create Deployment", err)
+
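+ // Also watch Deployments owned by a Memcached so that changes to them
+ // re-trigger reconciliation of the owning resource.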
+ err = pluginutil.InsertCode(
+ filepath.Join(sp.ctx.Dir, pathFile),
+ "For(&cachev1alpha1.Memcached{}).",
+ `
+ Owns(&appsv1.Deployment{}).`,
+ )
+ hackutils.CheckError("add reconcile implementation", err)
}
// Prepare the Context for the sample project
@@ -65,12 +230,7 @@ func (sp *Sample) GenerateSampleProject() {
"--group", "cache",
"--version", "v1alpha1",
"--kind", "Memcached",
- "--image", "memcached:1.4.36-alpine",
- "--image-container-command", "memcached,-m=64,-o,modern,-v",
- "--image-container-port", "11211",
- "--run-as-user", "1001",
- "--plugins", "deploy-image/v1-alpha",
- "--make=false",
+ "--resource", "--controller",
)
hackutils.CheckError("Creating the API", err)
}
@@ -88,3 +248,250 @@ func (sp *Sample) CodeGen() {
_, err = sp.ctx.Run(cmd)
hackutils.CheckError("Failed to run go mod tidy all for getting started tutorial", err)
}
+
+const oldSpecApi = "// Foo is an example field of Memcached. Edit memcached_types.go to remove/update\n\tFoo string `json:\"foo,omitempty\"`"
+const newSpecApi = `// Size defines the number of Memcached instances
+ // The following markers will use OpenAPI v3 schema to validate the value
+ // More info: https://book.kubebuilder.io/reference/markers/crd-validation.html
+ // +kubebuilder:validation:Minimum=1
+ // +kubebuilder:validation:Maximum=3
+ // +kubebuilder:validation:ExclusiveMaximum=false
+ Size int32 ` + "`json:\"size,omitempty\"`"
+
+const oldStatusApi = `// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
+ // Important: Run "make" to regenerate code after modifying this file`
+
+const newStatusApi = `// Represents the observations of a Memcached's current state.
+ // Memcached.status.conditions.type is one of: "Available", "Progressing", or "Degraded"
+ // Memcached.status.conditions.status is one of True, False, Unknown.
+ // Memcached.status.conditions.reason should be a CamelCase string; producers of specific
+ // condition types may define expected values and meanings for this field, and whether the values
+ // are considered a guaranteed API.
+ // Memcached.status.conditions.message is a human-readable message indicating details about the transition.
+ // For further information see: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties
+
+ Conditions []metav1.Condition ` + "`json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`"
+
+const sampleSizeFragment = `# TODO(user): edit the following value to set the number
+ # of Pods/Instances your Operand must have on the cluster
+ size: 1`
+
+const controllerImports = `"context"
+ "fmt"
+ "time"
+ appsv1 "k8s.io/api/apps/v1"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/api/meta"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+`
+
+const controllerStatusTypes = `
+// Definitions to manage status conditions
+const (
+ // typeAvailableMemcached represents the status of the Deployment reconciliation
+ typeAvailableMemcached = "Available"
+ // typeDegradedMemcached represents the status used when the custom resource is deleted and the finalizer operations are yet to occur.
+ typeDegradedMemcached = "Degraded"
+)`
+
+const controllerInfoReconcileOld = `// TODO(user): Modify the Reconcile function to compare the state specified by
+// the Memcached object against the actual cluster state, and then
+// perform operations to make the cluster state reflect the state specified by
+// the user.`
+
+const controllerInfoReconcileNew = `// It is essential for the controller's reconciliation loop to be idempotent. By following the Operator
+// pattern you will create Controllers which provide a reconcile function
+// responsible for synchronizing resources until the desired state is reached on the cluster.
+// Breaking this recommendation goes against the design principles of controller-runtime
+// and may lead to unforeseen consequences, such as resources becoming stuck and requiring manual intervention.
+// For further info:
+// - About Operator Pattern: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
+// - About Controllers: https://kubernetes.io/docs/concepts/architecture/controller/`
+
+const controllerReconcileImplementation = `// Fetch the Memcached instance
+ // The purpose is to check whether the Custom Resource for the Kind Memcached
+ // is applied on the cluster; if not, we return nil to stop the reconciliation
+ memcached := &cachev1alpha1.Memcached{}
+ err := r.Get(ctx, req.NamespacedName, memcached)
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ // If the custom resource is not found, it usually means it was deleted or not created.
+ // In that case, we stop the reconciliation
+ log.Info("memcached resource not found. Ignoring since object must be deleted")
+ return ctrl.Result{}, nil
+ }
+ // Error reading the object - requeue the request.
+ log.Error(err, "Failed to get memcached")
+ return ctrl.Result{}, err
+ }
+
+ // Let's just set the status as Unknown when no status is available
+ if len(memcached.Status.Conditions) == 0 {
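+ // meta.SetStatusCondition upserts the condition by Type, stamping
+ // LastTransitionTime when the status changes.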
+ meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
+ if err = r.Status().Update(ctx, memcached); err != nil {
+ log.Error(err, "Failed to update Memcached status")
+ return ctrl.Result{}, err
+ }
+
+ // Let's re-fetch the memcached Custom Resource after updating the status
+ // so that we have the latest state of the resource on the cluster and avoid
+ // raising "the object has been modified, please apply
+ // your changes to the latest version and try again", which would re-trigger
+ // the reconciliation if we tried to update it again in the following operations
+ if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
+ log.Error(err, "Failed to re-fetch memcached")
+ return ctrl.Result{}, err
+ }
+ }
+
+ // Check if the Deployment already exists; if not, create a new one
+ found := &appsv1.Deployment{}
+ err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
+ if err != nil && apierrors.IsNotFound(err) {
+ // Define a new deployment
+ dep, err := r.deploymentForMemcached(memcached)
+ if err != nil {
+ log.Error(err, "Failed to define new Deployment resource for Memcached")
+
+ // The following implementation will update the status
+ meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached,
+ Status: metav1.ConditionFalse, Reason: "Reconciling",
+ Message: fmt.Sprintf("Failed to create Deployment for the custom resource (%s): (%s)", memcached.Name, err)})
+
+ if err := r.Status().Update(ctx, memcached); err != nil {
+ log.Error(err, "Failed to update Memcached status")
+ return ctrl.Result{}, err
+ }
+
+ return ctrl.Result{}, err
+ }
+
+ log.Info("Creating a new Deployment",
+ "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
+ if err = r.Create(ctx, dep); err != nil {
+ log.Error(err, "Failed to create new Deployment",
+ "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
+ return ctrl.Result{}, err
+ }
+
+ // Deployment created successfully
+ // We will requeue the reconciliation so that we can ensure the desired state
+ // and move forward to the next operations
+ return ctrl.Result{RequeueAfter: time.Minute}, nil
+ } else if err != nil {
+ log.Error(err, "Failed to get Deployment")
+ // Let's return the error so that the reconciliation is re-triggered
+ return ctrl.Result{}, err
+ }
+
+ // The CRD API defines that the Memcached type has a MemcachedSpec.Size field
+ // to set the desired number of Deployment replicas on the cluster.
+ // Therefore, the following code ensures the Deployment size is the same as defined
+ // via the Size spec of the Custom Resource which we are reconciling.
+ size := memcached.Spec.Size
+ if *found.Spec.Replicas != size {
+ found.Spec.Replicas = &size
+ if err = r.Update(ctx, found); err != nil {
+ log.Error(err, "Failed to update Deployment",
+ "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
+
+ // Re-fetch the memcached Custom Resource before updating the status
+ // so that we have the latest state of the resource on the cluster and avoid
+ // raising "the object has been modified, please apply
+ // your changes to the latest version and try again", which would re-trigger the reconciliation
+ if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
+ log.Error(err, "Failed to re-fetch memcached")
+ return ctrl.Result{}, err
+ }
+
+ // The following implementation will update the status
+ meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached,
+ Status: metav1.ConditionFalse, Reason: "Resizing",
+ Message: fmt.Sprintf("Failed to update the size for the custom resource (%s): (%s)", memcached.Name, err)})
+
+ if err := r.Status().Update(ctx, memcached); err != nil {
+ log.Error(err, "Failed to update Memcached status")
+ return ctrl.Result{}, err
+ }
+
+ return ctrl.Result{}, err
+ }
+
+ // Now that we have updated the size, we want to requeue the reconciliation
+ // so that we can ensure we have the latest state of the resource before
+ // the next update. This also helps ensure the desired state on the cluster
+ return ctrl.Result{Requeue: true}, nil
+ }
+
+ // The following implementation will update the status
+ meta.SetStatusCondition(&memcached.Status.Conditions, metav1.Condition{Type: typeAvailableMemcached,
+ Status: metav1.ConditionTrue, Reason: "Reconciling",
+ Message: fmt.Sprintf("Deployment for custom resource (%s) with %d replicas created successfully", memcached.Name, size)})
+
+ if err := r.Status().Update(ctx, memcached); err != nil {
+ log.Error(err, "Failed to update Memcached status")
+ return ctrl.Result{}, err
+ }`
+
+const controllerDeploymentFunc = `// deploymentForMemcached returns a Memcached Deployment object
+func (r *MemcachedReconciler) deploymentForMemcached(
+ memcached *cachev1alpha1.Memcached) (*appsv1.Deployment, error) {
+ replicas := memcached.Spec.Size
+ image := "memcached:1.6.26-alpine3.19"
+
+ dep := &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: memcached.Name,
+ Namespace: memcached.Namespace,
+ },
+ Spec: appsv1.DeploymentSpec{
+ Replicas: &replicas,
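+ // The selector labels must match the Pod template labels below so the
+ // Deployment can identify which Pods it owns.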
+ Selector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{"app.kubernetes.io/name": "project"},
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{"app.kubernetes.io/name": "project"},
+ },
+ Spec: corev1.PodSpec{
+ SecurityContext: &corev1.PodSecurityContext{
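+ // &[]bool{true}[0] is a concise way to obtain a pointer to a boolean
+ // literal, since Go does not allow taking the address of a constant directly.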
+ RunAsNonRoot: &[]bool{true}[0],
+ SeccompProfile: &corev1.SeccompProfile{
+ Type: corev1.SeccompProfileTypeRuntimeDefault,
+ },
+ },
+ Containers: []corev1.Container{{
+ Image: image,
+ Name: "memcached",
+ ImagePullPolicy: corev1.PullIfNotPresent,
+ // Ensure restrictive context for the container
+ // More info: https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
+ SecurityContext: &corev1.SecurityContext{
+ RunAsNonRoot: &[]bool{true}[0],
+ RunAsUser: &[]int64{1001}[0],
+ AllowPrivilegeEscalation: &[]bool{false}[0],
+ Capabilities: &corev1.Capabilities{
+ Drop: []corev1.Capability{
+ "ALL",
+ },
+ },
+ },
+ Ports: []corev1.ContainerPort{{
+ ContainerPort: 11211,
+ Name: "memcached",
+ }},
+ Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
+ }},
+ },
+ },
+ },
+ }
+
+ // Set the ownerRef for the Deployment
+ // More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
+ if err := ctrl.SetControllerReference(memcached, dep, r.Scheme); err != nil {
+ return nil, err
+ }
+ return dep, nil
+}`