diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 4eccdf9ccfa..efe8b4c2263 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -87,27 +87,27 @@ To fully test the proposed plugin: 1. Create a new package(folder) under `test/e2e/`. 2. Create [e2e_suite_test.go](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/v4/e2e_suite_test.go), which imports the necessary testing framework. 3. Create `generate_test.go` ([ref](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/v4/generate_test.go)). That should: - - Introduce/Receive a `TextContext` instance - - Trigger the plugin's bound subcommands. See [Init](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L213), [CreateAPI](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L222) - - Use [PluginUtil](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/pkg/plugin/util) to verify the scaffolded outputs. See [InsertCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/pkg/plugin/util/util.go#L67), [ReplaceInFile](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L196), [UncommendCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L86) +- Introduce/Receive a `TextContext` instance +- Trigger the plugin's bound subcommands. See [Init](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L213), [CreateAPI](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L222) +- Use [PluginUtil](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/pkg/plugin/util) to verify the scaffolded outputs. See [InsertCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/pkg/plugin/util/util.go#L67), [ReplaceInFile](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L196), [UncommendCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L86) 4. Create `plugin_cluster_test.go` ([ref](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/v4/plugin_cluster_test.go)). That should: - - 4.1. Setup testing environment, e.g: +- 4.1. Setup testing environment, e.g: - - Cleanup environment, create temp dir. See [Prepare](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L97) - - If your test will cover the provided features then, ensure that you install prerequisites CRDs: See [InstallCertManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L138), [InstallPrometheusManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L171) +- Cleanup environment, create temp dir. See [Prepare](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L97) +- If your test will cover the provided features then, ensure that you install prerequisites CRDs: See [InstallCertManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L138), [InstallPrometheusManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L171) - - 4.2. Run the function from `generate_test.go`. +- 4.2. Run the function from `generate_test.go`. - - 4.3. Further make sure the scaffolded output works, e.g: +- 4.3. Further make sure the scaffolded output works, e.g: - - Execute commands in your `Makefile`. 
See [Make](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L240) - - Temporary load image of the testing controller. See [LoadImageToKindCluster](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L283) - - Call Kubectl to validate running resources. See [utils.Kubectl](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/test/e2e/utils#Kubectl) +- Execute commands in your `Makefile`. See [Make](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L240) +- Temporary load image of the testing controller. See [LoadImageToKindCluster](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L283) +- Call Kubectl to validate running resources. See [utils.Kubectl](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/test/e2e/utils#Kubectl) - - 4.4. Delete temporary resources after testing exited, e.g: - - Uninstall prerequisites CRDs: See [UninstallPrometheusOperManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L183) - - Delete temp dir. See [Destroy](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L255) +- 4.4. Delete temporary resources after testing exited, e.g: +- Uninstall prerequisites CRDs: See [UninstallPrometheusOperManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L183) +- Delete temp dir. See [Destroy](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L255) 5. Add the command in [test/e2e/plugin](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/setup.sh#L65) to run your testing code: diff --git a/DESIGN.md b/DESIGN.md index d736a0fa8d0..67655305a57 100644 --- a/DESIGN.md +++ b/DESIGN.md @@ -6,112 +6,112 @@ project and its various components. ## Overarching * **Libraries Over Code Generation**: Generated code is messy to maintain, - hard for humans to change and understand, and hard to update. Library - code is easy to update (just increase your dependency version), easier - to version using existing mechanisms, and more concise. +hard for humans to change and understand, and hard to update. Library +code is easy to update (just increase your dependency version), easier +to version using existing mechanisms, and more concise. * **Copy-pasting is bad**: Copy-pasted code suffers from similar problems - as code generation, except more acutely. Copy-pasted code is nearly - impossible to easy update, and frequently suffers from bugs and - misunderstandings. If something is being copy-pasted, it should - refactored into a library component or remote - [kustomize](https://sigs.k8s.io/kustomize) base. +as code generation, except more acutely. Copy-pasted code is nearly +impossible to easy update, and frequently suffers from bugs and +misunderstandings. If something is being copy-pasted, it should +refactored into a library component or remote +[kustomize](https://sigs.k8s.io/kustomize) base. * **Common Cases Should Be Easy**: The 80-90% common cases should be - simple and easy for users to understand. +simple and easy for users to understand. * **Uncommon Cases Should Be Possible**: There shouldn't be situations - where it's downright impossible to do something within - controller-runtime or controller-tools. 
It may take extra digging or - coding, and it may involve interoperating with lower-level components, - but it should be possible without unreasonable friction. +where it's downright impossible to do something within +controller-runtime or controller-tools. It may take extra digging or +coding, and it may involve interoperating with lower-level components, +but it should be possible without unreasonable friction. ## Kubebuilder * **Kubebuilder Has Opinions**: Kubebuilder exists as an opinionated - project generator. It should strive to give users a reasonable project - layout that's simple enough to understand when getting started, but - provides room to grow. It might not match everyone's opinions, but it - should strive to be useful to most. +project generator. It should strive to give users a reasonable project +layout that's simple enough to understand when getting started, but +provides room to grow. It might not match everyone's opinions, but it +should strive to be useful to most. * **Batteries Included**: Kubebuilder projects should contain enough - deployment information to reasonably develop and run the scaffolded - project. This includes testing, deployment files, and development - infrastructure to go from code to running containers. +deployment information to reasonably develop and run the scaffolded +project. This includes testing, deployment files, and development +infrastructure to go from code to running containers. ## controller-tools and controller-runtime * **Sufficient But Composable**: controller-tools and controller-runtime - should be sufficient for building a custom controller by hand. While - scaffolding and additional libraries may make life easier, building - without should be as painless as possible. That being said, they should - strive to be usable as building blocks for higher-level libraries as - well. +should be sufficient for building a custom controller by hand. While +scaffolding and additional libraries may make life easier, building +without should be as painless as possible. That being said, they should +strive to be usable as building blocks for higher-level libraries as +well. * **Self-Sufficient Docs**: controller-tools and controller-runtime should - strive to have self-sufficient docs (i.e. documentation that doesn't - require reading other libraries' documentation for common use cases). - Examples should be plentiful. +strive to have self-sufficient docs (i.e. documentation that doesn't +require reading other libraries' documentation for common use cases). +Examples should be plentiful. * **Contained Arcana**: Developers should not need to be experts in - Kubernetes API machinery to develop controllers, but those familiar with - Kubernetes API machinery should not feel out of place. Abstractions - should be intuitive to new users but feel familiar to experienced ones. - Abstractions should embrace the concepts of Kubernetes (e.g. declarative - idempotent reconcilers) while simplifying the details. +Kubernetes API machinery to develop controllers, but those familiar with +Kubernetes API machinery should not feel out of place. Abstractions +should be intuitive to new users but feel familiar to experienced ones. +Abstractions should embrace the concepts of Kubernetes (e.g. declarative +idempotent reconcilers) while simplifying the details. ## controller-runtime * **Abstractions Should Be Layered**: Abstractions should be built on top - of lower layers, such that advanced users can write custom logic while - still working within the existing model. 
For instance, the controller - builder is built on top of the event, source, and handler helpers, which - are in turn built for use with the event, source, and handler - interfaces. +of lower layers, such that advanced users can write custom logic while +still working within the existing model. For instance, the controller +builder is built on top of the event, source, and handler helpers, which +are in turn built for use with the event, source, and handler +interfaces. * **Repetitive Stress Injuries Are Bad**: - When possible, commonly used pieces should be exposed in a way that - enables clear, concise code. This includes aliasing groups of - functionality under "alias" or "prelude" packages to avoid having 40 - lines of imports, including common idioms as flexible helpers, and - infering resource information from the user's object types in client - code. +When possible, commonly used pieces should be exposed in a way that +enables clear, concise code. This includes aliasing groups of +functionality under "alias" or "prelude" packages to avoid having 40 +lines of imports, including common idioms as flexible helpers, and +infering resource information from the user's object types in client +code. * **A Little Bit of Magic Goes a Long Way**: In absence of generics, - reflection is acceptable, especially when it leads to clearer, conciser - code. However, when possible interfaces that use reflection should be - designed to avoid requiring the end-developer to use type assertions, - string splitting, which are error-prone and repetitive. These should be - dealt with inside controller-runtime internals. +reflection is acceptable, especially when it leads to clearer, conciser +code. However, when possible interfaces that use reflection should be +designed to avoid requiring the end-developer to use type assertions, +string splitting, which are error-prone and repetitive. These should be +dealt with inside controller-runtime internals. * **Defaults Over Constructors**: When not a huge performance impact, - favor auto-defaulting and `Options` structs over constructors. - Constructors quickly become unclear due to lack of names associated - with values, and don't work well with optional values. +favor auto-defaulting and `Options` structs over constructors. +Constructors quickly become unclear due to lack of names associated +with values, and don't work well with optional values. ## Development * **Words Are Better Than Letters**: Don't abbreviate variable names - unless it's blindingly obvious what they are (e.g. `ctx` for `Context`). - Single- and double-letter method receivers are acceptable, but single- - and double-letter variables quickly become confusing the longer a code - block gets. +unless it's blindingly obvious what they are (e.g. `ctx` for `Context`). +Single- and double-letter method receivers are acceptable, but single- +and double-letter variables quickly become confusing the longer a code +block gets. * **Well-commented code**: Code should be commented and given Godocs, even - private methods and functions. It may *seem* obvious what they do at the - time and why, but you might forget, and others will certainly come along. +private methods and functions. It may *seem* obvious what they do at the +time and why, but you might forget, and others will certainly come along. * **Test Behaviors**: Test cases should be comprehensible as sets of - expected behaviors. Test cases read without code (e.g. 
just using `It`, - `Describe`, `Context`, and `By` lines) should still be able to explain - what's required of the tested interface. Testing behaviors makes - internal refactors easier, and makes reading tests easier. +expected behaviors. Test cases read without code (e.g. just using `It`, +`Describe`, `Context`, and `By` lines) should still be able to explain +what's required of the tested interface. Testing behaviors makes +internal refactors easier, and makes reading tests easier. * **Real Components Over Mocks**: Avoid mocks and recording actions. Mocks - tend to be brittle and gradually become more complicated over time (e.g. - fake client implementations tend to grow into poorly-written, incomplete - API servers). Recording of actions tends to lead to brittle tests that - requires changes during refactors. Instead, test that the end desired - state is correct. Test the way the world should be, without caring how - it got there, and provide easy ways to set up the real components so - that mocks aren't required. +tend to be brittle and gradually become more complicated over time (e.g. +fake client implementations tend to grow into poorly-written, incomplete +API servers). Recording of actions tends to lead to brittle tests that +requires changes during refactors. Instead, test that the end desired +state is correct. Test the way the world should be, without caring how +it got there, and provide easy ways to set up the real components so +that mocks aren't required. diff --git a/README.md b/README.md index 7ab2727e630..d5f501192bb 100644 --- a/README.md +++ b/README.md @@ -23,10 +23,10 @@ Kubebuilder is developed on top of the [controller-runtime][controller-runtime] ### Kubebuilder is also a library Kubebuilder is extensible and can be used as a library in other projects. -[Operator-SDK][operator-sdk] is a good example of a project that uses Kubebuilder as a library. +[Operator-SDK][operator-sdk] is a good example of a project that uses Kubebuilder as a library. [Operator-SDK][operator-sdk] uses the plugin feature to include non-Go operators _e.g. operator-sdk's Ansible and Helm-based language Operators_. -To learn more see [how to create your own plugins][your-own-plugins]. +To learn more see [how to create your own plugins][your-own-plugins]. ### Installation @@ -39,9 +39,9 @@ See the [Getting Started](https://book.kubebuilder.io/quick-start.html) document ![Quick Start](docs/gif/kb-demo.v3.11.1.svg) -Also, ensure that you check out the [Deploy Image](https://book.kubebuilder.io/plugins/deploy-image-plugin-v1-alpha.html) -Plugin. This plugin allows users to scaffold API/Controllers to deploy and manage an -Operand (image) on the cluster following the guidelines and best practices. It abstracts the +Also, ensure that you check out the [Deploy Image](https://book.kubebuilder.io/plugins/deploy-image-plugin-v1-alpha.html) +Plugin. This plugin allows users to scaffold API/Controllers to deploy and manage an +Operand (image) on the cluster following the guidelines and best practices. It abstracts the complexities of achieving this goal while allowing users to customize the generated code. ## Documentation @@ -93,11 +93,11 @@ Provide clean library abstractions with clear and well exampled godocs. 
## Techniques - Provide higher level libraries on top of low level client libraries - - Protect developers from breaking changes in low level libraries - - Start minimal and provide progressive discovery of functionality - - Provide sane defaults and allow users to override when they exist +- Protect developers from breaking changes in low level libraries +- Start minimal and provide progressive discovery of functionality +- Provide sane defaults and allow users to override when they exist - Provide code generators to maintain common boilerplate that can't be addressed by interfaces - - Driven off of `//+` comments +- Driven off of `//+` comments - Provide bootstrapping commands to initialize new packages ## Versioning and Releasing @@ -107,11 +107,11 @@ See [VERSIONING.md](VERSIONING.md). ## Troubleshooting - ### Bugs and Feature Requests: - If you have what looks like a bug, or you would like to make a feature request, please use the [Github issue tracking system.](https://github.com/kubernetes-sigs/kubebuilder/issues) +If you have what looks like a bug, or you would like to make a feature request, please use the [Github issue tracking system.](https://github.com/kubernetes-sigs/kubebuilder/issues) Before you file an issue, please search existing issues to see if your issue is already covered. - ### Slack - For realtime discussion, you can join the [#kubebuilder](https://slack.k8s.io/#kubebuilder) slack channel. Slack requires registration, but the Kubernetes team is open invitation to anyone to register here. Feel free to come and ask any questions. +For realtime discussion, you can join the [#kubebuilder](https://slack.k8s.io/#kubebuilder) slack channel. Slack requires registration, but the Kubernetes team is open invitation to anyone to register here. Feel free to come and ask any questions. ## Contributing @@ -121,8 +121,8 @@ Before starting any work, please either comment on an existing issue, or file a ## Supportability -Currently, Kubebuilder officially supports OSX and Linux platforms. -So, if you are using a Windows OS you may find issues. Contributions towards +Currently, Kubebuilder officially supports OSX and Linux platforms. +So, if you are using a Windows OS you may find issues. Contributions towards supporting Windows are welcome. ### Apple Silicon @@ -130,9 +130,9 @@ supporting Windows are welcome. Apple Silicon (`darwin/arm64`) support begins with the `go/v4` plugin. ## Community Meetings - + The following meetings happen biweekly: - + - Kubebuilder Meeting You are more than welcome to attend. For further info join to [kubebuilder@googlegroups.com](https://groups.google.com/g/kubebuilder). diff --git a/RELEASE.md b/RELEASE.md index f1329419e9d..51403290a91 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -20,19 +20,19 @@ process was done to ensure that we have an aligned process under the org (simila ### Now, let's generate the changelog 1. Create the changelog from the new branch `release-` (`git checkout release-`). - You will need to use the [kubebuilder-release-tools][kubebuilder-release-tools] to generate release notes. See [here][release-notes-generation] +You will need to use the [kubebuilder-release-tools][kubebuilder-release-tools] to generate release notes. 
See [here][release-notes-generation] > **Note** > - You will need to have checkout locally from the remote repository the previous branch > - Also, ensure that you fetch all tags from the remote `git fetch --all --tags` -> - Also, if you face issues to generate the release notes you might want to able to sort it out by running i.e.: +> - Also, if you face issues to generate the release notes you might want to able to sort it out by running i.e.: > `go run sigs.k8s.io/kubebuilder-release-tools/notes --use-upstream=false --from=v3.11.0 --branch=release-X` ### Draft a new release from GitHub 1. Create a new tag with the correct version from the new `release-` branch -2. Verify the Release Github Action. It should build the assets and publish in the draft release +2. Verify the Release Github Action. It should build the assets and publish in the draft release 3. You also need to manually add the changelog generated above on the release page and publish it. Now, the code source is released ! ### Update the website docs (https://book.kubebuilder.io/quick-start.html) @@ -48,7 +48,7 @@ process was done to ensure that we have an aligned process under the org (simila :announce: Kubebuilder v3.5.0 has been released! This release includes a Kubernetes dependency bump to v1.24. For more info, see the release page: https://github.com/kubernetes-sigs/kubebuilder/releases/tag/v3.5.0 - :tada: Thanks to all our contributors! +:tada: Thanks to all our contributors! ```` 2. Announce the new release via email is sent to `kubebuilder@googlegroups.com` with the subject `[ANNOUNCE] Kubebuilder $VERSION is released` @@ -75,7 +75,7 @@ This action will caall the job [./build/.goreleaser.yml](./build/.goreleaser.yml Kubebuilder projects requires artifacts which are used to do test with ENV TEST (when we call `make test` target) These artifacts can be checked in the service page: https://storage.googleapis.com/kubebuilder-tools -The build is made from the branch [tools-releases](https://github.com/kubernetes-sigs/kubebuilder/tree/tools-releases) and the trigger will call the `build/cloudbuild_tools.yaml` passing +The build is made from the branch [tools-releases](https://github.com/kubernetes-sigs/kubebuilder/tree/tools-releases) and the trigger will call the `build/cloudbuild_tools.yaml` passing as argument the architecture and the SO that should be used, e.g: Screenshot 2022-04-30 at 10 15 41 @@ -91,7 +91,7 @@ These images are can be checked in the consolse, see [here](https://console.clou The project `kube-rbac-proxy` is in the process to be donated to the k8s org. However, it is going on for a long time and then, we have no ETA for that to occur. When that occurs we can automate this process. But until there we need to generate these images -by bumping the versions/tags released by `kube-rbac-proxy` on the branch +by bumping the versions/tags released by `kube-rbac-proxy` on the branch [kube-rbac-proxy-releases](https://github.com/kubernetes-sigs/kubebuilder/tree/kube-rbac-proxy-releases) then the `build/cloudbuild_kube-rbac-proxy.yaml` will generate the images. 
@@ -107,19 +107,19 @@ In Kubebuilder, we have been using this project via the GitHub action [.github/w and not the image, see: ```yaml - verify: - name: Verify PR contents - runs-on: ubuntu-latest - steps: - - name: Verifier action - id: verifier - uses: kubernetes-sigs/kubebuilder-release-tools@v0.1.1 - with: - github_token: ${{ secrets.GITHUB_TOKEN }} +verify: +name: Verify PR contents +runs-on: ubuntu-latest +steps: +- name: Verifier action +id: verifier +uses: kubernetes-sigs/kubebuilder-release-tools@v0.1.1 +with: +github_token: ${{ secrets.GITHUB_TOKEN }} ``` -However, the image should still be built and maintained since other projects under the org might be using them. +However, the image should still be built and maintained since other projects under the org might be using them. [kubebuilder-release-tools]: https://github.com/kubernetes-sigs/kubebuilder-release-tools [release-notes-generation]: https://github.com/kubernetes-sigs/kubebuilder-release-tools/blob/master/README.md#release-notes-generation -[release-process]: https://github.com/kubernetes-sigs/kubebuilder/blob/master/VERSIONING.md#releasing +[release-process]: https://github.com/kubernetes-sigs/kubebuilder/blob/master/VERSIONING.md#releasing diff --git a/VERSIONING.md b/VERSIONING.md index d26a8552434..eb1882367d5 100644 --- a/VERSIONING.md +++ b/VERSIONING.md @@ -22,8 +22,8 @@ changes -- see [below](#understanding-the-versions) for more details. When releasing, you'll need to: - to update references in [the build directory](build/) to the latest - version of the [envtest tools](#tools-releases) **before tagging the - release.** +version of the [envtest tools](#tools-releases) **before tagging the +release.** - reset the book branch: see [below](#book-releases) diff --git a/designs/code-generate-image-plugin.md b/designs/code-generate-image-plugin.md index 719eba97eac..16864d2e2c2 100644 --- a/designs/code-generate-image-plugin.md +++ b/designs/code-generate-image-plugin.md @@ -7,11 +7,11 @@ ## Summary This proposal defines a new plugin which allow users get the scaffold with the - required code to have a project that will deploy and manage an image on the cluster following the guidelines and what have been considered as good practices. - +required code to have a project that will deploy and manage an image on the cluster following the guidelines and what have been considered as good practices. + ## Motivation -The biggest part of the Kubebuilder users looking for to create a project that will at the end only deploy an image. In this way, one of the mainly motivations of this proposal is to abstract the complexities to achieve this goal and still giving the possibility of users improve and customize their projects according to their requirements. +The biggest part of the Kubebuilder users looking for to create a project that will at the end only deploy an image. In this way, one of the mainly motivations of this proposal is to abstract the complexities to achieve this goal and still giving the possibility of users improve and customize their projects according to their requirements. **Note:** This plugin will address requests that has been raised for a while and for many users in the community. Check [here](https://github.com/operator-framework/operator-sdk/pull/2158), for example, a request done in the past for the SDK project which is integrated with Kubebuidler to address the same need. 
@@ -19,15 +19,15 @@ The biggest part of the Kubebuilder users looking for to create a project that w - Add a new plugin to generate the code required to deploy and manage an image on the cluster - Promote the best practices as give example of common implementations -- Make the process to develop operators projects easier and more agil. +- Make the process to develop operators projects easier and more agil. - Give flexibility to the users and allow them to change the code according to their needs - Provide examples of code implementations and of the most common features usage and reduce the learning curve - + ### Non-Goals -The idea of this proposal is provide a facility for the users. This plugin can be improved +The idea of this proposal is provide a facility for the users. This plugin can be improved in the future, however, this proposal just covers the basic requirements. In this way, is a non-goal -allow extra configurations such as; scaffold the project using webhooks and the controller covered by tests. +allow extra configurations such as; scaffold the project using webhooks and the controller covered by tests. ## Proposal @@ -38,42 +38,42 @@ Add the new plugin code generate which will scaffold code implementation to depl - Add an EnvVar on the manager manifest (`config/manager/manager.yaml`) which will store the image informed and shows its possibility to users: ```yaml - .. - spec: - containers: - - name: manager - env: - - name: {{ resource}}-IMAGE - value: {{image:tag}} - image: controller:latest - ... +.. +spec: +containers: +- name: manager +env: +- name: {{ resource}}-IMAGE +value: {{image:tag}} +image: controller:latest +... ``` - Add a check into reconcile to ensure that the replicas of the deployment on cluster are equals the size defined in the CR: ```go - // Ensure the deployment size is the same as the spec - size := {{ resource }}.Spec.Size - if *found.Spec.Replicas != size { - found.Spec.Replicas = &size - err = r.Update(ctx, found) - if err != nil { - log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name) - return ctrl.Result{}, err - } - // Spec updated - return and requeue - return ctrl.Result{Requeue: true}, nil - } +// Ensure the deployment size is the same as the spec +size := {{ resource }}.Spec.Size +if *found.Spec.Replicas != size { +found.Spec.Replicas = &size +err = r.Update(ctx, found) +if err != nil { +log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name) +return ctrl.Result{}, err +} +// Spec updated - return and requeue +return ctrl.Result{Requeue: true}, nil +} ``` -- Add the watch feature for the Deployment managed by the controller: +- Add the watch feature for the Deployment managed by the controller: -```go +```go func (r *{{ resource }}Reconciler) SetupWithManager(mgr ctrl.Manager) error { - return ctrl.NewControllerManagedBy(mgr). - For(&cachev1alpha1.{{ resource }}{}). - Owns(&appsv1.Deployment{}). - Complete(r) +return ctrl.NewControllerManagedBy(mgr). +For(&cachev1alpha1.{{ resource }}{}). +Owns(&appsv1.Deployment{}). 
+Complete(r) } ``` @@ -87,19 +87,19 @@ func (r *{{ resource }}Reconciler) SetupWithManager(mgr ctrl.Manager) error { - Add a [marker][markers] in the spec definition to demonstrate how to use OpenAPI schemas validation such as `+kubebuilder:validation:Minimum=1` -- Add the specs on the `_types.go` to generate the CRD/CR sample with default values for `ImagePullPolicy` (`Always`), `ContainerPort` (`80`) and the `Replicas Size` (`3`) +- Add the specs on the `_types.go` to generate the CRD/CR sample with default values for `ImagePullPolicy` (`Always`), `ContainerPort` (`80`) and the `Replicas Size` (`3`) + +- Add a finalizer implementation with TODO for the CR managed by the controller such as described in the SDK doc [Handle Cleanup on Deletion](https://sdk.operatorframework.io/docs/building-operators/golang/advanced-topics/#handle-cleanup-on-deletion) -- Add a finalizer implementation with TODO for the CR managed by the controller such as described in the SDK doc [Handle Cleanup on Deletion](https://sdk.operatorframework.io/docs/building-operators/golang/advanced-topics/#handle-cleanup-on-deletion) - ### User Stories -- I am as user, would like to use a command to scaffold my common need which is deploy an image of my application, so that I do not need to know exactly how to implement it +- I am as user, would like to use a command to scaffold my common need which is deploy an image of my application, so that I do not need to know exactly how to implement it -- I am as user, would like to have a good example code base which uses the common features, so that I can easily learn its concepts and have a good start point to address my needs. +- I am as user, would like to have a good example code base which uses the common features, so that I can easily learn its concepts and have a good start point to address my needs. - I am as maintainer, would like to have a good example to address the common questions, so that I can easily describe how to implement the projects and/or use the common features. - -### Implementation Details/Notes/Constraints + +### Implementation Details/Notes/Constraints **Example of the controller template** @@ -110,139 +110,139 @@ func (r *{{ resource }}Reconciler) SetupWithManager(mgr ctrl.Manager) error { // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete func (r *{{ resource }}.Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) { - ctx := context.Background() - log := r.Log.WithValues("{{ resource }}", req.NamespacedName) - - // Fetch the {{ resource }} instance - {{ resource }} := &{{ apiimportalias }}.{{ resource }}{} - err := r.Get(ctx, req.NamespacedName, {{ resource }}) - if err != nil { - if errors.IsNotFound(err) { - // Request object not found, could have been deleted after reconcile request. - // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. - // Return and don't requeue - log.Info("{{ resource }} resource not found. Ignoring since object must be deleted") - return ctrl.Result{}, nil - } - // Error reading the object - requeue the request. 
- log.Error(err, "Failed to get {{ resource }}") - return ctrl.Result{}, err - } - - // Check if the deployment already exists, if not create a new one - found := &appsv1.Deployment{} - err = r.Get(ctx, types.NamespacedName{Name: {{ resource }}.Name, Namespace: {{ resource }}.Namespace}, found) - if err != nil && errors.IsNotFound(err) { - // Define a new deployment - dep := r.deploymentFor{{ resource }}({{ resource }}) - log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name) - err = r.Create(ctx, dep) - if err != nil { - log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name) - return ctrl.Result{}, err - } - // Deployment created successfully - return and requeue - return ctrl.Result{Requeue: true}, nil - } else if err != nil { - log.Error(err, "Failed to get Deployment") - return ctrl.Result{}, err - } - - // Ensure the deployment size is the same as the spec - size := {{ resource }}.Spec.Size - if *found.Spec.Replicas != size { - found.Spec.Replicas = &size - err = r.Update(ctx, found) - if err != nil { - log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name) - return ctrl.Result{}, err - } - // Spec updated - return and requeue - return ctrl.Result{Requeue: true}, nil - } - - // TODO: add here code implementation to update/manage the status - - return ctrl.Result{}, nil +ctx := context.Background() +log := r.Log.WithValues("{{ resource }}", req.NamespacedName) + +// Fetch the {{ resource }} instance +{{ resource }} := &{{ apiimportalias }}.{{ resource }}{} +err := r.Get(ctx, req.NamespacedName, {{ resource }}) +if err != nil { +if errors.IsNotFound(err) { +// Request object not found, could have been deleted after reconcile request. +// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. +// Return and don't requeue +log.Info("{{ resource }} resource not found. Ignoring since object must be deleted") +return ctrl.Result{}, nil +} +// Error reading the object - requeue the request. 
+log.Error(err, "Failed to get {{ resource }}") +return ctrl.Result{}, err +} + +// Check if the deployment already exists, if not create a new one +found := &appsv1.Deployment{} +err = r.Get(ctx, types.NamespacedName{Name: {{ resource }}.Name, Namespace: {{ resource }}.Namespace}, found) +if err != nil && errors.IsNotFound(err) { +// Define a new deployment +dep := r.deploymentFor{{ resource }}({{ resource }}) +log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name) +err = r.Create(ctx, dep) +if err != nil { +log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name) +return ctrl.Result{}, err +} +// Deployment created successfully - return and requeue +return ctrl.Result{Requeue: true}, nil +} else if err != nil { +log.Error(err, "Failed to get Deployment") +return ctrl.Result{}, err +} + +// Ensure the deployment size is the same as the spec +size := {{ resource }}.Spec.Size +if *found.Spec.Replicas != size { +found.Spec.Replicas = &size +err = r.Update(ctx, found) +if err != nil { +log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name) +return ctrl.Result{}, err +} +// Spec updated - return and requeue +return ctrl.Result{Requeue: true}, nil +} + +// TODO: add here code implementation to update/manage the status + +return ctrl.Result{}, nil } // deploymentFor{{ resource }} returns a {{ resource }} Deployment object func (r *{{ resource }}Reconciler) deploymentFor{{ resource }}(m *{{ apiimportalias }}.{{ resource }}) *appsv1.Deployment { - ls := labelsFor{{ resource }}(m.Name) - replicas := m.Spec.Size - - dep := &appsv1.Deployment{ - ObjectMeta: metav1.ObjectMeta{ - Name: m.Name, - Namespace: m.Namespace, - }, - Spec: appsv1.DeploymentSpec{ - Replicas: &replicas, - Selector: &metav1.LabelSelector{ - MatchLabels: ls, - }, - Template: corev1.PodTemplateSpec{ - ObjectMeta: metav1.ObjectMeta{ - Labels: ls, - }, - Spec: corev1.PodSpec{ - Containers: []corev1.Container{{ - Image: imageFor{{ resource }}(m.Name), - Name: {{ resource }}, - ImagePullPolicy: {{ resource }}.Spec.ContainerImagePullPolicy, - Command: []string{"{{ resource }}"}, - Ports: []corev1.ContainerPort{{ - ContainerPort: {{ resource }}.Spec.ContainerPort, - Name: "{{ resource }}", - }}, - }}, - }, - }, - }, - } - // Set {{ resource }} instance as the owner and controller - ctrl.SetControllerReference(m, dep, r.Scheme) - return dep +ls := labelsFor{{ resource }}(m.Name) +replicas := m.Spec.Size + +dep := &appsv1.Deployment{ +ObjectMeta: metav1.ObjectMeta{ +Name: m.Name, +Namespace: m.Namespace, +}, +Spec: appsv1.DeploymentSpec{ +Replicas: &replicas, +Selector: &metav1.LabelSelector{ +MatchLabels: ls, +}, +Template: corev1.PodTemplateSpec{ +ObjectMeta: metav1.ObjectMeta{ +Labels: ls, +}, +Spec: corev1.PodSpec{ +Containers: []corev1.Container{{ +Image: imageFor{{ resource }}(m.Name), +Name: {{ resource }}, +ImagePullPolicy: {{ resource }}.Spec.ContainerImagePullPolicy, +Command: []string{"{{ resource }}"}, +Ports: []corev1.ContainerPort{{ +ContainerPort: {{ resource }}.Spec.ContainerPort, +Name: "{{ resource }}", +}}, +}}, +}, +}, +}, +} +// Set {{ resource }} instance as the owner and controller +ctrl.SetControllerReference(m, dep, r.Scheme) +return dep } // labelsFor{{ resource }} returns the labels for selecting the resources // belonging to the given {{ resource }} CR name. 
func labelsFor{{ resource }}(name string) map[string]string { - return map[string]string{"type": "{{ resource }}", "{{ resource }}_cr": name} +return map[string]string{"type": "{{ resource }}", "{{ resource }}_cr": name} } // imageFor{{ resource }} returns the image for the resources // belonging to the given {{ resource }} CR name. func imageFor{{ resource }}(name string) string { - // TODO: this method will return the value of the envvar create to store the image:tag informed +// TODO: this method will return the value of the envvar create to store the image:tag informed } func (r *{{ resource }}Reconciler) SetupWithManager(mgr ctrl.Manager) error { - return ctrl.NewControllerManagedBy(mgr). - For(&cachev1alpha1.{{ resource }}{}). - Owns(&appsv1.Deployment{}). - Complete(r) +return ctrl.NewControllerManagedBy(mgr). +For(&cachev1alpha1.{{ resource }}{}). +Owns(&appsv1.Deployment{}). +Complete(r) } -``` +``` **Example of the spec for the _types.go template** ```go // {{ resource }}Spec defines the desired state of {{ resource }} type {{ resource }}Spec struct { - // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster - // Important: Run "make" to regenerate code after modifying this file +// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster +// Important: Run "make" to regenerate code after modifying this file - // +kubebuilder:validation:Minimum=1 - // Size defines the number of {{ resource }} instances - Size int32 `json:"size,omitempty"` +// +kubebuilder:validation:Minimum=1 +// Size defines the number of {{ resource }} instances +Size int32 `json:"size,omitempty"` - // ImagePullPolicy defines the policy to pull the container images - ImagePullPolicy string `json:"image-pull-policy,omitempty"` +// ImagePullPolicy defines the policy to pull the container images +ImagePullPolicy string `json:"image-pull-policy,omitempty"` - // ContainerPort specifies the port which will be used by the image container - ContainerPort int `json:"container-port,omitempty"` +// ContainerPort specifies the port which will be used by the image container +ContainerPort int `json:"container-port,omitempty"` } ``` @@ -255,46 +255,46 @@ To ensure this implementation a new project example should be generated in the [ ### Graduation Criteria -- The new plugin will only be support `project-version=3` -- The attribute image with the value informed should be added to the resources model in the PROJECT file to let the tool know that the Resource get done with the common basic code implementation: +- The new plugin will only be support `project-version=3` +- The attribute image with the value informed should be added to the resources model in the PROJECT file to let the tool know that the Resource get done with the common basic code implementation: ```yaml plugins: - deploy-image.go.kubebuilder.io/v1beta1: - resources: - - domain: example.io - group: crew - kind: Captain - version: v1 - image: "/: -``` +deploy-image.go.kubebuilder.io/v1beta1: +resources: +- domain: example.io +group: crew +kind: Captain +version: v1 +image: "/: +``` -For further information check the definition agreement register in the comment https://github.com/kubernetes-sigs/kubebuilder/issues/1941#issuecomment-778649947. +For further information check the definition agreement register in the comment https://github.com/kubernetes-sigs/kubebuilder/issues/1941#issuecomment-778649947. -## Open Questions +## Open Questions -1. Should we allow to scaffold the code for an API that is already created for the project? 
-No, at least in the first moment to keep the simplicity. +1. Should we allow to scaffold the code for an API that is already created for the project? +No, at least in the first moment to keep the simplicity. 2. Should we support StatefulSet and Deployments? The idea is we start it by using a Deployment. However, we can improve the feature in follow-ups to support more default types of scaffolds which could be like `kubebuilder create api --group=crew --version=v1 --image=myexample:0.0.1 --kind=App --plugins=deploy-image.go.kubebuilder.io/v1beta1 --type=[deployment|statefulset|webhook]` 3. Could this feature be useful to other languages or is it just valid to Go based operators? -This plugin would is reponsable to scaffold content and files for Go-based operators. In a future, if other language-based operators starts to be supported (either officially or by the community) this plugin could be used as reference to create an equivalent one for their languages. Therefore, it should probably not to be a `subdomain` of `go.kubebuilder.io.` +This plugin would is reponsable to scaffold content and files for Go-based operators. In a future, if other language-based operators starts to be supported (either officially or by the community) this plugin could be used as reference to create an equivalent one for their languages. Therefore, it should probably not to be a `subdomain` of `go.kubebuilder.io.` For its integration with SDK, it might be valid for the Ansible-based operators where a new `playbook/role` could be generated as well. However, for example, for the Helm plugin it might to be useless. E.g `deploy-image.ansible.sdk.operatorframework.io/v1beta1` -4. Should we consider create a separate repo for plugins? +4. Should we consider create a separate repo for plugins? -In the long term yes. However, see that currently, Kubebuilder has not too many plugins yet. And then, and the preliminary support for plugins did not indeed release. For more info see the [Extensible CLI and Scaffolding Plugins][plugins-phase1-design-doc]. +In the long term yes. However, see that currently, Kubebuilder has not too many plugins yet. And then, and the preliminary support for plugins did not indeed release. For more info see the [Extensible CLI and Scaffolding Plugins][plugins-phase1-design-doc]. -In this way, at this moment, it shows to be a little Premature Optimization. Note that the issue [#2016](https://github.com/kubernetes-sigs/kubebuilder/issues/1378) will check the possibility of the plugins be as separate binaries that can be discovered by the Kubebuilder CLI binary via user-specified plugin file paths. Then, the discussion over the best approach to dealing with many plugins and if they should or not leave in the Kubebuilder repository would be better addressed after that. +In this way, at this moment, it shows to be a little Premature Optimization. Note that the issue [#2016](https://github.com/kubernetes-sigs/kubebuilder/issues/1378) will check the possibility of the plugins be as separate binaries that can be discovered by the Kubebuilder CLI binary via user-specified plugin file paths. Then, the discussion over the best approach to dealing with many plugins and if they should or not leave in the Kubebuilder repository would be better addressed after that. -5. Is Kubebuilder prepared to receive this implementation already? +5. Is Kubebuilder prepared to receive this implementation already? 
The [Extensible CLI and Scaffolding Plugins - Phase 1.5](extensible-cli-and-scaffolding-plugins-phase-1-5.md) and the issue #1941 requires to be implemented before this proposal. Also, to have a better idea over the proposed solutions made so for the Plugin Ecosystem see the meta issue [#2016](https://github.com/kubernetes-sigs/kubebuilder/issues/2016) -[markers]: ../docs/book/src/reference/markers.md +[markers]: ../docs/book/src/reference/markers.md [conditions]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties [plugins-phase1-design-doc]: https://github.com/kubernetes-sigs/kubebuilder/blob/master/designs/extensible-cli-and-scaffolding-plugins-phase-1.md \ No newline at end of file diff --git a/designs/crd_version_conversion.md b/designs/crd_version_conversion.md index e5358a0460d..9c62741440f 100644 --- a/designs/crd_version_conversion.md +++ b/designs/crd_version_conversion.md @@ -20,7 +20,7 @@ This document describes high level design and workflow for supporting multiple v ## Hub and Spoke -The basic concept is that all versions of an object share the storage. So say if you have versions v1, v2 and v3 of a Kind Toy, kubernetes will use one of the versions to persist the object in stable storage i.e. Etcd. User can specify the version to be used for storage in the Custom Resource definition for that API. +The basic concept is that all versions of an object share the storage. So say if you have versions v1, v2 and v3 of a Kind Toy, kubernetes will use one of the versions to persist the object in stable storage i.e. Etcd. User can specify the version to be used for storage in the Custom Resource definition for that API. One can think storage version as the hub and other versions as spoke to visualize the relationship between storage and other versions (as shown below in the diagram). The key thing to note is that conversion between storage and other version should be lossless (round trippable). As shown in the diagram below, v3 is the storage/hub version and v1, v2 and v4 are spoke version. The document uses storage version and hub interchangeably. @@ -34,16 +34,16 @@ We will introduce two interfaces in controller-runtime to express the above rela // Hub defines capability to indicate whether a versioned type is a Hub or not. type Hub interface { - runtime.Object - Hub() +runtime.Object +Hub() } // A versioned type is convertible if it can be converted to/from a hub type. type Convertible interface { - runtime.Object - ConvertTo(dst Hub) error - ConvertFrom(src Hub) error +runtime.Object +ConvertTo(dst Hub) error +ConvertFrom(src Hub) error } ``` @@ -53,28 +53,28 @@ A spoke type needs to implement Convertible interface. 
Kubebuilder can scaffold package v1 func (ej *ExternalJob) ConvertTo(dst conversion.Hub) error { - switch t := dst.(type) { - case *v3.ExternalJob: - jobv3 := dst.(*v3.ExternalJob) - jobv3.ObjectMeta = ej.ObjectMeta - // conversion implementation - // - return nil - default: - return fmt.Errorf("unsupported type %v", t) - } +switch t := dst.(type) { +case *v3.ExternalJob: +jobv3 := dst.(*v3.ExternalJob) +jobv3.ObjectMeta = ej.ObjectMeta +// conversion implementation +// +return nil +default: +return fmt.Errorf("unsupported type %v", t) +} } func (ej *ExternalJob) ConvertFrom(src conversion.Hub) error { - switch t := src.(type) { - case *v3.ExternalJob: - jobv3 := src.(*v3.ExternalJob) - ej.ObjectMeta = jobv3.ObjectMeta - // conversion implementation - return nil - default: - return fmt.Errorf("unsupported type %v", t) - } +switch t := src.(type) { +case *v3.ExternalJob: +jobv3 := src.(*v3.ExternalJob) +ej.ObjectMeta = jobv3.ObjectMeta +// conversion implementation +return nil +default: +return fmt.Errorf("unsupported type %v", t) +} } ``` @@ -92,80 +92,80 @@ Controller-runtime will implement a default conversion handler that can handle c ```Go type conversionHandler struct { - // scheme which has Go types for the APIs are registered. This will be injected by controller manager. - Scheme runtime.Scheme - // decoder which will be injected by the webhook server - // decoder knows how to decode a conversion request and API objects. - Decoder decoder.Decoder +// scheme which has Go types for the APIs are registered. This will be injected by controller manager. +Scheme runtime.Scheme +// decoder which will be injected by the webhook server +// decoder knows how to decode a conversion request and API objects. +Decoder decoder.Decoder } // This is the default handler which will be mounted on the webhook server. func (ch *conversionHandler) Handle(r *http.Request, w http.Response) { - // decode the request to converReview request object - convertReq := ch.Decode(r.Body) - for _, obj := range convertReq.Objects { - // decode the incoming object - src, gvk, _ := ch.Decoder.Decode(obj.raw) +// decode the request to converReview request object +convertReq := ch.Decode(r.Body) +for _, obj := range convertReq.Objects { +// decode the incoming object +src, gvk, _ := ch.Decoder.Decode(obj.raw) - // get target object instance for convertReq.DesiredAPIVersion and gvk.Kind - dst, _ := getTargetObject(convertReq.DesiredAPIVersion, gvk.Kind) +// get target object instance for convertReq.DesiredAPIVersion and gvk.Kind +dst, _ := getTargetObject(convertReq.DesiredAPIVersion, gvk.Kind) - // this is where conversion between objects happens +// this is where conversion between objects happens - ch.ConvertObject(src, dst) +ch.ConvertObject(src, dst) - // append dst to converted object list +// append dst to converted object list } - // create a conversion response with converted objects +// create a conversion response with converted objects } func (ch *conversionHandler) convertObject(src, dst runtime.Object) error { - // check if src and dst are of same type, then may be return with error because API server will never invoke this handler for same version. - srcIsHub, dstIsHub := isHub(src), isHub(dst) - srcIsConvertible, dstIsConvertible := isConvertible(src), isConvertable(dst) - if srcIsHub { - if dstIsConvertible { - return dst.(conversion.Convertable).ConvertFrom(src.(conversion.Hub)) - } else { - // this is error case, this can be flagged at setup time ? 
- return fmt.Errorf("%T is not convertible to", src) - } - } - - if dstIsHub { - if srcIsConvertible { - return src.(conversion.Convertable).ConvertTo(dst.(conversion.Hub)) - } else { - // this is error case. - return fmt.Errorf("%T is not convertible", src) - } - } - - // neither src or dst are Hub, means both of them are spoke, so lets get the hub - // version type. - - hub, err := getHub(scheme, src) - if err != nil { - return err - } - - // shall we get Hub for dst type as well and ensure hubs are same ? - // src and dst needs to be convertible for it to work - if !srcIsConvertable || !dstIsConvertable { - return fmt.Errorf("%T and %T needs to be both convertible", src, dst) - } - - err = src.(conversion.Convertible).ConvertTo(hub) - if err != nil { - return fmt.Errorf("%T failed to convert to hub version %T : %v", src, hub, err) - } - - err = dst.(conversion.Convertible).ConvertFrom(hub) - if err != nil { - return fmt.Errorf("%T failed to convert from hub version %T : %v", dst, hub, err) - } - return nil +// check if src and dst are of same type, then may be return with error because API server will never invoke this handler for same version. +srcIsHub, dstIsHub := isHub(src), isHub(dst) +srcIsConvertible, dstIsConvertible := isConvertible(src), isConvertable(dst) +if srcIsHub { +if dstIsConvertible { +return dst.(conversion.Convertable).ConvertFrom(src.(conversion.Hub)) +} else { +// this is error case, this can be flagged at setup time ? +return fmt.Errorf("%T is not convertible to", src) +} +} + +if dstIsHub { +if srcIsConvertible { +return src.(conversion.Convertable).ConvertTo(dst.(conversion.Hub)) +} else { +// this is error case. +return fmt.Errorf("%T is not convertible", src) +} +} + +// neither src or dst are Hub, means both of them are spoke, so lets get the hub +// version type. + +hub, err := getHub(scheme, src) +if err != nil { +return err +} + +// shall we get Hub for dst type as well and ensure hubs are same ? +// src and dst needs to be convertible for it to work +if !srcIsConvertable || !dstIsConvertable { +return fmt.Errorf("%T and %T needs to be both convertible", src, dst) +} + +err = src.(conversion.Convertible).ConvertTo(hub) +if err != nil { +return fmt.Errorf("%T failed to convert to hub version %T : %v", src, hub, err) +} + +err = dst.(conversion.Convertible).ConvertFrom(hub) +if err != nil { +return fmt.Errorf("%T failed to convert from hub version %T : %v", dst, hub, err) +} +return nil } ``` @@ -181,14 +181,14 @@ The tool that generates the CRD manifests lives under controller-tools repo. Cur ## Storage/Serve annotations: -The resource annotation will be extended to indicate storage/serve attributes as shown below. +The resource annotation will be extended to indicate storage/serve attributes as shown below. ```Go // ... // +kubebuilder:resource:storage=true,serve=true // … type APIName struct { - ... +... } ``` @@ -198,15 +198,15 @@ CRD generation will be extended to support the following: * If multiple versions are detected for an API: - * Ensure only one version is marked as storage version. Assume default value of *storage* to be *false* for this case. +* Ensure only one version is marked as storage version. Assume default value of *storage* to be *false* for this case. - * Ensure version specific fields such as *OpenAPIValidationSchema, SubResources and AdditionalPrinterColumn* are added per version and omitted from the top level CRD definition. 
+* Ensure version specific fields such as *OpenAPIValidationSchema, SubResources and AdditionalPrinterColumn* are added per version and omitted from the top level CRD definition. * In case of single version, - * Do not use version specific field in CRD spec because users are most likely running with k8s version < 1.13 which doesn’t support version specific specs for *OpenAPIValidationSchema, SubResources and AdditionalPrinterColumn. *This is critical to maintain backward compatibility. +* Do not use version specific field in CRD spec because users are most likely running with k8s version < 1.13 which doesn’t support version specific specs for *OpenAPIValidationSchema, SubResources and AdditionalPrinterColumn. *This is critical to maintain backward compatibility. - * Assume default value for storage attribute to be *true* for this case. +* Assume default value for storage attribute to be *true* for this case. The above two requirements will require CRD generation logic to be divided in two phases. In first phase, parse and store CRD information in an internal structure for all versions and then generate the CRD manifest on the basis of multi-version/single-version scenario. @@ -247,22 +247,22 @@ Generally users have one controller per group/kind, we will avoid scaffolding co Version History - - - - - - - - + + + + + + + - - - - - - - + + + + + + +
VersionUpdated onDescription
Draft01/30/2019 +
VersionUpdated onDescription
Draft01/30/2019 Initial version
1.002/27/2019Updated the design as per POC implementation
Initial version
1.002/27/2019Updated the design as per POC implementation
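For reference, the hub-and-spoke dispatch sketched in the generic conversion handler earlier in this design relies on each spoke version implementing `ConvertTo`/`ConvertFrom` against the hub type. The snippet below is a minimal sketch of that contract, assuming hypothetical `CronJob` API types scaffolded under `my.repo/api/v1` (hub) and `my.repo/api/v1beta1` (spoke) that embed the usual `ObjectMeta` and have generated deepcopy methods; it uses the `conversion.Hub`/`conversion.Convertible` interfaces as they exist in controller-runtime.

```go
package v1beta1

import (
	"sigs.k8s.io/controller-runtime/pkg/conversion"

	v1 "my.repo/api/v1"
)

// ConvertTo converts this spoke version (v1beta1) to the hub version (v1).
func (src *CronJob) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*v1.CronJob)
	dst.ObjectMeta = src.ObjectMeta
	// Field-by-field mapping between the two versions goes here.
	dst.Spec.Schedule = src.Spec.Schedule
	return nil
}

// ConvertFrom converts the hub version (v1) into this spoke version (v1beta1).
func (dst *CronJob) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*v1.CronJob)
	dst.ObjectMeta = src.ObjectMeta
	dst.Spec.Schedule = src.Spec.Schedule
	return nil
}
```

The hub version itself only needs the empty marker method, e.g. `func (*CronJob) Hub() {}` in the `v1` package.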
diff --git a/designs/extensible-cli-and-scaffolding-plugins-phase-1-5.md b/designs/extensible-cli-and-scaffolding-plugins-phase-1-5.md index 463cecf3848..27bf3b4e471 100644 --- a/designs/extensible-cli-and-scaffolding-plugins-phase-1-5.md +++ b/designs/extensible-cli-and-scaffolding-plugins-phase-1-5.md @@ -33,21 +33,21 @@ not available, the adopted temporal solution is to wrap the base go plugin and p after its `Run` method has been executed. This solution faces several issues: - Wrapper plugins are unable to access the data of the wrapped plugins, as they weren't designed for this - purpose, and therefore, most of its internal data is non-exported. An example of this inaccessible data - would be the `Resource` objects created inside the `create api` and `create webhook` commands. +purpose, and therefore, most of its internal data is non-exported. An example of this inaccessible data +would be the `Resource` objects created inside the `create api` and `create webhook` commands. - Wrapper plugins are dependent on their wrapped plugins, and therefore can't be used for other plugins. - Under the hood, subcommands implement a second hidden interface: `RunOptions`, which further accentuates - these issues. +these issues. Plugin chaining solves the aforementioned problems but the current plugin API, and more specifically the `Subcommand` interface, does not support plugin chaining. - The `RunOptions` interface implemented under the hood is not part of the plugin API, and therefore - the cli is not able to run post-scaffold logic (implemented in `RunOptions.PostScaffold` method) after - all the plugins have scaffolded their part. +the cli is not able to run post-scaffold logic (implemented in `RunOptions.PostScaffold` method) after +all the plugins have scaffolded their part. - `Resource`-related commands can't bind flags like `--group`, `--version` or `--kind` in each plugin, - it must be created outside the plugins and then injected into them similar to the approach followed - currently for `Config` objects. +it must be created outside the plugins and then injected into them similar to the approach followed +currently for `Config` objects. ## Proposal @@ -71,7 +71,7 @@ Different ordering guarantees can be considered: - Hook order guarantee: a hook for a plugin will be called after its previous hooks succeeded. - Steps order guarantee: hooks will be called when all plugins have finished the previous hook. - Plugin order guarantee: same hook for each plugin will be called in the order specified - by the plugin position at the plugin chain. +by the plugin position at the plugin chain. All of the hooks will offer plugin order guarantee, as they all modify/update some item so the order of plugins is important. Execution hooks need to guarantee step order, as the items that are being modified @@ -88,29 +88,29 @@ therefore, use this error to tell the CLI that no further execution step should ### Initialization hooks #### Update metadata -This hook will be used for two purposes. It provides CLI-related metadata to the Subcommand (e.g., +This hook will be used for two purposes. It provides CLI-related metadata to the Subcommand (e.g., command name) and update the subcommands metadata such as the description or examples. 
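As a minimal sketch of a subcommand wiring up these two hooks (the plugin and its help text are hypothetical; the `CLIMetadata` and `SubcommandMetadata` types and the hook interfaces are the ones defined later in this proposal):

```go
type initSubcommand struct {
	// commandName caches the root command name received from the CLI.
	commandName string
}

// InjectCLIMetadata receives CLI-level metadata, such as the root command name.
func (p *initSubcommand) InjectCLIMetadata(meta CLIMetadata) {
	p.commandName = meta.CommandName
}

// UpdateSubcommandMetadata customizes the help text shown for this subcommand.
func (p *initSubcommand) UpdateSubcommandMetadata(meta *SubcommandMetadata) {
	meta.Description = "Initialize a new project scaffolded by this plugin."
	meta.Examples = "  " + p.commandName + " init --plugins=myplugin/v1"
}
```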
- Required/optional - - [ ] Required - - [x] Optional +- [ ] Required +- [x] Optional - Subcommands - - [x] Init - - [x] Edit - - [x] Create API - - [x] Create webhook +- [x] Init +- [x] Edit +- [x] Create API +- [x] Create webhook #### Bind flags This hook will allow subcommands to define specific flags. - Required/optional - - [ ] Required - - [x] Optional +- [ ] Required +- [x] Optional - Subcommands - - [x] Init - - [x] Edit - - [x] Create API - - [x] Create webhook +- [x] Init +- [x] Edit +- [x] Create API +- [x] Create webhook ### Execution methods @@ -119,25 +119,25 @@ This hook will be used to inject the `Config` object that the plugin can modify The CLI will create/load/save this configuration object. - Required/optional - - [ ] Required - - [x] Optional +- [ ] Required +- [x] Optional - Subcommands - - [x] Init - - [x] Edit - - [x] Create API - - [x] Create webhook +- [x] Init +- [x] Edit +- [x] Create API +- [x] Create webhook #### Inject resource This hook will be used to inject the `Resource` object created by the CLI. - Required/optional - - [x] Required - - [ ] Optional +- [x] Required +- [ ] Optional - Subcommands - - [ ] Init - - [ ] Edit - - [x] Create API - - [x] Create webhook +- [ ] Init +- [ ] Edit +- [x] Create API +- [x] Create webhook #### Pre-scaffold This hook will be used to take actions before the main scaffolding is performed, e.g. validations. @@ -145,13 +145,13 @@ This hook will be used to take actions before the main scaffolding is performed, NOTE: a filesystem abstraction will be passed to this hook, but it should not be used for scaffolding. - Required/optional - - [ ] Required - - [x] Optional +- [ ] Required +- [x] Optional - Subcommands - - [x] Init - - [x] Edit - - [x] Create API - - [x] Create webhook +- [x] Init +- [x] Edit +- [x] Create API +- [x] Create webhook #### Scaffold This hook will be used to perform the main scaffolding. @@ -159,13 +159,13 @@ This hook will be used to perform the main scaffolding. NOTE: a filesystem abstraction will be passed to this hook that must be used for scaffolding. - Required/optional - - [x] Required - - [ ] Optional +- [x] Required +- [ ] Optional - Subcommands - - [x] Init - - [x] Edit - - [x] Create API - - [x] Create webhook +- [x] Init +- [x] Edit +- [x] Create API +- [x] Create webhook #### Post-scaffold This hook will be used to take actions after the main scaffolding is performed, e.g. cleanup. @@ -177,14 +177,14 @@ NOTE 2: the project configuration is saved by the CLI before calling this hook, configuration at this hook will not be persisted. - Required/optional - - [ ] Required - - [x] Optional +- [ ] Required +- [x] Optional - Subcommands - - [x] Init - - [x] Edit - - [x] Create API - - [x] Create webhook - +- [x] Init +- [x] Edit +- [x] Create API +- [x] Create webhook + ### Override plugins for single subcommand calls Defining plugins at initialization and using them for every command call will solve most of the cases. @@ -213,10 +213,10 @@ While the `plugin` field may seem like a better fit to store the plugin chain, a contain multiple values, there are several issues with this alternative approach: - A map does not provide any order guarantee, and the plugin chain order is relevant. - Some plugins do not store plugin-specific configuration information, e.g. the `go`-plugins. So - the absence of a plugin key doesn't mean that the plugin is not part of the plugin chain. +the absence of a plugin key doesn't mean that the plugin is not part of the plugin chain. 
- The desire of running a different set of plugins for a single subcommand call has already been - mentioned. Some of these out-of-chain plugins may need to store plugin-specific configuration, - so the presence of a plugin doesn't mean that is part of the plugin chain. +mentioned. Some of these out-of-chain plugins may need to store plugin-specific configuration, +so the presence of a plugin doesn't mean that is part of the plugin chain. The next project configuration version could consider this new requirements to define the names/types of these two fields. @@ -232,7 +232,7 @@ behaves as a plugin: - It has a name: provided at creation. - It has a version: provided at creation. - It has a list of supported project versions: computed from the common supported project - versions of all the plugins in the bundled. +versions of all the plugins in the bundled. Instead of implementing the optional getter methods that return a subcommand, it offers a way to retrieve the list of bundled plugins. This process will be done after plugin resolution. @@ -248,60 +248,60 @@ The following types are used as input/output values of the described hooks: ```go // CLIMetadata is the runtime meta-data of the CLI type CLIMetadata struct { - // CommandName is the root command name. - CommandName string +// CommandName is the root command name. +CommandName string } // SubcommandMetadata is the runtime meta-data for a subcommand type SubcommandMetadata struct { - // Description is a description of what this subcommand does. It is used to display help. - Description string - // Examples are one or more examples of the command-line usage of this subcommand. It is used to display help. - Examples string +// Description is a description of what this subcommand does. It is used to display help. +Description string +// Examples are one or more examples of the command-line usage of this subcommand. It is used to display help. +Examples string } type ExitError struct { - Plugin string - Reason string +Plugin string +Reason string } func (e ExitError) Error() string { - return fmt.Sprintf("plugin %s exit early: %s", e.Plugin, e.Reason) +return fmt.Sprintf("plugin %s exit early: %s", e.Plugin, e.Reason) } ``` The described hooks are implemented through the use of the following interfaces. ```go type RequiresCLIMetadata interface { - InjectCLIMetadata(CLIMetadata) +InjectCLIMetadata(CLIMetadata) } type UpdatesSubcommandMetadata interface { - UpdateSubcommandMetadata(*SubcommandMetadata) +UpdateSubcommandMetadata(*SubcommandMetadata) } type HasFlags interface { - BindFlags(*pflag.FlagSet) +BindFlags(*pflag.FlagSet) } type RequiresConfig interface { - InjectConfig(config.Config) error +InjectConfig(config.Config) error } type RequiresResource interface { - InjectResource(*resource.Resource) error +InjectResource(*resource.Resource) error } type HasPreScaffold interface { - PreScaffold(machinery.Filesystem) error +PreScaffold(machinery.Filesystem) error } type Scaffolder interface { - Scaffold(machinery.Filesystem) error +Scaffold(machinery.Filesystem) error } type HasPostScaffold interface { - PostScaffold() error +PostScaffold() error } ``` @@ -309,31 +309,31 @@ Additional interfaces define the required method for each type of plugin: ```go // InitSubcommand is the specific interface for subcommands returned by init plugins. type InitSubcommand interface { - Scaffolder +Scaffolder } // EditSubcommand is the specific interface for subcommands returned by edit plugins. 
type EditSubcommand interface { - Scaffolder +Scaffolder } // CreateAPISubcommand is the specific interface for subcommands returned by create API plugins. type CreateAPISubcommand interface { - RequiresResource - Scaffolder +RequiresResource +Scaffolder } // CreateWebhookSubcommand is the specific interface for subcommands returned by create webhook plugins. type CreateWebhookSubcommand interface { - RequiresResource - Scaffolder +RequiresResource +Scaffolder } ``` An additional interface defines the bundle method to return the wrapped plugins: ```go type Bundle interface { - Plugin - Plugins() []Plugin +Plugin +Plugins() []Plugin } ``` diff --git a/designs/extensible-cli-and-scaffolding-plugins-phase-1.md b/designs/extensible-cli-and-scaffolding-plugins-phase-1.md index 0b1f679a3cf..56a761930eb 100644 --- a/designs/extensible-cli-and-scaffolding-plugins-phase-1.md +++ b/designs/extensible-cli-and-scaffolding-plugins-phase-1.md @@ -30,19 +30,19 @@ Each plugin would minimally be required to implement the `Plugin` interface. ```go type Plugin interface { - // Version returns the plugin's semantic version, ex. "v1.2.3". - // - // Note: this version is different from config version. - Version() string - // Name returns a DNS1123 label string defining the plugin type. - // For example, Kubebuilder's main plugin would return "go". - // - // Plugin names can be fully-qualified, and non-fully-qualified names are - // prepended to ".kubebuilder.io" to prevent conflicts. - Name() string - // SupportedProjectVersions lists all project configuration versions this - // plugin supports, ex. []string{"2", "3"}. The returned slice cannot be empty. - SupportedProjectVersions() []string +// Version returns the plugin's semantic version, ex. "v1.2.3". +// +// Note: this version is different from config version. +Version() string +// Name returns a DNS1123 label string defining the plugin type. +// For example, Kubebuilder's main plugin would return "go". +// +// Plugin names can be fully-qualified, and non-fully-qualified names are +// prepended to ".kubebuilder.io" to prevent conflicts. +Name() string +// SupportedProjectVersions lists all project configuration versions this +// plugin supports, ex. []string{"2", "3"}. The returned slice cannot be empty. +SupportedProjectVersions() []string } ``` @@ -66,13 +66,13 @@ Each of these interfaces would follow the same pattern (see the `InitPlugin` int ```go type InitPluginGetter interface { - Plugin - // GetInitPlugin returns the underlying InitPlugin interface. - GetInitPlugin() InitPlugin +Plugin +// GetInitPlugin returns the underlying InitPlugin interface. +GetInitPlugin() InitPlugin } type InitPlugin interface { - GenericSubcommand +GenericSubcommand } ``` @@ -80,26 +80,26 @@ Each specialized plugin interface can leverage a generic subcommand interface, w ```go type GenericSubcommand interface { - // UpdateContext updates a PluginContext with command-specific help text, like description and examples. - // Can be a no-op if default help text is desired. - UpdateContext(*PluginContext) - // BindFlags binds the plugin's flags to the CLI. This allows each plugin to define its own - // command line flags for the kubebuilder subcommand. - BindFlags(*pflag.FlagSet) - // Run runs the subcommand. - Run() error - // InjectConfig passes a config to a plugin. The plugin may modify the - // config. Initializing, loading, and saving the config is managed by the - // cli package. 
- InjectConfig(*config.Config) +// UpdateContext updates a PluginContext with command-specific help text, like description and examples. +// Can be a no-op if default help text is desired. +UpdateContext(*PluginContext) +// BindFlags binds the plugin's flags to the CLI. This allows each plugin to define its own +// command line flags for the kubebuilder subcommand. +BindFlags(*pflag.FlagSet) +// Run runs the subcommand. +Run() error +// InjectConfig passes a config to a plugin. The plugin may modify the +// config. Initializing, loading, and saving the config is managed by the +// cli package. +InjectConfig(*config.Config) } type PluginContext struct { - // Description is a description of what this subcommand does. It is used to display help. - Description string - // Examples are one or more examples of the command-line usage - // of this plugin's project subcommand support. It is used to display help. - Examples string +// Description is a description of what this subcommand does. It is used to display help. +Description string +// Examples are one or more examples of the command-line usage +// of this plugin's project subcommand support. It is used to display help. +Examples string } ``` @@ -112,9 +112,9 @@ To generically support deprecated project versions, we could also add a `Depreca // that the plugin is deprecated. The CLI uses this to print deprecation // warnings when the plugin is in use. type Deprecated interface { - // DeprecationWarning returns a deprecation message that callers - // can use to warn users of deprecations - DeprecationWarning() string +// DeprecationWarning returns a deprecation message that callers +// can use to warn users of deprecations +DeprecationWarning() string } ``` @@ -144,8 +144,8 @@ domain: testproject.org repo: github.com/test-inc/testproject resources: - group: crew - kind: Captain - version: v1 +kind: Captain +version: v1 ``` ## CLI @@ -156,18 +156,18 @@ Example Kubebuilder main.go: ```go func main() { - c, err := cli.New( - cli.WithPlugins( - &golangv1.Plugin{}, - &golangv2.Plugin{}, - ), - ) - if err != nil { - log.Fatal(err) - } - if err := c.Run(); err != nil { - log.Fatal(err) - } +c, err := cli.New( +cli.WithPlugins( +&golangv1.Plugin{}, +&golangv2.Plugin{}, +), +) +if err != nil { +log.Fatal(err) +} +if err := c.Run(); err != nil { +log.Fatal(err) +} } ``` @@ -175,23 +175,23 @@ Example Operator SDK main.go: ```go func main() { - c, err := cli.New( - cli.WithCommandName("operator-sdk"), - cli.WithDefaultProjectVersion("2"), - cli.WithExtraCommands(newCustomCobraCmd()), - cli.WithPlugins( - &golangv1.Plugin{}, - &golangv2.Plugin{}, - &helmv1.Plugin{}, - &ansiblev1.Plugin{}, - ), - ) - if err != nil { - log.Fatal(err) - } - if err := c.Run(); err != nil { - log.Fatal(err) - } +c, err := cli.New( +cli.WithCommandName("operator-sdk"), +cli.WithDefaultProjectVersion("2"), +cli.WithExtraCommands(newCustomCobraCmd()), +cli.WithPlugins( +&golangv1.Plugin{}, +&golangv2.Plugin{}, +&helmv1.Plugin{}, +&ansiblev1.Plugin{}, +), +) +if err != nil { +log.Fatal(err) +} +if err := c.Run(); err != nil { +log.Fatal(err) +} } ``` diff --git a/designs/extensible-cli-and-scaffolding-plugins-phase-2.md b/designs/extensible-cli-and-scaffolding-plugins-phase-2.md index 13da9089211..4e032604d00 100644 --- a/designs/extensible-cli-and-scaffolding-plugins-phase-2.md +++ b/designs/extensible-cli-and-scaffolding-plugins-phase-2.md @@ -24,7 +24,7 @@ Plugin [Phase 1.5](https://github.com/kubernetes-sigs/kubebuilder/blob/master/de * As a plugin developer, I would like to 
be able to provide external plugins path for the CLI to perform the scaffolds, so that I could take advantage of external initiatives which are implemented using Kubebuilder as a lib and following its standards but are not shipped with its CLI binaries. * As a Kubebuilder maintainer, I would like to support external plugins not maintained by the core project. - * For example, once the Phase 2 plugin implementation is completed, some internal plugins can be re-implemented as external plugins removing the necessity to build those plugins in the `kubebuilder` binary. +* For example, once the Phase 2 plugin implementation is completed, some internal plugins can be re-implemented as external plugins removing the necessity to build those plugins in the `kubebuilder` binary. ### Goals @@ -45,21 +45,21 @@ Plugin [Phase 1.5](https://github.com/kubernetes-sigs/kubebuilder/blob/master/de * Discovering plugin binaries that are not locally present on the machine (i.e. binary exists in a remote repository). * Providing other options (other than standard streams such as `stdin/stdout/stderr`) for inter-process communication between `kubebuilder` and external plugins. - * Other IPC methods may be allowed in the future, although EPs are required for those methods. +* Other IPC methods may be allowed in the future, although EPs are required for those methods. ### Examples * `kubebuilder create api --plugins=myexternalplugin/v1` - * should scaffold files using the external plugin as defined in its implementation of the `create api` method. +* should scaffold files using the external plugin as defined in its implementation of the `create api` method. * `kubebuilder create api --plugins=myexternalplugin/v1,myotherexternalplugin/v2` - * should scaffold files using the external plugin as defined in their implementation of the `create api` method (by respecting the plugin chaining order, i.e. in the order of `create api` of v1 and then `create api` of v2 as specified in the layout field in the configuration). +* should scaffold files using the external plugin as defined in their implementation of the `create api` method (by respecting the plugin chaining order, i.e. in the order of `create api` of v1 and then `create api` of v2 as specified in the layout field in the configuration). * `kubebuilder create api --plugins=myexternalplugin/v1 --help` - * should display help information of the plugin which is not shipped in the binary (myexternalplugin/v1 is present outside of the `kubebuilder` binary). +* should display help information of the plugin which is not shipped in the binary (myexternalplugin/v1 is present outside of the `kubebuilder` binary). * `kubebuilder create api --plugins=go/v3,myexternalplugin/v2` - * should create files using the `go/v3` plugin, then pass those files to `myexternalplugin/v2` as defined in its implementation of the `create api` method by respecting the plugin chaining order. +* should create files using the `go/v3` plugin, then pass those files to `myexternalplugin/v2` as defined in its implementation of the `create api` method by respecting the plugin chaining order. ## Proposal @@ -74,7 +74,7 @@ Every plugin gets its own directory as below. On Linux: ```shell - $XDG_CONFIG_HOME/kubebuilder/plugins/${name}/${version} +$XDG_CONFIG_HOME/kubebuilder/plugins/${name}/${version} ``` The default value of XDG_CONFIG_HOME is `$HOME/.config`. @@ -82,24 +82,24 @@ The default value of XDG_CONFIG_HOME is `$HOME/.config`. 
On OSX: ```shell - ~/Library/Application Support/kubebuilder/plugins/${name}/${version} +~/Library/Application Support/kubebuilder/plugins/${name}/${version} ``` Based on the above directory scheme, let's say that if the value passed to the `--plugins` CLI flag is `myexternalplugin/v1`: * On Linux: - * `kubebuilder` will search for the `myexternalplugin` binary in `$XDG_CONFIG_HOME/kubebuilder/plugins/myexternalplugin/v1`, where the base of this path in is the binary name. +* `kubebuilder` will search for the `myexternalplugin` binary in `$XDG_CONFIG_HOME/kubebuilder/plugins/myexternalplugin/v1`, where the base of this path in is the binary name. * On OSX: - * Kubebuilder will search for the `myexternalplugin` binary in `$HOME/Library/Application Support/kubebuilder/plugins/myexternalplugin/v1`. +* Kubebuilder will search for the `myexternalplugin` binary in `$HOME/Library/Application Support/kubebuilder/plugins/myexternalplugin/v1`. Note: If the name is ambiguous, then the qualified name `myexternalplugin.my.domain` would be used, so the path would be `$XDG_CONFIG_HOME/kubebuilder/plugins/my/domain/myexternalplugin/v1` on Linux and `$HOME/Library/Application Support/kubebuilder/plugins/my/domain/myexternalplugin/v1` on OSX. * Pros - * `kustomize` which is popular and robust tool, follows this approach in which `apiVersion` and `kind` fields are used to locate the plugin. +* `kustomize` which is popular and robust tool, follows this approach in which `apiVersion` and `kind` fields are used to locate the plugin. - * This approach enforces naming constraints as the permitted character set must be directory name-compatible following naming rules for both Linux and OSX systems. +* This approach enforces naming constraints as the permitted character set must be directory name-compatible following naming rules for both Linux and OSX systems. - * The one-plugin-per-directory requirement eases creation of a plugin bundle for sharing. +* The one-plugin-per-directory requirement eases creation of a plugin bundle for sharing. ### What Plugin system should we use @@ -121,47 +121,47 @@ Currently, the project configuration has two fields to store plugin specific inf * So, where should external plugins be defined in the configuration? - * I propose that the external plugin should get encoded in the project configuration as a part of the `layout` field. - * For example, external plugin `myexternalplugin/v2` can be specified through the `--plugins` flag for every subcommand and also be defined in the project configuration in the `layout` field for plugin resolution. +* I propose that the external plugin should get encoded in the project configuration as a part of the `layout` field. +* For example, external plugin `myexternalplugin/v2` can be specified through the `--plugins` flag for every subcommand and also be defined in the project configuration in the `layout` field for plugin resolution. 
Example `PROJECT` file: ```yaml version: "3" domain: testproject.org -layout: +layout: - go.kubebuilder.io/v3 - myexternalplugin/v2 plugins: - myexternalplugin/v2: - resources: - - domain: testproject.org - group: crew - kind: Captain - version: v2 - declarative.go.kubebuilder.io/v1: - resources: - - domain: testproject.org - group: crew - kind: FirstMate - version: v1 +myexternalplugin/v2: +resources: +- domain: testproject.org +group: crew +kind: Captain +version: v2 +declarative.go.kubebuilder.io/v1: +resources: +- domain: testproject.org +group: crew +kind: FirstMate +version: v1 repo: github.com/test-inc/testproject resources: - group: crew - kind: Captain - version: v1 +kind: Captain +version: v1 ``` ### Communication between `kubebuilder` and external plugins * Why do we need communication between `kubebuilder` and external plugins? - * The in-tree plugins do not need any inter-process communication as they are the same process, and hence, direct calls are made to the respective functions (also referred as hooks) based on the supported subcommands for an in-tree plugin. As Phase 2 plugins is tackling out-of-tree or external plugins, there's a need for inter-process communication between `kubebuilder` and the external plugin as they are two separate processes/binaries. `kubebuilder` needs to communicate the subcommand that the external plugin should run, and all the arguments received in the CLI request by the user. These arguments contain flags which will have to be directly passed to all plugins in the chain. Additionally, it's important to have context of all the files that were scaffolded until that point especially if there is more than one external plugin in the chain. `kubebuilder` attaches that information in the request, along with the command and arguments. For the external plugin, it would need to communicate the subcommand it ran and the updated file contents information that the external plugin scaffolded to `kubebuilder`. The external plugin would also need to provide its help text if requested by `kubebuilder`. As discussed earlier, standard streams seems to be a desirable IPC method of communication for the use-cases that Phase 2 is trying to solve that involves discovery and chaining of external plugins. +* The in-tree plugins do not need any inter-process communication as they are the same process, and hence, direct calls are made to the respective functions (also referred as hooks) based on the supported subcommands for an in-tree plugin. As Phase 2 plugins is tackling out-of-tree or external plugins, there's a need for inter-process communication between `kubebuilder` and the external plugin as they are two separate processes/binaries. `kubebuilder` needs to communicate the subcommand that the external plugin should run, and all the arguments received in the CLI request by the user. These arguments contain flags which will have to be directly passed to all plugins in the chain. Additionally, it's important to have context of all the files that were scaffolded until that point especially if there is more than one external plugin in the chain. `kubebuilder` attaches that information in the request, along with the command and arguments. For the external plugin, it would need to communicate the subcommand it ran and the updated file contents information that the external plugin scaffolded to `kubebuilder`. The external plugin would also need to provide its help text if requested by `kubebuilder`. 
As discussed earlier, standard streams seems to be a desirable IPC method of communication for the use-cases that Phase 2 is trying to solve that involves discovery and chaining of external plugins. * How does `kubebuilder` communicate to external plugins? - * Standard streams have three I/O connections: standard input (`stdin`), standard output (`stdout`) and standard error (`stderr`) and they work well with chaining applications, meaning that output stream of one program can be redirected to the input stream of another. - * Let's say there are two external plugins in the plugin chain. Below is the sequence of how `kubebuilder` communicates to the plugins `myfirstexternalplugin/v1` and `mysecondexternalplugin/v1`. +* Standard streams have three I/O connections: standard input (`stdin`), standard output (`stdout`) and standard error (`stderr`) and they work well with chaining applications, meaning that output stream of one program can be redirected to the input stream of another. +* Let's say there are two external plugins in the plugin chain. Below is the sequence of how `kubebuilder` communicates to the plugins `myfirstexternalplugin/v1` and `mysecondexternalplugin/v1`. ![Kubebuilder to external plugins sequence diagram](https://github.com/rashmigottipati/POC-Phase2-Plugins/blob/main/docs/externalplugins-sequence-diagram.png) @@ -172,16 +172,16 @@ Message passing between `kubebuilder` and the external plugin will occur through The following scenarios shows what `kubebuilder` will send/receive to the external plugin: * `kubebuilder` to external plugin: - * `kubebuilder` constructs a `PluginRequest` that contains the `Command` (such as `init`, `create api`, or `create webhook`), `Args` containing all the raw flags from the CLI request and license boilerplate without comment delimiters, and an empty `Universe` that contains the current virtual state of file contents that is not written to the disk yet. `kubebuilder` writes the `PluginRequest` through `stdin`. +* `kubebuilder` constructs a `PluginRequest` that contains the `Command` (such as `init`, `create api`, or `create webhook`), `Args` containing all the raw flags from the CLI request and license boilerplate without comment delimiters, and an empty `Universe` that contains the current virtual state of file contents that is not written to the disk yet. `kubebuilder` writes the `PluginRequest` through `stdin`. * External plugin to `kubebuilder`: - * The plugin reads the `PluginRequest` through its `stdin` and processes the request based on the `Command` that was sent. If the `Command` doesn't match what the plugin supports, it writes back an error immediately without any further processing. If the `Command` matches what the plugin supports, it constructs a `PluginResponse` containing the `Command` that was executed by the plugin, and modified `Universe` based on the new files that were scaffolded by the external plugin, `Error` and `ErrorMsg` that add any error information, and writes the `PluginResponse` back to `kubebuilder` through `stdout`. +* The plugin reads the `PluginRequest` through its `stdin` and processes the request based on the `Command` that was sent. If the `Command` doesn't match what the plugin supports, it writes back an error immediately without any further processing. 
If the `Command` matches what the plugin supports, it constructs a `PluginResponse` containing the `Command` that was executed by the plugin, and modified `Universe` based on the new files that were scaffolded by the external plugin, `Error` and `ErrorMsg` that add any error information, and writes the `PluginResponse` back to `kubebuilder` through `stdout`. * Note: If `--help` flag is being passed from `kubebuilder` to the external plugin through `PluginRequest`, the plugin attaches its help text information in the `Metadata` field of the `PluginResponse`. Both `PluginRequest` and `PluginResponse` also contain `APIVersion` field to have compatible versioned schemas. * Handling plugin failures across the chain: - * If any plugin in the chain fails, the plugin reports errors back through `PluginResponse` to `kubebuilder` and plugin chain execution will be halted, as one plugin may be dependent on the success of another. All the files that were scaffolded already until that point will not be written to disk to prevent a half committed state. +* If any plugin in the chain fails, the plugin reports errors back through `PluginResponse` to `kubebuilder` and plugin chain execution will be halted, as one plugin may be dependent on the success of another. All the files that were scaffolded already until that point will not be written to disk to prevent a half committed state. ## Implementation Details/Notes/Constraints @@ -191,43 +191,43 @@ The following scenarios shows what `kubebuilder` will send/receive to the extern // PluginRequest contains all information kubebuilder received from the CLI // and plugins executed before it. type PluginRequest struct { - // Command contains the command to be executed by the plugin such as init, create api, etc. - Command string `json:"command"` +// Command contains the command to be executed by the plugin such as init, create api, etc. +Command string `json:"command"` - // APIVersion defines the versioned schema of the PluginRequest that is encoded and sent from Kubebuilder to plugin. - // Initially, this will be marked as alpha (v1alpha1). - APIVersion string `json:"apiVersion"` +// APIVersion defines the versioned schema of the PluginRequest that is encoded and sent from Kubebuilder to plugin. +// Initially, this will be marked as alpha (v1alpha1). +APIVersion string `json:"apiVersion"` - // Args holds the plugin specific arguments that are received from the CLI which are to be passed down to the plugin. - Args []string `json:"args"` +// Args holds the plugin specific arguments that are received from the CLI which are to be passed down to the plugin. +Args []string `json:"args"` - // Universe represents the modified file contents that gets updated over a series of plugin runs - // across the plugin chain. Initially, it starts out as empty. - Universe map[string]string `json:"universe"` +// Universe represents the modified file contents that gets updated over a series of plugin runs +// across the plugin chain. Initially, it starts out as empty. +Universe map[string]string `json:"universe"` } // PluginResponse is returned to kubebuilder by the plugin and contains all files // written by the plugin following a certain command. type PluginResponse struct { - // Command holds the command that gets executed by the plugin such as init, create api, etc. - Command string `json:"command"` +// Command holds the command that gets executed by the plugin such as init, create api, etc. 
+Command string `json:"command"` - // Metadata contains the plugin specific help text that the plugin returns to Kubebuilder when it receives - // `--help` flag from Kubebuilder. - Metadata plugin.SubcommandMetadata `json:"metadata"` +// Metadata contains the plugin specific help text that the plugin returns to Kubebuilder when it receives +// `--help` flag from Kubebuilder. +Metadata plugin.SubcommandMetadata `json:"metadata"` - // APIVersion defines the versioned schema of the PluginResponse that will be written back to kubebuilder. - // Initially, this will be marked as alpha (v1alpha1). - APIVersion string `json:"apiVersion"` +// APIVersion defines the versioned schema of the PluginResponse that will be written back to kubebuilder. +// Initially, this will be marked as alpha (v1alpha1). +APIVersion string `json:"apiVersion"` - // Universe in the PluginResponse represents the updated file contents that was written by the plugin. - Universe map[string]string `json:"universe"` +// Universe in the PluginResponse represents the updated file contents that was written by the plugin. +Universe map[string]string `json:"universe"` - // Error is a boolean type that indicates whether there were any errors due to plugin failures. - Error bool `json:"error,omitempty"` +// Error is a boolean type that indicates whether there were any errors due to plugin failures. +Error bool `json:"error,omitempty"` - // ErrorMsg holds the specific error message of plugin failures. - ErrorMsg string `json:"error_msg,omitempty"` +// ErrorMsg holds the specific error message of plugin failures. +ErrorMsg string `json:"error_msg,omitempty"` } ``` @@ -235,26 +235,26 @@ The following function handles construction of the `PluginRequest` based on the ```go func (p *ExternalPlugin) runExternalProgram(req PluginRequest) (res PluginResponse, err error) { - pluginReq, err := json.Marshal(req) - if err != nil { - return res, err - } - - cmd := exec.Command(p.Path) - cmd.Dir = p.DirContext - cmd.Stdin = bytes.NewBuffer(pluginReq) - cmd.Stderr = os.Stderr - - out, err := cmd.Output() - if err != nil { - fmt.Fprint(os.Stdout, string(out)) - return res, err - } - - if json.Unmarshal(out, &res); err != nil { - return res, err - } - return res, nil +pluginReq, err := json.Marshal(req) +if err != nil { +return res, err +} + +cmd := exec.Command(p.Path) +cmd.Dir = p.DirContext +cmd.Stdin = bytes.NewBuffer(pluginReq) +cmd.Stderr = os.Stderr + +out, err := cmd.Output() +if err != nil { +fmt.Fprint(os.Stdout, string(out)) +return res, err +} + +if json.Unmarshal(out, &res); err != nil { +return res, err +} +return res, nil } ``` @@ -278,10 +278,10 @@ What happens when the above is invoked? `PluginRequest JSON`: ```JSON -{ - "command":"init", - "args":["--domain","example.com"], - "universe":{} +{ +"command":"init", +"args":["--domain","example.com"], +"universe":{} } ``` @@ -299,11 +299,11 @@ What happens when the above is invoked? ```JSON { - "command": "init", - "universe": { - "LICENSE": "Apache 2.0 License\n", - "main.py": "..." - } +"command": "init", +"universe": { +"LICENSE": "Apache 2.0 License\n", +"main.py": "..." 
+} } ``` @@ -319,42 +319,42 @@ A user will provide a list of file paths for `kubebuilder` to discover the plugi * Alternatively, this could be handled in a way that [helm kustomize plugin](https://helm.sh/docs/topics/advanced/#post-rendering) discovers the plugin based on the non-existence of a separator in the path provided, in which case `kubebuilder` will search in `$PATH`, otherwise resolve any relative paths to a fully qualified path. * Pros - * This provides flexibility for the user to specify the file paths that the plugin would be placed in and `kubebuilder` could discover the binaries in those user specified file paths. +* This provides flexibility for the user to specify the file paths that the plugin would be placed in and `kubebuilder` could discover the binaries in those user specified file paths. - * No constraints on plugin binary naming or directory placements from the Kubebuilder side. +* No constraints on plugin binary naming or directory placements from the Kubebuilder side. - * Provides a default value for the plugin directory in case user wants to use that to drop their plugins. +* Provides a default value for the plugin directory in case user wants to use that to drop their plugins. #### Prefixed plugin executable names in $PATH Another approach is adding plugin executables with a prefix `kubebuilder-` followed by the plugin name to the PATH variable. This will enable `kubebuilder` to traverse through the PATH looking for the plugin executables starting with the prefix `kubebuilder-` and matching by the plugin name that was provided in the CLI. Furthermore, a check should be added to verify that the match is an executable or not and return an error if it's not an executable. This approach provides a lot of flexibility in terms of plugin discovery as all the user needs to do is to add the plugin executable to the PATH and `kubebuilder` will discover it. * Pros - * `kubectl` and `git` follow the same approach for discovering plugins, so there’s prior art. +* `kubectl` and `git` follow the same approach for discovering plugins, so there’s prior art. - * There’s a lot of flexibility in just dropping plugin binaries to PATH variable and enabling the discovery without having to enforce any other constraints on the placements of the plugins. +* There’s a lot of flexibility in just dropping plugin binaries to PATH variable and enabling the discovery without having to enforce any other constraints on the placements of the plugins. * Cons - * Enumerating the list of all available plugins might be a bit tough compared to having a single folder with the list of available plugins and having to enumerate those. +* Enumerating the list of all available plugins might be a bit tough compared to having a single folder with the list of available plugins and having to enumerate those. - * These plugin binaries cannot be run in a standalone manner outside of Kubebuilder, so may not be very ideal to add them to the PATH var. +* These plugin binaries cannot be run in a standalone manner outside of Kubebuilder, so may not be very ideal to add them to the PATH var. ## Open questions * Do we want to support the addition of new arbitrary subcommands other than the subcommands (init, create api, create webhook) that we already support? - * Not for the EP or initial implementation, but can revisit later. +* Not for the EP or initial implementation, but can revisit later. * Do we need to discover flags by calling the plugin binary or should we have users define them in the project configuration? 
- * Flags will be passed directly to the external plugins as a string. Flag parse errors will be passed back via `PluginResponse`. +* Flags will be passed directly to the external plugins as a string. Flag parse errors will be passed back via `PluginResponse`. * What alternatives to stdin/stdout exist and why shouldn't we use them? - * Other alternatives exist such as named pipe and sockets, but stdin/stdout seems to be more suitable for our needs. +* Other alternatives exist such as named pipe and sockets, but stdin/stdout seems to be more suitable for our needs. * What happens when two plugins bind the same flag name? Will there be any conflicts? - * As mentioned in the implementation details section, flags are passed directly as a string to plugins and the same string will be passed to each plugin in the chain, so all plugins get the same flag set. Errors should not be returned if an unrecognized flag is parsed. +* As mentioned in the implementation details section, flags are passed directly as a string to plugins and the same string will be passed to each plugin in the chain, so all plugins get the same flag set. Errors should not be returned if an unrecognized flag is parsed. * How should we handle environment variables? - * We would pass the entire CLI environment to the plugin to permit simple external plugin configuration without jumping through hoops. +* We would pass the entire CLI environment to the plugin to permit simple external plugin configuration without jumping through hoops. * Should the API version be a part of the plugin request spec? - * It would be nice to encode APIVersion for `PluginRequest` and `PluginResponse` so the initial schemas can be marked as `v1alpha1`. +* It would be nice to encode APIVersion for `PluginRequest` and `PluginResponse` so the initial schemas can be marked as `v1alpha1`. diff --git a/designs/helper_to_upgrade_projects_by_rescaffolding.md b/designs/helper_to_upgrade_projects_by_rescaffolding.md index 3ec6c329e5b..f5e01e8a8a2 100644 --- a/designs/helper_to_upgrade_projects_by_rescaffolding.md +++ b/designs/helper_to_upgrade_projects_by_rescaffolding.md @@ -37,8 +37,8 @@ provided for the same plugin version. Therefore, you will need to: - You will run the command in the root directory of your project: `kubebuilder alpha generate` - Then, the command will remove the content of your local directory and re-scaffold the project from the scratch - It will allow you to compare your local branch with the remote branch of your project to re-add the code on top OR - if you do not use the flag `--no-backup` then you can compare the local directory with the copy of your project - copied to the path `.backup/project-name/` before the re-scaffold be done. +if you do not use the flag `--no-backup` then you can compare the local directory with the copy of your project +copied to the path `.backup/project-name/` before the re-scaffold be done. - Therefore, you can run make all and test the final result. You will have after all your project updated. **To update the project with major changes provided** @@ -84,7 +84,7 @@ make less painful this process. 
Examples: - Be able to perform the project upgrade to the latest changes without human interactions - Deal and support external plugins - Provides support to [declarative](https://book.kubebuilder.io/plugins/declarative-v1.html) plugin - since it is desired and planned to decouple this solution and donate this plugin to its own authors [More info](https://github.com/kubernetes-sigs/kubebuilder/issues/3186) +since it is desired and planned to decouple this solution and donate this plugin to its own authors [More info](https://github.com/kubernetes-sigs/kubebuilder/issues/3186) - Provide support to older version before having the Project config (Kubebuilder < 3x) and the go/v2 layout which exists to ensure a backwards compatibility with legacy layout provided by Kubebuilder 2x ## Proposal @@ -94,21 +94,21 @@ in the example section above, see: ```shell kubebuilder alpha generate \ - --input-dir= - --output-dir= - --no-backup - --backup-path= - --plugins= +--input-dir= +--output-dir= +--no-backup +--backup-path= +--plugins= ``` **Where**: - input-dir: [Optional] If not informed, then, by default, it is the current directory (project directory). If the `PROJECT` file does not exist, it will fail. - output-dir: [Optional] If not informed then, it should be the current repository. -- no-backup: [Optional] If not informed then, the current directory should be copied to the path `.backup/project-name` -- backup: [Optional] If not informed then, the backup will be copied to the path `.backup/project-name` +- no-backup: [Optional] If not informed then, the current directory should be copied to the path `.backup/project-name` +- backup: [Optional] If not informed then, the backup will be copied to the path `.backup/project-name` - plugins: [Optional] If not informed then, it is the same plugin chain available in the layout field -- binary: [Optional] If not informed then, the command will use KubeBuilder binary installed globaly. +- binary: [Optional] If not informed then, the command will use KubeBuilder binary installed globaly. > Note that the backup created in the current directory must be prefixed with `.`. Otherwise the tool will not able to perform the scaffold to create a new project from the scratch. @@ -118,26 +118,26 @@ This command would mainly perform the following operations: - 1. Check the flags - 2. If the backup flag be used, then check if is a valid path and make a backup of the current project - 3. Copy the whole current directory to `.backup/project-name` -- 4. Ensure that the output path is clean. By default it is the current directory project where the project was scaffolded previously and it should be cleaned up before to do the re-scaffold. +- 4. Ensure that the output path is clean. By default it is the current directory project where the project was scaffolded previously and it should be cleaned up before to do the re-scaffold. Only the content under `.backup/project-name` should be kept. - 4. Read the [PROJECT config][project-config] - 5. Re-run all commands using the KubeBuilder binary to recreate the project in the output directory -The command should also provide a comprensive help with examples of the proposed workflows. So that, users +The command should also provide a comprensive help with examples of the proposed workflows. So that, users are able to understand how to use it when run `--help`. 
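To make the replay step concrete, the sketch below is illustrative only and not the proposed implementation: after the backup and cleanup steps it re-runs the recorded `init` and `create api` commands with the installed `kubebuilder` binary, using argument values borrowed from the examples in this document.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes the installed kubebuilder binary in dir with the given arguments.
func run(dir string, args ...string) error {
	cmd := exec.Command("kubebuilder", args...)
	cmd.Dir = dir
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	outputDir := "./regenerated"
	if err := os.MkdirAll(outputDir, 0o755); err != nil {
		log.Fatal(err)
	}
	// Re-run the commands recorded for the project, in order, to recreate the scaffold.
	steps := [][]string{
		{"init", "--plugins=go/v3", "--domain=testproject.org", "--repo=github.com/test-inc/testproject"},
		{"create", "api", "--group=crew", "--version=v1", "--kind=Captain", "--resource", "--controller"},
	}
	for _, args := range steps {
		if err := run(outputDir, args...); err != nil {
			log.Fatalf("kubebuilder %v failed: %v", args, err)
		}
	}
}
```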
### User Stories **As an Operator author:** -- I can re-generate my project from scratch based on the proposed helper, which executes all the -commands according to my previous input to the project. That way, I can easily migrate my project to the new layout +- I can re-generate my project from scratch based on the proposed helper, which executes all the +commands according to my previous input to the project. That way, I can easily migrate my project to the new layout using the newer CLI/plugin versions, which support the latest changes, bug fixes, and features. - I can regenerate my project from the scratch based on all commands that I used the tool to build my project previously but informing a new init plugin chain, so that I could upgrade my current project to new layout versions and experiment alpha ones. - I would like to re-generate the project from the scratch using the same config provide in the PROJECT file and inform -a path to do a backup of my current directory so that I can also use the backup to compare with the new scaffold and add my custom code +a path to do a backup of my current directory so that I can also use the backup to compare with the new scaffold and add my custom code on top again without the need to compare my local directory and new scaffold with any outside source. **As a Kubebuiler maintainer:** @@ -185,7 +185,7 @@ be checked in this [pull request](https://github.com/kubernetes-sigs/kubebuilder ## Drawbacks - If the value that feature provides does not pay off the effort to keep it - maintained, then we would need to deprecate and remove the feature in the long term. +maintained, then we would need to deprecate and remove the feature in the long term. ## Alternatives diff --git a/designs/simplified-scaffolding.md b/designs/simplified-scaffolding.md index bdbacaa8d8f..5cdc2944f06 100644 --- a/designs/simplified-scaffolding.md +++ b/designs/simplified-scaffolding.md @@ -206,11 +206,11 @@ In this new layout, `main.go` constructs the reconciler: ```go // ... func main() { - // ... - err := (&controllers.MyReconciler{ - MySuperSpecialAppClient: doSomeThingsWithFlags(), - }).SetupWithManager(mgr) - // ... +// ... +err := (&controllers.MyReconciler{ +MySuperSpecialAppClient: doSomeThingsWithFlags(), +}).SetupWithManager(mgr) +// ... } ``` @@ -219,10 +219,10 @@ reconciler: ```go func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error { - return ctrl.NewControllerManagedBy(mgr). - For(&api.MyAppType{}). - Owns(&corev1.Pod{}). - Complete(r) +return ctrl.NewControllerManagedBy(mgr). +For(&api.MyAppType{}). +Owns(&corev1.Pod{}). +Complete(r) } ``` @@ -293,23 +293,23 @@ $ tree ./test/project/api │   └── v1 │   └── types.go └── groupb -    └── v1 -    └── types.go +└── v1 +└── types.go ``` There are three options here: 1. Scaffold with the more complex API structure (this looks pretty close - to what we do today). It doesn't add a ton of complexity, but does - bury types deeper in a directory structure. +to what we do today). It doesn't add a ton of complexity, but does +bury types deeper in a directory structure. 2. Try to move things and rename references. This takes a lot more effort - on the Kubebuilder maintainers' part if we try to rename references - across the codebase. Not so much if we force the user to, but that's - a poorer experience. +on the Kubebuilder maintainers' part if we try to rename references +across the codebase. Not so much if we force the user to, but that's +a poorer experience. 3. 
Tell users to move things, and scaffold out with the new structure. - This is fairly messy for the user. +This is fairly messy for the user. Since growing to multiple API groups seems to be fairly uncommon, it's mostly like safe to take a hybrid approach here -- allow manually @@ -322,14 +322,14 @@ Multiple controllers don't need their own package, but we'd want to scaffold out the builder. We have two options here: 1. Looking for a particular code comment, and appending a new builder - after it. This is a bit more complicated for us, but perhaps provides - a nicer UX. +after it. This is a bit more complicated for us, but perhaps provides +a nicer UX. 2. Simply adding a new controller, and reminding the user to add the - builder themselves. This is easier for the maintainers, but perhaps - a slightly poorer UX for the users. However, writing out a builder by - hand is significantly less complex than adding a controller by hand in - the current structure. +builder themselves. This is easier for the maintainers, but perhaps +a slightly poorer UX for the users. However, writing out a builder by +hand is significantly less complex than adding a controller by hand in +the current structure. Option 1 should be fairly simple, since the logic is already needed for registering types to the scheme, and we can always fall back to emitting @@ -358,10 +358,10 @@ project gets very complicated. ### Additional Tooling Work * Currently the `api/` package will need a `doc.go` file to make - `deepcopy-gen` happy. We should fix this. +`deepcopy-gen` happy. We should fix this. * Currently, `controller-gen crd` needs the `api` directory to be - `pkg/apis//`. We should fix this. +`pkg/apis//`. We should fix this. ## Example @@ -398,53 +398,53 @@ $ tree . package main import ( - "os" +"os" - ctrl "sigs.k8s.io/controller-runtime" - "sigs.k8s.io/controller-runtime/pkg/log/zap" - "k8s.io/apimachinery/pkg/runtime" +ctrl "sigs.k8s.io/controller-runtime" +"sigs.k8s.io/controller-runtime/pkg/log/zap" +"k8s.io/apimachinery/pkg/runtime" - "my.repo/api/v1beta1" - "my.repo/api/v1" - "my.repo/controllers" +"my.repo/api/v1beta1" +"my.repo/api/v1" +"my.repo/controllers" ) var ( - scheme = runtime.NewScheme() - setupLog = ctrl.Log.WithName("setup") +scheme = runtime.NewScheme() +setupLog = ctrl.Log.WithName("setup") ) func init() { - v1beta1.AddToScheme(scheme) - v1.AddToScheme(scheme) - // +kubebuilder:scaffold:scheme +v1beta1.AddToScheme(scheme) +v1.AddToScheme(scheme) +// +kubebuilder:scaffold:scheme } func main() { - ctrl.SetLogger(zap.New(zap.UseDevMode(true))) - - mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme}) - if err != nil { - setupLog.Error(err, "unable to start manager") - os.Exit(1) - } - - err = (&controllers.MyKindReconciler{ - Client: mgr.GetClient(), - log: ctrl.Log.WithName("mykind-controller"), - }).SetupWithManager(mgr) - if err != nil { - setupLog.Error(err, "unable to create controller", "controller", "mykind") - os.Exit(1) - } - - // +kubebuilder:scaffold:builder - - setupLog.Info("starting manager") - if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { - setupLog.Error(err, "problem running manager") - os.Exit(1) - } +ctrl.SetLogger(zap.New(zap.UseDevMode(true))) + +mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme}) +if err != nil { +setupLog.Error(err, "unable to start manager") +os.Exit(1) +} + +err = (&controllers.MyKindReconciler{ +Client: mgr.GetClient(), +log: ctrl.Log.WithName("mykind-controller"), +}).SetupWithManager(mgr) 
+if err != nil { +setupLog.Error(err, "unable to create controller", "controller", "mykind") +os.Exit(1) +} + +// +kubebuilder:scaffold:builder + +setupLog.Info("starting manager") +if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { +setupLog.Error(err, "problem running manager") +os.Exit(1) +} } ``` @@ -458,33 +458,33 @@ func main() { package controllers import ( - "context" +"context" - ctrl "sigs.k8s.io/controller-runtime" - "sigs.k8s.io/controller-runtime/pkg/client" - "github.com/go-logr/logr" +ctrl "sigs.k8s.io/controller-runtime" +"sigs.k8s.io/controller-runtime/pkg/client" +"github.com/go-logr/logr" - "my.repo/api/v1" +"my.repo/api/v1" ) type MyKindReconciler struct { - client.Client - log logr.Logger +client.Client +log logr.Logger } func (r *MyKindReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) { - ctx := context.Background() - log := r.log.WithValues("mykind", req.NamespacedName) +ctx := context.Background() +log := r.log.WithValues("mykind", req.NamespacedName) - // your logic here +// your logic here - return req.Result{}, nil +return req.Result{}, nil } func (r *MyKindReconciler) SetupWithManager(mgr ctrl.Manager) error { - return ctrl.NewControllerManagedBy(mgr). - For(v1.MyKind{}). - Complete(r) +return ctrl.NewControllerManagedBy(mgr). +For(v1.MyKind{}). +Complete(r) } ``` @@ -500,18 +500,18 @@ func (r *MyKindReconciler) SetupWithManager(mgr ctrl.Manager) error { package v1 import ( - "sigs.k8s.io/controller-runtime/pkg/scheme" - "k8s.io/apimachinery/pkg/runtime/schema" +"sigs.k8s.io/controller-runtime/pkg/scheme" +"k8s.io/apimachinery/pkg/runtime/schema" ) var ( - GroupVersion = schema.GroupVersion{Group: "mygroup.test.k8s.io", Version: "v1"} +GroupVersion = schema.GroupVersion{Group: "mygroup.test.k8s.io", Version: "v1"} - // SchemeBuilder is used to add go types to the GroupVersionKind scheme - SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion} +// SchemeBuilder is used to add go types to the GroupVersionKind scheme +SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion} - // AddToScheme adds the types in this group-version to the given scheme. - AddToScheme = SchemeBuilder.AddToScheme +// AddToScheme adds the types in this group-version to the given scheme. +AddToScheme = SchemeBuilder.AddToScheme ) ``` diff --git a/designs/template.md b/designs/template.md index 52ae3ddab85..94f3971cf6b 100644 --- a/designs/template.md +++ b/designs/template.md @@ -29,7 +29,7 @@ design. --> ## Summary diff --git a/docs/CONTRIBUTING-ROLES.md b/docs/CONTRIBUTING-ROLES.md index d7536b03924..0be0c296dc5 100644 --- a/docs/CONTRIBUTING-ROLES.md +++ b/docs/CONTRIBUTING-ROLES.md @@ -58,7 +58,7 @@ The criteria for becoming a reviewer are: - Give 5-10 reviews on PRs - Contribute or review 3-5 PRs substantially (i.e. take on the role of the - defacto "main" reviewer for the PR, contribute a bugfix or feature, etc) +defacto "main" reviewer for the PR, contribute a bugfix or feature, etc) Usually, this will need to occur within a single repository, but if you've worked on a cross-cutting feature, it's ok to count PRs across @@ -84,7 +84,7 @@ Things to look for: - Will it accommodate new changes in the future? - Is it extesnible/layerable (see [DESIGN.md](../DESIGN.md))? - Does it expose a new type from `k8s.io/XYZ`, and, if so, is it worth it? - Is that piece well-designed? +Is that piece well-designed? **For large changes, approvers are responsible for getting reasonble consensus**. 
With the power to approve such changes comes the @@ -98,8 +98,8 @@ an approver are: - Be a reviewer in the area for a couple months - Be the "main" reviewer or contributor for 5-10 substantial (bugfixes, - features, etc) PRs where approvers did not need to leave substantial - additional comments (i.e. where you were acting as a defacto approver). +features, etc) PRs where approvers did not need to leave substantial +additional comments (i.e. where you were acting as a defacto approver). Once you've met those criteria, you can submit yourself as an approver using a PR that edits the revelant `OWNERS` files appropriately. The @@ -125,22 +125,22 @@ responsible for using the following commands to mark PRs and issues with one or more labels, and should also feel free to help answer questions: - `/kind {bug|feature|documentation}`: things that are broken/new - things/things with lots of words, repsectively +things/things with lots of words, repsectively - `/triage support`: questions, and things that might be bugs but might - just be confusion of how to use something +just be confusion of how to use something - `/priority {backlog|important-longterm|important-soon|critical-urgent}`: - how soon we need to deal with the thing (if someone wants - to/eventually/pretty soon/RIGHT NOW OMG THINGS ARE ON FIRE, - respectively) +how soon we need to deal with the thing (if someone wants +to/eventually/pretty soon/RIGHT NOW OMG THINGS ARE ON FIRE, +respectively) - `/good-first-issue`: this is pretty straightforward to implement, has - a clear plan, and clear criteria for being complete +a clear plan, and clear criteria for being complete - `/help`: this could feasibly still be picked up by someone new-ish, but - has some wrinkles or nitty-gritty details that might not make it a good - first issue +has some wrinkles or nitty-gritty details that might not make it a good +first issue See the [Prow reference](https://prow.k8s.io/command-help) for more details. diff --git a/docs/README.md b/docs/README.md index ec9980fda5a..2f816b5ee97 100644 --- a/docs/README.md +++ b/docs/README.md @@ -3,7 +3,7 @@ The kubebuilder book is served using [mdBook](https://github.com/rust-lang-nursery/mdBook). If you want to test changes to the book locally, follow these directions: 1. Follow the instructions at [https://github.com/rust-lang-nursery/mdBook#installation](https://github.com/rust-lang-nursery/mdBook#installation) to - install mdBook. +install mdBook. 2. Make sure [controller-gen](https://pkg.go.dev/sigs.k8s.io/controller-tools/cmd/controller-gen) is installed in `$GOPATH`. 3. cd into the `docs/book` directory 4. 
Run `mdbook serve` diff --git a/docs/book/src/SUMMARY.md b/docs/book/src/SUMMARY.md index cb389867449..c9dbf525b40 100644 --- a/docs/book/src/SUMMARY.md +++ b/docs/book/src/SUMMARY.md @@ -12,69 +12,69 @@ - [Tutorial: Building CronJob](cronjob-tutorial/cronjob-tutorial.md) - - [What's in a basic project?](./cronjob-tutorial/basic-project.md) - - [Every journey needs a start, every program a main](./cronjob-tutorial/empty-main.md) - - [Groups and Versions and Kinds, oh my!](./cronjob-tutorial/gvks.md) - - [Adding a new API](./cronjob-tutorial/new-api.md) - - [Designing an API](./cronjob-tutorial/api-design.md) +- [What's in a basic project?](./cronjob-tutorial/basic-project.md) +- [Every journey needs a start, every program a main](./cronjob-tutorial/empty-main.md) +- [Groups and Versions and Kinds, oh my!](./cronjob-tutorial/gvks.md) +- [Adding a new API](./cronjob-tutorial/new-api.md) +- [Designing an API](./cronjob-tutorial/api-design.md) - - [A Brief Aside: What's the rest of this stuff?](./cronjob-tutorial/other-api-files.md) +- [A Brief Aside: What's the rest of this stuff?](./cronjob-tutorial/other-api-files.md) - - [What's in a controller?](./cronjob-tutorial/controller-overview.md) - - [Implementing a controller](./cronjob-tutorial/controller-implementation.md) +- [What's in a controller?](./cronjob-tutorial/controller-overview.md) +- [Implementing a controller](./cronjob-tutorial/controller-implementation.md) - - [You said something about main?](./cronjob-tutorial/main-revisited.md) +- [You said something about main?](./cronjob-tutorial/main-revisited.md) - - [Implementing defaulting/validating webhooks](./cronjob-tutorial/webhook-implementation.md) - - [Running and deploying the controller](./cronjob-tutorial/running.md) +- [Implementing defaulting/validating webhooks](./cronjob-tutorial/webhook-implementation.md) +- [Running and deploying the controller](./cronjob-tutorial/running.md) - - [Deploying cert-manager](./cronjob-tutorial/cert-manager.md) - - [Deploying webhooks](./cronjob-tutorial/running-webhook.md) +- [Deploying cert-manager](./cronjob-tutorial/cert-manager.md) +- [Deploying webhooks](./cronjob-tutorial/running-webhook.md) - - [Writing tests](./cronjob-tutorial/writing-tests.md) +- [Writing tests](./cronjob-tutorial/writing-tests.md) - - [Epilogue](./cronjob-tutorial/epilogue.md) +- [Epilogue](./cronjob-tutorial/epilogue.md) - [Tutorial: Multi-Version API](./multiversion-tutorial/tutorial.md) - - [Changing things up](./multiversion-tutorial/api-changes.md) - - [Hubs, spokes, and other wheel metaphors](./multiversion-tutorial/conversion-concepts.md) - - [Implementing conversion](./multiversion-tutorial/conversion.md) +- [Changing things up](./multiversion-tutorial/api-changes.md) +- [Hubs, spokes, and other wheel metaphors](./multiversion-tutorial/conversion-concepts.md) +- [Implementing conversion](./multiversion-tutorial/conversion.md) - - [and setting up the webhooks](./multiversion-tutorial/webhooks.md) +- [and setting up the webhooks](./multiversion-tutorial/webhooks.md) - - [Deployment and Testing](./multiversion-tutorial/deployment.md) +- [Deployment and Testing](./multiversion-tutorial/deployment.md) - [Tutorial: Component Config](./component-config-tutorial/tutorial.md) - - [Changing things up](./component-config-tutorial/api-changes.md) - - [Defining your Config](./component-config-tutorial/define-config.md) +- [Changing things up](./component-config-tutorial/api-changes.md) +- [Defining your Config](./component-config-tutorial/define-config.md) - - [Using a 
custom type](./component-config-tutorial/custom-type.md) +- [Using a custom type](./component-config-tutorial/custom-type.md) - - [Adding a new Config Type](./component-config-tutorial/config-type.md) - - [Updating main](./component-config-tutorial/updating-main.md) - - [Defining your Custom Config](./component-config-tutorial/define-custom-config.md) +- [Adding a new Config Type](./component-config-tutorial/config-type.md) +- [Updating main](./component-config-tutorial/updating-main.md) +- [Defining your Custom Config](./component-config-tutorial/define-custom-config.md) --- - [Migrations](./migrations.md) - - [Legacy (before <= v3.0.0)](./migration/legacy.md) - - [Kubebuilder v1 vs v2](migration/legacy/v1vsv2.md) +- [Legacy (before <= v3.0.0)](./migration/legacy.md) +- [Kubebuilder v1 vs v2](migration/legacy/v1vsv2.md) - - [Migration Guide](./migration/legacy/migration_guide_v1tov2.md) +- [Migration Guide](./migration/legacy/migration_guide_v1tov2.md) - - [Kubebuilder v2 vs v3](migration/legacy/v2vsv3.md) +- [Kubebuilder v2 vs v3](migration/legacy/v2vsv3.md) - - [Migration Guide](migration/legacy/migration_guide_v2tov3.md) - - [Migration by updating the files](migration/legacy/manually_migration_guide_v2_v3.md) - - [From v3.0.0 with plugins](./migration/v3-plugins.md) - - [go/v3 vs go/v4](migration/v3vsv4.md) - - - [Migration Guide](migration/migration_guide_gov3_to_gov4.md) - - [Migration by updating the files](migration/manually_migration_guide_gov3_to_gov4.md) - - [Single Group to Multi-Group](./migration/multi-group.md) +- [Migration Guide](migration/legacy/migration_guide_v2tov3.md) +- [Migration by updating the files](migration/legacy/manually_migration_guide_v2_v3.md) +- [From v3.0.0 with plugins](./migration/v3-plugins.md) +- [go/v3 vs go/v4](migration/v3vsv4.md) + +- [Migration Guide](migration/migration_guide_gov3_to_gov4.md) +- [Migration by updating the files](migration/manually_migration_guide_gov3_to_gov4.md) +- [Single Group to Multi-Group](./migration/multi-group.md) - [Project Upgrade Assistant](./reference/rescaffold.md) @@ -82,64 +82,64 @@ - [Reference](./reference/reference.md) - - [Generating CRDs](./reference/generating-crd.md) - - [Using Finalizers](./reference/using-finalizers.md) - - [Good Practices](./reference/good-practices.md) - - [Raising Events](./reference/raising-events.md) - - [Watching Resources](./reference/watching-resources.md) - - [Resources Managed by the Operator](./reference/watching-resources/operator-managed.md) - - [Externally Managed Resources](./reference/watching-resources/externally-managed.md) - - [Kind cluster](reference/kind.md) - - [What's a webhook?](reference/webhook-overview.md) - - [Admission webhook](reference/admission-webhook.md) - - [Webhooks for Core Types](reference/webhook-for-core-types.md) - - [Markers for Config/Code Generation](./reference/markers.md) +- [Generating CRDs](./reference/generating-crd.md) +- [Using Finalizers](./reference/using-finalizers.md) +- [Good Practices](./reference/good-practices.md) +- [Raising Events](./reference/raising-events.md) +- [Watching Resources](./reference/watching-resources.md) +- [Resources Managed by the Operator](./reference/watching-resources/operator-managed.md) +- [Externally Managed Resources](./reference/watching-resources/externally-managed.md) +- [Kind cluster](reference/kind.md) +- [What's a webhook?](reference/webhook-overview.md) +- [Admission webhook](reference/admission-webhook.md) +- [Webhooks for Core Types](reference/webhook-for-core-types.md) +- [Markers for 
Config/Code Generation](./reference/markers.md) - - [CRD Generation](./reference/markers/crd.md) - - [CRD Validation](./reference/markers/crd-validation.md) - - [CRD Processing](./reference/markers/crd-processing.md) - - [Webhook](./reference/markers/webhook.md) - - [Object/DeepCopy](./reference/markers/object.md) - - [RBAC](./reference/markers/rbac.md) +- [CRD Generation](./reference/markers/crd.md) +- [CRD Validation](./reference/markers/crd-validation.md) +- [CRD Processing](./reference/markers/crd-processing.md) +- [Webhook](./reference/markers/webhook.md) +- [Object/DeepCopy](./reference/markers/object.md) +- [RBAC](./reference/markers/rbac.md) - - [controller-gen CLI](./reference/controller-gen.md) - - [completion](./reference/completion.md) - - [Artifacts](./reference/artifacts.md) - - [Platform Support](./reference/platform.md) +- [controller-gen CLI](./reference/controller-gen.md) +- [completion](./reference/completion.md) +- [Artifacts](./reference/artifacts.md) +- [Platform Support](./reference/platform.md) - - [Sub-Module Layouts](./reference/submodule-layouts.md) - - [Using an external Type / API](./reference/using_an_external_type.md) +- [Sub-Module Layouts](./reference/submodule-layouts.md) +- [Using an external Type / API](./reference/using_an_external_type.md) - - [Configuring EnvTest](./reference/envtest.md) +- [Configuring EnvTest](./reference/envtest.md) - - [Metrics](./reference/metrics.md) +- [Metrics](./reference/metrics.md) - - [Reference](./reference/metrics-reference.md) +- [Reference](./reference/metrics-reference.md) - - [Makefile Helpers](./reference/makefile-helpers.md) - - [Project config](./reference/project-config.md) +- [Makefile Helpers](./reference/makefile-helpers.md) +- [Project config](./reference/project-config.md) --- - [Plugins][plugins] - - [Available Plugins](./plugins/available-plugins.md) - - [To scaffold a project](./plugins/to-scaffold-project.md) - - [go/v2 (Deprecated)](./plugins/go-v2-plugin.md) - - [go/v3 (Deprecated)](./plugins/go-v3-plugin.md) - - [go/v4 (Default init scaffold)](./plugins/go-v4-plugin.md) - - [To add optional features](./plugins/to-add-optional-features.md) - - [declarative/v1 (Deprecated)](./plugins/declarative-v1.md) - - [grafana/v1-alpha](./plugins/grafana-v1-alpha.md) - - [deploy-image/v1-alpha](./plugins/deploy-image-plugin-v1-alpha.md) - - [To be extended for others tools](./plugins/to-be-extended.md) - - [kustomize/v1 (Deprecated)](./plugins/kustomize-v1.md) - - [kustomize/v2](./plugins/kustomize-v2.md) - - [Extending the CLI](./plugins/extending-cli.md) - - [Creating your own plugins](./plugins/creating-plugins.md) - - [Testing your own plugins](./plugins/testing-plugins.md) - - [Plugins Versioning](./plugins/plugins-versioning.md) - - [Creating external plugins](./plugins/external-plugins.md) +- [Available Plugins](./plugins/available-plugins.md) +- [To scaffold a project](./plugins/to-scaffold-project.md) +- [go/v2 (Deprecated)](./plugins/go-v2-plugin.md) +- [go/v3 (Deprecated)](./plugins/go-v3-plugin.md) +- [go/v4 (Default init scaffold)](./plugins/go-v4-plugin.md) +- [To add optional features](./plugins/to-add-optional-features.md) +- [declarative/v1 (Deprecated)](./plugins/declarative-v1.md) +- [grafana/v1-alpha](./plugins/grafana-v1-alpha.md) +- [deploy-image/v1-alpha](./plugins/deploy-image-plugin-v1-alpha.md) +- [To be extended for others tools](./plugins/to-be-extended.md) +- [kustomize/v1 (Deprecated)](./plugins/kustomize-v1.md) +- [kustomize/v2](./plugins/kustomize-v2.md) +- [Extending the 
CLI](./plugins/extending-cli.md) +- [Creating your own plugins](./plugins/creating-plugins.md) +- [Testing your own plugins](./plugins/testing-plugins.md) +- [Plugins Versioning](./plugins/plugins-versioning.md) +- [Creating external plugins](./plugins/external-plugins.md) --- diff --git a/docs/book/src/architecture.md b/docs/book/src/architecture.md index ef12bd22fa2..bccb71898c1 100644 --- a/docs/book/src/architecture.md +++ b/docs/book/src/architecture.md @@ -1,6 +1,6 @@ # Architecture Concept Diagram -The following diagram will help you get a better idea over the Kubebuilder concepts and architecture. +The following diagram will help you get a better idea over the Kubebuilder concepts and architecture. {{#include ./kb_concept_diagram.svg}} diff --git a/docs/book/src/component-config-tutorial/api-changes.md b/docs/book/src/component-config-tutorial/api-changes.md index edd39223edd..e09b6b7328d 100644 --- a/docs/book/src/component-config-tutorial/api-changes.md +++ b/docs/book/src/component-config-tutorial/api-changes.md @@ -14,8 +14,8 @@ Please, be aware that it will force Kubebuilder remove this option soon in futur This tutorial will show you how to create a custom configuration file for your project by modifying a project generated with the `--component-config` flag -passed to the `init` command. The full tutorial's source can be found -[here][tutorial-source]. Make sure you've gone through the [installation +passed to the `init` command. The full tutorial's source can be found +[here][tutorial-source]. Make sure you've gone through the [installation steps](/quick-start.md#installation) before continuing. ## New project: @@ -37,9 +37,9 @@ should be loaded from. ```go var configFile string flag.StringVar(&configFile, "config", "", - "The controller will load its initial configuration from this file. "+ - "Omit this flag to use the default configuration values. "+ - "Command-line flags override configuration from this file.") +"The controller will load its initial configuration from this file. "+ +"Omit this flag to use the default configuration values. "+ +"Command-line flags override configuration from this file.") ``` Now, we can setup the `Options` struct and check if the `configFile` is set, @@ -51,11 +51,11 @@ function on `Options` to parse and populate the `Options` from the config. var err error options := ctrl.Options{Scheme: scheme} if configFile != "" { - options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile)) - if err != nil { - setupLog.Error(err, "unable to load the config file") - os.Exit(1) - } +options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile)) +if err != nil { +setupLog.Error(err, "unable to load the config file") +os.Exit(1) +} } ``` @@ -63,7 +63,7 @@ if configFile != "" {

Your `Options` may already have defaults set from flags.

-If you have previously allowed other `flags` like `--metrics-bind-addr` or +If you have previously allowed other `flags` like `--metrics-bind-addr` or `--enable-leader-election`, you'll want to set those on the `Options` before loading the config from the file. @@ -86,14 +86,14 @@ Create the file `/config/manager/controller_manager_config.yaml` with the follow apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: - healthProbeBindAddress: :8081 +healthProbeBindAddress: :8081 metrics: - bindAddress: 127.0.0.1:8080 +bindAddress: 127.0.0.1:8080 webhook: - port: 9443 +port: 9443 leaderElection: - leaderElect: true - resourceName: ecaf1259.tutorial.kubebuilder.io +leaderElect: true +resourceName: ecaf1259.tutorial.kubebuilder.io # leaderElectionReleaseOnCancel defines if the leader should step down volume # when the Manager ends. This requires the binary to immediately end when the # Manager is stopped, otherwise, this setting is unsafe. Setting this significantly @@ -110,12 +110,12 @@ Update the file `/config/manager/kustomization.yaml` by adding at the bottom the ```yaml generatorOptions: - disableNameSuffixHash: true +disableNameSuffixHash: true configMapGenerator: - name: manager-config - files: - - controller_manager_config.yaml +files: +- controller_manager_config.yaml ``` Update the file `default/kustomization.yaml` by adding under the [`patchesStrategicMerge:` key](https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_patchesstrategicmerge_) the following patch: @@ -131,18 +131,18 @@ Update the file `default/manager_config_patch.yaml` by adding under the `spec:` ```yaml spec: - template: - spec: - containers: - - name: manager - args: - - "--config=controller_manager_config.yaml" - volumeMounts: - - name: manager-config - mountPath: /controller_manager_config.yaml - subPath: controller_manager_config.yaml - volumes: - - name: manager-config - configMap: - name: manager-config +template: +spec: +containers: +- name: manager +args: +- "--config=controller_manager_config.yaml" +volumeMounts: +- name: manager-config +mountPath: /controller_manager_config.yaml +subPath: controller_manager_config.yaml +volumes: +- name: manager-config +configMap: +name: manager-config ``` diff --git a/docs/book/src/component-config-tutorial/custom-type.md b/docs/book/src/component-config-tutorial/custom-type.md index 55910168e48..a147b4f96a8 100644 --- a/docs/book/src/component-config-tutorial/custom-type.md +++ b/docs/book/src/component-config-tutorial/custom-type.md @@ -28,4 +28,4 @@ configurations, e.g. `ClusterName`, `Region` or anything serializable into updating your `main.go` to setup the new type for parsing. The rest of this tutorial will walk through implementing a custom component -config type. \ No newline at end of file +config type. \ No newline at end of file diff --git a/docs/book/src/component-config-tutorial/define-config.md b/docs/book/src/component-config-tutorial/define-config.md index aad23970c08..c337954d9b3 100644 --- a/docs/book/src/component-config-tutorial/define-config.md +++ b/docs/book/src/component-config-tutorial/define-config.md @@ -13,7 +13,7 @@ Please, be aware that it will force Kubebuilder remove this option soon in futur Now that you have a component config base project we need to customize the -values that are passed into the controller, to do this we can take a look at +values that are passed into the controller, to do this we can take a look at `config/manager/controller_manager_config.yaml`. 
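For quick reference, the scaffolded `controller_manager_config.yaml` looks roughly like the snippet below (taken from the default scaffold shown earlier in this tutorial; the leader-election `resourceName` is generated per project, so yours will differ):

```yaml
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
health:
  # address served by the manager's health probe endpoint
  healthProbeBindAddress: :8081
metrics:
  # address the metrics endpoint binds to
  bindAddress: 127.0.0.1:8080
webhook:
  # port the webhook server listens on
  port: 9443
leaderElection:
  leaderElect: true
  # generated per project; yours will differ
  resourceName: ecaf1259.tutorial.kubebuilder.io
```

Any value you change here is picked up by `options.AndFrom(ctrl.ConfigFile().AtPath(configFile))` the next time the manager starts.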
{{#literatego ./testdata/controller_manager_config.yaml}} diff --git a/docs/book/src/component-config-tutorial/define-custom-config.md b/docs/book/src/component-config-tutorial/define-custom-config.md index 454b45b3c6a..13a65f2f4c5 100644 --- a/docs/book/src/component-config-tutorial/define-custom-config.md +++ b/docs/book/src/component-config-tutorial/define-custom-config.md @@ -12,7 +12,7 @@ Please, be aware that it will force Kubebuilder remove this option soon in futur -Now that you have a custom component config we change the +Now that you have a custom component config we change the `config/manager/controller_manager_config.yaml` to use the new GVK you defined. {{#literatego ./testdata/project/config/manager/controller_manager_config.yaml}} diff --git a/docs/book/src/component-config-tutorial/testdata/project/README.md b/docs/book/src/component-config-tutorial/testdata/project/README.md index 3fca7276d17..f5b468f0cde 100644 --- a/docs/book/src/component-config-tutorial/testdata/project/README.md +++ b/docs/book/src/component-config-tutorial/testdata/project/README.md @@ -19,8 +19,8 @@ make docker-build docker-push IMG=/project:tag ``` -**NOTE:** This image ought to be published in the personal registry you specified. -And it is required to have access to pull the image from the working environment. +**NOTE:** This image ought to be published in the personal registry you specified. +And it is required to have access to pull the image from the working environment. Make sure you have the proper permission to the registry if the above commands don’t work. **Install the CRDs into the cluster:** @@ -35,7 +35,7 @@ make install make deploy IMG=/project:tag ``` -> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin +> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin. **Create instances of your solution** @@ -104,7 +104,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, diff --git a/docs/book/src/component-config-tutorial/tutorial.md b/docs/book/src/component-config-tutorial/tutorial.md index 4482f686bb2..b388f653c41 100644 --- a/docs/book/src/component-config-tutorial/tutorial.md +++ b/docs/book/src/component-config-tutorial/tutorial.md @@ -3,9 +3,9 @@ ## When to use it ? -If you are looking to scaffold the kustomize configuration manifests for your own language plugin +If you are looking to scaffold the kustomize configuration manifests for your own language plugin ## How to use it ? 
If you are looking to define that your language plugin should use kustomize use the [Bundle Plugin][bundle] to specify that your language plugin is a composition with your plugin responsible for scaffold -all that is language specific and kustomize for its configuration, see: +all that is language specific and kustomize for its configuration, see: ```go - // Bundle plugin which built the golang projects scaffold by Kubebuilder go/v3 - // The follow code is creating a new plugin with its name and version via composition - // You can define that one plugin is composite by 1 or Many others plugins - gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier), - plugin.WithVersion(plugin.Version{Number: 3}), - plugin.WithPlugins(kustomizecommonv1.Plugin{}, golangv3.Plugin{}), // scaffold the config/ directory and all kustomize files - // Scaffold the Golang files and all that specific for the language e.g. go.mod, apis, controllers - ) +// Bundle plugin which built the golang projects scaffold by Kubebuilder go/v3 +// The follow code is creating a new plugin with its name and version via composition +// You can define that one plugin is composite by 1 or Many others plugins +gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier), +plugin.WithVersion(plugin.Version{Number: 3}), +plugin.WithPlugins(kustomizecommonv1.Plugin{}, golangv3.Plugin{}), // scaffold the config/ directory and all kustomize files +// Scaffold the Golang files and all that specific for the language e.g. go.mod, apis, controllers +) ``` Also, with Kubebuilder, you can use kustomize alone via: ```sh -kubebuilder init --plugins=kustomize/v1 -$ ls -la +kubebuilder init --plugins=kustomize/v1 +$ ls -la total 24 drwxr-xr-x 6 camilamacedo86 staff 192 31 Mar 09:56 . drwxr-xr-x 11 camilamacedo86 staff 352 29 Mar 21:23 .. @@ -84,7 +84,7 @@ Or combined with the base language plugins: ```sh # Provides the same scaffold of go/v3 plugin which is a composition (kubebuilder init --plugins=go/v3) -kubebuilder init --plugins=kustomize/v1,base.go.kubebuilder.io/v3 --domain example.org --repo example.org/guestbook-operator +kubebuilder init --plugins=kustomize/v1,base.go.kubebuilder.io/v3 --domain example.org --repo example.org/guestbook-operator ``` ## Subcommands @@ -102,7 +102,7 @@ Its implementation for the subcommand create api will scaffold the kustomize man which are specific for each API, see [here][kustomize-create-api]. The same applies to its implementation for create webhook. - + ## Affected files @@ -112,7 +112,7 @@ The following scaffolds will be created or updated by this plugin: ## Further resources -* Check the kustomize [plugin implementation](https://github.com/kubernetes-sigs/kubebuilder/tree/master/pkg/plugins/common/kustomize) +* Check the kustomize [plugin implementation](https://github.com/kubernetes-sigs/kubebuilder/tree/master/pkg/plugins/common/kustomize) * Check the [kustomize documentation][kustomize-docs] * Check the [kustomize repository][kustomize-github] diff --git a/docs/book/src/plugins/kustomize-v2.md b/docs/book/src/plugins/kustomize-v2.md index b8ae3d9348d..068827a5c98 100644 --- a/docs/book/src/plugins/kustomize-v2.md +++ b/docs/book/src/plugins/kustomize-v2.md @@ -1,4 +1,4 @@ -# [Default Scaffold] Kustomize v2 +# [Default Scaffold] Kustomize v2 The kustomize plugin allows you to scaffold all kustomize manifests used to work with the language base plugin `base.go.kubebuilder.io/v4`. 
This plugin is used to generate the manifest under `config/` directory for the projects build within the go/v4 plugin (default scaffold). @@ -36,19 +36,19 @@ all that is language specific and kustomize for its configuration, see: ```go import ( ... - kustomizecommonv2alpha "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v2" - golangv4 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v4" +kustomizecommonv2alpha "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v2" +golangv4 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v4" ... ) - // Bundle plugin which built the golang projects scaffold by Kubebuilder go/v3 - // The follow code is creating a new plugin with its name and version via composition - // You can define that one plugin is composite by 1 or Many others plugins - gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier), - plugin.WithVersion(plugin.Version{Number: 3}), - plugin.WithPlugins(kustomizecommonv2.Plugin{}, golangv3.Plugin{}), // scaffold the config/ directory and all kustomize files - // Scaffold the Golang files and all that specific for the language e.g. go.mod, apis, controllers - ) +// Bundle plugin which built the golang projects scaffold by Kubebuilder go/v3 +// The follow code is creating a new plugin with its name and version via composition +// You can define that one plugin is composite by 1 or Many others plugins +gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier), +plugin.WithVersion(plugin.Version{Number: 3}), +plugin.WithPlugins(kustomizecommonv2.Plugin{}, golangv3.Plugin{}), // scaffold the config/ directory and all kustomize files +// Scaffold the Golang files and all that specific for the language e.g. go.mod, apis, controllers +) ``` Also, with Kubebuilder, you can use kustomize/v2 alone via: diff --git a/docs/book/src/plugins/plugins-versioning.md b/docs/book/src/plugins/plugins-versioning.md index 8104a5ca31b..add3389f875 100644 --- a/docs/book/src/plugins/plugins-versioning.md +++ b/docs/book/src/plugins/plugins-versioning.md @@ -1,7 +1,7 @@ # Plugins Versioning | Name | Example | Description | -|----------|-------------|--------| +|----------|-------------|--------| | Kubebuilder version | `v2.2.0`, `v2.3.0`, `v2.3.1` | Tagged versions of the Kubebuilder project, representing changes to the source code in this repository. See the [releases][kb-releases] page for binary releases. | | Project version | `"1"`, `"2"`, `"3"` | Project version defines the scheme of a `PROJECT` configuration file. This version is defined in a `PROJECT` file's `version`. | | Plugin version | `v2`, `v3` | Represents the version of an individual plugin, as well as the corresponding scaffolding that it generates. This version is defined in a plugin key, ex. `go.kubebuilder.io/v2`. See the [design doc][cli-plugins-versioning] for more details. 
| diff --git a/docs/book/src/plugins/testing-plugins.md b/docs/book/src/plugins/testing-plugins.md index 904039e9859..e8d65d7ea5e 100644 --- a/docs/book/src/plugins/testing-plugins.md +++ b/docs/book/src/plugins/testing-plugins.md @@ -10,26 +10,26 @@ You can test your plugin in two dimension: You can check [Kubebuilder/v3/test/e2e/utils](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/test/e2e/utils) package that offers `TestContext` of rich methods: - [NewTestContext](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L51) helps define: - - Temporary folder for testing projects - - Temporary controller-manager image - - [Kubectl execution method](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/test/e2e/utils#Kubectl) - - The cli executable (`kubebuilder`, `operator-sdk`, OR your extended-cli) +- Temporary folder for testing projects +- Temporary controller-manager image +- [Kubectl execution method](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/test/e2e/utils#Kubectl) +- The cli executable (`kubebuilder`, `operator-sdk`, OR your extended-cli) Once defined, you can use `TestContext` to: 1. Setup testing environment, e.g: - - Clean up the environment, create temp dir. See [Prepare](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L97) - - Install prerequisites CRDs: See [InstallCertManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L138), [InstallPrometheusManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L171) +- Clean up the environment, create temp dir. See [Prepare](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L97) +- Install prerequisites CRDs: See [InstallCertManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L138), [InstallPrometheusManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L171) 2. Validate the plugin behavior, e.g: - - Trigger the plugin's bound subcommands. See [Init](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L213), [CreateAPI](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L222) - - Use [PluginUtil](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/pkg/plugin/util) to verify the scaffolded outputs. See [InsertCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/pkg/plugin/util/util.go#L67), [ReplaceInFile](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L196), [UncommendCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L86) +- Trigger the plugin's bound subcommands. See [Init](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L213), [CreateAPI](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/test/e2e/utils/test_context.go#L222) +- Use [PluginUtil](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/pkg/plugin/util) to verify the scaffolded outputs. See [InsertCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/pkg/plugin/util/util.go#L67), [ReplaceInFile](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L196), [UncommendCode](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.6.0/pkg/plugin/util/util.go#L86) 3. 
Further make sure the scaffolded output works, e.g: - - Execute commands in your `Makefile`. See [Make](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L240) - - Temporary load image of the testing controller. See [LoadImageToKindCluster](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L283) - - Call Kubectl to validate running resources. See [utils.Kubectl](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/test/e2e/utils#Kubectl) +- Execute commands in your `Makefile`. See [Make](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L240) +- Temporary load image of the testing controller. See [LoadImageToKindCluster](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L283) +- Call Kubectl to validate running resources. See [utils.Kubectl](https://pkg.go.dev/sigs.k8s.io/kubebuilder/v3/test/e2e/utils#Kubectl) 4. Delete temporary resources after testing exited, e.g: - - Uninstall prerequisites CRDs: See [UninstallPrometheusOperManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L183) - - Delete temp dir. See [Destroy](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L255) +- Uninstall prerequisites CRDs: See [UninstallPrometheusOperManager](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L183) +- Delete temp dir. See [Destroy](https://github.com/kubernetes-sigs/kubebuilder/blob/v3.7.0/test/e2e/utils/test_context.go#L255) **References:** [operator-sdk e2e tests](https://github.com/operator-framework/operator-sdk/tree/master/test/e2e/go), [kubebuiler e2e tests](https://github.com/kubernetes-sigs/kubebuilder/tree/master/test/e2e/v3) @@ -45,40 +45,40 @@ The commands are very similar as mentioned in [creating-plugins](creating-plugin Following is a general workflow to create a sample by the plugin `go/v3`: (`kbc` is an instance of `TextContext`) - To initialized a project: - ```go - By("initializing a project") - err = kbc.Init( - "--plugins", "go/v3", - "--project-version", "3", - "--domain", kbc.Domain, - "--fetch-deps=false", - "--component-config=true", - ) - ExpectWithOffset(1, err).NotTo(HaveOccurred()) - ``` +```go +By("initializing a project") +err = kbc.Init( +"--plugins", "go/v3", +"--project-version", "3", +"--domain", kbc.Domain, +"--fetch-deps=false", +"--component-config=true", +) +ExpectWithOffset(1, err).NotTo(HaveOccurred()) +``` - To define API: - ```go - By("creating API definition") - err = kbc.CreateAPI( - "--group", kbc.Group, - "--version", kbc.Version, - "--kind", kbc.Kind, - "--namespaced", - "--resource", - "--controller", - "--make=false", - ) - ExpectWithOffset(1, err).NotTo(HaveOccurred()) - ``` +```go +By("creating API definition") +err = kbc.CreateAPI( +"--group", kbc.Group, +"--version", kbc.Version, +"--kind", kbc.Kind, +"--namespaced", +"--resource", +"--controller", +"--make=false", +) +ExpectWithOffset(1, err).NotTo(HaveOccurred()) +``` - To scaffold webhook configurations: - ```go - By("scaffolding mutating and validating webhooks") - err = kbc.CreateWebhook( - "--group", kbc.Group, - "--version", kbc.Version, - "--kind", kbc.Kind, - "--defaulting", - "--programmatic-validation", - ) - ExpectWithOffset(1, err).NotTo(HaveOccurred()) - ``` +```go +By("scaffolding mutating and validating webhooks") +err = kbc.CreateWebhook( +"--group", kbc.Group, +"--version", kbc.Version, +"--kind", 
kbc.Kind, +"--defaulting", +"--programmatic-validation", +) +ExpectWithOffset(1, err).NotTo(HaveOccurred()) +``` diff --git a/docs/book/src/quick-start.md b/docs/book/src/quick-start.md index 2abb74fab1f..07578f1882b 100644 --- a/docs/book/src/quick-start.md +++ b/docs/book/src/quick-start.md @@ -102,33 +102,33 @@ make manifests ```go // GuestbookSpec defines the desired state of Guestbook type GuestbookSpec struct { - // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster - // Important: Run "make" to regenerate code after modifying this file +// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster +// Important: Run "make" to regenerate code after modifying this file - // Quantity of instances - // +kubebuilder:validation:Minimum=1 - // +kubebuilder:validation:Maximum=10 - Size int32 `json:"size"` +// Quantity of instances +// +kubebuilder:validation:Minimum=1 +// +kubebuilder:validation:Maximum=10 +Size int32 `json:"size"` - // Name of the ConfigMap for GuestbookSpec's configuration - // +kubebuilder:validation:MaxLength=15 - // +kubebuilder:validation:MinLength=1 - ConfigMapName string `json:"configMapName"` +// Name of the ConfigMap for GuestbookSpec's configuration +// +kubebuilder:validation:MaxLength=15 +// +kubebuilder:validation:MinLength=1 +ConfigMapName string `json:"configMapName"` - // +kubebuilder:validation:Enum=Phone;Address;Name - Type string `json:"alias,omitempty"` +// +kubebuilder:validation:Enum=Phone;Address;Name +Type string `json:"alias,omitempty"` } // GuestbookStatus defines the observed state of Guestbook type GuestbookStatus struct { - // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster - // Important: Run "make" to regenerate code after modifying this file +// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster +// Important: Run "make" to regenerate code after modifying this file - // PodName of the active Guestbook node. - Active string `json:"active"` +// PodName of the active Guestbook node. +Active string `json:"active"` - // PodNames of the standby Guestbook nodes. - Standby []string `json:"standby"` +// PodNames of the standby Guestbook nodes. +Standby []string `json:"standby"` } // +kubebuilder:object:root=true @@ -137,11 +137,11 @@ type GuestbookStatus struct { // Guestbook is the Schema for the guestbooks API type Guestbook struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata,omitempty"` +metav1.TypeMeta `json:",inline"` +metav1.ObjectMeta `json:"metadata,omitempty"` - Spec GuestbookSpec `json:"spec,omitempty"` - Status GuestbookStatus `json:"status,omitempty"` +Spec GuestbookSpec `json:"spec,omitempty"` +Status GuestbookStatus `json:"status,omitempty"` } ``` @@ -230,7 +230,7 @@ make undeploy ## Next Step Now, see the [architecture concept diagram][architecture-concept-diagram] for a better overview and follow up the -[CronJob tutorial][cronjob-tutorial] to better understand how it works by developing a +[CronJob tutorial][cronjob-tutorial] to better understand how it works by developing a demo example project. diff --git a/docs/book/src/reference/makefile-helpers.md b/docs/book/src/reference/makefile-helpers.md index 984c4a9f2f5..800c50cf71c 100644 --- a/docs/book/src/reference/makefile-helpers.md +++ b/docs/book/src/reference/makefile-helpers.md @@ -10,8 +10,8 @@ The projects are built with Go and you have a lot of ways to do that. 
One of the # Run with Delve for development purposes against the configured Kubernetes cluster in ~/.kube/config # Delve is a debugger for the Go programming language. More info: https://github.com/go-delve/delve run-delve: generate fmt vet manifests - go build -gcflags "all=-trimpath=$(shell go env GOPATH)" -o bin/manager main.go - dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./bin/manager +go build -gcflags "all=-trimpath=$(shell go env GOPATH)" -o bin/manager main.go +dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./bin/manager ``` ## To change the version of CRDs @@ -21,7 +21,7 @@ generates CRDs for kubebuilder projects, wrapped in the following `make` rule: ```makefile manifests: controller-gen - $(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases +$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases ``` `controller-gen` lets you specify what CRD API version to generate (either "v1", the default, or "v1beta1"). @@ -32,7 +32,7 @@ found at the top of your Makefile: CRD_OPTIONS ?= "crd:crdVersions={v1beta1},preserveUnknownFields=false" manifests: controller-gen - $(CONTROLLER_GEN) rbac:roleName=manager-role $(CRD_OPTIONS) webhook paths="./..." output:crd:artifacts:config=config/crd/bases +$(CONTROLLER_GEN) rbac:roleName=manager-role $(CRD_OPTIONS) webhook paths="./..." output:crd:artifacts:config=config/crd/bases ``` ## To get all the manifests without deploying @@ -43,7 +43,7 @@ To accomplish this, add the following lines to the Makefile: ```makefile dry-run: manifests - cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG} - mkdir -p dry-run - $(KUSTOMIZE) build config/default > dry-run/manifests.yaml +cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG} +mkdir -p dry-run +$(KUSTOMIZE) build config/default > dry-run/manifests.yaml ``` diff --git a/docs/book/src/reference/markers.md b/docs/book/src/reference/markers.md index 0520717b0f2..b0653dd75fa 100644 --- a/docs/book/src/reference/markers.md +++ b/docs/book/src/reference/markers.md @@ -37,12 +37,12 @@ Kubebuilder projects have two `make` targets that make use of controller-gen: - `make manifests` generates Kubernetes object YAML, like - [CustomResourceDefinitions](./markers/crd.md), - [WebhookConfigurations](./markers/webhook.md), and [RBAC - roles](./markers/rbac.md). +[CustomResourceDefinitions](./markers/crd.md), +[WebhookConfigurations](./markers/webhook.md), and [RBAC +roles](./markers/rbac.md). - `make generate` generates code, like [runtime.Object/DeepCopy - implementations](./markers/object.md). +implementations](./markers/object.md). See [Generating CRDs](./generating-crd.md) for a comprehensive overview. @@ -54,17 +54,17 @@ controller-tools](https://pkg.go.dev/sigs.k8s.io/controller-tools/pkg/markers?ta In general, markers may either be: - **Empty** (`+kubebuilder:validation:Optional`): empty markers are like boolean flags on the command line - -- just specifying them enables some behavior. +-- just specifying them enables some behavior. - **Anonymous** (`+kubebuilder:validation:MaxItems=2`): anonymous markers take - a single value as their argument. +a single value as their argument. - **Multi-option** - (`+kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string`): multi-option - markers take one or more named arguments. 
The first argument is - separated from the name by a colon, and latter arguments are - comma-separated. Order of arguments doesn't matter. Some arguments may - be optional. +(`+kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string`): multi-option +markers take one or more named arguments. The first argument is +separated from the name by a colon, and latter arguments are +comma-separated. Order of arguments doesn't matter. Some arguments may +be optional. Marker arguments may be strings, ints, bools, slices, or maps thereof. Strings, ints, and bools follow their Go syntax: diff --git a/docs/book/src/reference/metrics.md b/docs/book/src/reference/metrics.md index 33e7e3b0a13..30616dbccc7 100644 --- a/docs/book/src/reference/metrics.md +++ b/docs/book/src/reference/metrics.md @@ -30,15 +30,15 @@ You can also apply the following `ClusterRoleBinding`: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: prometheus-k8s-rolebinding +name: prometheus-k8s-rolebinding roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: prometheus-k8s-role +apiGroup: rbac.authorization.k8s.io +kind: ClusterRole +name: prometheus-k8s-role subjects: - - kind: ServiceAccount - name: - namespace: +- kind: ServiceAccount +name: +namespace: ``` The `prometheus-k8s-role` referenced here should provide the necessary permissions to allow prometheus scrape metrics from operator pods. @@ -48,12 +48,12 @@ The `prometheus-k8s-role` referenced here should provide the necessary permissio Follow the steps below to export the metrics using the Prometheus Operator: 1. Install Prometheus and Prometheus Operator. - We recommend using [kube-prometheus](https://github.com/coreos/kube-prometheus#installing) - in production if you don't have your own monitoring system. - If you are just experimenting, you can only install Prometheus and Prometheus Operator. +We recommend using [kube-prometheus](https://github.com/coreos/kube-prometheus#installing) +in production if you don't have your own monitoring system. +If you are just experimenting, you can only install Prometheus and Prometheus Operator. 2. Uncomment the line `- ../prometheus` in the `config/default/kustomization.yaml`. - It creates the `ServiceMonitor` resource which enables exporting the metrics. +It creates the `ServiceMonitor` resource which enables exporting the metrics. ```yaml # [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. 
@@ -103,28 +103,28 @@ For example: ```go import ( - "github.com/prometheus/client_golang/prometheus" - "sigs.k8s.io/controller-runtime/pkg/metrics" +"github.com/prometheus/client_golang/prometheus" +"sigs.k8s.io/controller-runtime/pkg/metrics" ) var ( - goobers = prometheus.NewCounter( - prometheus.CounterOpts{ - Name: "goobers_total", - Help: "Number of goobers proccessed", - }, - ) - gooberFailures = prometheus.NewCounter( - prometheus.CounterOpts{ - Name: "goober_failures_total", - Help: "Number of failed goobers", - }, - ) +goobers = prometheus.NewCounter( +prometheus.CounterOpts{ +Name: "goobers_total", +Help: "Number of goobers proccessed", +}, +) +gooberFailures = prometheus.NewCounter( +prometheus.CounterOpts{ +Name: "goober_failures_total", +Help: "Number of failed goobers", +}, +) ) func init() { - // Register custom metrics with the global prometheus registry - metrics.Registry.MustRegister(goobers, gooberFailures) +// Register custom metrics with the global prometheus registry +metrics.Registry.MustRegister(goobers, gooberFailures) } ``` diff --git a/docs/book/src/reference/platform.md b/docs/book/src/reference/platform.md index b8c93cfab68..1ca16d14d1b 100644 --- a/docs/book/src/reference/platform.md +++ b/docs/book/src/reference/platform.md @@ -1,12 +1,12 @@ # Platforms Supported -Kubebuilder produces solutions that by default can work on multiple platforms or specific ones, depending on how you +Kubebuilder produces solutions that by default can work on multiple platforms or specific ones, depending on how you build and configure your workloads. This guide aims to help you properly configure your projects according to your needs. ## Overview -To provide support on specific or multiple platforms, you must ensure that all images used in workloads are built to -support the desired platforms. Note that they may not be the same as the platform where you develop your solutions and use KubeBuilder, but instead the platform(s) where your solution should run and be distributed. +To provide support on specific or multiple platforms, you must ensure that all images used in workloads are built to +support the desired platforms. Note that they may not be the same as the platform where you develop your solutions and use KubeBuilder, but instead the platform(s) where your solution should run and be distributed. It is recommended to build solutions that work on multiple platforms so that your project works on any Kubernetes cluster regardless of the underlying operating system and architecture. @@ -16,124 +16,124 @@ The following covers what you need to do to provide the support for one or more ### 1) Build workload images to provide the support for other platform(s) -The images used in workloads such as in your Pods/Deployments will need to provide the support for this other platform. -You can inspect the images using a ManifestList of supported platforms using the command +The images used in workloads such as in your Pods/Deployments will need to provide the support for this other platform. 
+You can inspect the images using a ManifestList of supported platforms using the command [docker manifest inspect ][docker-manifest], i.e.: ```shell $ docker manifest inspect myresgystry/example/myimage:v0.0.1 { - "schemaVersion": 2, - "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json", - "manifests": [ - { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 739, - "digest": "sha256:a274a1a2af811a1daf3fd6b48ff3d08feb757c2c3f3e98c59c7f85e550a99a32", - "platform": { - "architecture": "arm64", - "os": "linux" - } - }, - { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 739, - "digest": "sha256:d801c41875f12ffd8211fffef2b3a3d1a301d99f149488d31f245676fa8bc5d9", - "platform": { - "architecture": "amd64", - "os": "linux" - } - }, - { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 739, - "digest": "sha256:f4423c8667edb5372fb0eafb6ec599bae8212e75b87f67da3286f0291b4c8732", - "platform": { - "architecture": "s390x", - "os": "linux" - } - }, - { - "mediaType": "application/vnd.docker.distribution.manifest.v2+json", - "size": 739, - "digest": "sha256:621288f6573c012d7cf6642f6d9ab20dbaa35de3be6ac2c7a718257ec3aff333", - "platform": { - "architecture": "ppc64le", - "os": "linux" - } - }, - ] +"schemaVersion": 2, +"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json", +"manifests": [ +{ +"mediaType": "application/vnd.docker.distribution.manifest.v2+json", +"size": 739, +"digest": "sha256:a274a1a2af811a1daf3fd6b48ff3d08feb757c2c3f3e98c59c7f85e550a99a32", +"platform": { +"architecture": "arm64", +"os": "linux" +} +}, +{ +"mediaType": "application/vnd.docker.distribution.manifest.v2+json", +"size": 739, +"digest": "sha256:d801c41875f12ffd8211fffef2b3a3d1a301d99f149488d31f245676fa8bc5d9", +"platform": { +"architecture": "amd64", +"os": "linux" +} +}, +{ +"mediaType": "application/vnd.docker.distribution.manifest.v2+json", +"size": 739, +"digest": "sha256:f4423c8667edb5372fb0eafb6ec599bae8212e75b87f67da3286f0291b4c8732", +"platform": { +"architecture": "s390x", +"os": "linux" +} +}, +{ +"mediaType": "application/vnd.docker.distribution.manifest.v2+json", +"size": 739, +"digest": "sha256:621288f6573c012d7cf6642f6d9ab20dbaa35de3be6ac2c7a718257ec3aff333", +"platform": { +"architecture": "ppc64le", +"os": "linux" +} +}, +] } ``` ### 2) (Recommended as a Best Practice) Ensure that node affinity expressions are set to match the supported platforms -Kubernetes provides a mechanism called [nodeAffinity][node-affinity] which can be used to limit the possible node -targets where a pod can be scheduled. This is especially important to ensure correct scheduling behavior in clusters +Kubernetes provides a mechanism called [nodeAffinity][node-affinity] which can be used to limit the possible node +targets where a pod can be scheduled. This is especially important to ensure correct scheduling behavior in clusters with nodes that span across multiple platforms (i.e. heterogeneous clusters). 
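Before hard-coding the affinity expressions, it can help to confirm which platforms the nodes in your target cluster actually report. The well-known labels `kubernetes.io/arch` and `kubernetes.io/os` carry that information, so a quick check (assuming `kubectl` access to the cluster) is:

```shell
# List every node together with its architecture and operating system labels
kubectl get nodes -L kubernetes.io/arch -L kubernetes.io/os
```

The values returned (e.g. `amd64`, `arm64`, `linux`) are the same ones used in the `values:` lists of the nodeAffinity example below.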
**Kubernetes manifest example** ```yaml affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/arch - operator: In - values: - - amd64 - - arm64 - - ppc64le - - s390x - - key: kubernetes.io/os - operator: In - values: - - linux +nodeAffinity: +requiredDuringSchedulingIgnoredDuringExecution: +nodeSelectorTerms: +- matchExpressions: +- key: kubernetes.io/arch +operator: In +values: +- amd64 +- arm64 +- ppc64le +- s390x +- key: kubernetes.io/os +operator: In +values: +- linux ``` **Golang Example** -```go +```go Template: corev1.PodTemplateSpec{ - ... - Spec: corev1.PodSpec{ - Affinity: &corev1.Affinity{ - NodeAffinity: &corev1.NodeAffinity{ - RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ - NodeSelectorTerms: []corev1.NodeSelectorTerm{ - { - MatchExpressions: []corev1.NodeSelectorRequirement{ - { - Key: "kubernetes.io/arch", - Operator: "In", - Values: []string{"amd64"}, - }, - { - Key: "kubernetes.io/os", - Operator: "In", - Values: []string{"linux"}, - }, - }, - }, - }, - }, - }, - }, - SecurityContext: &corev1.PodSecurityContext{ - ... - }, - Containers: []corev1.Container{{ - ... - }}, - }, +... +Spec: corev1.PodSpec{ +Affinity: &corev1.Affinity{ +NodeAffinity: &corev1.NodeAffinity{ +RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ +NodeSelectorTerms: []corev1.NodeSelectorTerm{ +{ +MatchExpressions: []corev1.NodeSelectorRequirement{ +{ +Key: "kubernetes.io/arch", +Operator: "In", +Values: []string{"amd64"}, +}, +{ +Key: "kubernetes.io/os", +Operator: "In", +Values: []string{"linux"}, +}, +}, +}, +}, +}, +}, +}, +SecurityContext: &corev1.PodSecurityContext{ +... +}, +Containers: []corev1.Container{{ +... +}}, +}, ``` @@ -149,48 +149,48 @@ See that projects scaffold with the latest versions of Kubebuilder have the Make $ make docker-buildx IMG=myregistry/myoperator:v0.0.1 ``` -Note that you need to ensure that all images and workloads required and used by your project will provide the same +Note that you need to ensure that all images and workloads required and used by your project will provide the same support as recommended above, and that you properly configure the [nodeAffinity][node-affinity] for all your workloads. Therefore, ensure that you uncomment the following code in the `config/manager/manager.yaml` file ```yaml - # TODO(user): Uncomment the following code to configure the nodeAffinity expression - # according to the platforms which are supported by your solution. - # It is considered best practice to support multiple architectures. You can - # build your manager image using the makefile target docker-buildx. - # affinity: - # nodeAffinity: - # requiredDuringSchedulingIgnoredDuringExecution: - # nodeSelectorTerms: - # - matchExpressions: - # - key: kubernetes.io/arch - # operator: In - # values: - # - amd64 - # - arm64 - # - ppc64le - # - s390x - # - key: kubernetes.io/os - # operator: In - # values: - # - linux +# TODO(user): Uncomment the following code to configure the nodeAffinity expression +# according to the platforms which are supported by your solution. +# It is considered best practice to support multiple architectures. You can +# build your manager image using the makefile target docker-buildx. 
+# affinity: +# nodeAffinity: +# requiredDuringSchedulingIgnoredDuringExecution: +# nodeSelectorTerms: +# - matchExpressions: +# - key: kubernetes.io/arch +# operator: In +# values: +# - amd64 +# - arm64 +# - ppc64le +# - s390x +# - key: kubernetes.io/os +# operator: In +# values: +# - linux ``` @@ -201,12 +201,12 @@ Projects created with the Kubebuilder CLI have two workloads which are: ### Manager -The container to run the manager implementation is configured in the `config/manager/manager.yaml` file. +The container to run the manager implementation is configured in the `config/manager/manager.yaml` file. This image is built with the Dockerfile file scaffolded by default and contains the binary of the project \ which will be built via the command `go build -a -o manager main.go`. -Note that when you run `make docker-build` OR `make docker-build IMG=myregistry/myprojectname:` -an image will be built from the client host (local environment) and produce an image for +Note that when you run `make docker-build` OR `make docker-build IMG=myregistry/myprojectname:` +an image will be built from the client host (local environment) and produce an image for the client os/arch, which is commonly linux/amd64 or linux/arm64. ### How to be able to raise Events? -Following are the steps with examples to help you raise events in your controller's reconciliations. +Following are the steps with examples to help you raise events in your controller's reconciliations. Events are published from a Controller using an [EventRecorder][Events]`type CorrelatorOptions struct`, which can be created for a Controller by calling `GetRecorder(name string)` on a Manager. See that we will change the implementation scaffolded in `cmd/main.go`: ```go - if err = (&controller.MyKindReconciler{ - Client: mgr.GetClient(), - Scheme: mgr.GetScheme(), - // Note that we added the following line: - Recorder: mgr.GetEventRecorderFor("mykind-controller"), - }).SetupWithManager(mgr); err != nil { - setupLog.Error(err, "unable to create controller", "controller", "MyKind") - os.Exit(1) - } +if err = (&controller.MyKindReconciler{ +Client: mgr.GetClient(), +Scheme: mgr.GetScheme(), +// Note that we added the following line: +Recorder: mgr.GetEventRecorderFor("mykind-controller"), +}).SetupWithManager(mgr); err != nil { +setupLog.Error(err, "unable to create controller", "controller", "MyKind") +os.Exit(1) +} ``` ### Allowing usage of EventRecorder on the Controller -To raise an event, you must have access to `record.EventRecorder` in the Controller. Therefore, firstly let's update the controller implementation: +To raise an event, you must have access to `record.EventRecorder` in the Controller. Therefore, firstly let's update the controller implementation: ```go import ( - ... - "k8s.io/client-go/tools/record" - ... +... +"k8s.io/client-go/tools/record" +... ) // MyKindReconciler reconciles a MyKind object type MyKindReconciler struct { - client.Client - Scheme *runtime.Scheme - // See that we added the following code to allow us to pass the record.EventRecorder - Recorder record.EventRecorder +client.Client +Scheme *runtime.Scheme +// See that we added the following code to allow us to pass the record.EventRecorder +Recorder record.EventRecorder } ``` ### Passing the EventRecorder to the Controller @@ -82,15 +82,15 @@ Events are published from a Controller using an [EventRecorder]`type CorrelatorO which can be created for a Controller by calling `GetRecorder(name string)` on a Manager. 
See that we will change the implementation scaffolded in `cmd/main.go`: ```go - if err = (&controller.MyKindReconciler{ - Client: mgr.GetClient(), - Scheme: mgr.GetScheme(), - // Note that we added the following line: - Recorder: mgr.GetEventRecorderFor("mykind-controller"), - }).SetupWithManager(mgr); err != nil { - setupLog.Error(err, "unable to create controller", "controller", "MyKind") - os.Exit(1) - } +if err = (&controller.MyKindReconciler{ +Client: mgr.GetClient(), +Scheme: mgr.GetScheme(), +// Note that we added the following line: +Recorder: mgr.GetEventRecorderFor("mykind-controller"), +}).SetupWithManager(mgr); err != nil { +setupLog.Error(err, "unable to create controller", "controller", "MyKind") +os.Exit(1) +} ``` ### Granting the required permissions @@ -105,8 +105,8 @@ func (r *MyKindReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr ``` And then, run `$ make manifests` to update the rules under `config/rbac/role.yaml`. - -[Events]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#events + +[Events]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#events [Event-Example]: https://github.com/kubernetes/api/blob/6c11c9e4685cc62e4ddc8d4aaa824c46150c9148/core/v1/types.go#L6019-L6024 [Reason-Example]: https://github.com/kubernetes/api/blob/6c11c9e4685cc62e4ddc8d4aaa824c46150c9148/core/v1/types.go#L6048 [Message-Example]: https://github.com/kubernetes/api/blob/6c11c9e4685cc62e4ddc8d4aaa824c46150c9148/core/v1/types.go#L6053 diff --git a/docs/book/src/reference/reference.md b/docs/book/src/reference/reference.md index 1d9026777d8..27d72f9e443 100644 --- a/docs/book/src/reference/reference.md +++ b/docs/book/src/reference/reference.md @@ -1,42 +1,42 @@ # Reference - - [Generating CRDs](generating-crd.md) - - [Using Finalizers](using-finalizers.md) - Finalizers are a mechanism to - execute any custom logic related to a resource before it gets deleted from - Kubernetes cluster. - - [Watching Resources](watching-resources.md) - Watch resources in the Kubernetes cluster to be informed and take actions on changes. - - [Resources Managed by the Operator](watching-resources/operator-managed.md) - - [Externally Managed Resources](watching-resources/externally-managed.md) - Controller Runtime provides the ability to watch additional resources relevant to the controlled ones. - - [Kind cluster](kind.md) - - [What's a webhook?](webhook-overview.md) - Webhooks are HTTP callbacks, there are 3 - types of webhooks in k8s: 1) admission webhook 2) CRD conversion webhook 3) - authorization webhook - - [Admission webhook](admission-webhook.md) - Admission webhooks are HTTP - callbacks for mutating or validating resources before the API server admit - them. - - [Markers for Config/Code Generation](markers.md) +- [Generating CRDs](generating-crd.md) +- [Using Finalizers](using-finalizers.md) +Finalizers are a mechanism to +execute any custom logic related to a resource before it gets deleted from +Kubernetes cluster. +- [Watching Resources](watching-resources.md) +Watch resources in the Kubernetes cluster to be informed and take actions on changes. +- [Resources Managed by the Operator](watching-resources/operator-managed.md) +- [Externally Managed Resources](watching-resources/externally-managed.md) +Controller Runtime provides the ability to watch additional resources relevant to the controlled ones. 
+- [Kind cluster](kind.md) +- [What's a webhook?](webhook-overview.md) +Webhooks are HTTP callbacks, there are 3 +types of webhooks in k8s: 1) admission webhook 2) CRD conversion webhook 3) +authorization webhook +- [Admission webhook](admission-webhook.md) +Admission webhooks are HTTP +callbacks for mutating or validating resources before the API server admit +them. +- [Markers for Config/Code Generation](markers.md) - - [CRD Generation](markers/crd.md) - - [CRD Validation](markers/crd-validation.md) - - [Webhook](markers/webhook.md) - - [Object/DeepCopy](markers/object.md) - - [RBAC](markers/rbac.md) +- [CRD Generation](markers/crd.md) +- [CRD Validation](markers/crd-validation.md) +- [Webhook](markers/webhook.md) +- [Object/DeepCopy](markers/object.md) +- [RBAC](markers/rbac.md) - - [controller-gen CLI](controller-gen.md) - - [completion](completion.md) - - [Artifacts](artifacts.md) - - [Platform Support](platform.md) +- [controller-gen CLI](controller-gen.md) +- [completion](completion.md) +- [Artifacts](artifacts.md) +- [Platform Support](platform.md) - - [Sub-Module Layouts](submodule-layouts.md) - - [Using an external Type / API](using_an_external_type.md) +- [Sub-Module Layouts](submodule-layouts.md) +- [Using an external Type / API](using_an_external_type.md) - - [Metrics](metrics.md) - - [Reference](metrics-reference.md) +- [Metrics](metrics.md) +- [Reference](metrics-reference.md) - - [Makefile Helpers](makefile-helpers.md) - - [CLI plugins](../plugins/plugins.md) +- [Makefile Helpers](makefile-helpers.md) +- [CLI plugins](../plugins/plugins.md) diff --git a/docs/book/src/reference/rescaffold.md b/docs/book/src/reference/rescaffold.md index 2b075cefa5c..160a4e041bd 100644 --- a/docs/book/src/reference/rescaffold.md +++ b/docs/book/src/reference/rescaffold.md @@ -2,17 +2,17 @@ ## Overview -Please note that all input utilized via the Kubebuilder tool is tracked in the PROJECT file ([example][example]). -This file is responsible for storing essential information, representing various facets of the Project such as its layout, +Please note that all input utilized via the Kubebuilder tool is tracked in the PROJECT file ([example][example]). +This file is responsible for storing essential information, representing various facets of the Project such as its layout, plugins, APIs, and more. ([More info][more-info]). -With the release of new plugin versions/layouts or even a new Kubebuilder CLI version with scaffold changes, -an easy way to upgrade your project is by re-scaffolding. This process allows users to employ tools like IDEs to compare +With the release of new plugin versions/layouts or even a new Kubebuilder CLI version with scaffold changes, +an easy way to upgrade your project is by re-scaffolding. This process allows users to employ tools like IDEs to compare changes, enabling them to overlay their code implementation on the new scaffold or integrate these changes into their existing projects. ## When to use it ? -This command is useful when you want to upgrade an existing project to the latest version of the Kubebuilder project layout. +This command is useful when you want to upgrade an existing project to the latest version of the Kubebuilder project layout. It makes it easier for the users to migrate their operator projects to the new scaffolding. ## How to use it ? 
@@ -25,7 +25,7 @@ kubebuilder alpha generate --plugins="pluginkey/version" **To upgrade the scaffold of your project to get the latest changes:** -Currently, it supports two optional params, `input-dir` and `output-dir`. +Currently, it supports two optional params, `input-dir` and `output-dir`. `input-dir` is the path to the existing project that you want to re-scaffold. Default is the current working directory. @@ -38,8 +38,8 @@ kubebuilder alpha generate --input-dir=/path/to/existing/project --output-dir=/p diff --git a/docs/book/src/reference/submodule-layouts.md b/docs/book/src/reference/submodule-layouts.md index 69fd1df8f92..aca7b5047db 100644 --- a/docs/book/src/reference/submodule-layouts.md +++ b/docs/book/src/reference/submodule-layouts.md @@ -9,7 +9,7 @@ Sub-Module Layouts (in a way you could call them a special form of [Monorepo's][ If you are looking to do operations and reconcile via a controller a Type(CRD) which are owned by another project then, please see [Using an external Type](/reference/using_an_external_type.md) for more info. - + ## Overview @@ -41,7 +41,7 @@ You may also lose the ability to use some of the CLI features and helpers. For f ## Adjusting your Project -For a proper Sub-Module layout, we will use the generated APIs as a starting point. +For a proper Sub-Module layout, we will use the generated APIs as a starting point. For the steps below, we will assume you created your project in your `GOPATH` with @@ -71,25 +71,25 @@ module YOUR_GO_PATH/test-operator/api/v1alpha1 go 1.21.0 require ( - k8s.io/apimachinery v0.28.4 - sigs.k8s.io/controller-runtime v0.16.3 +k8s.io/apimachinery v0.28.4 +sigs.k8s.io/controller-runtime v0.16.3 ) require ( - github.com/go-logr/logr v1.2.4 // indirect - github.com/gogo/protobuf v1.3.2 // indirect - github.com/google/gofuzz v1.2.0 // indirect - github.com/json-iterator/go v1.1.12 // indirect - github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect - github.com/modern-go/reflect2 v1.0.2 // indirect - golang.org/x/net v0.17.0 // indirect - golang.org/x/text v0.13.0 // indirect - gopkg.in/inf.v0 v0.9.1 // indirect - gopkg.in/yaml.v2 v2.4.0 // indirect - k8s.io/klog/v2 v2.100.1 // indirect - k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect - sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect - sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect +github.com/go-logr/logr v1.2.4 // indirect +github.com/gogo/protobuf v1.3.2 // indirect +github.com/google/gofuzz v1.2.0 // indirect +github.com/json-iterator/go v1.1.12 // indirect +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect +github.com/modern-go/reflect2 v1.0.2 // indirect +golang.org/x/net v0.17.0 // indirect +golang.org/x/text v0.13.0 // indirect +gopkg.in/inf.v0 v0.9.1 // indirect +gopkg.in/yaml.v2 v2.4.0 // indirect +k8s.io/klog/v2 v2.100.1 // indirect +k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect +sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect +sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect ) ``` @@ -104,9 +104,9 @@ When trying to resolve your main module in the root folder of the operator, you go mod tidy go: finding module for package YOUR_GO_PATH/test-operator/api/v1alpha1 YOUR_GO_PATH/test-operator imports - YOUR_GO_PATH/test-operator/api/v1alpha1: cannot find module providing package YOUR_GO_PATH/test-operator/api/v1alpha1: module YOUR_GO_PATH/test-operator/api/v1alpha1: git ls-remote -q origin in LOCALVCSPATH: exit status 128: - remote: Repository not 
found. - fatal: repository 'https://YOUR_GO_PATH/test-operator/' not found +YOUR_GO_PATH/test-operator/api/v1alpha1: cannot find module providing package YOUR_GO_PATH/test-operator/api/v1alpha1: module YOUR_GO_PATH/test-operator/api/v1alpha1: git ls-remote -q origin in LOCALVCSPATH: exit status 128: +remote: Repository not found. +fatal: repository 'https://YOUR_GO_PATH/test-operator/' not found ``` The reason for this is that you may have not pushed your modules into the VCS yet and resolving the main module will fail as it can no longer diff --git a/docs/book/src/reference/using-finalizers.md b/docs/book/src/reference/using-finalizers.md index 52c8f3cdbd8..3c275b277b1 100644 --- a/docs/book/src/reference/using-finalizers.md +++ b/docs/book/src/reference/using-finalizers.md @@ -8,17 +8,17 @@ on object's deletion from Kubernetes, you can use a finalizer to do that. You can read more about the finalizers in the [Kubernetes reference docs](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#finalizers). The section below demonstrates how to register and trigger pre-delete hooks in the `Reconcile` method of a controller. -The key point to note is that a finalizer causes "delete" on the object to become +The key point to note is that a finalizer causes "delete" on the object to become an "update" to set deletion timestamp. Presence of deletion timestamp on the object indicates that it is being deleted. Otherwise, without finalizers, a delete shows up as a reconcile where the object is missing from the cache. Highlights: - If the object is not being deleted and does not have the finalizer registered, - then add the finalizer and update the object in Kubernetes. +then add the finalizer and update the object in Kubernetes. - If object is being deleted and the finalizer is still present in finalizers list, - then execute the pre-delete logic and remove the finalizer and update the - object. +then execute the pre-delete logic and remove the finalizer and update the +object. - Ensure that the pre-delete logic is idempotent. {{#literatego ../cronjob-tutorial/testdata/finalizer_example.go}} diff --git a/docs/book/src/reference/using_an_external_type.md b/docs/book/src/reference/using_an_external_type.md index 14fcd522048..787b5ed0f04 100644 --- a/docs/book/src/reference/using_an_external_type.md +++ b/docs/book/src/reference/using_an_external_type.md @@ -11,7 +11,7 @@ Currently, kubebuilder handles the first two, CRDs and Core Resources, seamlessl In order to use a Kubernetes Custom Resource that has been defined in another project you will need to have several items of information. * The Domain of the CR -* The Group under the Domain +* The Group under the Domain * The Go import path of the CR Type definition * The Custom Resource Type you want to depend on. 
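+To make the relationship between these pieces concrete, here is a small, hedged Go sketch (the domain, group, version, and kind values are placeholders in the spirit of the examples below; the API group served by Kubernetes is typically `<Group>.<Domain>`):
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"k8s.io/apimachinery/pkg/runtime/schema"
+)
+
+func main() {
+	// Placeholder values: Domain "theirs.com", Group "mygroup",
+	// Version "v1alpha1", Kind "ExternalType".
+	gvk := schema.GroupVersionKind{
+		Group:   "mygroup.theirs.com", // <Group>.<Domain>
+		Version: "v1alpha1",
+		Kind:    "ExternalType",
+	}
+	// Prints: mygroup.theirs.com/v1alpha1, Kind=ExternalType
+	fmt.Println(gvk.String())
+}
+```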
@@ -66,10 +66,10 @@ projectName: testkube repo: example.com resources: - controller: true - domain: my.domain ## <- Replace the domain with theirs.com domain - group: mygroup - kind: ExternalType - version: v1alpha1 +domain: my.domain ## <- Replace the domain with theirs.com domain +group: mygroup +kind: ExternalType +version: v1alpha1 version: "3" ``` @@ -79,8 +79,8 @@ file: internal/controller/externaltype_controller.go ```go // ExternalTypeReconciler reconciles a ExternalType object type ExternalTypeReconciler struct { - client.Client - Scheme *runtime.Scheme +client.Client +Scheme *runtime.Scheme } // external types can be added like this @@ -107,14 +107,14 @@ file: cmd/main.go package apis import ( - theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1" +theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1" ) func init() { - utilruntime.Must(clientgoscheme.AddToScheme(scheme)) - utilruntime.Must(theirgroupv1alpha1.AddToScheme(scheme)) // this contains the external API types - //+kubebuilder:scaffold:scheme -} +utilruntime.Must(clientgoscheme.AddToScheme(scheme)) +utilruntime.Must(theirgroupv1alpha1.AddToScheme(scheme)) // this contains the external API types +//+kubebuilder:scaffold:scheme +} ``` ## Edit the Controller `SetupWithManager` function @@ -126,16 +126,16 @@ file: internal/controllers/externaltype_controllers.go package controllers import ( - theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1" +theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1" ) //... // SetupWithManager sets up the controller with the Manager. func (r *ExternalTypeReconciler) SetupWithManager(mgr ctrl.Manager) error { - return ctrl.NewControllerManagedBy(mgr). - For(&theirgroupv1alpha1.ExternalType{}). - Complete(r) +return ctrl.NewControllerManagedBy(mgr). +For(&theirgroupv1alpha1.ExternalType{}). +Complete(r) } ``` @@ -147,15 +147,15 @@ file: internal/controllers/externaltype_controllers.go package controllers // contains core resources like Deployment import ( - v1 "k8s.io/api/apps/v1" +v1 "k8s.io/api/apps/v1" ) // SetupWithManager sets up the controller with the Manager. func (r *ExternalTypeReconciler) SetupWithManager(mgr ctrl.Manager) error { - return ctrl.NewControllerManagedBy(mgr). - For(&v1.Pod{}). - Complete(r) +return ctrl.NewControllerManagedBy(mgr). +For(&v1.Pod{}). +Complete(r) } ``` @@ -169,7 +169,7 @@ go mod tidy ``` make manifests -``` +``` ## Prepare for testing @@ -182,22 +182,22 @@ file: internal/controllers/suite_test.go package controller import ( - "fmt" - "path/filepath" - "runtime" - "testing" - - . "github.com/onsi/ginkgo/v2" - . "github.com/onsi/gomega" - - "k8s.io/client-go/kubernetes/scheme" - "k8s.io/client-go/rest" - "sigs.k8s.io/controller-runtime/pkg/client" - "sigs.k8s.io/controller-runtime/pkg/envtest" - logf "sigs.k8s.io/controller-runtime/pkg/log" - "sigs.k8s.io/controller-runtime/pkg/log/zap" - //+kubebuilder:scaffold:imports - theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1" +"fmt" +"path/filepath" +"runtime" +"testing" + +. "github.com/onsi/ginkgo/v2" +. 
"github.com/onsi/gomega" + +"k8s.io/client-go/kubernetes/scheme" +"k8s.io/client-go/rest" +"sigs.k8s.io/controller-runtime/pkg/client" +"sigs.k8s.io/controller-runtime/pkg/envtest" +logf "sigs.k8s.io/controller-runtime/pkg/log" +"sigs.k8s.io/controller-runtime/pkg/log/zap" +//+kubebuilder:scaffold:imports +theirgroupv1alpha1 "github.com/theiruser/theirproject/apis/theirgroup/v1alpha1" ) var cfg *rest.Config @@ -205,45 +205,45 @@ var k8sClient client.Client var testEnv *envtest.Environment func TestControllers(t *testing.T) { - RegisterFailHandler(Fail) +RegisterFailHandler(Fail) - RunSpecs(t, "Controller Suite") +RunSpecs(t, "Controller Suite") } var _ = BeforeSuite(func() { - //... - By("bootstrapping test environment") - testEnv = &envtest.Environment{ - CRDDirectoryPaths: []string{ - // if you are using vendoring and rely on a kubebuilder based project, you can simply rely on the vendored config directory - filepath.Join("..", "..", "..", "vendor", "github.com", "theiruser", "theirproject", "config", "crds"), - // otherwise you can simply download the CRD from any source and place it within the config/crd/bases directory, - filepath.Join("..", "..", "config", "crd", "bases"), - }, - ErrorIfCRDPathMissing: false, - - // The BinaryAssetsDirectory is only required if you want to run the tests directly - // without call the makefile target test. If not informed it will look for the - // default path defined in controller-runtime which is /usr/local/kubebuilder/. - // Note that you must have the required binaries setup under the bin directory to perform - // the tests directly. When we run make test it will be setup and used automatically. - BinaryAssetsDirectory: filepath.Join("..", "..", "bin", "k8s", - fmt.Sprintf("1.28.3-%s-%s", runtime.GOOS, runtime.GOARCH)), - } - - var err error - // cfg is defined in this file globally. - cfg, err = testEnv.Start() - Expect(err).NotTo(HaveOccurred()) - Expect(cfg).NotTo(BeNil()) - - //+kubebuilder:scaffold:scheme - Expect(theirgroupv1alpha1.AddToScheme(scheme.Scheme)).To(Succeed()) - - k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme}) - Expect(err).NotTo(HaveOccurred()) - Expect(k8sClient).NotTo(BeNil()) +//... +By("bootstrapping test environment") +testEnv = &envtest.Environment{ +CRDDirectoryPaths: []string{ +// if you are using vendoring and rely on a kubebuilder based project, you can simply rely on the vendored config directory +filepath.Join("..", "..", "..", "vendor", "github.com", "theiruser", "theirproject", "config", "crds"), +// otherwise you can simply download the CRD from any source and place it within the config/crd/bases directory, +filepath.Join("..", "..", "config", "crd", "bases"), +}, +ErrorIfCRDPathMissing: false, + +// The BinaryAssetsDirectory is only required if you want to run the tests directly +// without call the makefile target test. If not informed it will look for the +// default path defined in controller-runtime which is /usr/local/kubebuilder/. +// Note that you must have the required binaries setup under the bin directory to perform +// the tests directly. When we run make test it will be setup and used automatically. +BinaryAssetsDirectory: filepath.Join("..", "..", "bin", "k8s", +fmt.Sprintf("1.28.3-%s-%s", runtime.GOOS, runtime.GOARCH)), +} + +var err error +// cfg is defined in this file globally. 
+cfg, err = testEnv.Start() +Expect(err).NotTo(HaveOccurred()) +Expect(cfg).NotTo(BeNil()) + +//+kubebuilder:scaffold:scheme +Expect(theirgroupv1alpha1.AddToScheme(scheme.Scheme)).To(Succeed()) + +k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme}) +Expect(err).NotTo(HaveOccurred()) +Expect(k8sClient).NotTo(BeNil()) }) diff --git a/docs/book/src/reference/watching-resources.md b/docs/book/src/reference/watching-resources.md index 85c73959368..b93a98576f1 100644 --- a/docs/book/src/reference/watching-resources.md +++ b/docs/book/src/reference/watching-resources.md @@ -10,7 +10,7 @@ This ranges from the easy and obvious use cases, such as watching the resources See each subsection for explanations and examples of the different ways in which your controller can _Watch_ the resources it cares about. - [Watching Operator Managed Resources](watching-resources/operator-managed.md) - - These resources are created and managed by the same operator as the resource watching them. - This section covers both if they are managed by the same controller or separate controllers. +These resources are created and managed by the same operator as the resource watching them. +This section covers both if they are managed by the same controller or separate controllers. - [Watching Externally Managed Resources](watching-resources/externally-managed.md) - - These resources could be manually created, or managed by other operators/controllers or the Kubernetes control plane. \ No newline at end of file +These resources could be manually created, or managed by other operators/controllers or the Kubernetes control plane. \ No newline at end of file diff --git a/docs/book/src/reference/webhook-for-core-types.md b/docs/book/src/reference/webhook-for-core-types.md index dd0258a44c9..353616912cd 100644 --- a/docs/book/src/reference/webhook-for-core-types.md +++ b/docs/book/src/reference/webhook-for-core-types.md @@ -17,24 +17,24 @@ interface. 
```go type podAnnotator struct { - Client client.Client - decoder *admission.Decoder +Client client.Client +decoder *admission.Decoder } func (a *podAnnotator) Handle(ctx context.Context, req admission.Request) admission.Response { - pod := &corev1.Pod{} - err := a.decoder.Decode(req, pod) - if err != nil { - return admission.Errored(http.StatusBadRequest, err) - } - - // mutate the fields in pod - - marshaledPod, err := json.Marshal(pod) - if err != nil { - return admission.Errored(http.StatusInternalServerError, err) - } - return admission.PatchResponseFromRaw(req.Object.Raw, marshaledPod) +pod := &corev1.Pod{} +err := a.decoder.Decode(req, pod) +if err != nil { +return admission.Errored(http.StatusBadRequest, err) +} + +// mutate the fields in pod + +marshaledPod, err := json.Marshal(pod) +if err != nil { +return admission.Errored(http.StatusInternalServerError, err) +} +return admission.PatchResponseFromRaw(req.Object.Raw, marshaledPod) } ``` @@ -58,10 +58,10 @@ If you need a client and/or decoder, just pass them in at struct construction ti ```go mgr.GetWebhookServer().Register("/mutate-v1-pod", &webhook.Admission{ - Handler: &podAnnotator{ - Client: mgr.GetClient(), - decoder: admission.NewDecoder(mgr.GetScheme()), - }, +Handler: &podAnnotator{ +Client: mgr.GetClient(), +decoder: admission.NewDecoder(mgr.GetScheme()), +}, }) ``` diff --git a/docs/kubebuilder_annotation.md b/docs/kubebuilder_annotation.md index ed7a2d80293..c6364feae79 100644 --- a/docs/kubebuilder_annotation.md +++ b/docs/kubebuilder_annotation.md @@ -22,23 +22,23 @@ Delimiter symbols are distinguished to work in different levels from top-down fo - **Colon** - Colon `:` is the 1st level delimiter (to annotation) only for separate tokens. Tokens on different sides of the colon should refer to different token types. +Colon `:` is the 1st level delimiter (to annotation) only for separate tokens. Tokens on different sides of the colon should refer to different token types. - **Comma** - Comma `,` is the 2nd level delimiter (to annotation) for splitting key-value pairs in **key-value elements** which is normally the last token in annotation. e.g. `+kubebuilder:printcolumn:name=,type=,description=,JSONPath:<.spec.Name>,priority=,format=` It works within token which is the 2nd level of annotation, so it is called "2nd level delimiter" +Comma `,` is the 2nd level delimiter (to annotation) for splitting key-value pairs in **key-value elements** which is normally the last token in annotation. e.g. `+kubebuilder:printcolumn:name=,type=,description=,JSONPath:<.spec.Name>,priority=,format=` It works within token which is the 2nd level of annotation, so it is called "2nd level delimiter" - **Equal sign** - Equal sign `=` is the 3rd level delimiter (to annotation) for identifying key and value. Since the `key=value` parts are splitted from single token (2nd level), its inner delimiter `=` works for next level (3rd level) +Equal sign `=` is the 3rd level delimiter (to annotation) for identifying key and value. Since the `key=value` parts are splitted from single token (2nd level), its inner delimiter `=` works for next level (3rd level) - **Semicolon sign** - Semicolon sign `;` is the 4th level delimiter, which works on the `value` part (4th level) of `key=value`(3rd level) for splitting individual values. e.g. `key1=value1;value2;value3` +Semicolon sign `;` is the 4th level delimiter, which works on the `value` part (4th level) of `key=value`(3rd level) for splitting individual values. e.g. 
`key1=value1;value2;value3` - **Pipe sign or Vertical bar** - Pipe sign `|` is the 5th level delimiter, which works inside the single `value` part (4th level) indicating key and value in case of the single value has nested key-value structure. e.g. `outerkey=innerkey1|innervalue1` +Pipe sign `|` is the 5th level delimiter, which works inside the single `value` part (4th level) indicating key and value in case of the single value has nested key-value structure. e.g. `outerkey=innerkey1|innervalue1` ### Examples diff --git a/docs/kubebuilder_v0_v1_difference.md b/docs/kubebuilder_v0_v1_difference.md index dd8736322da..0d44a79398b 100644 --- a/docs/kubebuilder_v0_v1_difference.md +++ b/docs/kubebuilder_v0_v1_difference.md @@ -1,35 +1,35 @@ # Kubebuilder v0 v.s. v1 -Kubebuilder 1.0 adds a new flag `--project-version`, it accepts two different values, `v0` and `v1`. When `v0` is used, the kubebuilder behavior and workflow is the same as kubebuilder 0.*. When `v1` is specified, the generated v1 project layout is architecturally different from v0 project. v1 project uses [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) set of libraries for controller implementation and used tools under [controller-tools](https://github.com/kubernetes-sigs/controller-tools) for scaffolding and generation. +Kubebuilder 1.0 adds a new flag `--project-version`, it accepts two different values, `v0` and `v1`. When `v0` is used, the kubebuilder behavior and workflow is the same as kubebuilder 0.*. When `v1` is specified, the generated v1 project layout is architecturally different from v0 project. v1 project uses [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) set of libraries for controller implementation and used tools under [controller-tools](https://github.com/kubernetes-sigs/controller-tools) for scaffolding and generation. ## Command difference - - kubebuilder v0 has `init`, `create controller`, `create resource`, `create config`, `generate` commands and the workflow is: +- kubebuilder v0 has `init`, `create controller`, `create resource`, `create config`, `generate` commands and the workflow is: ``` - kubebuilder init --domain example.com - kubebuilder create resource --group --version --kind - GOBIN=${PWD}/bin go install ${PWD#$GOPATH/src/}/cmd/controller-manager - bin/controller-manager --kubeconfig ~/.kube/config - - kubectl apply -f hack/sample/.yaml - docker build -f Dockerfile.controller . -t - docker push - kubebuilder create config --controller-image --name - kubectl apply -f hack/install.yaml +kubebuilder init --domain example.com +kubebuilder create resource --group --version --kind +GOBIN=${PWD}/bin go install ${PWD#$GOPATH/src/}/cmd/controller-manager +bin/controller-manager --kubeconfig ~/.kube/config + +kubectl apply -f hack/sample/.yaml +docker build -f Dockerfile.controller . -t +docker push +kubebuilder create config --controller-image --name +kubectl apply -f hack/install.yaml ``` - Every time the resource or controller is updated, users need to run `kubebuilder generate` to regenerate the project. - - kubebuilder v1 has `init`, `create api` commands and the workflow is - +Every time the resource or controller is updated, users need to run `kubebuilder generate` to regenerate the project. 
+- kubebuilder v1 has `init`, `create api` commands and the workflow is + ``` - kubebuilder init --domain example.com --license apache2 --owner "The Kubernetes authors" - kubebuilder create api --group ship --version v1beta1 --kind Frigate - make install - make run +kubebuilder init --domain example.com --license apache2 --owner "The Kubernetes authors" +kubebuilder create api --group ship --version v1beta1 --kind Frigate +make install +make run ``` - - In a v1 project, there is no generate command. When the resource or controller is updated, users don't need to regenerate the project. + +In a v1 project, there is no generate command. When the resource or controller is updated, users don't need to regenerate the project. ## Scaffolding difference @@ -37,20 +37,20 @@ Kubebuilder 1.0 adds a new flag `--project-version`, it accepts two different va - v0 project contains a directory `inject` while v1 project doesn't - v0 project layout follows predefined directory layout `pkg/apis` and `pkg/controller` while v1 project accepts user specified path - In v1 project, there is a `init()` function for every api and controller. - + ## Library difference ### Controller libraries - - v0 projects import the controller library from kubebuilder `kubebuilder/pkg/controller`. It provides a `GenericController` type with a list of functions. - - - v1 projects import the controller libraries from controller-runtime, such as `controller-runtime/pkg/controller`, `controller-runtime/pkg/reconcile`. - +- v0 projects import the controller library from kubebuilder `kubebuilder/pkg/controller`. It provides a `GenericController` type with a list of functions. + +- v1 projects import the controller libraries from controller-runtime, such as `controller-runtime/pkg/controller`, `controller-runtime/pkg/reconcile`. + ### Client libraries - - - In v0 projects, the client libraries is generated by `kubebuilder generate` under directory `pkg/client` and imported wherever they are used in the project. - - - v1 projects import the dynamic client library from controller-runtime `controller-runtime/pkg/client`. - + +- In v0 projects, the client libraries is generated by `kubebuilder generate` under directory `pkg/client` and imported wherever they are used in the project. + +- v1 projects import the dynamic client library from controller-runtime `controller-runtime/pkg/client`. + ## Wiring difference -Wiring refers to the mechanics of integrating controllers to controller-manager and injecting the dependencies in them. - - v0 projects have a `inject` package and it provides functions for adding the controller to controller-manager as well as registering CRDs. - - v1 projects don't have a `inject` package, the controller is added to controller-manager by a `init` function inside add_.go file inside the controller directory. The types are registered by a `init` function inside _types.go file inside the apis directory. \ No newline at end of file +Wiring refers to the mechanics of integrating controllers to controller-manager and injecting the dependencies in them. +- v0 projects have a `inject` package and it provides functions for adding the controller to controller-manager as well as registering CRDs. +- v1 projects don't have a `inject` package, the controller is added to controller-manager by a `init` function inside add_.go file inside the controller directory. The types are registered by a `init` function inside _types.go file inside the apis directory. 
\ No newline at end of file diff --git a/docs/migration_guide.md b/docs/migration_guide.md index 0e59ff4421a..8b05e9ac67b 100644 --- a/docs/migration_guide.md +++ b/docs/migration_guide.md @@ -9,7 +9,7 @@ Find project's domain name from the old project's pkg/apis/doc.go and use it to `kubebuilder init --project-version v1 --domain ` ## Create api -Find the group/version/kind names from the project's pkg/apis. The group and version names are directory names while the kind name can be found from *_types.go. Note that the kind name should be capitalized. +Find the group/version/kind names from the project's pkg/apis. The group and version names are directory names while the kind name can be found from *_types.go. Note that the kind name should be capitalized. Create api in the new project with `kubebuilder create api --group --version --kind ` @@ -25,13 +25,13 @@ Note that in the v1 project, there is a section containing `List` and `ini // HelloList contains a list of Hello type HelloList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata,omitempty"` - Items []Hello `json:"items"` +metav1.TypeMeta `json:",inline"` +metav1.ListMeta `json:"metadata,omitempty"` +Items []Hello `json:"items"` } func init() { - SchemeBuilder.Register(&Hello{}, &HelloList{}) +SchemeBuilder.Register(&Hello{}, &HelloList{}) } ``` @@ -106,9 +106,9 @@ need to be changed to: c, err := controller.New{...} c.Watch(&source.Kind{Type: &myappsv1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{}) c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{ - IsController: true, - OwnerType: &myappsv1alpha1.Memcached{}, - }) +IsController: true, +OwnerType: &myappsv1alpha1.Memcached{}, +}) ``` ### copy other functions @@ -125,8 +125,8 @@ Open the Gopkg.toml file in the old project and find if there is user defined de # Users add deps lines here [prune] - go-tests = true - #unused-packages = true +go-tests = true +#unused-packages = true # Note: Stanzas below are generated by Kubebuilder and may be rewritten when # upgrading kubebuilder versions. diff --git a/docs/testing/e2e.md b/docs/testing/e2e.md index f4ba7840ff3..c4996b73eb8 100644 --- a/docs/testing/e2e.md +++ b/docs/testing/e2e.md @@ -8,57 +8,57 @@ The steps are as follow: 1. Create a test file named `_test.go` populated with template below (referring [this](https://github.com/foxish/application/blob/master/e2e/main_test.go)): ``` import ( - "k8s.io/client-go/tools/clientcmd" - clientset "k8s.io/redis-operator/pkg/client/clientset/versioned/typed//" - ...... +"k8s.io/client-go/tools/clientcmd" +clientset "k8s.io/redis-operator/pkg/client/clientset/versioned/typed//" +...... ) -// Specify kubeconfig file +// Specify kubeconfig file func getClientConfig() (*rest.Config, error) { - return clientcmd.BuildConfigFromFlags("", path.Join(os.Getenv("HOME"), "")) +return clientcmd.BuildConfigFromFlags("", path.Join(os.Getenv("HOME"), "")) } // Set up test environment var _ = Describe(" should work", func() { - config, err := getClientConfig() - if err != nil { - ...... - } +config, err := getClientConfig() +if err != nil { +...... +} + +// Construct kubernetes client +k8sClient, err := kubernetes.NewForConfig(config) +if err != nil { +...... +} - // Construct kubernetes client - k8sClient, err := kubernetes.NewForConfig(config) - if err != nil { - ...... - } +// Construct controller client +client, err := clientset.NewForConfig(config) +if err != nil { +...... 
+} - // Construct controller client - client, err := clientset.NewForConfig(config) - if err != nil { - ...... - } +BeforeEach(func() { +// Create environment-specific resources such as controller image StatefulSet, +// CRDs etc. Note: refer "install.yaml" created via "kubebuilder create config" +// command to have an idea of what resources to be created. +...... +}) - BeforeEach(func() { - // Create environment-specific resources such as controller image StatefulSet, - // CRDs etc. Note: refer "install.yaml" created via "kubebuilder create config" - // command to have an idea of what resources to be created. - ...... - }) +AfterEach(func() { +// Delete all test-specific resources +...... - AfterEach(func() { - // Delete all test-specific resources - ...... +// Delete all environment-specific resources +...... +}) - // Delete all environment-specific resources - ...... - }) +// Declare a list of testing specifications with corresponding test functions +// Note: test-specific resources are normally created within the test functions +It("should do something", func() { +testDoSomething(k8sClient, roClient) +}) - // Declare a list of testing specifications with corresponding test functions - // Note: test-specific resources are normally created within the test functions - It("should do something", func() { - testDoSomething(k8sClient, roClient) - }) - - ...... +...... ``` 2. Write some controller-specific e2e tests 3. Build controller image and upload it to an image storage website such as [gcr.io](https://cloud.google.com/container-registry/) diff --git a/docs/testing/integration.md b/docs/testing/integration.md index 4c456c60ddc..f33d124ec07 100644 --- a/docs/testing/integration.md +++ b/docs/testing/integration.md @@ -9,13 +9,13 @@ For example, there is a controller watching *Parent* objects. The *Parent* objec package parent import ( - _ "k8s.io/client-go/plugin/pkg/client/auth/gcp" - childapis "k8s.io/child/pkg/apis" - childv1alpha1 "k8s.io/childrepo/pkg/apis/child/v1alpha1" - parentapis "k8s.io/parent/pkg/apis" - parentv1alpha1 "k8s.io/parentrepo/pkg/apis/parent/v1alpha1" +_ "k8s.io/client-go/plugin/pkg/client/auth/gcp" +childapis "k8s.io/child/pkg/apis" +childv1alpha1 "k8s.io/childrepo/pkg/apis/child/v1alpha1" +parentapis "k8s.io/parent/pkg/apis" +parentv1alpha1 "k8s.io/parentrepo/pkg/apis/parent/v1alpha1" - ...... +...... ) const timeout = time.Second * 5 @@ -25,53 +25,53 @@ var expectedRequest = reconcile.Request{NamespacedName: types.NamespacedName{Nam var childKey = types.NamespacedName{Name: "child", Namespace: "default"} func TestReconcile(t *testing.T) { - g := gomega.NewGomegaWithT(t) - - // Parent instance to be created. - parent := &parentv1alpha1.Parent{ - ObjectMeta: metav1.ObjectMeta{ - Name: "parent", - Namespace: "default", - }, - Spec: metav1.ParentSpec{ - SomeSpecField: "SomeSpecValue", - AnotherSpecField: "AnotherSpecValue", - }, - } - - // Setup the Manager and Controller. Wrap the Controller Reconcile function - // so it writes each request to a channel when it is finished. - mgr, err := manager.New(cfg, manager.Options{}) - - // Setup Scheme for all resources. - if err = parentapis.AddToScheme(mgr.GetScheme()); err != nil { - t.Logf("failed to add Parent scheme: %v", err) - } - if err = childapis.AddToScheme(mgr.GetScheme()); err != nil { - t.Logf("failed to add Child scheme: %v", err) - } - - // Set up and start test manager. 
- reconciler, err := newReconciler(mgr) - g.Expect(err).NotTo(gomega.HaveOccurred()) - recFn, requests := SetupTestReconcile(reconciler) - g.Expect(add(mgr, recFn)).NotTo(gomega.HaveOccurred()) - defer close(StartTestManager(mgr, g)) - - // Create the Parent object and expect the Reconcile and Child to be created. - c = mgr.GetClient() - err = c.Create(context.TODO(), parent) - g.Expect(err).NotTo(gomega.HaveOccurred()) - defer c.Delete(context.TODO(), parent) - g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest))) - - // Verify Child is created. - child := &childv1alpha1.Child{} - g.Eventually(func() error { return c.Get(context.TODO(), childKey, child) }, timeout). - Should(gomega.Succeed()) - - // Manually delete Child since GC isn't enabled in the test control plane. - g.Expect(c.Delete(context.TODO(), child)).To(gomega.Succeed()) +g := gomega.NewGomegaWithT(t) + +// Parent instance to be created. +parent := &parentv1alpha1.Parent{ +ObjectMeta: metav1.ObjectMeta{ +Name: "parent", +Namespace: "default", +}, +Spec: metav1.ParentSpec{ +SomeSpecField: "SomeSpecValue", +AnotherSpecField: "AnotherSpecValue", +}, +} + +// Setup the Manager and Controller. Wrap the Controller Reconcile function +// so it writes each request to a channel when it is finished. +mgr, err := manager.New(cfg, manager.Options{}) + +// Setup Scheme for all resources. +if err = parentapis.AddToScheme(mgr.GetScheme()); err != nil { +t.Logf("failed to add Parent scheme: %v", err) +} +if err = childapis.AddToScheme(mgr.GetScheme()); err != nil { +t.Logf("failed to add Child scheme: %v", err) +} + +// Set up and start test manager. +reconciler, err := newReconciler(mgr) +g.Expect(err).NotTo(gomega.HaveOccurred()) +recFn, requests := SetupTestReconcile(reconciler) +g.Expect(add(mgr, recFn)).NotTo(gomega.HaveOccurred()) +defer close(StartTestManager(mgr, g)) + +// Create the Parent object and expect the Reconcile and Child to be created. +c = mgr.GetClient() +err = c.Create(context.TODO(), parent) +g.Expect(err).NotTo(gomega.HaveOccurred()) +defer c.Delete(context.TODO(), parent) +g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest))) + +// Verify Child is created. +child := &childv1alpha1.Child{} +g.Eventually(func() error { return c.Get(context.TODO(), childKey, child) }, timeout). +Should(gomega.Succeed()) + +// Manually delete Child since GC isn't enabled in the test control plane. +g.Expect(c.Delete(context.TODO(), child)).To(gomega.Succeed()) } ``` diff --git a/testdata/project-v3/README.md b/testdata/project-v3/README.md index a968e958d3d..ae34d8bebe7 100644 --- a/testdata/project-v3/README.md +++ b/testdata/project-v3/README.md @@ -84,7 +84,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, diff --git a/testdata/project-v4-multigroup-with-deploy-image/README.md b/testdata/project-v4-multigroup-with-deploy-image/README.md index b6543d55d83..fa3d0bc9d08 100644 --- a/testdata/project-v4-multigroup-with-deploy-image/README.md +++ b/testdata/project-v4-multigroup-with-deploy-image/README.md @@ -19,8 +19,8 @@ make docker-build docker-push IMG=/project-v4-multigroup-with-deploy-image:tag ``` -**NOTE:** This image ought to be published in the personal registry you specified. -And it is required to have access to pull the image from the working environment. +**NOTE:** This image ought to be published in the personal registry you specified. +And it is required to have access to pull the image from the working environment. Make sure you have the proper permission to the registry if the above commands don’t work. **Install the CRDs into the cluster:** @@ -35,7 +35,7 @@ make install make deploy IMG=/project-v4-multigroup-with-deploy-image:tag ``` -> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin +> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin. **Create instances of your solution** @@ -104,7 +104,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, diff --git a/testdata/project-v4-multigroup/README.md b/testdata/project-v4-multigroup/README.md index 99be2f2b12b..ac496d0ccaa 100644 --- a/testdata/project-v4-multigroup/README.md +++ b/testdata/project-v4-multigroup/README.md @@ -19,8 +19,8 @@ make docker-build docker-push IMG=/project-v4-multigroup:tag ``` -**NOTE:** This image ought to be published in the personal registry you specified. -And it is required to have access to pull the image from the working environment. +**NOTE:** This image ought to be published in the personal registry you specified. +And it is required to have access to pull the image from the working environment. Make sure you have the proper permission to the registry if the above commands don’t work. **Install the CRDs into the cluster:** @@ -35,7 +35,7 @@ make install make deploy IMG=/project-v4-multigroup:tag ``` -> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin +> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin. **Create instances of your solution** @@ -104,7 +104,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, diff --git a/testdata/project-v4-with-deploy-image/README.md b/testdata/project-v4-with-deploy-image/README.md index d400311c086..785fd347717 100644 --- a/testdata/project-v4-with-deploy-image/README.md +++ b/testdata/project-v4-with-deploy-image/README.md @@ -19,8 +19,8 @@ make docker-build docker-push IMG=/project-v4-with-deploy-image:tag ``` -**NOTE:** This image ought to be published in the personal registry you specified. -And it is required to have access to pull the image from the working environment. +**NOTE:** This image ought to be published in the personal registry you specified. +And it is required to have access to pull the image from the working environment. Make sure you have the proper permission to the registry if the above commands don’t work. **Install the CRDs into the cluster:** @@ -35,7 +35,7 @@ make install make deploy IMG=/project-v4-with-deploy-image:tag ``` -> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin +> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin. **Create instances of your solution** @@ -104,7 +104,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, diff --git a/testdata/project-v4-with-grafana/README.md b/testdata/project-v4-with-grafana/README.md index 96ea5aefcd6..0f0f6cf2575 100644 --- a/testdata/project-v4-with-grafana/README.md +++ b/testdata/project-v4-with-grafana/README.md @@ -19,8 +19,8 @@ make docker-build docker-push IMG=/project-v4-with-grafana:tag ``` -**NOTE:** This image ought to be published in the personal registry you specified. -And it is required to have access to pull the image from the working environment. +**NOTE:** This image ought to be published in the personal registry you specified. +And it is required to have access to pull the image from the working environment. Make sure you have the proper permission to the registry if the above commands don’t work. **Install the CRDs into the cluster:** @@ -35,7 +35,7 @@ make install make deploy IMG=/project-v4-with-grafana:tag ``` -> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin +> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin. **Create instances of your solution** @@ -104,7 +104,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, diff --git a/testdata/project-v4/README.md b/testdata/project-v4/README.md index 390e8a64031..ba792bdf749 100644 --- a/testdata/project-v4/README.md +++ b/testdata/project-v4/README.md @@ -19,8 +19,8 @@ make docker-build docker-push IMG=/project-v4:tag ``` -**NOTE:** This image ought to be published in the personal registry you specified. -And it is required to have access to pull the image from the working environment. +**NOTE:** This image ought to be published in the personal registry you specified. +And it is required to have access to pull the image from the working environment. Make sure you have the proper permission to the registry if the above commands don’t work. **Install the CRDs into the cluster:** @@ -35,7 +35,7 @@ make install make deploy IMG=/project-v4:tag ``` -> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin +> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin. **Create instances of your solution** @@ -104,7 +104,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 +http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,