Adds script to auto create libvirt podvm config - #435 #436
base: devel
Conversation
Hi @Saripalli-lavanya. Thanks for your PR. I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@Saripalli-lavanya Please also add the script to clean up when KataConfig is unconfigured.
Hi @Saripalli-lavanya and @savitrilh! Thanks for all the huge work you are doing 😄 I'll just suggest that you break down the first commit into smaller, logical, bisectable pieces. Even if this might look like extra work, it is largely beneficial for everyone.
Also, we don't have pre-merge CI currently: bugs will be found late. Being able to bisect can really make a difference for the non-obvious ones. Cheers!
Added a script that will help create and clean up the libvirt configurations needed for the Sandboxed Containers operator. Signed-off-by: Saripalli Lavanya <[email protected]> and Signed-off-by: savitrilh <[email protected]>
Sandboxed containers operator to set up peer-pod config. Signed-off-by: Saripalli Lavanya <[email protected]> and Signed-off-by: savitrilh <[email protected]>
Sandboxed containers operator to clean up peer-pod config. Signed-off-by: Saripalli Lavanya <[email protected]> and Signed-off-by: savitrilh <[email protected]>
…n setup script Updated the builder script to include installation of dependencies and the libvirt setup script, which helps configure libvirt and clean up. Signed-off-by: Saripalli Lavanya <[email protected]> and Signed-off-by: savitrilh <[email protected]>
Added functions to retrieve ssh-key-secret, ocp-libvirt-secret, and libvirt-podvm-image config map in the operator. Signed-off-by: Saripalli Lavanya <[email protected]> and Signed-off-by: savitrilh <[email protected]>
Added constants for job statuses and implemented functions for pre-configuration and post-unconfiguration tasks in the libvirt setup.
Refactored configuration checks and cleanup handling in the confidential handler and OpenShift controller.
```diff
@@ -59,6 +59,8 @@ const (
 	LibvirtProvider               = "libvirt"
 	peerpodsImageJobsPathLocation = "/config/peerpods/podvm"
 	azureImageGalleryPrefix       = "PodVMGallery"
+	preConfigJobName              = "osc-podvm-pre-configuration"
```
On a generic note, I'm really not in favour of adding more jobs to the already complicated image generation code.
Can't the jobs be run in a standalone fashion (the admin runs `oc apply -f <job.yaml>` or a script) to start with, without code integration?
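For illustration, a minimal standalone Job manifest along those lines could look like the sketch below. The job name is taken from the diff in this PR; the namespace, container image, and script path are assumptions, not values from the actual change:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: osc-podvm-pre-configuration        # name from the diff in this PR
  namespace: openshift-sandboxed-containers-operator   # assumed namespace
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pre-config
        image: quay.io/example/podvm-builder:latest    # hypothetical builder image
        command: ["/scripts/libvirt-setup.sh", "pre-config"]  # hypothetical script path
```

An admin could then run it with `oc apply -f pre-config-job.yaml` and wait for completion, with no operator code changes involved.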
@bpradipt If we provide a builder image built with these scripts in the job, the OCP admin can run this job as a standalone process. We believe that the creation and deletion of ConfigMaps, Secrets, and libvirt pools/volumes can be automated with OSC. We attempted to synchronize this with the image job and the code. Additionally, we made it scalable so that other providers can also manage their ConfigMaps and Secrets by extending the provider-specific data.
Can't the configmap and secret updates be de-coupled from image generation ?
@bpradipt It is certainly possible to decouple the ConfigMap and Secret creation from image generation.
This could be implemented as follows:
- The file config/peerpods/podvm/libvirt-podvm-secret.yaml contains all user input.
- Add a new key/flag to config/peerpods/podvm/libvirt-podvm-secret.yaml, e.g. IMAGE_GEN="YES".
- This is an optional parameter; by default it is treated as "YES". If the admin does not wish to generate an image, it can be set to "NO", in which case the image ConfigMap and the libvirt pool/volume will not be created.
- By default, the peer-pods-cm ConfigMap and the Secrets are still created.

If this option is fine, we can implement it and drop those code changes.
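As a rough sketch of that proposal (the Secret name and overall shape are assumptions; only the file path and the IMAGE_GEN key come from the discussion above):

```yaml
# config/peerpods/podvm/libvirt-podvm-secret.yaml -- proposed addition, sketch only
apiVersion: v1
kind: Secret
metadata:
  name: libvirt-podvm-secret          # hypothetical Secret name
stringData:
  IMAGE_GEN: "YES"   # optional; "NO" skips creating the image ConfigMap and libvirt pool/volume
```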
Let's hear what other maintainers have to say before you make any changes.
The related issue is quite old and a lot has changed since then. So I think some discussion is needed on the linked issue: #435
Thanks @bpradipt
Please let me know where I need to initiate this discussion, and I will start it.
```shell
if [ "\$POOL_EXISTS" -eq 0 ] && [ "\$VOL_EXISTS" -eq 0 ]; then
    echo "A Libvirt pool named '${libvirt_pool}' with volume '${libvirt_vol_name}' already exists on the KVM host. Please choose a different name."
    exit 0
```
There is no way to re-use an existing pool?
No big deal if it's a mandatory requirement, but if we can keep a little bit of flexibility, that would be nice - if only for development/test setups :-)
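For what it's worth, one way the existence check could be factored so that re-use stays possible is sketched below. The `pool_exists`/`vol_exists` helper names are hypothetical, and the `virsh` function here is a mock purely for illustration on a machine without libvirt; on a real KVM host the mock would be removed so the real `virsh` is invoked:

```shell
# Hypothetical helpers wrapping real virsh queries (sketch, not the PR's code).
pool_exists() { virsh pool-info "$1" >/dev/null 2>&1; }
vol_exists()  { virsh vol-info --pool "$1" "$2" >/dev/null 2>&1; }

# Mock virsh for illustration only -- remove on a real KVM host.
virsh() {
  case "$1" in
    pool-info) [ "$2" = "default" ] ;;        # pretend pool "default" exists
    vol-info)  [ "$4" = "podvm-image" ] ;;    # pretend volume "podvm-image" exists
    *) return 1 ;;
  esac
}

# Instead of exiting, a caller could branch here to re-use the existing pool.
if pool_exists default && vol_exists default podvm-image; then
  echo "pool/volume already in use"
fi
```

Splitting the checks this way would let a development setup opt into re-using an existing pool instead of always bailing out.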
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now, please do so with /close. /lifecycle stale
- Description of the problem which is fixed/What is the use case
Added a script to automatically create a libvirt pool and volume on the KVM host; upon successful creation, the same script will create the peer-pod Secret, ConfigMap, and SSH Secret. - #435
- What I did
- How to verify it
- Description for the changelog