Add Manila CephNFS support #620

Open
wants to merge 1 commit into main from the manila_nfs branch
Conversation

@fmount (Contributor) commented Sep 24, 2024

This patch introduces a set of basic tasks to perform the adoption of Manila with a CephNFS backend.
Before performing the actual adoption, a new cephadm-based CephNFS cluster must be created.
For this reason, the manila_nfs tasks require a few parameters as input:

  1. A new CephNFS VIP where the CephIngress daemon (composed of HAProxy and keepalived) is created;
  2. A set of Ceph target_nodes where the "nfs" label is added: these nodes are supposed to host the new CephNFS cluster and must be different from the controller nodes that are going to be decommissioned;
  3. The TripleO-managed Ganesha VIP: this input can actually be retrieved from the existing TripleO deployment, but we can gather it as part of the next iteration on this patch.

After the new CephNFS cluster is created, it is possible to build a manila-share with the expected NFS configuration.
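
For illustration only, the cephadm service definitions that such tasks typically end up applying might look like the sketch below; the service IDs, placement label, ports and virtual IP are placeholders, not values taken from this patch:

    # Hypothetical NFS service pinned to the nodes carrying the "nfs" label;
    # port 12049 keeps the backend clear of the ingress frontend on 2049.
    service_type: nfs
    service_id: cephfs
    placement:
      label: nfs
    spec:
      port: 12049
    ---
    # Hypothetical ingress (haproxy + keepalived) fronting the NFS service
    # on the new CephNFS VIP.
    service_type: ingress
    service_id: nfs.cephfs
    placement:
      label: nfs
    spec:
      backend_service: nfs.cephfs
      frontend_port: 2049
      monitor_port: 9049
      virtual_ip: 172.17.5.100/24

A spec along these lines would normally be applied with "ceph orch apply -i <file>", or created implicitly by the equivalent "ceph nfs cluster create ... --ingress --virtual-ip" command.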

@fmount (Contributor, Author) commented Sep 24, 2024

@lkuchlan I might need your help to testproject this use case and to define the set of parameters we need to pass as overrides when NFS is set as a backend.
@gouthampacha I have started doing some sanity checks against this code, trying to reduce (as much as possible) the number of overrides we need to set up in the job.
For the reasons above, I'm holding this change for now.
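
Purely as a placeholder for that discussion, the job overrides would presumably map onto the three inputs listed in the description, along these lines (variable names and values are invented for illustration, not the ones actually defined by this patch):

    # Hypothetical testproject overrides; names and values are illustrative only.
    manila_nfs_vip: 172.17.5.100          # new CephNFS/ingress VIP
    manila_nfs_target_nodes:              # Ceph nodes that receive the "nfs" label
      - ceph-0
      - ceph-1
      - ceph-2
    manila_nfs_ganesha_vip: 172.17.5.47   # TripleO-managed Ganesha VIP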

@fmount fmount force-pushed the manila_nfs branch 4 times, most recently from 951e078 to a503e17 on September 24, 2024 at 12:03

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/c17fe76e4a084abc8ee5afb3fe05352e

adoption-standalone-to-crc-ceph FAILURE in 2h 37m 42s
adoption-standalone-to-crc-no-ceph FAILURE in 2h 41m 37s

@fmount fmount force-pushed the manila_nfs branch 3 times, most recently from d493cbd to ae2ad92 on October 1, 2024 at 11:46
@fmount fmount force-pushed the manila_nfs branch 2 times, most recently from 77d923b to 77582d7 on October 1, 2024 at 11:53
@fmount (Contributor, Author) commented Oct 1, 2024

@gouthampacha @lkuchlan I think we're still missing something here. As a prerequisite, while the overcloud still exists, we need to propagate the StorageNFS network to the Ceph target nodes. For this reason we might need an additional change where we do that beforehand.
I'm thinking of creating an additional PR where we can discuss and focus on the following:

  1. build the appropriate baremetal file from the undercloud
  2. add the StorageNFS network to the Ceph target nodes
  3. run overcloud node provision --network-config
  4. verify the network exists on the target nodes

While the target nodes would still be passed as input, the above would also remove the requirement of providing the new VIP.
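
As a rough sketch of steps 1 and 2, the per-role entry in the baremetal provisioning file built on the undercloud would gain the StorageNFS network before re-running the provision command with --network-config; the role name, count, network keys and template path below are illustrative and depend on the deployment's actual network_data:

    # Hypothetical baremetal deployment file snippet for the Ceph role.
    - name: CephStorage
      count: 3
      defaults:
        networks:
          - network: ctlplane
            vif: true
          - network: storage
          - network: storage_nfs   # StorageNFS propagated to the Ceph target nodes
        network_config:
          template: /home/stack/templates/ceph-storage-nics.j2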


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/bb8e08db65c045ac98ed9d23bdc24d15

✔️ adoption-standalone-to-crc-ceph SUCCESS in 2h 55m 38s
adoption-standalone-to-crc-no-ceph RETRY_LIMIT in 26m 29s

This patch introduces a set of basic tasks to perform the adoption of
Manila with a CephNFS backend.

Signed-off-by: Francesco Pantano <[email protected]>