Add mesh VPN support to the CDH #763
Conversation
A few comments on the first half of the PR
```rust
#[arg(short, long)]
pod_name: String,
#[arg(short, long)]
lighthouse_pub_ip: String,
```
I think this could maybe come from the CDH config file (with init-data coming soon) rather than be passed from the caller.
I like this idea. Here's an example from the repo. So you're thinking a new top-level key, e.g. "overlay-network"?
I updated the example configs and also CdhConfig to include these fields.
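For context, a minimal sketch of what that config section might look like, assuming CdhConfig is deserialized with serde; the struct and field names below simply mirror the CLI args above and are illustrative, not necessarily the shape the PR settled on:

```rust
use serde::Deserialize;

/// Hypothetical top-level section keyed as "overlay-network" in the CDH
/// config file; the fields mirror the CLI args and are illustrative.
#[derive(Deserialize, Clone, Debug)]
pub struct OverlayNetworkConfig {
    /// Name used to request credentials for this pod.
    pub pod_name: String,
    /// Public IP of the Nebula lighthouse node.
    pub lighthouse_pub_ip: String,
}

#[derive(Deserialize, Clone, Debug)]
pub struct CdhConfig {
    // ... existing fields ...
    /// Optional, so the overlay network stays disabled unless configured.
    #[serde(rename = "overlay-network")]
    pub overlay_network: Option<OverlayNetworkConfig>,
}
```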
```rust
/// - Ask trustee for its nebula credentials
/// - Start the nebula daemon.
pub async fn init(&self) -> Result<()> {
    let is_lighthouse: bool = self.lighthouse_ip.is_empty();
```
So we're going to use one of the pods as the lighthouse?
... yea. Thoughts? We looked at two designs: (1) the lighthouse is a coco pod (we went with this for now); (2) the lighthouse runs and lives alongside trustee.
I would do #2. Trustee should already be at some static IP known to all pods, and it won't move or go down during the lifecycle of the workload. This would also let you more easily have two completely different workloads sharing the same overlay network.
I like the merits of #2. Maybe the thing to do is strip the ability to launch the lighthouse here (which will simplify things); then create a new issue and PR for lighthouse support in trustee. Thoughts?
I stripped lighthouse support from this PR (and thus from guest-components). It will need some kind of support from the trustee side now.
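For reference, a rough sketch of what the simplified flow could look like with the lighthouse path removed. Everything here is illustrative: `request_credentials` is a stub standing in for whatever Trustee/KBS call ends up supplying the Nebula certificate and key, and the binary and file paths are made up, not the ones used by this PR:

```rust
use anyhow::{Context, Result};
use tokio::process::Command;

/// Minimal stand-in for this PR's NebulaMesh type; fields are illustrative.
pub struct NebulaMesh {
    pub pod_name: String,
    pub lighthouse_pub_ip: String,
}

impl NebulaMesh {
    /// Sketch of the simplified flow once the lighthouse path is gone:
    /// fetch credentials, persist them, start the daemon.
    pub async fn init(&self) -> Result<()> {
        // Hypothetical call standing in for the Trustee/KBS request that
        // returns this pod's Nebula certificate and private key.
        let (cert, key) = self.request_credentials().await?;
        tokio::fs::write("/run/nebula/host.crt", &cert).await?;
        tokio::fs::write("/run/nebula/host.key", &key).await?;

        // Config rendering is omitted here; paths are made up for the sketch.
        Command::new("/usr/local/bin/nebula")
            .arg("-config")
            .arg("/run/nebula/config.yml")
            .spawn()
            .context("failed to start the nebula daemon")?;
        Ok(())
    }

    /// Stub for the credential request against Trustee.
    async fn request_credentials(&self) -> Result<(Vec<u8>, Vec<u8>)> {
        unimplemented!("ask Trustee for Nebula credentials")
    }
}
```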
It could be cool to make these configs more object-oriented. Rather than just storing a string config, you could have a struct that stores a string and knows how to write itself to a certain place and such. Not a requirement, but could be nifty.
I think keeping the config as one giant string (like we have here, and like what I think you're still suggesting) is good: it stays readable and is easy to update when a default value changes or a Nebula update requires a tweak. But wrapping it in a struct that manages how and where it's written out (rather than burying those details in the NebulaMesh::init function) would be better than how it is now.
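One way to read that middle ground is the following sketch: the config stays a single template string, but a dedicated type owns the rendering and the write-out. The type name, the abbreviated template, and the path are all illustrative:

```rust
use anyhow::{Context, Result};
use std::path::PathBuf;

/// Sketch: the Nebula config stays one big string template, but this type
/// owns how it is rendered and where it is written, instead of burying
/// those details in NebulaMesh::init.
pub struct NebulaConfig {
    contents: String,
    path: PathBuf,
}

impl NebulaConfig {
    pub fn new(lighthouse_ip: &str) -> Self {
        // Abbreviated stand-in for the real template in the PR.
        const TEMPLATE: &str = r#"
lighthouse:
  hosts:
    - "{lighthouse_ip}"
"#;
        Self {
            contents: TEMPLATE.replace("{lighthouse_ip}", lighthouse_ip),
            path: PathBuf::from("/run/nebula/config.yml"),
        }
    }

    /// Write the rendered config to its one well-known location.
    pub fn write(&self) -> Result<()> {
        if let Some(dir) = self.path.parent() {
            std::fs::create_dir_all(dir)?;
        }
        std::fs::write(&self.path, &self.contents)
            .with_context(|| format!("writing {}", self.path.display()))
    }
}
```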
Do we want a compile-time feature flag for the overlay-network AND a CdhConfig setting to enable it? We should definitely have the config setting. I'm wondering if the compile-time feature confuses the matter (though it has its merits).
We probably should have a feature so the code can be completely removed for trypophobes (or people who just want a really small guest TCB).
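A hedged sketch of how the two gates could compose, assuming a Cargo feature named `overlay-network` (the name is illustrative): the feature removes the code from the build entirely, while the config setting decides at runtime whether anything actually starts:

```rust
// Compile-time gate: with the (illustrative) Cargo feature disabled, the
// module drops out of the build entirely, shrinking the guest TCB.
#[cfg(feature = "overlay-network")]
pub mod overlay_network {
    pub fn start() -> anyhow::Result<()> {
        // ... bring up the Nebula mesh ...
        Ok(())
    }
}

/// Runtime gate: even when compiled in, nothing runs unless the config
/// enables it. `enabled_in_config` stands in for checking the CdhConfig
/// section discussed above.
pub fn maybe_start(enabled_in_config: bool) -> anyhow::Result<()> {
    #[cfg(feature = "overlay-network")]
    if enabled_in_config {
        return overlay_network::start();
    }
    let _ = enabled_in_config;
    Ok(())
}
```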
Signed-off-by: Chris Porter <[email protected]>
RFC issue is here
This PR is meant to hold the main guest functionality for overlay network support with Nebula. The main additions are in confidential-data-hub/overlay-network.
Related items not included in this PR: