Helm3 design proposal #1250
Conversation
- Technical proofreading
- Fix some typos
Since you proposed … But the name won't be the biggest show-stopper around, so I don't mind that much really.
Thanks! In general, the proposal looks good to me, I just have some questions regarding the naming.
Related to the name of the service, I think this can still be considered a proxy? Maybe instead of `backend` I would name it `kubeapps-proxy`, but maybe I am missing something.
Let me know what you think!
> We see no reason to implement a proxy for Helm 3, but rather what we call an agent.
Well, it'd still be a proxy, but to the Kubernetes API directly instead of to Tiller (as far as I understood).
> We should update the API version from v1 to v2, to clearly distinguish between the old and the new subsystems.
The problem with this is that we would need to implement the two API endpoints in the dashboard code, but I guess that's fine.
Hrm... so this is what I'd like to avoid if possible - for the duration that we support both helm2 and helm3, it'd be great if we could stick to the existing interface (`CreateRelease`, `UpgradeRelease` for the handler) so that the change is isolated and the frontend does not need to know. Once we remove helm2 support, I'd be happy to diverge...
So my question is: Why should we update the API version from v1 to v2 and clearly distinguish? Is there a difference in functionality as far as the API contract between the frontend and this service?
> ![Current situation](https://user-images.githubusercontent.com/7773090/67413010-ac044e00-f5c0-11e9-93e9-f3cdd1eeaca8.PNG)
>
> **With the new additions:**
> ![With the new additions](https://user-images.githubusercontent.com/7773090/67413025-b45c8900-f5c0-11e9-8961-67377bc8faad.PNG)
What's the difference between `helm3-agent` and the Helm 3 actions library? Can't those be merged? I am not sure if I am wrong, but I would expect something like backend -> helm3-lib -> K8s API.
> what's the difference between helm3-agent and Helm 3 actions library? Can't those be merged?

I think it's to match the graphic for the existing PNG, but I had the same question about that? The diagrams should (IMO) either be about requests or about implementation modules, but these seem to conflate the two, which is a bit confusing. What I'd expect to see is something like:
Current:
REST API -> tiller-proxy service -> Helm Tiller
New:
REST API -> Helm3 agent -> Kubernetes API
The fact that the tiller-proxy service is built using the Go module `pkg/proxy` isn't relevant to requests (but may be relevant to implementation), and similarly, that the Helm3 agent is using a `pkg/helm3actions` or similar.
But I could be misunderstanding the point of these - sorry in advance if so :)
Thanks for all the thought that's going into this! I've got a bunch of questions and thoughts for discussion below.
Sorry it's so long - none of the points below are blockers at all, I just want to ensure we discuss, understand and agree on the pros and cons.
> We believe that the transition to Helm 3 can be done in such a fashion that both the old tiller-proxy and the new Helm 3 components can coexist. However, we propose that the initial effort not touch tiller-proxy, but create a completely new component.
By component, I'm assuming we're talking about the actual service (ie. a docker image serving requests, rather than a go module or similar)?
So I'd initially suggested that we use (and rename) the existing service for two reasons, neither of which is mega important or can't be solved in a different way - see what you think:
- First, it would ensure that we support the same API for the frontend to use (ie. CreateRelease, UpgradeRelease, RollbackRelease, ListAllReleases, etc.) as we would just have a different implementation depending on whether helm 2 or 3 was configured. This means that no other component needs to change (potentially?) - for example, the frontend dashboard does not know anything about the change at all for now (later when we remove helm 2 support we can reconsider).
That said, either way I'd be keen to first extract the current handler functionality (`CreateRelease`, `UpgradeRelease`, etc.) to a Go interface that is implemented by the existing `TillerProxy`, so that we're sure the new helm3 handler implements exactly the same functionality (request handling); there's a rough sketch of what I mean below. At that point, I don't see that it would matter whether the switch between the two was config to that single service, or config to nginx to send requests to a separate service, so if that's what you think is best, +1 - as long as we have the interface and can stick to that. Does that make sense?
- My second reason was that I have future plans to improve the authn to be cookie-based in general (as it already is when using the auth proxy), which for the non auth-proxy situation, requires a backend to encrypt the k8s token and return a cookie. But that's in the future and may not eventuate anyway, so I don't think it deserves any weight at all.
So happy either way - but keen to know the benefits of doing it as a separate service implementing the same interface/api rather than a second implementation of the same interface in the existing service.
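To make that concrete, here is a minimal sketch of the kind of interface extraction I have in mind. All names, fields and signatures below are purely illustrative placeholders, not the actual Kubeapps code:

```go
package handler

// Release is a stand-in for whatever release representation the dashboard
// already consumes today; the real type would stay whatever it is now.
type Release struct {
	Name      string
	Namespace string
	Version   int
	Manifest  string
}

// ReleaseManager captures the current handler operations behind one
// interface. The existing TillerProxy would implement it, and the new
// Helm 3 agent would be a second implementation, so the HTTP layer (and
// therefore the dashboard) would not need to change.
type ReleaseManager interface {
	ListAllReleases() ([]Release, error)
	CreateRelease(name, namespace, valuesYAML string) (*Release, error)
	UpgradeRelease(name, namespace, valuesYAML string) (*Release, error)
	RollbackRelease(name, namespace string, revision int) (*Release, error)
	DeleteRelease(name, namespace string, purge bool) error
}
```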
We'll give a full answer soon. But just to clarify what we mean by "component": it's anything that has a `main.go` file. So right now, we call `tiller-proxy` a component.
> However, we propose that the initial effort not touch tiller-proxy, but create a completely new component. The choice between Helm 2 support and Helm 3 support should be made at deploy time in Helm (e.g. by setting `helm3=true` in Values).
Right, I think we all agreed on that, but if I'm reading correctly, you actually mean that this switch will mean frontend requests being sent to a different service (rather than frontend requests being handled by a different module in the existing service)? (which is fine, just good to be clear).
> …the token are already provided. Basically all we need to do is to call the `InClusterConfig` method and overwrite the `BearerToken` field in the received configuration with the token string we get from the Dashboard.
Right - though I don't have the context (perhaps you can link to the relevant code/docs for the helm3 client code?), it should be as simple as using the bearer token.
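For reference, this is the kind of minimal client-go sketch I'd expect (the function name and error handling are made up for illustration, and I'm assuming the Helm 3 client can be configured from the resulting `rest.Config` one way or another):

```go
package agent

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientForToken builds a Kubernetes client that acts as the user whose
// bearer token was forwarded by the Dashboard, while reusing the in-cluster
// CA bundle and API server address. Hypothetical sketch, not actual code.
func clientForToken(userToken string) (kubernetes.Interface, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// Replace the pod's service-account credentials with the user's token.
	config.BearerToken = userToken
	config.BearerTokenFile = "" // make sure the service-account token file is not used instead

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}
	return client, nil
}
```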
> Since Helm 2 built on Tiller, all authentication to the Kubernetes cluster happened "over there" and Kubeapps did not need to authenticate anything other than the communication with Tiller.
This was not clear to me at first, but after digging a bit, I think I understand and agree, though would expand and clarify a little as follows:
Since Helm 2 is built on the assumption that the Tiller service runs in-cluster with its own service account (as per Helm's Securing your Helm Installation document: "Tiller in its current form does not provide a way to map user credentials to specific permissions within Kubernetes. When Tiller is running inside of the cluster, it operates with the permissions of its service account."), the only credential Kubeapps was required to provide for authentication with Tiller was the `ca.crt` - and then only if Tiller had been configured with TLS. For this reason, Kubeapps currently authenticates with the Kubernetes API server using the user's bearer token, not only when talking directly to the API server but also to verify permissions before making any request to Tiller.
happened "over there" and Kubeapps did not need to authenticate anything other | ||
than the communication with Tiller. | ||
Now, with Helm 3, all authentication is handled by the Kubernetes API Server | ||
via the credentials provided by kubectl. |
"via the credentials provided by the user."? I'm not sure how kubectl is relevant here? Ah, or you mean when using the helm3 CLI it just uses the same credentials that kubectl uses (from ~/.kube/config or similar). If that's what you mean, I think the CLI less relevant - the main point is that helm 3 uses the credentials provided by the user for accessing the Kubernetes API server, I think?
> …via the credentials provided by kubectl. We have created a PoC using the Helm Library in Helm 3, and we have found that it should be relatively straight-forward to implement since both `ca.crt` and the token are already provided.
Right, but doesn't Kubeapps already do this for every call to the K8s API server? (ie. it sends the user-provided `Authorization: Bearer ****` header to the K8s API server.)
I'm not understanding how or why this would be different. We don't currently support individual users configuring a client ca file for requests to the k8s api server - we only support authentication tokens for users, so I'm not sure that we'd need or want to do so here... the token is the only relevant credential (which can be a service account token or openidconnect id_token or anything the k8s api server is configured to use).
I'm not sure if there's a little confusion because tiller-proxy was updated to allow setting a `ca.crt` for authentication with tiller? This was required if running the tiller service securely, but I don't see how it's relevant for Kubeapps with Helm 3 (I mean, I get that the helm3 CLI might use a ca.crt for authn because that's what's configured for kubectl, but I'm not seeing how that's relevant to the Kubeapps use-case where we are not using certificate-based authentication for users - only for a single non-user, the tiller-proxy, in the helm2 setup).
Totally agree - I don't care too much about the name; the only reason I was proposing …
The last straw for me was how unreadable the text became when presented in comments on vmware-tanzu#1250.
Thanks for all your valuable feedback, @andresmgot and @absoludity. We'll discuss within the team and hopefully come up with some good answers to your questions.
Hi again. We've now discussed your feedback and tried to come up with good answers. We decided to stick to one single comment; hopefully that will do.
We agree that it shouldn't be necessary to modify the dashboard. Regarding that, could you please confirm or deny this:
We may be missing some important point here, but the diagram above seems to represent replacing Helm 2 support with Helm 3 support. Our diagram was intended to represent adding Helm 3 support while keeping Helm 2 support.
At first, we wanted to define a common interface for …

As @lindhe mentioned, this may just be a matter of us using the wrong terminology. We think that …

No. Now, with Helm 3, all authentication is handled by the Kubernetes API Server, using the already available bearer token.

We're also considering …

The context is that creating a …

If you feel we have misunderstood something or haven't addressed all your points, just let us know.
Hi, thanks for the comment! I will reply inline with my thoughts to give early feedback but @absoludity may think otherwise.
True. The thing is that right now this …

That's fine, I think we are just confused because the diagram mixes go packages (like …)
Yes, that's fine. We mean the same.
This may be a problem. Note that we are not interested in maintaining the same interface in the Golang service. Tiller-proxy can keep sending back something like https://github.com/kubeapps/kubeapps/blob/master/dashboard/src/shared/hapi/release.d.ts#L9

If the returned value from helm 3 doesn't match that interface, that's when we have a problem. Can you confirm whether a helm 3 release does not match that interface? Note that in that case, we could define a minimum common interface between a release in helm 2 and helm 3, or we can post-process the information returned by helm in the …

Yes, but this …
That's fine.
I would then call it just …

Yes, using …
Nope - I agree with your responses, @andresmgot. Thanks @SimonAlling for all the clarifications!
Yes, what Andres said. To be specific, I meant that we should define an interface for the following methods which are defined in https://github.com/kubeapps/kubeapps/blob/master/cmd/tiller-proxy/internal/handler/handler.go#L169 so that we are guaranteed that the dashboard does not need to change (until we remove helm2 support later at which point we can remove this restriction):
which would require that the service continues to understand the release sent by the frontend (and return the same where appropriate).
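Roughly, I'm imagining something along these lines - a sketch only, where the method names come from the conversation above and the signatures are placeholders rather than the real ones in handler.go:

```go
package handler

import "net/http"

// ReleaseHandler is a hypothetical interface over the existing HTTP
// handlers. Both the Tiller-based implementation and the new Helm 3 one
// would satisfy it, so the routes and payloads the dashboard relies on
// stay identical until helm2 support is removed.
type ReleaseHandler interface {
	ListAllReleases(w http.ResponseWriter, req *http.Request)
	CreateRelease(w http.ResponseWriter, req *http.Request)
	UpgradeRelease(w http.ResponseWriter, req *http.Request)
	RollbackRelease(w http.ResponseWriter, req *http.Request)
	DeleteRelease(w http.ResponseWriter, req *http.Request)
}
```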
We are almost sure that we will be able to keep the Dashboard–backend* API untouched. But the part referred to as "proxy" in our schematic cannot be reused for Helm 3, due to incompatibilities between the Helm 2 and Helm 3 versions of …

I'm aware that I may not be answering all your questions and thoughts, even with this comment and today's update to the document combined. I'm sorry for this, but the conversation has become pretty unwieldy and I'm having trouble keeping everything in my head at the moment. Hopefully, the next commit will provide some sort of a stepping stone from which we can move on. We're looking forward to your next piece of feedback!

Also, a small but useful tip: When replying to GitHub comments, you can select the piece of text you want to quote and, instead of copy-pasting it, click Quote reply. Then the quote gets properly formatted instead of losing all formatting such as …

\* Note the difference between "Dashboard–backend" (between Dashboard and backend) and "Dashboard-backend" (backend of Dashboard or something).
I think we agree on the core of this implementation so let's go!
> I think we agree on the core of this implementation so let's go!

Same.

> We are almost sure that we will be able to keep the Dashboard–backend* API untouched.
Great - I'll check with Andres this evening regarding encoding that in an interface (I just re-read and saw that we had differing views there that I hadn't realised earlier: I was keen to have the current go handler code extracted to an interface that is used by both the old and the new, so I want to understand why that would not be useful here).
A couple of things are still not clear to me, which I'd be keen to see in this PR if you can update before landing:
- Where the condition will be for routing the request to the helm3 handler for dashboard requests (ie. it could be in the dashboard itself (ie. a different host for the request), or it could be in the existing service (tiller-proxy, but I think you're against that), or it could be updating the nginx frontend config to point the existing host to the new service). I'm guessing you want the first (ie. use a different host, which will require a tiny conditional in the dashboard and an update to the nginx config), but it's not clear.
- Whether you plan to use the same/similar go libraries as used in `tiller-proxy` or plan to use some other go libraries (I don't have a strong preference, given that we will eventually be able to remove `tiller-proxy` given the current direction, but think it's worth mentioning here rather than surprising people in a first PR with an Eiffel web handler or something :P)
Answering my own question (as I think Andres is away) - I took a look at what is involved in extracting this interface and it is not worthwhile, or useful. So please ignore :) Details: I first needed to refactor the tiller-proxy handler to remove the …
I'm glad to hear that our suspicion could be confirmed, Absoludity.
Yeah - sorry I didn't check sooner when proposing it. I'm landing this now, but am keen to hear about the two points as soon as you're ready:
@absoludity Thank you for pointing out those things again. Hopefully, I'll be able to get back to you soon on those matters. For your information, I'm assuming main responsibility (within our team) for implementing Helm 3 support in Kubeapps from now on.
Status update: With the invaluable help of @andresmgot, we've finally managed to set up a development environment so that we can run a locally modified instance of Kubeapps. We should now be able to start experimenting with implementation details.
As discussed in #1056 and on Slack.
Joint work of @latiif, @lindhe, @NiclasWarefelt and myself.