
Helm3 design proposal #1250

Merged · 9 commits · Nov 4, 2019
Conversation

@SimonAlling (Contributor)

As discussed in #1056 and on Slack.

Joint work of @latiif, @lindhe, @NiclasWarefelt and myself.

@lindhe (Contributor) commented Oct 24, 2019

Since you proposed backend as the name for the new component, we went with that. I think I would prefer a different name, since it's as much of a frontend as a backend. Maybe api-handler, helm-parser or cmd-endpoint would be good alternatives?

But the name won't be the biggest show-stopper around, so I don't mind that much really.

@andresmgot (Contributor) left a comment

Thanks! In general, the proposal looks good to me; I just have some questions regarding the naming.

Related to the name of the service, I think this can still be considered a proxy? Maybe instead of backend I would name it kubeapps-proxy, but maybe I am missing something.

Let me know what you think!

Comment on lines 12 to 13
> We see no reason to implement a proxy for Helm 3, but rather what we call an
> agent.

Well, it'd still be a proxy, but to the Kubernetes API directly instead of to Tiller (as far as I understood).

Comment on lines 14 to 15
> We should update the API version from v1 to v2, to clearly distinguish between
> the old and the new subsystems.

The problem with this is that we would need to implement the two API endpoints in the dashboard code, but I guess that's fine.


Hrm... so this is what I'd like to avoid if possible - for the duration that we support both helm2 and helm3, it'd be great if we could stick to the existing interface (CreateRelease, UpgradeRelease for the handler) so that the change is isolated and the frontend does not need to know. Once we remove helm2 support, I'd be happy to diverge...

So my question is: Why should we update the API version from v1 to v2 and clearly distinguish? Is there a difference in functionality as far as the API contract between the frontend and this service?

> ![Current situation](https://user-images.githubusercontent.com/7773090/67413010-ac044e00-f5c0-11e9-93e9-f3cdd1eeaca8.PNG)
>
> **With the new additions:**
> ![With the new additions](https://user-images.githubusercontent.com/7773090/67413025-b45c8900-f5c0-11e9-8961-67377bc8faad.PNG)

what's the difference between helm3-agent and Helm 3 actions library? Can't those be merged? I am not sure if I am wrong but I would expect something like backend -> helm3-lib -> K8s API


> what's the difference between helm3-agent and Helm 3 actions library? Can't those be merged?

I think it's to match the graphic for the existing PNG, but I had the same question about that? The diagrams should (IMO) either be about requests or about implementation modules, but these seem to conflate the two which is a bit confusing. What I'd expect to see is something like:

Current:
REST API -> tiller-proxy service -> Helm Tiller

New:
REST API -> Helm3 agent -> Kubernetes API

The fact that the tiller-proxy service is built using the go module pkg/proxy isn't relevant to requests (but may be relevant to implementation), and similarly, that the Helm3 agent is using a pkg/helm3actions or similar.

But I could be misunderstanding the point of these - sorry in advance if so :)

@absoludity (Contributor) left a comment

Thanks for all the thought that's going into this! I've got a bunch of questions and thoughts for discussion below.

Sorry it's so long - none of the points below are blockers at all, I just want to ensure we discuss, understand and agree on the pros and cons.

> We believe that the transition to Helm 3 can be done in such a fashion that
> both the old tiller-proxy and the new Helm 3 components can coexist.
> However, we propose that the initial effort not touch tiller-proxy, but create
> a completely new component.

By component, I'm assuming we're talking about the actual service (ie. a docker image serving requests, rather than a go module or similar)?

So I'd initially suggested that we use (and rename) the existing service for two reasons, though neither is mega important or impossible to solve in a different way - see what you think:

  1. First, it would ensure that we support the same API for the frontend to use (ie. CreateRelease, UpgradeRelease, RollbackRelease, ListAllReleases, etc.) as we would just have a different implementation depending on whether helm 2 or 3 was configured. This means that no other component needs to change (potentially?) - for example, the frontend dashboard does not know anything about the change at all for now (later when we remove helm 2 support we can reconsider).

That said, either way I'd be keen to first extract the current handler functionality (CreateRelease, UpgradeRelease etc.) to a go interface that is implemented by the existing TillerProxy so that we're sure the new helm3 handler implements exactly the same functionality (request handling). At that point, I don't see that it would matter whether the switch between the two was config to that single service, or config to nginx to send request to a separate service, so if that's what you think is best, +1 - as long as we have the interface and can stick to that. Does that make sense?

  2. My second reason was that I have future plans to improve the authn to be cookie-based in general (as it already is when using the auth proxy), which, for the non auth-proxy situation, requires a backend to encrypt the k8s token and return a cookie. But that's in the future and may not eventuate anyway, so I don't think it deserves any weight at all.

So happy either way - but keen to know the benefits of doing it as a separate service implementing the same interface/api rather than a second implementation of the same interface in the existing service.

@lindhe (Contributor) commented Oct 25, 2019

We'll give a full answer soon. But just to clarify what we mean by "component": it's anything that has a main.go file. So right now, we call tiller-proxy a component.

> However, we propose that the initial effort not touch tiller-proxy, but create
> a completely new component.
> The choice between Helm 2 support and Helm 3 support should be made at deploy
> time in Helm (e.g. by setting `helm3=true` in Values).

Right, I think we all agreed on that, but if I'm reading correctly, you actually mean that this switch will mean frontend requests being sent to a different service (rather than frontend requests being handled by a different module in the existing service)? (which is fine, just good to be clear).


> the token are already provided.
> Basically all we need to do is to call the `InClusterConfig` method and
> overwrite the `BearerToken` field in the received configuration with the token
> string we get from the Dashboard.

Right - though I don't have the context (perhaps you can link to the relevant code/docs for the helm3 client code?) it should be as simple as using the bearer token.
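
For reference, that step would look roughly like the following with client-go (a minimal sketch; the helper name is made up, error handling is trimmed, and clearing BearerTokenFile is an assumption to avoid falling back to the mounted service-account token):

```go
package helm3agent // hypothetical package name

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientForToken builds a Kubernetes client that authenticates as the user
// whose bearer token the Dashboard forwarded, instead of as the pod's
// service account.
func clientForToken(token string) (*kubernetes.Clientset, error) {
	// The in-cluster config provides the API server address and the ca.crt
	// mounted into the pod.
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// Overwrite the service-account credentials with the user's token.
	config.BearerToken = token
	config.BearerTokenFile = ""
	return kubernetes.NewForConfig(config)
}
```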


> Since Helm 2 built on Tiller, all authentication to the Kubernetes cluster
> happened "over there" and Kubeapps did not need to authenticate anything other
> than the communication with Tiller.

This was not clear to me at first, but after digging a bit, I think I understand and agree, though would expand and clarify a little as follows:

Since Helm 2 is built on the assumption that the Tiller service runs in-cluster with its own service account (as per Helm's Securing your Helm Installation document, "Tiller in its current form does not provide a way to map user credentials to specific permissions within Kubernetes. When Tiller is running inside of the cluster, it operates with the permissions of its service account."), the only credential Kubeapps was required to provide for authentication with Tiller was the ca.crt - and then only if Tiller had been configured with TLS. For this reason Kubeapps currently authenticates with the Kubernetes API server using the user's bearer token not only when talking directly to the API server, but also to verify permissions before making any request to Tiller.

happened "over there" and Kubeapps did not need to authenticate anything other
than the communication with Tiller.
Now, with Helm 3, all authentication is handled by the Kubernetes API Server
via the credentials provided by kubectl.

"via the credentials provided by the user."? I'm not sure how kubectl is relevant here? Ah, or you mean when using the helm3 CLI it just uses the same credentials that kubectl uses (from ~/.kube/config or similar). If that's what you mean, I think the CLI less relevant - the main point is that helm 3 uses the credentials provided by the user for accessing the Kubernetes API server, I think?

> via the credentials provided by kubectl.
> We have created a PoC using the Helm Library in Helm 3, and we have found that
> it should be relatively straight-forward to implement since both `ca.crt` and
> the token are already provided.

Right, but doesn't Kubeapps already do this for every call to the K8s API server? (ie. sends the user-provided Authorization: Bearer **** header to the K8s API server)

I'm not understanding how or why this would be different. We don't currently support individual users configuring a client ca file for requests to the k8s api server - we only support authentication tokens for users, so I'm not sure that we'd need or want to do so here... the token is the only relevant credential (which can be a service account token or openidconnect id_token or anything the k8s api server is configured to use).

I'm not sure if there's a little confusion because tiller-proxy was updated to allow setting a ca.crt for authentication with tiller? This was required if running the tiller service securely, but I don't see how it's relevant for Kubeapps with Helm 3 (I mean, I get that the helm3 CLI might use a ca.crt for authn because that's what's configured for kubectl, but I'm not seeing how that's relevant to the Kubeapps use-case where we are not using certificate-based authentication for users - only for a single non-user, the tiller-proxy, in the helm2 setup).

@absoludity (Contributor) commented Oct 25, 2019

> Since you proposed backend as the name for the new component, we went with that. I think I would prefer a different name, since it's as much of a frontend as a backend. Maybe api-handler, helm-parser or cmd-endpoint would be good alternatives?
>
> But the name won't be the biggest show-stopper around, so I don't mind that much really.

Totally agree - I don't care too much about the name, the only reason I was proposing backend was because I had future plans (for improving authn, encrypting token in a cookie so we don't store the token in the client's local storage), but that's not a good reason to generalise the name now, for a future, non-certain reason. I'd be totally happy with api-handler

@SimonAlling (Contributor, Author)

Thanks for all your valuable feedback, @andresmgot and @absoludity. We'll discuss within the team and hopefully come up with some good answers to your questions.

@SimonAlling (Contributor, Author)

Hi again. We've now discussed your feedback and tried to come up with good answers. We decided to stick to one single comment; hopefully that will do.

> Hrm... so this is what I'd like to avoid if possible - for the duration that we support both helm2 and helm3, it'd be great if we could stick to the existing interface (CreateRelease, UpgradeRelease for the handler) so that the change is isolated and the frontend does not need to know. Once we remove helm2 support, I'd be happy to diverge...
>
> So my question is: Why should we update the API version from v1 to v2 and clearly distinguish? Is there a difference in functionality as far as the API contract between the frontend and this service?

We agree that it shouldn't be necessary to modify the dashboard. Regarding that, could you please confirm or deny this: /v1 in the requests from the Dashboard refers to the Dashboard–backend API, and has nothing to do with Helm's v1 and v2 APIs. True or false?

> what's the difference between helm3-agent and Helm 3 actions library? Can't those be merged? I am not sure if I am wrong but I would expect something like backend -> helm3-lib -> K8s API

helm3-agent would be a new Go module created by us (in Kubeapps), whereas "Helm 3 actions library" is an already existing part of Helm 3, which helm3-agent would use.

> I think it's to match the graphic for the existing PNG, but I had the same question about that? The diagrams should (IMO) either be about requests or about implementation modules, but these seem to conflate the two which is a bit confusing. What I'd expect to see is something like:
>
> Current:
> REST API -> tiller-proxy service -> Helm Tiller
>
> New:
> REST API -> Helm3 agent -> Kubernetes API

We may be missing some important point here, but the diagram above seems to represent replacing Helm 2 support with Helm 3 support. Our diagram was intended to represent adding Helm 3 support while keeping Helm 2 support.

> That said, either way I'd be keen to first extract the current handler functionality (CreateRelease, UpgradeRelease etc.) to a go interface that is implemented by the existing TillerProxy so that we're sure the new helm3 handler implements exactly the same functionality (request handling). At that point, I don't see that it would matter whether the switch between the two was config to that single service, or config to nginx to send request to a separate service, so if that's what you think is best, +1 - as long as we have the interface and can stick to that. Does that make sense?

At first, we wanted to define a common interface for tiller-proxy and helm3-agent, but then we realized that the Chart type is defined differently (and incompatibly) in Helm 3 compared to Helm 2, so we abandoned that idea. (Is there any such thing as parameterized interfaces in Go? Then maybe we could have, conceptually, types like TheInterface<helm2.Chart> and TheInterface<helm3.Chart>.)

> So happy either way - but keen to know the benefits of doing it as a separate service implementing the same interface/api rather than a second implementation of the same interface in the existing service.

As @lindhe mentioned, this may just be a matter of us using the wrong terminology. We think that helm3-agent (or whatever its name will be) should be a Go module, not a separate service.

"via the credentials provided by the user."? I'm not sure how kubectl is relevant here? Ah, or you mean when using the helm3 CLI it just uses the same credentials that kubectl uses (from ~/.kube/config or similar). If that's what you mean, I think the CLI less relevant - the main point is that helm 3 uses the credentials provided by the user for accessing the Kubernetes API server, I think?

No, kubectl isn't actually relevant. Terminology confusion again. Maybe something like this instead:

Now, with Helm 3, all authentication is handled by the Kubernetes API Server, using the already available bearer token.

> Totally agree - I don't care too much about the name, the only reason I was proposing backend was because I had future plans (for improving authn, encrypting token in a cookie so we don't store the token in the client's local storage), but that's not a good reason to generalise the name now, for a future, non-certain reason. I'd be totally happy with api-handler

We're also considering helm-api-handler, because it gives a clearer idea of what it actually is, and because there could be other "API handlers" in the future.

> Right - though I don't have the context (perhaps you can link to the relevant code/docs for the helm3 client code?) it should be as simple as using the bearer token.

The context is that creating a Configuration struct seems fairly complicated. We use InClusterConfig as a step in the process of creating a Configuration.
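
For illustration, here is a rough sketch of the glue we have in mind, assuming client-go's genericclioptions and Helm 3's action package; the function name is made up, and exact signatures may differ between Helm 3 versions:

```go
package helm3agent // hypothetical package name

import (
	"log"

	"helm.sh/helm/v3/pkg/action"
	"k8s.io/cli-runtime/pkg/genericclioptions"
	"k8s.io/client-go/rest"
)

// newActionConfig builds a Helm 3 action.Configuration that talks to the
// Kubernetes API as the user whose bearer token the Dashboard forwarded.
func newActionConfig(token, namespace string) (*action.Configuration, error) {
	// Start from the in-cluster settings (API server address, ca.crt)...
	inCluster, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// ...but authenticate with the user's token rather than the pod's
	// service account.
	flags := genericclioptions.NewConfigFlags(false)
	flags.APIServer = &inCluster.Host
	flags.CAFile = &inCluster.TLSClientConfig.CAFile
	flags.BearerToken = &token
	flags.Namespace = &namespace

	cfg := new(action.Configuration)
	// "secrets" is Helm 3's default driver for storing release data.
	if err := cfg.Init(flags, namespace, "secrets", log.Printf); err != nil {
		return nil, err
	}
	return cfg, nil
}
```

With a configuration like that, the existing operations would map onto Helm 3 actions, e.g. action.NewList(cfg).Run() to list releases.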

If you feel we have misunderstood something or haven't addressed all your points, just let us know.

@andresmgot (Contributor) commented Oct 25, 2019

Hi, thanks for the comment! I will reply inline with my thoughts to give early feedback but @absoludity may think otherwise.

> We agree that it shouldn't be necessary to modify the dashboard. Regarding that, could you please confirm or deny this: /v1 in the requests from the Dashboard refers to the Dashboard–backend API, and has nothing to do with Helm's v1 and v2 APIs. True or false?

True. The thing is that right now this Dashboard-backend API is tiller-proxy.

> helm3-agent would be a new Go module created by us (in Kubeapps), whereas "Helm 3 actions library" is an already existing part of Helm 3, which helm3-agent would use.

That's fine, I think we are just confused because the diagram mixes go packages (like helm3-agent) with services (like Tiller). In any case, I think I understand what you mean so go ahead.

> We may be missing some important point here, but the diagram above seems to represent replacing Helm 2 support with Helm 3 support. Our diagram was intended to represent adding Helm 3 support while keeping Helm 2 support.

Yes, that's fine. We mean the same.

> At first, we wanted to define a common interface for tiller-proxy and helm3-agent, but then we realized that the Chart type is defined differently (and incompatibly) in Helm 3 compared to Helm 2, so we abandoned that idea. (Is there any such thing as parameterized interfaces in Go? Then maybe we could have, conceptually, types like TheInterface<helm2.Chart> and TheInterface<helm3.Chart>.)

This may be a problem. Note that we are not interested in maintaining the same interface in the Golang service. Tiller-proxy can keep sending back something like helm2.Chart and the new service a helm3.Chart. Also, just to clarify, what Tiller returns is a release (not a chart). A release is an instance of a chart. So let's say that tiller-proxy returns a helm2.Release and the new service will return a helm3.Release. What we are interested in is maintaining the interface in the dashboard. The definition of a release is here:

https://github.com/kubeapps/kubeapps/blob/master/dashboard/src/shared/hapi/release.d.ts#L9

If the returned value from helm 3 doesn't match that interface, that's when we have a problem. Can you confirm whether a helm 3 release matches that interface? If it does not, we could define a minimum common interface between a release in helm 2 and helm 3, or post-process the information returned by helm in the helm3-agent to match the interface that the dashboard is expecting.

> As @lindhe mentioned, this may just be a matter of us using the wrong terminology. We think that helm3-agent (or whatever its name will be) should be a Go module, not a separate service.

Yes, but this helm3-agent will run in a service. The service is what you call later helm-api-handler, right?

> No, kubectl isn't actually relevant. Terminology confusion again. Maybe something like this instead:
> Now, with Helm 3, all authentication is handled by the Kubernetes API Server, using the already available bearer token.

That's fine.

> We're also considering helm-api-handler, because it gives a clearer idea of what it actually is, and because there could be other "API handlers" in the future.

I would then call it just api-handler. I am removing the helm part because the goal is to substitute the current tiller-proxy, which talks not only with "helm" but with the chartsvc and the Kubernetes API as well.

> The context is that creating a Configuration struct seems fairly complicated. We use InClusterConfig as a step in the process of creating a Configuration.

Yes, using InClusterConfig and changing the token is fine.

@absoludity (Contributor)

> Hi, thanks for the comment! I will reply inline with my thoughts to give early feedback but @absoludity may think otherwise.

Nope - I agree with your responses, @andresmgot. Thanks @SimonAlling for all the clarifications!

@absoludity (Contributor)

> At first, we wanted to define a common interface for tiller-proxy and helm3-agent, but then we realized that the Chart type is defined differently (and incompatibly) in Helm 3 compared to Helm 2, so we abandoned that idea. (Is there any such thing as parameterized interfaces in Go? Then maybe we could have, conceptually, types like TheInterface<helm2.Chart> and TheInterface<helm3.Chart>.)

> This may be a problem. Note that we are not interested in maintaining the same interface in the Golang service. Tiller-proxy can keep sending back something like helm2.Chart and the new service a helm3.Chart. Also, just to clarify, what Tiller returns is a release (not a chart). A release is an instance of a chart. So let's say that tiller-proxy returns a helm2.Release and the new service will return a helm3.Release. What we are interested in is maintaining the interface in the dashboard. The definition of a release is here:
>
> https://github.com/kubeapps/kubeapps/blob/master/dashboard/src/shared/hapi/release.d.ts#L9
>
> If the returned value from helm 3 doesn't match that interface, that's when we have a problem. Can you confirm whether a helm 3 release matches that interface? If it does not, we could define a minimum common interface between a release in helm 2 and helm 3, or post-process the information returned by helm in the helm3-agent to match the interface that the dashboard is expecting.

Yes, what Andres said. To be specific, I meant that we should define an interface for the following methods which are defined in https://github.com/kubeapps/kubeapps/blob/master/cmd/tiller-proxy/internal/handler/handler.go#L169 so that we are guaranteed that the dashboard does not need to change (until we remove helm2 support later at which point we can remove this restriction):

func (h *TillerProxy) CreateRelease(w http.ResponseWriter, req *http.Request, params Params)
func (h *TillerProxy) RollbackRelease(w http.ResponseWriter, req *http.Request, params Params)
func (h *TillerProxy) UpgradeRelease(w http.ResponseWriter, req *http.Request, params Params)
func (h *TillerProxy) ListAllReleases(w http.ResponseWriter, req *http.Request)
func (h *TillerProxy) ListReleases(w http.ResponseWriter, req *http.Request, params Params)
func (h *TillerProxy) GetRelease(w http.ResponseWriter, req *http.Request, params Params)
func (h *TillerProxy) DeleteRelease(w http.ResponseWriter, req *http.Request, params Params)

which would require that the service continues to understand the release sent by the frontend (and return the same where appropriate).
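
For illustration, the extracted interface would be roughly the following (just a sketch: the interface name is made up, and Params is the existing type from the tiller-proxy handler package):

```go
package handler // sketch; assumes the existing handler package's Params type

import "net/http"

// ReleaseHandler is a hypothetical interface over the handlers listed above,
// to be implemented by both the Tiller-backed and the Helm 3 handlers so that
// the Dashboard-facing API stays identical while Helm 2 is still supported.
type ReleaseHandler interface {
	CreateRelease(w http.ResponseWriter, req *http.Request, params Params)
	RollbackRelease(w http.ResponseWriter, req *http.Request, params Params)
	UpgradeRelease(w http.ResponseWriter, req *http.Request, params Params)
	ListAllReleases(w http.ResponseWriter, req *http.Request)
	ListReleases(w http.ResponseWriter, req *http.Request, params Params)
	GetRelease(w http.ResponseWriter, req *http.Request, params Params)
	DeleteRelease(w http.ResponseWriter, req *http.Request, params Params)
}
```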

@SimonAlling (Contributor, Author)

> This may be a problem. Note that we are not interested in maintaining the same interface in the Golang service. Tiller-proxy can keep sending back something like helm2.Chart and the new service a helm3.Chart. Also, just to clarify, what Tiller returns is a release (not a chart). A release is an instance of a chart. So let's say that tiller-proxy returns a helm2.Release and the new service will return a helm3.Release. What we are interested in is maintaining the interface in the dashboard. The definition of a release is here:
>
> https://github.com/kubeapps/kubeapps/blob/master/dashboard/src/shared/hapi/release.d.ts#L9
>
> If the returned value from helm 3 doesn't match that interface, that's when we have a problem. Can you confirm whether a helm 3 release matches that interface? If it does not, we could define a minimum common interface between a release in helm 2 and helm 3, or post-process the information returned by helm in the helm3-agent to match the interface that the dashboard is expecting.

We are almost sure that we will be able to keep the Dashboard–backend* API untouched. But the part referred to as "proxy" in our schematic cannot be reused for Helm 3, due to incompatibilities between the Helm 2 and Helm 3 versions of Chart. We think we'll go for the post-processing strategy if necessary.
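
If we do need post-processing, it could be a thin mapping from the Helm 3 release type onto the shape the Dashboard already expects (a rough sketch only; the output struct and its JSON field names are assumptions mirroring the hapi release definition linked above):

```go
package helm3agent // hypothetical package name

import "helm.sh/helm/v3/pkg/release"

// dashboardRelease approximates the shape the Dashboard expects
// (see release.d.ts linked above); the field names are assumptions.
type dashboardRelease struct {
	Name      string      `json:"name"`
	Namespace string      `json:"namespace"`
	Version   int         `json:"version"`
	Info      interface{} `json:"info,omitempty"`
	Chart     interface{} `json:"chart,omitempty"`
	Config    interface{} `json:"config,omitempty"`
	Manifest  string      `json:"manifest"`
}

// toDashboardRelease copies the fields the Dashboard reads; any Helm 2 vs
// Helm 3 differences inside Info and Chart would have to be reconciled here.
func toDashboardRelease(r *release.Release) dashboardRelease {
	return dashboardRelease{
		Name:      r.Name,
		Namespace: r.Namespace,
		Version:   r.Version,
		Info:      r.Info,
		Chart:     r.Chart,
		Config:    r.Config,
		Manifest:  r.Manifest,
	}
}
```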

I'm aware that I may not be answering all your questions and thoughts, even with this comment and today's update to the document combined. I'm sorry for this, but the conversation has become pretty unwieldy and I'm having trouble keeping everything in my head at the moment. Hopefully, the next commit will provide some sort of a stepping stone from which we can move on. We're looking forward to your next piece of feedback!

Also, a small but useful tip: When replying to GitHub comments, you can select the piece of text you want to quote and, instead of copypasting it, click Quote reply. Then the quote gets properly formatted instead of losing all formatting such as code, emphasis and links. 😃

* Note the difference between "Dashboard–backend" (between Dashboard and backend) and "Dashboard-backend" (backend of Dashboard or something).

@andresmgot (Contributor) left a comment

I think we agree on the core of this implementation so let's go!

@absoludity (Contributor) left a comment

> I think we agree on the core of this implementation so let's go!

Same.

> We are almost sure that we will be able to keep the Dashboard–backend* API untouched.

Great - I'll check with Andres this evening regarding encoding that in an interface (I just re-read and saw that we had differing views there that I hadn't realised earlier - I was keen to have the current go handler code extracted to an interface that is used by both the old and the new, I want to understand why that would not be useful here).

A couple of things are still not clear to me, which I'd be keen to see in this PR if you can update before landing:

  • Where the condition will be for routing the request to the helm3 handler for dashboard requests (ie. it could be in the dashboard itself (ie. different host for the request), or it could be in the existing service (tiller-proxy, but I think you're against that), or it could be updating the nginx frontend config to point existing host to the new service. I'm guessing you want the first (ie. use a different host which will require a tiny conditional in the dashboard and an update to the nginx config), but it's not clear.

  • Whether you plan to use the same/similar go libraries as used in tiller-proxy or plan to use some other go libraries (I don't have a strong preference, given that we will eventually be able to remove tiller-proxy given the current direction, but think it's worth mentioning here rather than surprising people in a first PR with an Eiffel web handler or something :P)

@absoludity (Contributor)

> Great - I'll check with Andres this evening regarding encoding that in an interface (I just re-read and saw that we had differing views there that I hadn't realised earlier - I was keen to have the current go handler code extracted to an interface that is used by both the old and the new, I want to understand why that would not be useful here).

Answering my own question (as I think Andres is away) - I took a look at what is involved to extract this interface and it is not worthwhile, or useful. So please ignore :)

Details: I first needed to refactor the tiller-proxy handler to remove the WithParams and WithoutParams wrappers to get plain http handlers (I don't know the background of why we wrap with those; it seems to obscure a standard signature). I then needed to move the router so it was encapsulated together with the handlers for those routes, so the handlers could use mux.Vars() directly (and be tested via routes) - another yak. What I ended up with as an interface is just a bunch of named http.HandlerFunc's, which isn't really that useful in terms of ensuring compatibility, since it's the contents of the request/response that is important. The refactoring of tiller-proxy might have been useful on its own - if we weren't deleting the code once we remove support for helm2. So all-in-all, not a good use of time. Sorry for the noise.

@lindhe (Contributor) commented Nov 1, 2019

I'm glad to hear that our suspicion could be confirmed, Absoludity.

@absoludity (Contributor)

> I'm glad to hear that our suspicion could be confirmed, Absoludity.

Yeah - sorry I didn't check sooner when proposing it. I'm landing this now, but am keen to hear about the two points as soon as you're ready:

> A couple of things are still not clear to me, which I'd be keen to see in this PR if you can update before landing:
>
> * Where the condition will be for routing the request to the helm3 handler for dashboard requests (ie. it could be in the dashboard itself (ie. different host for the request), or it could be in the existing service (tiller-proxy, but I think you're against that), or it could be updating the nginx frontend config to point existing host to the new service. I'm guessing you want the first (ie. use a different host which will require a tiny conditional in the dashboard and an update to the nginx config), but it's not clear.
>
> * Whether you plan to use the same/similar go libraries as used in `tiller-proxy` or plan to use some other go libraries (I don't have a strong preference, given that we will eventually be able to remove `tiller-proxy` given the current direction, but think it's worth mentioning here rather than surprising people in a first PR with an Eiffel web handler or something :P)

@absoludity merged commit 4829a95 into vmware-tanzu:master on Nov 4, 2019
@SimonAlling (Contributor, Author)

@absoludity Thank you for pointing out those things again. Hopefully, I'll be able to get back to you soon on those matters. For your information, I'm assuming main responsibility (within our team) for implementing Helm 3 support in Kubeapps from now on.

@SimonAlling (Contributor, Author)

Status update: With the invaluable help of @andresmgot, we've finally managed to set up a development environment so that we can run a locally modified instance of Kubeapps. We should now be able to start experimenting with implementation details.
