[HELM] Refactor Chart #1872
base: main
Conversation
@achraf-mer Just wondering if we could remove the Stack mode; is it still in use? Ideally, on K8s, vLLM should run separately instead of in the same pod as h2oGPT, I think. WDYT?
Yes, we can separate them and keep the Helm chart straightforward; let's do it. I think we may have used the same pod for latency reasons, but since vLLM can be resource-intensive, it is best IMO to have it in a separate pod (more isolation, and we can scale them separately).
@lakinduakash Let's remove Stack mode from h2oGPT and the checks as well, similar to what was done with Agents.
Stack mode is removed.
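The separation discussed above, with vLLM as its own deployment that can be toggled and scaled independently of h2oGPT, could be sketched in values like this. All keys here are hypothetical illustrations, not the chart's actual values schema:

```yaml
# Hypothetical values.yaml sketch: vLLM runs in its own pod,
# enabled and scaled independently of h2oGPT.
h2ogpt:
  enabled: true
  replicaCount: 1

vllm:
  enabled: true          # deploy vLLM as a separate Deployment/pod
  replicaCount: 1        # scale independently of h2oGPT
  resources:
    limits:
      nvidia.com/gpu: 1  # vLLM tends to be the GPU-heavy component
```

Keeping vLLM behind its own `enabled` flag means the old single-pod "Stack" behavior simply disappears, rather than being a third mode that needs its own checks.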
Let's document the breaking changes that were made to the chart, e.g. changing the path of the model lock in values. We will need to communicate this to the other teams.
Reference: #1871
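The breaking change mentioned above, moving the model lock's location in values, would look roughly like the following. Both the old and new key paths are hypothetical, since the PR does not quote the exact keys:

```yaml
# Before (hypothetical): model lock nested under the removed Stack mode
# stack:
#   modelLock: ...

# After (hypothetical): model lock configured directly under h2ogpt
h2ogpt:
  modelLock: ...
```

Consumers of the chart would need to move their existing `modelLock` value to the new path when upgrading, which is why communicating this to the other teams matters.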