Inference Service is not scaling #521

Open

haiminh2001 opened this issue Jul 18, 2024 · 0 comments
Labels: bug (Something isn't working)

Describe the bug
I set my serving runtime to a minimum of 3 replicas, but my inference service is never scheduled onto the third serving runtime pod. The documentation is vague here; as far as I can tell, ModelMesh only scales the serving runtime pods themselves, which does not help if no inference services (models) are ever scheduled onto the extra replica.
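
For context, this is roughly what I have (a minimal sketch; the runtime name, model format, and container image are placeholders, not my exact setup):

```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: my-runtime              # placeholder name
spec:
  multiModel: true
  replicas: 3                   # the minimum of 3 replicas mentioned above
  supportedModelFormats:
    - name: sklearn             # placeholder model format
      autoSelect: true
  containers:
    - name: mlserver            # placeholder container; image is illustrative
      image: seldonio/mlserver:1.3.2
```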

To Reproduce
Steps to reproduce the behavior:

  1. Configure the serving runtime with 3 replicas.
  2. Check the third serving runtime pod: no model is scheduled onto it (see the check sketched below).
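
For reference, this is roughly how I check step 2 (the label selector and `mm` container name are assumptions based on a default modelmesh-serving install; the pod name is a placeholder):

```sh
# List the serving runtime pods
kubectl get pods -l modelmesh-service=modelmesh-serving

# Look through the model-mesh container logs of the third pod for any model loads
kubectl logs modelmesh-serving-my-runtime-xxxxx -c mm | grep -i load
```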

Expected behavior
The third serving runtime pod has some models scheduled onto it.

By the way, I have some questions:

  1. What is the proper way to scale inference services (models)?
  2. As far as I know, HPA only supports CPU and memory metrics. Are any other metrics supported? I am particularly interested in a concurrent users (CCU) metric (see the sketch after this list).
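
For question 2, something along these lines is what I have in mind. The custom metric name and the target Deployment name are hypothetical, and exposing such a metric would require a custom-metrics adapter (e.g. prometheus-adapter) to be installed:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-runtime-hpa                   # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: modelmesh-serving-my-runtime   # placeholder runtime Deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: concurrent_users         # hypothetical custom metric (CCU)
        target:
          type: AverageValue
          averageValue: "100"
```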
haiminh2001 added the bug label on Jul 18, 2024