
[Misc] Adding script to setup ray for multi-node vllm deployments #12913

Open
wants to merge 2 commits into base: main

Conversation


@Edwinhr716 Edwinhr716 commented Feb 7, 2025

FIX #8302


github-actions bot commented Feb 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@ahg-g

ahg-g commented Feb 7, 2025

This is not LWS-specific; it is useful for any multi-node vLLM deployment.

@@ -0,0 +1,94 @@
#!/bin/bash

Suggest naming the file multi-node-serving.sh.

@Edwinhr716 Edwinhr716 (Author)

Changed
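
For readers without the diff open, here is a minimal sketch of the leader/worker pattern a script like this implements. It is an illustration only, not the PR's actual 94-line script; the flag parsing is simplified, and the flag names (--ray_cluster_size, --ray_address) are taken from the discussion below.

#!/bin/bash
# Sketch only: start Ray as head (leader) or join an existing head (worker).
subcommand="$1"; shift
ray_port=6379
ray_init_timeout=300

case "$subcommand" in
  leader)
    # Simplified flag handling; the real script parses its flags properly.
    ray_cluster_size="${1#--ray_cluster_size=}"
    ray start --head --port="$ray_port"
    # Block until all expected nodes have registered with the head.
    for (( i = 0; i < ray_init_timeout; i += 5 )); do
      alive=$(python3 -c 'import ray; ray.init(address="auto"); print(sum(n["Alive"] for n in ray.nodes()))')
      if [ "$alive" -ge "$ray_cluster_size" ]; then
        echo "All $ray_cluster_size Ray nodes are up."
        exit 0
      fi
      sleep 5
    done
    echo "Timed out waiting for $ray_cluster_size Ray nodes." >&2
    exit 1
    ;;
  worker)
    ray_address="${1#--ray_address=}"
    # Retry joining until the head is reachable; --block keeps the worker in the foreground.
    for (( i = 0; i < ray_init_timeout; i += 5 )); do
      ray start --address="${ray_address}:${ray_port}" --block && exit 0
      echo "Ray head not reachable yet, retrying..."
      sleep 5
    done
    exit 1
    ;;
esac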

@ahg-g ahg-g left a comment

Can you please show an example? I want to make sure we support setting any vLLM backend flag.

@Edwinhr716 Edwinhr716 changed the title [Misc] Adding script to setup ray for vllm deployments using LWS [Misc] Adding script to setup ray for multi-node vllm deployments Feb 7, 2025
@Edwinhr716 Edwinhr716 (Author)

It would be very similar to how we currently do it in the LWS repo:

command:
- sh
- -c
- "/vllm-workspace/examples/online-serving/multi-node-serving.sh leader --ray_cluster_size=$(LWS_GROUP_SIZE); 
python3 -m vllm.entrypoints.openai.api_server --port 8080 --model meta-llama/Meta-Llama-3.1-405B-Instruct --tensor-parallel-size 8 --pipeline_parallel_size 2"
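
For completeness, the worker containers in the LWS examples invoke the same script with the worker subcommand, passing the leader address that LWS injects. This is a sketch based on the linked LWS examples; LWS_LEADER_ADDRESS is the environment variable LWS sets on worker pods.

command:
- sh
- -c
- "/vllm-workspace/examples/online-serving/multi-node-serving.sh worker --ray_address=$(LWS_LEADER_ADDRESS)"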

Signed-off-by: Edwinhr716 <[email protected]>
Comment on lines +5 to +6
ray_port=6379
ray_init_timeout=300

Can those be changed to env vars?
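
If they were made overridable, the usual bash pattern would be environment-variable defaults. This is a sketch of that pattern, not the PR's change; RAY_PORT and RAY_INIT_TIMEOUT are hypothetical variable names.

ray_port=${RAY_PORT:-6379}                 # use RAY_PORT from the environment if set, else 6379
ray_init_timeout=${RAY_INIT_TIMEOUT:-300}  # seconds to wait for the Ray cluster to form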

@ahg-g

ahg-g commented Feb 8, 2025

@robertgshaw2-redhat do you have a minute to look into this one? It will allow us to easily launch multi-node inference workloads, see examples at https://github.com/kubernetes-sigs/lws/tree/main/docs/examples/vllm


Right now we have to rebuild the container to include this script, but it would be great to have it embedded in the main vLLM container.

Successfully merging this pull request may close these issues.

[Feature]: Add ray cluster start logic to vllm container for multi host inference with leaderworkerset