
feat: New pattern for llama-3.2-1b on inf2.xlarge #679

Status: Open · wants to merge 1 commit into base: main

Conversation

@shivam-dubey-1 (Contributor) commented on Oct 21, 2024

What does this PR do?

🛑 Please open an issue first to discuss any significant work and flesh out details/direction. When we triage the issues, we will add labels to the issue like "Enhancement", "Bug" which should indicate to you that this issue can be worked on and we are looking forward to your PR. We would hate for your time to be wasted.
Consult the CONTRIBUTING guide for submitting pull-requests.

Motivation

#667

This pattern can be used for demos and for understanding how small LLMs run on an inf2.xlarge instance.

More

  • Yes, I have tested the PR using my local account setup (test evidence is provided under Additional Notes)
  • Mandatory for new blueprints: Yes, I have added an example to support my blueprint PR
  • [ ] Mandatory for new blueprints: Yes, I have updated the website/docs or website/blog section for this feature
  • Yes, I ran `pre-commit run -a` with this PR (see the link for installing pre-commit locally)

For Moderators

  • [ ] E2E test successfully completed before merge?

Additional Notes

  • Model used: "meta-llama/Llama-3.2-1B"
  • Dockerfile and other files were cloned from vllm-rayserve-inf2

pip3 install --no-cache-dir awscli neuronx-cc==2.* --pre torch-neuronx==2.1.* torchvision transformers-neuronx pynvml ray

# Copy patch file before cloning and patching vllm
COPY patches/vllm_v0.5.0_neuron.patch /tmp/vllm_v0.5.0_neuron.patch
A Collaborator commented:

Do we still need to use the patch?
Please see vllm-project/vllm#7166, as the issue appears to be fixed now.

COPY patches/vllm_v0.5.0_neuron.patch /tmp/vllm_v0.5.0_neuron.patch

# Clone vllm, apply patch, and install
RUN git clone --depth 1 --branch v0.5.0 https://github.com/vllm-project/vllm.git && \
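
The quoted `RUN` instruction is truncated at the line continuation. For context, a plausible completion of the clone/patch/install step might look like the following sketch; the `cd`, `git apply`, and editable-install steps are assumptions, not lines from the PR:

```dockerfile
# Sketch (assumed continuation of the truncated RUN above):
# clone vLLM v0.5.0, apply the Neuron patch copied earlier, install from source.
RUN git clone --depth 1 --branch v0.5.0 https://github.com/vllm-project/vllm.git && \
    cd vllm && \
    git apply /tmp/vllm_v0.5.0_neuron.patch && \
    pip3 install --no-cache-dir -e . && \
    rm /tmp/vllm_v0.5.0_neuron.patch
```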
A Contributor commented:

I think we need vLLM >= 0.6.2 for Llama 3.2? See https://github.com/vllm-project/vllm/releases/tag/v0.6.2
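
To illustrate the reviewer's point: Llama 3.2 support landed in vLLM v0.6.2, so a build pinned to v0.5.0 (as in this PR's `git clone --branch v0.5.0`) would be too old. A small hypothetical guard, purely illustrative and not part of the PR, could compare versions:

```python
# Hypothetical version guard (names are illustrative, not from the PR).
def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '0.6.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def vllm_supports_llama_3_2(vllm_version: str) -> bool:
    """Llama 3.2 architectures were added in vLLM v0.6.2, per the review comment."""
    return parse_version(vllm_version) >= parse_version("0.6.2")
```

For example, `vllm_supports_llama_3_2("0.5.0")` is False, while `"0.6.2"` and later pass.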

Labels: none yet
Projects: none yet
3 participants