Add Bamba Model #10909

Status: Open. fabianlim wants to merge 65 commits into main (changes shown from 60 commits).

Commits (65)

62181d5  initial pr without tp fix (fabianlim, Dec 5, 2024)
51bc78c  fix casting in rms norm gated (fabianlim, Dec 5, 2024)
81b93b4  TP fix (fabianlim, Dec 5, 2024)
0f93e4a  fix mamba scan invalid address (fabianlim, Dec 8, 2024)
742ae79  some fixes and remove unused kernels (fabianlim, Dec 12, 2024)
b2dc5ca  fmt + lint (fabianlim, Dec 12, 2024)
9ad9e20  more comments (fabianlim, Dec 12, 2024)
25bf381  initial fix for chunked prefill (incomplete) (fabianlim, Dec 12, 2024)
43ce07c  improve comments (fabianlim, Dec 12, 2024)
80f14b5  do not attach seq_idx to attn_metadata (fabianlim, Dec 12, 2024)
6b8ac49  activate initial states for chunked prefill (fabianlim, Dec 12, 2024)
d788db6  reuse softplus and remove triton2 remark (fabianlim, Dec 13, 2024)
400db27  add comment on weight loader and format (fabianlim, Dec 13, 2024)
bda8ea7  rename test_jamba to test_hybrid and got rid of test_bamba (fabianlim, Dec 13, 2024)
66078d6  Merge remote-tracking branch 'upstream/main' into bamba-pr (fabianlim, Dec 16, 2024)
a74de9f  update bamba to ishybrid and support pp (fabianlim, Dec 16, 2024)
b44caa7  lint (fabianlim, Dec 16, 2024)
8cf3644  add unit test for mamba ssd (fabianlim, Dec 16, 2024)
e375b40  fix lint (fabianlim, Dec 16, 2024)
dcbae7b  full chunked-prefill fix (sans unit tests) (fabianlim, Dec 21, 2024)
2597105  format and add cont batch unit tests (will need more cases) (fabianlim, Dec 23, 2024)
db5eea5  fix kernel tests and add more chunked prefill cases (fabianlim, Dec 23, 2024)
dfbcb16  bound adjustment (fabianlim, Dec 23, 2024)
7913009  bound adjustment (fabianlim, Dec 26, 2024)
9c5d045  lint errors (fabianlim, Dec 26, 2024)
6bc9dac  Add permalink correction from @tlrmchlsmth (fabianlim, Jan 3, 2025)
6d02e85  improved comment for segsum, add more sizes for test_mamba_chunk_scan… (fabianlim, Jan 3, 2025)
e5882f2  rename and comment functions, add more sizes for test_mamba_chunk_sca… (fabianlim, Jan 3, 2025)
6d6fa86  addressed comments on mamba_mixer2.py (fabianlim, Jan 3, 2025)
773dd80  replace with get_rope (fabianlim, Jan 3, 2025)
63f5340  rope scaling (fabianlim, Jan 4, 2025)
89e36d8  fixes (fabianlim, Jan 6, 2025)
7a4ae96  zero out ssm states (fabianlim, Jan 7, 2025)
a9e149c  fix tests (sans updating dev checkpoint) (fabianlim, Jan 7, 2025)
5c9f48d  not replacing dev model for now (fabianlim, Jan 11, 2025)
55647b1  update requirements (fabianlim, Jan 13, 2025)
2342bc0  remove extraneous comment (fabianlim, Jan 14, 2025)
011c141  update test (fabianlim, Jan 14, 2025)
503bc42  fix lint (fabianlim, Jan 15, 2025)
312cf1d  fix lint (fabianlim, Jan 15, 2025)
c1db743  fix requirements-test (fabianlim, Jan 15, 2025)
c956a30  Mamba2 changes from #10909 (tlrmchlsmth, Jan 16, 2025)
17923ad  Get Mamba2 working! (tlrmchlsmth, Jan 16, 2025)
4183d45  Add integration test -- something is wrong!! (tlrmchlsmth, Jan 17, 2025)
5377644  format (tlrmchlsmth, Jan 17, 2025)
39f55d1  fixes (tlrmchlsmth, Jan 17, 2025)
dd31f19  update test registry, fixes (fabianlim, Jan 16, 2025)
e2e5aac  Fix for conv state shape and update placeholder_attn (tlrmchlsmth, Jan 19, 2025)
bc1b8af  back out placeholder_attn changes (tlrmchlsmth, Jan 19, 2025)
9db0dd5  make seq_idx to chunk indices more efficient (fabianlim, Jan 20, 2025)
cd89283  WIP debugging, restore local mamba and placeholder_attn changes (tlrmchlsmth, Jan 20, 2025)
9a838a3  Integration tests are now green (tlrmchlsmth, Jan 20, 2025)
be8318e  remove bamba-specific files (tlrmchlsmth, Jan 20, 2025)
f34d434  Merge branch 'main' into tms/mamba2 (tlrmchlsmth, Jan 27, 2025)
a65e2cb  Handle grouping in Mixer2RMSNormGated (tlrmchlsmth, Jan 30, 2025)
0d4bb0f  debug cruft (tlrmchlsmth, Jan 30, 2025)
74f6088  Remove codestral integration test (tlrmchlsmth, Jan 30, 2025)
95583b8  Merge branch 'tms/mamba2' into bamba-pr (fabianlim, Feb 1, 2025)
b72389c  update mamba_cache (fabianlim, Feb 1, 2025)
10d75eb  remove changes to requirements (fabianlim, Feb 1, 2025)
5aea1e6  revert changes (fabianlim, Feb 1, 2025)
2ee8d07  Merge remote-tracking branch 'upstream/main' into bamba-pr (fabianlim, Feb 1, 2025)
043e006  fix lint (fabianlim, Feb 1, 2025)
7e4ce4f  fix lint (fabianlim, Feb 1, 2025)
8219480  more reverts (fabianlim, Feb 1, 2025)
requirements-common.txt (2 changes: 1 addition & 1 deletion)

Collaborator:
Could you merge in latest main? We've already landed this change.

Author:
Ya, actually yesterday I reverted this file and took it from latest main, but somehow the diff still shows up in GitHub. The version on the left shown by GitHub is actually old.

Author:
OK, if I merge in latest main it seems fine.
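
For reference, a minimal sketch of the merge workflow described above. The upstream remote and the bamba-pr branch names come from the merge commits in this PR; assuming the contributor's fork is the origin remote (an assumption, not shown on this page):

    # bring the PR branch up to date with main so GitHub rediffs against it
    git fetch upstream
    git checkout bamba-pr
    git merge upstream/main    # resolve any conflicts, then commit
    git push origin bamba-pr   # GitHub refreshes the PR diff after the push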

@@ -5,7 +5,7 @@ requests >= 2.26.0
 tqdm
 blake3
 py-cpuinfo
-transformers >= 4.45.2  # Required for Llama 3.2 and Qwen2-VL.
+transformers >= 4.48.2  # Required for Bamba.
 tokenizers >= 0.19.1  # Required for Llama 3.
 protobuf # Required by LlamaTokenizer.
 fastapi >= 0.107.0, < 0.113.0; python_version < '3.9'
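
Since the new pin exists specifically for Bamba, a quick way to sanity-check a local environment is to confirm the installed transformers version and that the Bamba classes import. A minimal sketch, assuming Bamba ships in transformers 4.48 under the name BambaConfig (the class name is an assumption, not shown in this diff):

    pip install "transformers>=4.48.2"
    python -c "import transformers; print(transformers.__version__)"
    # BambaConfig is the assumed class name; the import fails on
    # transformers versions that predate Bamba support
    python -c "from transformers import BambaConfig; print(BambaConfig())"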
requirements-test.txt (42 changes: 38 additions & 4 deletions)

Collaborator:
I don't think this file should be changed now that #12599 has been merged. Could you revert this file?

Also wondering why this has more changes than in #12599 - did you run into any additional issues that required these additional changes?

Author (fabianlim, Feb 1, 2025):
Same as above: the version on the left is old, the right is from latest main.

@@ -2,7 +2,7 @@
 # This file is autogenerated by pip-compile with Python 3.12
 # by the following command:
 #
 #    python3.12 -m piptools compile requirements-test.in -o requirements-test.txt
 #
 absl-py==2.1.0
     # via rouge-score
@@ -106,9 +106,17 @@ dnspython==2.7.0
 docutils==0.16
     # via awscli
 einops==0.8.0
-    # via -r requirements-test.in
+    # via
+    #   -r requirements-test.in
+    #   encodec
+    #   vector-quantize-pytorch
+    #   vocos
+einx==0.3.0
+    # via vector-quantize-pytorch
 email-validator==2.2.0
     # via pydantic
+encodec==0.1.1
+    # via vocos
 evaluate==0.4.3
     # via lm-eval
 fastparquet==2024.11.0
@@ -125,6 +133,8 @@ filelock==3.16.1
     #   triton
 fonttools==4.54.1
     # via matplotlib
+frozendict==2.4.6
+    # via einx
 frozenlist==1.5.0
     # via
     #   aiohttp
@@ -159,6 +169,7 @@ huggingface-hub==0.26.2
     #   timm
     #   tokenizers
     #   transformers
+    #   vocos
 idna==3.10
     # via
     #   anyio
@@ -261,6 +272,8 @@ numpy==1.26.4
     #   cupy-cuda12x
     #   datasets
     #   decord
+    #   einx
+    #   encodec
     #   evaluate
     #   fastparquet
     #   genai-perf
@@ -283,6 +296,7 @@
     #   torchvision
     #   transformers
     #   tritonclient
+    #   vocos
 nvidia-cublas-cu12==12.4.5.8
     # via
     #   nvidia-cudnn-cu12
@@ -455,6 +469,7 @@ pyyaml==6.0.2
     #   responses
     #   timm
     #   transformers
+    #   vocos
 ray[adag]==2.40.0
     # via -r requirements-test.in
 redis==5.2.0
@@ -517,6 +532,7 @@ scipy==1.13.1
     #   scikit-learn
     #   sentence-transformers
     #   statsmodels
+    #   vocos
 sentence-transformers==3.2.1
     # via -r requirements-test.in
 sentencepiece==0.2.0
@@ -540,7 +556,9 @@ sqlitedict==2.1.0
 statsmodels==0.14.4
     # via genai-perf
 sympy==1.13.1
-    # via torch
+    # via
+    #   einx
+    #   torch
 tabledata==1.3.3
     # via pytablewriter
 tabulate==0.9.0
@@ -568,12 +586,21 @@ torch==2.5.1
     #   -r requirements-test.in
     #   accelerate
     #   bitsandbytes
+    #   encodec
     #   lm-eval
     #   peft
     #   sentence-transformers
     #   tensorizer
     #   timm
     #   torchaudio
     #   torchvision
+    #   vector-quantize-pytorch
+    #   vocos
 torchaudio==2.5.1
     # via
     #   -r requirements-test.in
+    #   encodec
+    #   vocos
 torchvision==0.20.1
     # via timm
 tqdm==4.66.6
@@ -584,13 +611,15 @@
     #   lm-eval
     #   nltk
     #   peft
+    #   pqdm
     #   sentence-transformers
     #   tqdm-multiprocess
     #   transformers
 tqdm-multiprocess==0.0.11
     # via lm-eval
-transformers==4.47.0
+transformers==4.48.2
     # via
+    #   -r requirements-test.in
     #   genai-perf
     #   lm-eval
     #   peft
@@ -615,6 +644,7 @@ typing-extensions==4.12.2
     #   huggingface-hub
     #   librosa
     #   mistral-common
+    #   pqdm
     #   pydantic
     #   pydantic-core
     #   torch
@@ -626,6 +656,10 @@ urllib3==2.2.3
     #   requests
     #   responses
     #   tritonclient
+vector-quantize-pytorch==1.21.2
+    # via -r requirements-test.in
+vocos==0.1.0
+    # via -r requirements-test.in
 word2number==1.1
     # via lm-eval
 xxhash==3.5.0
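
As its header comment records, requirements-test.txt is autogenerated by pip-compile rather than hand-edited, so dependency changes like the ones above belong in requirements-test.in. A minimal sketch of regenerating the lock file, reusing the exact command stored in the file (installing pip-tools first is an assumption about the local setup):

    pip install pip-tools
    python3.12 -m piptools compile requirements-test.in -o requirements-test.txt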