Question about use_tensor_cores = True or False
#520
Comments
Hi @sleepwalker2017, thanks for doing the benchmark!
Hi @yzh119, I'm also looking into the decode kernel implementation. I tested it with the llama3-8B configuration (num_qo_heads=32, num_kv_heads=8), batch size = 64, and seq_len = 1024. I noticed that the
My questions:
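For reference, here is roughly how this GQA decode setup can be reproduced (a minimal sketch; the `BatchDecodeWithPagedKVCacheWrapper` API and the `plan`/`run` names follow the current flashinfer docs and were `begin_forward`/`forward` in older releases, and head_dim=128 is assumed, so adjust for your version):

```python
import torch
import flashinfer

# llama3-8B-style GQA decode: 32 query heads sharing 8 KV heads.
batch_size, seq_len = 64, 1024
num_qo_heads, num_kv_heads, head_dim = 32, 8, 128
page_size = 16
pages_per_seq = seq_len // page_size
total_pages = batch_size * pages_per_seq

workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
# Flip use_tensor_cores to compare the two decode code paths.
wrapper = flashinfer.BatchDecodeWithPagedKVCacheWrapper(
    workspace, "NHD", use_tensor_cores=True
)

# Contiguous page layout: each request owns pages_per_seq full pages.
kv_indptr = torch.arange(batch_size + 1, dtype=torch.int32, device="cuda") * pages_per_seq
kv_indices = torch.arange(total_pages, dtype=torch.int32, device="cuda")
kv_last_page_len = torch.full((batch_size,), page_size, dtype=torch.int32, device="cuda")

wrapper.plan(
    kv_indptr, kv_indices, kv_last_page_len,
    num_qo_heads, num_kv_heads, head_dim, page_size,
    pos_encoding_mode="NONE", data_type=torch.float16,
)

q = torch.randn(batch_size, num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
kv_cache = torch.randn(total_pages, 2, page_size, num_kv_heads, head_dim,
                       dtype=torch.float16, device="cuda")
out = wrapper.run(q, kv_cache)  # [batch_size, num_qo_heads, head_dim]
```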
Hi, I'm benchmarking flashinfer on H100, running attention for the decoding stage.
I use q_head = kv_head = 40, the standard multi-head attention configuration for llama 13B.
I tried both use_tensor_cores=True and use_tensor_cores=False and got nearly the same performance.
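The timing loop looks roughly like this (a minimal sketch for the per-request shape; it assumes head_dim=128 and that `single_decode_with_kv_cache` exposes a `use_tensor_cores` flag as in the flashinfer Python docs):

```python
import torch
import flashinfer

# llama-13B-style MHA decode step: q_head == kv_head == 40, head_dim 5120/40 = 128.
num_heads, head_dim, seq_len = 40, 128, 1024
q = torch.randn(num_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(seq_len, num_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(seq_len, num_heads, head_dim, dtype=torch.float16, device="cuda")

def bench(use_tc: bool, iters: int = 100) -> float:
    # Warm up, then time with CUDA events so we measure GPU kernel time,
    # not Python launch overhead.
    for _ in range(10):
        flashinfer.single_decode_with_kv_cache(q, k, v, use_tensor_cores=use_tc)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        flashinfer.single_decode_with_kv_cache(q, k, v, use_tensor_cores=use_tc)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per call

print(f"use_tensor_cores=True : {bench(True):.3f} ms")
print(f"use_tensor_cores=False: {bench(False):.3f} ms")
```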
My questions:
Is this result reliable? When use_tensor_cores=True, does it invoke the prefill kernel?
I profiled tensor core usage for both kernels, but found that they both use tensor cores. Why is that?