Hi. I've recently tested the llama implementation (cpp, pytorch) on the blip2_vicuna_instruct model. It uses the ViT/Q-Former's embedding as a prefix_soft_embedding, which is fed into vicuna together with the prompt's token_ids.
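For context, the input construction I'm describing can be sketched roughly as follows (a minimal NumPy sketch under my own assumptions; the names, shapes, and `build_inputs` helper are illustrative, not FasterTransformer's actual API, and position encoding is omitted):

```python
import numpy as np

# Sketch of the prefix-soft-prompt input construction: the Q-Former's output
# embeddings are prepended to the looked-up token embeddings before decoding.
# All names and shapes here are illustrative assumptions.

vocab_size, hidden = 32, 8
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, hidden))

def build_inputs(prefix_soft_embedding, token_ids):
    """Prepend the soft prompt to the token embeddings.

    prefix_soft_embedding: (prefix_len, hidden) floats from the ViT/Q-Former
    token_ids:             (seq_len,) prompt token ids
    Returns a (prefix_len + seq_len, hidden) array fed to the decoder.
    """
    token_embeds = embedding_table[token_ids]  # plain embedding lookup
    return np.concatenate([prefix_soft_embedding, token_embeds], axis=0)

prefix = rng.normal(size=(4, hidden))  # hypothetical prefix length
ids = np.array([1, 5, 7])
inputs = build_inputs(prefix, ids)
print(inputs.shape)  # (7, 8)
```

In the real pipeline the decoder would also apply position encoding over the combined sequence, which is what I expect InputIdsEmbeddingLookupPosEncodingSoftPrompt to handle.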
From my test results I found that:
When testing vicuna-13b alone, FT outputs text of the same quality as Hugging Face's.
However, when token_ids are fed together with the prefix_soft_embedding, there is a noticeable drop in quality.
For example,
image:
prompt: Describe the environment in which the product in the middle of the image is located
pytorch output:
. The product in the middle of this image is located within a refrigerator, surrounded by various fruits and vegetables on both sides as well
FT output:
. The refrigerator is open and filled with food.
The refrigerator is open and filled with food.
Does anyone have experience with FasterTransformer's prefix soft prompt feature? What might cause this issue? Could it be a usage mistake? I need some hints to debug it. I have checked that InputIdsEmbeddingLookupPosEncodingSoftPrompt's output is correct.
Thanks in advance!