
Token pruning #2

Open

xiewende opened this issue Feb 3, 2023 · 0 comments

Comments


xiewende commented Feb 3, 2023

Very good work!

But after a brief reading of the VisionTransformerDiffPruning model code in vit_l2_3keep_senet.py, I was puzzled by the token pruning. Token pruning implies a reduction in the number of tokens (Figure 2 in the paper), but I didn't find any reduction in the number of tokens in VisionTransformerDiffPruning. Instead, the code builds a mask over informative tokens and placeholders, obtains a representative token based on that mask, and then concatenates it with x (x = torch.cat((x, represent_token), dim=1)). This is what confuses me: the number of tokens in the feature x is not reduced. Does this affect the efficiency?
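For reference, here is a minimal sketch of what I understand the relevant step to be doing (simplified; only `represent_token` and the `torch.cat` call come from the repo code, the mask shapes and the averaging are my own assumptions):

```python
import torch

def masked_prune_step(x, keep_mask):
    """Sketch of my understanding, not the repo's exact logic.

    x:         (B, N, C) token features
    keep_mask: (B, N, 1) decision mask, 1 = informative token, 0 = placeholder

    Rather than dropping tokens, the uninformative ones are summarized into a
    single representative token that is concatenated back onto x, so the
    sequence length becomes N + 1 instead of shrinking.
    """
    # weighted average of the tokens the mask marks as uninformative (assumed)
    drop_weight = 1.0 - keep_mask                                # (B, N, 1)
    represent_token = (x * drop_weight).sum(dim=1, keepdim=True) / (
        drop_weight.sum(dim=1, keepdim=True) + 1e-6
    )                                                            # (B, 1, C)

    # informative tokens are kept via the mask; placeholders stay in place
    x = x * keep_mask                                            # (B, N, C)

    # the representative token is appended -> token count is N + 1, not reduced
    x = torch.cat((x, represent_token), dim=1)                   # (B, N+1, C)
    return x
```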

Maybe I have misunderstood, and I hope you can give a detailed explanation.

I look forward to your reply.
