Can Not Use Attention Maps Padding #411

Open
spacegoing opened this issue Aug 28, 2024 · 0 comments

spacegoing commented Aug 28, 2024

I'm training the 93x480p model with videos shorter than 93 frames, so I need to pad the frames and mask the attention weights accordingly.
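For context, this is the kind of padding I have in mind (a minimal sketch; `pad_clip_and_mask` is a hypothetical helper I wrote for illustration, not code from this repo):

```python
import torch

def pad_clip_and_mask(video: torch.Tensor, target_frames: int = 93):
    """Hypothetical helper (not from this repo): zero-pad a clip with
    fewer than target_frames frames and return a boolean attention
    mask marking which frames are real."""
    t = video.shape[0]  # video: (T, C, H, W) with T <= target_frames
    pad = video.new_zeros((target_frames - t, *video.shape[1:]))
    padded = torch.cat([video, pad], dim=0)
    mask = torch.zeros(target_frames, dtype=torch.bool, device=video.device)
    mask[:t] = True  # True = real frame, False = padding
    return padded, mask
```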

However,

```python
if self.batch_size == 1 or self.group_frame or self.group_resolution:
    assert torch.all(attention_mask.bool())
```

As can be seen here, the collate function asserts that the attention mask contains no padding.

I would love to know why the mask is asserted to be all True, and how we are supposed to pad the attention mask when `drop_short_ratio < 1.0`, i.e. when training on clips of mixed lengths.
