Support flash-attention custom call #8

Merged
merged 1 commit into feature/auto_reorder on May 8, 2024

Conversation

ApsarasX

@ApsarasX ApsarasX commented May 7, 2024

Add custom calls for the following four flash-attention variants:

  • __gpu$flash_attn_fwd
  • __gpu$flash_attn_varlen_fwd
  • __gpu$flash_attn_bwd
  • __gpu$flash_attn_varlen_bwd

They correspond to Dao-AILab/flash-attention's forward and backward kernels, in both the fixed-length and varlen forms.

Be sure to invoke these flash-attention custom calls via torch_xla, as they enforce strict requirements on the shapes of their inputs and outputs (see the sketch below).
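
For illustration only, here is a minimal sketch of how the forward custom call might be invoked from torch_xla through a StableHLO custom-call helper. None of this is part of the PR: the helper (`torch_xla.experimental.stablehlo_custom_call`), the output layout (attention output plus softmax log-sum-exp), the tensor shapes, and the empty `backend_config` are all assumptions; the actual contract is whatever the custom-call implementation enforces.

```python
# Illustrative sketch (not from this PR): calling __gpu$flash_attn_fwd
# from torch_xla via a StableHLO custom call. Shapes, dtypes, output
# arity, and backend_config are assumptions.
import torch
import torch_xla.core.xla_model as xm
from torch_xla.experimental.stablehlo_custom_call import stablehlo_custom_call

device = xm.xla_device()
batch, seqlen, num_heads, head_dim = 2, 1024, 16, 64

# flash-attention expects fp16/bf16 inputs laid out as
# [batch, seqlen, num_heads, head_dim].
q = torch.randn(batch, seqlen, num_heads, head_dim,
                dtype=torch.float16, device=device)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Assumed outputs: attention output (same shape as q) and the softmax
# log-sum-exp needed by the backward pass.
out, softmax_lse = stablehlo_custom_call(
    (q, k, v),
    "__gpu$flash_attn_fwd",
    output_shapes=[list(q.shape), [batch, num_heads, seqlen]],
    output_dtypes=[torch.float16, torch.float32],
    backend_config="",  # assumed: serialized attention params (scale, causal flag, ...)
)
```

The same pattern would apply to `__gpu$flash_attn_varlen_fwd` and the two backward calls, with the operand list and output shapes adjusted to match what each custom call requires.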

@ApsarasX ApsarasX merged commit 5628d13 into feature/auto_reorder May 8, 2024
2 of 4 checks passed
1 participant