Releases · facebookresearch/xformers
v0.0.6
Fixed
- Fix self-attention optimization not being triggered and a broken residual path [#119]
- Improve speed by not using contiguous Tensors when not needed [#119]
Added
- Attention mask wrapper [#113] (see the sketch after this list)
- ViT comparison benchmark [#117]
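The attention mask wrapper bundles a boolean mask with an additive (float) view that can be added directly to attention scores. The snippet below is a minimal illustrative sketch of that idea in plain PyTorch; the class name `AdditiveAttentionMask` and its methods are hypothetical and are not the actual xformers API.

```python
import torch

class AdditiveAttentionMask:
    """Illustrative wrapper (hypothetical, not the xformers API): keeps a
    boolean mask and exposes an additive float view for attention scores."""

    def __init__(self, bool_mask: torch.Tensor):
        # True where attention is allowed, False where it is masked out
        self.bool_mask = bool_mask

    @classmethod
    def make_causal(cls, seq_len: int) -> "AdditiveAttentionMask":
        # Lower-triangular mask: position i may only attend to positions j <= i
        return cls(torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool)))

    def to_additive(self, dtype: torch.dtype = torch.float32) -> torch.Tensor:
        # 0 on allowed positions, -inf on masked ones, so it can be added to scores
        additive = torch.zeros(self.bool_mask.shape, dtype=dtype)
        return additive.masked_fill(~self.bool_mask, float("-inf"))
```

Typical use would be `scores = scores + mask.to_additive()` before the softmax, which keeps the mask representation separate from the attention kernel itself.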
v0.0.5
Fixes the 0.0.4 pip package. The next release will be better in that we'll try to expose pre-built binaries.
v0.0.4
- Fixing causality not being respected by the scaled dot-product attention (a causal-mask sketch follows this list)
- Fixing Favor causal trainability
- Enabling FusedLayerNorm by default if Triton is available
- Fixing Favor with fp16
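For context on the causality fix: causal scaled dot-product attention masks out future positions so that position i cannot attend to any j > i. The following is a minimal, self-contained sketch of the technique in plain PyTorch, not the xformers implementation.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, causal: bool = False):
    # q, k, v: (batch, heads, seq, dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if causal:
        # Upper-triangular (strictly above the diagonal) positions are future
        # tokens; set their scores to -inf so softmax gives them zero weight
        seq = scores.size(-1)
        future = torch.triu(torch.ones(seq, seq, device=scores.device), diagonal=1).bool()
        scores = scores.masked_fill(future, float("-inf"))
    attn = torch.softmax(scores, dim=-1)
    return attn @ v

# Example: q = k = v = torch.randn(2, 4, 16, 32)
# out = scaled_dot_product_attention(q, k, v, causal=True)
```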
v0.0.3
[0.0.3] - 2021-11-01
Fixed
- Nystrom causal attention [#75]
v0.0.2
[0.0.2] - 2021-11-01
Fixed
- More robust blocksparse [#24]
Added
- Rotary embeddings [#32] (see the sketch after this list)
- More flexible layernorm [#50]
- More flexible blockfactory config (key deduplication)
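Rotary embeddings encode position by rotating pairs of query/key dimensions through position-dependent angles, so relative offsets show up directly in the dot product. The sketch below shows the common "split halves" variant in plain PyTorch for illustration only; the function name and layout are assumptions, not the xformers API.

```python
import torch

def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (batch, heads, seq, dim) with an even dim; applied to queries and keys
    _, _, seq, dim = x.shape
    half = dim // 2
    # One rotation frequency per dimension pair
    freqs = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    angles = torch.arange(seq, dtype=torch.float32)[:, None] * freqs[None, :]  # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) pair by its position-dependent angle
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```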