Roadmap #6

Open · justheuristic opened this issue Mar 15, 2022 · 0 comments

[DONE] Version 0.3: it works decently enough in all use cases

  • extract from private debug-1 codebase
  • make base transformer general enough for ViT/SWIN/VGGTransformer integration
  • make into a pip-installable package

Version 0.4-0.6: it works

  • memory-efficient attention kernel (@krunt + @xtinkt); a chunked-attention sketch follows this list
  • feature-rich inference (with prefixes, various sampling & beam search variants); a generation example follows this list
  • pre-trained model zoo (convert sahaj2, calm, debug0 & 1, rubert)
  • demonstrable memory savings & performance comparison
  • efficient SOTA vision architectures
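
For context on the attention-kernel item: the usual memory-efficient trick (in the spirit of Rabe & Staines, 2021) is to process queries in chunks so the full seq_len × seq_len score matrix is never materialized at once. Below is a minimal PyTorch sketch of that chunking idea, not the actual kernel planned here; `chunked_attention` and the chunk size are illustrative assumptions:

```python
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    """Attention that materializes scores one query chunk at a time.

    q, k, v: [batch, heads, seq_len, head_dim]. Peak activation memory
    for the scores is [chunk_size, seq_len] instead of [seq_len, seq_len].
    """
    scale = q.shape[-1] ** -0.5
    outputs = []
    for q_chunk in q.split(chunk_size, dim=2):
        # [batch, heads, chunk, seq_len] score block for this chunk only
        scores = torch.matmul(q_chunk, k.transpose(-2, -1)) * scale
        probs = torch.softmax(scores, dim=-1)
        outputs.append(torch.matmul(probs, v))  # [batch, heads, chunk, head_dim]
    return torch.cat(outputs, dim=2)
```

A production kernel would additionally chunk over keys with a streaming softmax and fuse the loop on GPU; that is where the savings beyond this sketch come from.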
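As a yardstick for the feature-rich inference item, HF transformers' `generate()` already exposes prefix-conditioned sampling and beam search; the example below shows the target behavior (gpt2 is only a stand-in checkpoint, not one of the models listed above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Condition generation on a user-supplied prefix.
prefix = tokenizer("The roadmap for this library", return_tensors="pt")

# Nucleus sampling from the prefix.
sampled = model.generate(**prefix, do_sample=True, top_p=0.9, max_new_tokens=32)

# Beam search over the same prefix.
beamed = model.generate(**prefix, num_beams=4, max_new_tokens=32)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
print(tokenizer.decode(beamed[0], skip_special_tokens=True))
```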

Finale:

  • If it works well, consider merging into HF transformers