[QST] Triton MLIR #3
Hi Jerome, nice to hear from you. This seems like a very interesting project. It is a bit beyond my area of expertise, as I have only dabbled in the internals of MLIR generation and lowering. My only worry is that this area seems to be evolving quite quickly, so there may not yet be stable enough foundations to document for lay users. @Jokeren might have more to add.
I doubt the Triton developers will find the time to craft documentation or develop tutorials. However, there are a number of Chinese users out there who have been diving deep into Triton's code, breaking down every compiler pass in their blogs (written in Chinese). Honestly, their enthusiasm has taken me by surprise... I've glanced over a few of these blogs, and they're top-notch.
Wait, do you read Chinese? I meant that they are written in Chinese -- or do you plan to translate?
Plan to translate -- the wonders of multilingual LLMs (or Google Translate, for that matter). I am also Chinese.
That's great. Feel free to search with the keyword "triton" on zhihu.com :) That's what I'm referring to.
Thanks -- yes, I've come across many of those already. There are many deep dives into all things CUDA / Cutlass there as well, haha. Are you planning on teaching a course on compilers?
Maybe not. They didn't assign me to a compiler course, unfortunately.
@srush
Always appreciate your wonderful OSS educational contributions!
I'm relatively familiar with `CUDA` and `triton`, but less so with machine learning compilers, and am interested in getting into the weeds of `triton`'s compilation pipeline.

I've come across a few resources for learning `MLIR`, as well as related projects such as `TVM` (which has a comprehensive set of tutorials / learning materials spearheaded by Tianqi Chen of CMU), but have yet to bridge the gap from basic `MLIR` to something on the scale of `triton`.

The overarching motivation -- other than the fact that ML compilers are super interesting :) -- is that in a world of increased demand for ML training / inference but limited GPU (NVIDIA) supply, the ability to write backend-agnostic code is ever more important.
A few questions:

- What is a good path for learning `MLIR` incrementally, ideally building from the basics up to something like a toy `triton`, and, more ambitiously, understanding enough of the `triton` backend to be able to contribute new optimization passes?

I'd be willing to do as much of the heavy lifting as needed:

- Tracing the compilation of the `triton` tutorials, starting with `vec-add` (a minimal sketch follows this list).
- Building a `C++` debugging setup that steps through the `MLIR` pipeline and provides greater visibility -- and hackability -- than simply observing the output of `MLIR_ENABLE_DUMP` (see the second sketch below).
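For concreteness, here is the kind of starting point I mean. The kernel below closely follows Triton's official `01-vector-add` tutorial; the variable names and block size are my own choices, not anything canonical.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                            # index of this program instance
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)  # element indices for this block
    mask = offsets < n_elements                            # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(98432, device="cuda")
y = torch.rand(98432, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```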
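And here is the visibility I'd like to improve on. This is a sketch assuming a Triton 2.x build, where a kernel launch returns a handle whose `.asm` dict holds the IR after each stage (the exact keys may vary across versions), and where setting `MLIR_ENABLE_DUMP=1` before compilation dumps the IR before every MLIR pass:

```python
# Assumes the vector-add setup above; the .asm keys ("ttir", "ttgir",
# "llir", "ptx") are what I've seen in Triton 2.x and may differ elsewhere.
import os
os.environ["MLIR_ENABLE_DUMP"] = "1"  # must be set before the kernel is compiled

compiled = add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
print(compiled.asm["ttir"])   # Triton IR (triton-dialect MLIR)
print(compiled.asm["ttgir"])  # Triton GPU IR, after layouts are assigned
print(compiled.asm["llir"])   # LLVM IR
print(compiled.asm["ptx"])    # final PTX
```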
cc @Jokeren