What's the difference from Liger-Kernel? #356

Open
CharlesJhonson opened this issue Dec 12, 2024 · 3 comments

Comments

@CharlesJhonson

Please describe your question

I noticed that there is an open-source project called Liger-Kernel (https://github.com/linkedin/Liger-Kernel), which is also a library of Triton operators. What is the difference between it and gems?

@iclementine
Collaborator

iclementine commented Dec 13, 2024

We implement a broader range of kernels, many of which are "standard" general-purpose kernels:

  1. Elementwise, reduce, and scan kernels for general n-dimensional tensors.
  2. Kernels that are also general-purpose but somewhat challenging to write in Triton, mainly search-and-sort related, such as sort, topk, and unique.

We also have kernels targeting neural networks, such as RoPE, which are common in LLM-related libraries. A minimal sketch of the first category is given below.
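
As an illustration of the elementwise category, here is a minimal Triton kernel over a flattened tensor. This is only a sketch under simplifying assumptions (contiguous CUDA tensors, a fixed block size); it is not how gems actually implements its general n-dimensional elementwise kernels.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements  # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Assumes x and y are contiguous CUDA tensors of the same shape.
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK=1024)
    return out
```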

@iclementine
Collaborator

Also, we are working on supporting more accelerators within a single-source library.

@CharlesJhonson
Author

CharlesJhonson commented Dec 16, 2024

OK, thank you very much!
In addition, I noticed that many operators only have forward-pass implementations, such as RMS_norm. If they are used for LLM training, how should the backward pass be implemented? @iclementine
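
For reference, here is a minimal pure-PyTorch sketch of what an RMSNorm backward has to compute, assuming the standard definition y = w * x / sqrt(mean(x^2, last dim) + eps). The class name RMSNormFn is hypothetical, and this is not the library's implementation; a Triton version would fuse the same math into a single kernel.

```python
import torch

class RMSNormFn(torch.autograd.Function):
    # Hypothetical reference implementation with a hand-written backward.

    @staticmethod
    def forward(ctx, x, weight, eps=1e-6):
        # r = 1 / sqrt(mean(x^2) + eps), reduced over the last dim
        inv_rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
        ctx.save_for_backward(x, weight, inv_rms)
        return x * inv_rms * weight

    @staticmethod
    def backward(ctx, grad_out):
        x, weight, inv_rms = ctx.saved_tensors
        n = x.shape[-1]
        g = grad_out * weight  # upstream gradient scaled by the weight
        # d/dx of x * r:  r * g - (r^3 / n) * x * sum(g * x, last dim)
        dx = inv_rms * g - (inv_rms ** 3 / n) * x * (g * x).sum(dim=-1, keepdim=True)
        # d/dw: accumulate over all leading (batch) dims
        dw = (grad_out * x * inv_rms).reshape(-1, n).sum(dim=0)
        return dx, dw, None  # no gradient for eps

# Sanity check against autograd (double precision for gradcheck):
x = torch.randn(4, 8, dtype=torch.double, requires_grad=True)
w = torch.randn(8, dtype=torch.double, requires_grad=True)
assert torch.autograd.gradcheck(RMSNormFn.apply, (x, w))
```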
