[Feature] Empower Open Source Community Operation with LLMs #840
Comments
I think we could add a translation feature. When students run into problems while doing the OSS101 homework, some give up on commenting because issues require communicating in English. This happens to me too: sometimes I don't know how to express something in English, so I have to translate it in other software and paste it back.
IMO, it is important to think about how we are going to integrate LLMs with our stack. What I can think of is providing context to the LLM. Take the translation feature as an example: we can use other tools to translate, but where an LLM can truly help is translating based on context — knowing which programming languages the project uses and giving better results. This applies not just to translation, but also to features like introductions, Q&A, etc.
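To make the idea concrete, here is a minimal sketch of how repository context could be folded into a translation prompt. The field names and the example repository data are assumptions for illustration; in the plugin this context would come from the GitHub API, and the resulting prompt would be sent to whatever chat-completion endpoint the user configures.

```python
# Sketch: build a context-aware translation prompt for an LLM.
# The repository context (name, languages, description) is hypothetical
# example data; in the plugin it would come from the GitHub API.

def build_translation_prompt(text: str, repo_context: dict, target_lang: str = "English") -> str:
    """Compose a prompt asking the LLM to translate `text` while
    preserving technical terms relevant to the repository."""
    languages = ", ".join(repo_context.get("languages", []))
    return (
        f"You are helping a contributor on the GitHub project "
        f"'{repo_context.get('name', 'unknown')}' ({languages}).\n"
        f"Project description: {repo_context.get('description', '')}\n"
        f"Translate the following comment into {target_lang}, keeping "
        f"code identifiers and technical terms unchanged:\n\n{text}"
    )

prompt = build_translation_prompt(
    "这个函数在 React 18 下会报错",
    {"name": "hypercrx", "languages": ["TypeScript", "React"],
     "description": "A browser extension for GitHub insights"},
)
print(prompt)
```

The point of the sketch is only that the prompt carries project metadata, which is what distinguishes an LLM-based translation from a generic tool like DeepL.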
In fact, this is a basic question-and-answer feature: the user inputs a paragraph and OSSGPT uses the LLM to translate it. Or do you have a better way to present the feature you mentioned — a more refined idea?
Yes, this should be one of OSSGPT's basic functions. I have tried other translation plug-ins, such as DeepL, and they do achieve very good results.
I have an idea: could OSS-GPT answer questions based on the information in a repository's issues? When I use other people's open source code, I encounter various errors, and some of them have already been discussed in the issues. If I could hand my problem directly to OSS-GPT and it found solutions along with links to the original issues, I think that would be very helpful, especially for large open source projects.
Essentially, that is information retrieval over a knowledge base: when building the knowledge base, include the issues as well.
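As a sketch of the retrieval step described above: a real implementation would embed issue text into a vector store and search by semantic similarity, but a simple word-overlap score illustrates the same retrieve-then-link flow. The issue data below is made-up example content.

```python
# Sketch: retrieve the most relevant past issue for a user's question.
# A production version would use embeddings and a vector store; word
# overlap stands in for the retrieval step here. Issue data is hypothetical.

def retrieve_issue(question: str, issues: list[dict]) -> dict:
    """Return the issue whose title+body shares the most words with the question."""
    q_words = set(question.lower().split())

    def score(issue: dict) -> int:
        text = (issue["title"] + " " + issue["body"]).lower()
        return len(q_words & set(text.split()))

    return max(issues, key=score)

issues = [
    {"number": 101, "title": "Build fails on Node 18",
     "body": "npm install error with node-gyp",
     "url": "https://github.com/example/repo/issues/101"},
    {"number": 102, "title": "Dark mode colors wrong",
     "body": "chart colors unreadable in dark mode",
     "url": "https://github.com/example/repo/issues/102"},
]

best = retrieve_issue("npm install fails with node-gyp error", issues)
print(best["number"], best["url"])  # answer can link back to the original issue
```

Keeping the `url` field alongside each retrieved issue is what lets OSS-GPT cite the original discussion rather than answer from nothing.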
The component we use when developing OSS-GPT is
Recently, I've tried various approaches to integrating large models into projects. Unlike other projects, this browser-plugin project ran into a lot of difficulties. After several attempts and practical trials, I built a small demo and settled on a plan for moving the project forward.

(Demo recordings: iShot_2024-09-22_19.19.39.mp4, iShot_2024-09-22_19.20.57.mp4)

Next, the ProChat component will be introduced. Although parts of ProChat need to be modified due to the constraints of a browser plugin, it is still quite convenient overall, and some LLM applications already use this component. Introducing it means upgrading React to version 18 or higher, which will require some code refactoring.

We provide the application services, but the model itself must be supplied by the user — including the model's address, API key, and model name — so there must be a place for users to enter and parse this information.

A simple Q&A feature would not be very innovative. Since we are developing a plugin centered on GitHub, we should look for scenarios on the GitHub platform where LLMs can automate tasks. On GitHub, automation involves GitHub Apps and GitHub bots, which will be used for various automated tasks. Future features include automatic code annotation, repository-level Q&A, issue automation (automatic creation, summarization, etc.), and PR code review. These are just initial ideas, but they give us a direction. The priority is to first establish the infrastructure and then advance gradually. Next, the project will proceed with the following steps:
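Since the model address, API key, and model name must all come from the user, the settings form needs a validation step before any request is sent. A minimal sketch of that check, assuming the field names `endpoint`, `api_key`, and `model` (the actual settings schema in the extension may differ):

```python
# Sketch: validate user-supplied model settings before the plugin calls
# the LLM service. Field names (endpoint, api_key, model) are assumptions;
# the real settings form in the extension may use different names.

from urllib.parse import urlparse

def validate_model_settings(settings: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    parsed = urlparse(settings.get("endpoint", ""))
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append("endpoint must be a valid http(s) URL")
    if not settings.get("api_key"):
        problems.append("api_key is required")
    if not settings.get("model"):
        problems.append("model name is required")
    return problems

ok = validate_model_settings({
    "endpoint": "https://api.openai.com/v1",
    "api_key": "sk-test",
    "model": "gpt-4o-mini",
})
print(ok)  # []
```

Returning a list of problems (instead of raising on the first one) lets the settings UI surface every missing field at once.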
The above tasks are expected to be completed around October. Stay tuned… We also welcome first-year graduate students who are interested in participating. In the future, @Xsy41 will be joining the project.
How can we combine LLMs with Hypercrx's existing functional positioning? Currently we have built many visual dashboards for the data provided by OpenDigger. We have done plenty of visualization work, but we have been missing the step that follows visualization: data analysis. Since we already show these data trends visually, we could use large models to analyze them. And similar to viewing a specific month in a dashboard — besides clicking that month to see the corresponding data — could users type a question (or use other input forms) and have the LLM process the data and return the answer directly? I think this is also worth considering.
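One way to feed a metric trend to an LLM (or answer simple questions directly) is to pre-digest the monthly series into a short textual summary that goes into the prompt. A minimal sketch, using made-up example values; real data would come from OpenDigger's per-metric JSON files.

```python
# Sketch: summarize an OpenDigger-style monthly metric series so an LLM
# (or the user) can reason about the trend. The values below are made-up
# example data, not real OpenDigger output.

def summarize_trend(metric_name: str, series: dict) -> str:
    """Describe the overall direction and the biggest month-over-month change."""
    months = sorted(series)
    values = [series[m] for m in months]
    direction = "rising" if values[-1] > values[0] else "falling or flat"
    deltas = [(months[i], values[i] - values[i - 1]) for i in range(1, len(values))]
    peak_month, peak_delta = max(deltas, key=lambda d: abs(d[1]))
    return (f"{metric_name} is {direction} over {months[0]}..{months[-1]}; "
            f"largest change was {peak_delta:+d} in {peak_month}")

activity = {"2024-01": 120, "2024-02": 135, "2024-03": 98, "2024-04": 160}
print(summarize_trend("activity", activity))
```

The same summary string can either be shown directly in the dashboard or prepended to the user's question as context for the LLM, so the model reasons over a compact description instead of raw numbers.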
Description
In the discussion on community building and sustainable development of CRX (X-lab2017/open-wonderland#422), it was proposed to use the capabilities of LLMs to empower open source community operations.
Therefore, this issue is opened to discuss possible features and solutions.
Everyone is welcome to propose relevant ideas!