Commit 4086cbc: Merge pull request #778 from Loghadhith/gsoc-loghadhith ("GSOC: Dashbot"); parents 7c6a866 and 87ababb

2 files changed: +221 −0 lines

2 files changed

+221
-0
lines changed
Lines changed: 221 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,221 @@
# GSoC Proposal

## DashBot - AI-Powered API Assistant
## About

1. **Full Name**: Loghadhith J R
2. **Email**: [email protected]
3. **Phone**: +91-6380572051
4. **Discord Handle**: Loghadhith
5. **Home Page**: nil
6. **Blog**: nil
7. **GitHub Profile Link**: [github.com/Loghadhith](https://github.com/Loghadhith/)
8. **Twitter**: nil
9. **LinkedIn**: nil
10. **Time Zone**: Indian Standard Time (IST, UTC+5:30)
11. **Link to a Resume**: [Resume](https://drive.google.com/file/d/19UzV2vpNRMRkDMYV2NbjoB06mdw_GJgR/view?usp=sharing)
---
## University Info

- **University Name**: Chennai Institute of Technology
- **Program**: B.E. in Computer Science and Engineering (Artificial Intelligence and Machine Learning)
- **Year**: 3rd Year (started in 2022)
- **Expected Graduation Date**: May 2026
---
## Motivation & Past Experience

1. **Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?**

   No.

2. **What is your one project/achievement that you are most proud of? Why?**

   The project I'm most proud of is THREADS, an open-source e-commerce assistant that uses similarity search and voice assistance to guide non-tech-savvy users. It taught me how to design complex architectures around LLMs, and I'm proud of how it combines technology with real-world user experience to make online shopping more accessible.

3. **What kind of problems or challenges motivate you the most to solve them?**

   I'm motivated by challenges that simplify complex systems and improve the user experience, especially for non-tech-savvy users. Solving problems with new technologies such as LLMs excites me because it lets me build impactful, accessible solutions.

4. **Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?**

   Yes, I'll be working on GSoC full-time.

5. **Do you mind regularly syncing up with the project mentors?**

   Not at all; I'm happy to sync up regularly.

6. **What interests you the most about API Dash?**

   Its lightweight, developer-focused design and its potential for smart automation.

7. **Can you mention some areas where the project can be improved?**

   Adding LLM-powered features for automated insights, testing, and integration.
---
## Project Proposal Information

### 1. Proposal Title

**DashBot - AI-Powered API Assistant for API Dash**

### 2. Abstract

API Dash simplifies API interaction, but repetitive tasks such as debugging, documentation, and testing still demand developer effort. DashBot introduces a natural-language AI assistant inside API Dash that helps explain, debug, document, test, and visualize APIs. By leveraging LLMs and context-aware routing, DashBot will boost developer productivity and reduce cognitive load.

### 3. Detailed Description

DashBot will be a modular AI assistant built into API Dash to handle natural-language queries and automate critical but repetitive developer tasks. It will support the following core features:

- **Response Explanation:** Converts raw API responses into easy-to-understand English.
- **Debugging Assistant:** Diagnoses failed API calls using status codes and error messages.
- **Documentation Generator:** Produces Markdown/OpenAPI-style docs automatically.
- **Test Generator:** Creates integration/unit test code based on observed API behavior.
- **Visualizer:** Generates plots from JSON responses.
- **Frontend Integration Code Generator:** Outputs working code snippets for frameworks such as React, Vue, and Flutter.

DashBot will include:

- An **NLP + prompt generation layer** to parse natural language.
- A **modular routing system** to direct tasks to the appropriate LLM-backed modules.
- Integration hooks to read request/response data from API Dash's store.
- UI components that embed DashBot seamlessly inside API Dash's existing interface.

The entire system will be modular and extensible, so future contributions and new features can plug into the core easily.
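To keep the core extensible, the feature modules could share one small common interface behind a registry. A minimal Python sketch of that idea (`DashBotModule`, `register`, and `dispatch` are illustrative names, not existing API Dash code):

```python
# Illustrative sketch of a pluggable DashBot module registry.
# All names here are hypothetical; API Dash's actual store/APIs may differ.

class DashBotModule:
    """Base class each feature module (explainer, debugger, ...) extends."""
    name = "base"

    def handle(self, request_data: dict) -> str:
        raise NotImplementedError

_REGISTRY: dict[str, DashBotModule] = {}

def register(module: DashBotModule) -> None:
    """Add a module so the router can dispatch to it by name."""
    _REGISTRY[module.name] = module

def dispatch(name: str, request_data: dict) -> str:
    """Hand a task to the named module, or report that it is missing."""
    if name not in _REGISTRY:
        return f"No module named '{name}' is installed."
    return _REGISTRY[name].handle(request_data)

class Explainer(DashBotModule):
    """Toy explainer module used to demonstrate the plug-in shape."""
    name = "explain"

    def handle(self, request_data: dict) -> str:
        status = request_data.get("status", "?")
        return f"The API returned status {status}."

register(Explainer())
```

New features then plug in by subclassing and registering, without touching the core router.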
---
### 4. Architecture Diagram

![DashBot architecture diagram](https://github.com/user-attachments/assets/ef3d1d7c-5c58-4183-864a-8711bcd1f62e)
---
### 5. Weekly Timeline (12 Weeks)

| Week  | Activities |
|-------|------------|
| 1     | Finalize architecture, scope, and module breakdown |
| 2     | Build DashBot UI inside API Dash; develop NLP intent router |
| 3–4   | Implement Orchestration Layer + LLM Gateway |
| 4–5   | Develop core modules: Explainer, Debugger, Test Generator |
| 6–7   | Build Visualizer & Frontend Code Generator modules |
| 8–9   | Documentation generator + context-aware prompt enhancements |
| 10–11 | Logging, feedback collection, performance improvements |
| 11    | End-to-end testing with real APIs; edge-case handling |
| 12    | Final polish, documentation, and MVP release 🎉 |
---
#### **Week 1 – Planning & Architecture**

- Community bonding period.
- Finalize the overall scope based on the GSoC proposal.
- Define the system architecture: DashBot core, AI modules, orchestration layer, LLM integration, UI interface.
- Plan the modular structure for each feature (e.g., explainer, visualizer).
- Choose the tech stack and finalize third-party API providers (OpenAI, Ollama, etc.).
- Set up the project skeleton: repository structure, linting, CI, and code-quality tools.
---
#### **Week 2 – UI & Intent Recognition**

- Design and integrate DashBot's user interface within API Dash:
  - Natural-language input box
  - Output/response pane
  - Quick-action suggestions
- Implement an **Intent Router**:
  - Parse natural language to route user queries (e.g., "Explain this response", "Generate test")
  - Map intents to module calls using regex or lightweight NLP libraries
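As a sketch, the regex path could start as a small ordered rule list before any heavier NLP library is wired in (the patterns and intent names below are illustrative):

```python
import re

# Hypothetical intent patterns; a real router could swap these for a
# lightweight NLP classifier later. Order matters: first match wins.
INTENT_PATTERNS = [
    (re.compile(r"\bexplain\b", re.I), "explain"),
    (re.compile(r"\b(debug|why.*fail)\b", re.I), "debug"),
    (re.compile(r"\btests?\b", re.I), "generate_test"),
    (re.compile(r"\b(doc|document)", re.I), "generate_docs"),
    (re.compile(r"\b(plot|chart|visuali[sz]e)\b", re.I), "visualize"),
]

def route_intent(query: str) -> str:
    """Map a natural-language query to a module name, defaulting to chat."""
    for pattern, intent in INTENT_PATTERNS:
        if pattern.search(query):
            return intent
    return "chat"
```

Unmatched queries fall through to a generic chat intent, so the router never blocks a request.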
---
#### **Weeks 3–4 – Orchestration Layer & LLM Gateway**

- Develop the **Orchestration Layer** to:
  - Receive user input
  - Call the appropriate AI module
  - Return the result to the frontend
- Build a **pluggable LLM gateway** to support multiple model providers (OpenAI, Claude, Gemini, Ollama).
- Add retry logic, error handling, and token-usage monitoring.
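A minimal Python sketch of the pluggable-gateway idea with a toy retry loop (the class and method names are assumptions, not an existing API):

```python
import time

class LLMProvider:
    """Minimal provider interface; concrete adapters would wrap OpenAI, Ollama, etc."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class LLMGateway:
    """Routes prompts to a named provider with simple retry/backoff.
    The retry policy here is illustrative."""

    def __init__(self, max_retries: int = 3, backoff: float = 0.5):
        self.providers: dict[str, LLMProvider] = {}
        self.max_retries = max_retries
        self.backoff = backoff

    def add_provider(self, name: str, provider: LLMProvider) -> None:
        self.providers[name] = provider

    def complete(self, provider_name: str, prompt: str) -> str:
        """Try the provider up to max_retries times with exponential backoff."""
        provider = self.providers[provider_name]
        last_error = None
        for attempt in range(self.max_retries):
            try:
                return provider.complete(prompt)
            except Exception as exc:  # real code would catch provider-specific errors
                last_error = exc
                time.sleep(self.backoff * (2 ** attempt))
        raise RuntimeError(f"All {self.max_retries} attempts failed") from last_error
```

Keeping providers behind one interface is what lets users switch between OpenAI, Claude, Gemini, or a local Ollama model without touching the modules that consume completions.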
---
#### **Weeks 4–5 – Core Modules: Explainer, Debugger, Test Generator**

- **Explainer**: Parses the API response and uses an LLM to output plain-English explanations.
- **Debugger**: Analyzes failed requests based on status code, headers, and error messages, then suggests likely causes and fixes.
- **Test Generator**: Produces unit/integration test cases using popular frameworks (e.g., Jest, Postman, or Python's `unittest`).
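Part of the Debugger can be purely rule-based: a table of well-known status codes consulted before spending tokens on an LLM call. A sketch under that assumption (the hint messages are illustrative):

```python
# Hypothetical status-code heuristics the Debugger could check before
# falling back to an LLM. Only unmatched cases would cost tokens.
KNOWN_CAUSES = {
    400: "Bad request: check the request body and query parameters.",
    401: "Unauthorized: the auth token is missing or expired.",
    403: "Forbidden: the credentials lack permission for this resource.",
    404: "Not found: verify the URL path and resource ID.",
    429: "Rate limited: slow down or add retry with backoff.",
    500: "Server error: the problem is likely on the API side.",
}

def diagnose(status_code: int, error_body: str = "") -> str:
    """Return a quick hint for common failures, else defer to the LLM."""
    hint = KNOWN_CAUSES.get(status_code)
    if hint:
        return f"{status_code}: {hint}"
    if 500 <= status_code < 600:
        return f"{status_code}: server-side failure; retry later."
    return f"{status_code}: no rule matched; escalating to the LLM. Body: {error_body[:100]}"
```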
---
#### **Weeks 6–7 – Visualizer & Frontend Code Generator**

- **Visualizer**:
  - Extracts structured data from JSON
  - Generates basic plots (line, bar, pie) using lightweight charting libraries
  - Allows user customization (fields, chart type)
- **Frontend Code Generator**:
  - Converts the API call structure into ready-to-use code snippets
  - Supports React (fetch/axios), Flutter (Dio/http), Vue, etc.
  - Customizes auth headers, parameters, and error handling
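The generator's simplest target could be a JavaScript `fetch` template rendered from the stored request. A hedged sketch of that template approach (the real implementation would live in Dart inside API Dash and support more frameworks):

```python
import json

def to_fetch_snippet(method, url, headers=None, body=None):
    """Render an API call as a JavaScript fetch() snippet.
    A bare-bones template; axios, Dio, etc. would get their own templates."""
    options = {"method": method.upper()}
    if headers:
        options["headers"] = headers
    if body is not None:
        # fetch() expects the body as a string, so serialize it here.
        options["body"] = json.dumps(body)
    return (
        f"fetch({json.dumps(url)}, {json.dumps(options, indent=2)})\n"
        "  .then(res => res.json())\n"
        "  .then(data => console.log(data));"
    )
```

The same request data would feed every framework template, so adding a new target is just adding a new renderer.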
---
#### **Weeks 8–9 – Documentation & Context Enhancements**

- **Documentation Generator**:
  - Automatically converts API calls and responses into structured Markdown or OpenAPI-style documentation.
  - Supports headers, body, query params, and sample responses.
- Enhance prompts and modules using **context-aware input**:
  - Include request metadata (method, headers, params) in the prompt
  - Maintain a short-term context history (the last few interactions)
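A minimal sketch of the Markdown path (the field names are illustrative; the real generator would read them from API Dash's request store):

```python
import json

def to_markdown_doc(method, url, params=None, sample_response=None):
    """Render a request/response pair as a Markdown documentation section."""
    lines = [f"## `{method.upper()} {url}`", ""]
    if params:
        # Render query parameters as a small table.
        lines += ["### Query Parameters", "",
                  "| Name | Example |", "|------|---------|"]
        lines += [f"| `{k}` | `{v}` |" for k, v in params.items()]
        lines.append("")
    if sample_response is not None:
        lines += ["### Sample Response", "", "```json",
                  json.dumps(sample_response, indent=2), "```"]
    return "\n".join(lines)
```

An OpenAPI-style exporter would reuse the same extracted fields, only with a YAML/JSON serializer instead of a Markdown template.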
---
#### **Weeks 10–11 – Logging, Feedback & Edge Cases**

- Implement:
  - A logging system to track DashBot usage and performance
  - An anonymous feedback system for collecting suggestions/ratings
- Improve:
  - Prompt quality (edge-case prompts)
  - Error handling for null/undefined/malformed responses
- Optimize:
  - Cache repeated LLM outputs for performance
  - Limit token usage for cost efficiency
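The caching idea can be sketched as a prompt-keyed store; here an in-memory dict stands in for the Redis/SQLite backends mentioned later:

```python
import hashlib

class PromptCache:
    """Cache LLM outputs keyed by a hash of the prompt, so repeated
    questions about the same response don't cost extra tokens."""

    def __init__(self):
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(prompt: str) -> str:
        return hashlib.sha256(prompt.encode()).hexdigest()

    def get_or_compute(self, prompt: str, compute) -> str:
        """Return the cached answer, or call `compute(prompt)` once and store it."""
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = compute(prompt)
        self._store[key] = result
        return result
```

The hit/miss counters double as the raw data for the logging and token-usage metrics above.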
---
#### **Week 12 – Final Polish & MVP Release**

- Finalize the UX for all modules.
- Run the full test suite (unit, integration, manual flows).
- Clean up the codebase, remove dev tools, and finalize docs.
- Release the DashBot MVP with full documentation and usage instructions.
- Prepare the final demo and a handoff plan (if needed).
---
### 6. Tech Stack

#### 1. **NLP Module (Intent + Entity Extraction)**
- **spaCy**: For tokenization and entity recognition.
- **Hugging Face Transformers**: For intent detection with pre-trained models (e.g., BERT).
- **Rasa/Dialogflow**: For conversational AI (optional).

#### 2. **AI Feature Modules (Response Explainer, Debugging, etc.)**
- **OpenAI API**: For text generation (GPT-3/4; Codex for code).
- **LangChain**: For chaining and orchestrating LLM calls.
- **GPT-J**: Open-source alternative for local processing.

#### 3. **Orchestration Layer**
- **Celery**: For managing asynchronous tasks.
- **Riverpod** or **Provider** (Flutter equivalents of Redux): For state management and conversation context.
- **BullMQ**: For background task management (Node.js backend, if any).

#### 4. **Knowledge Base / Contextual Data Store**
- **Elasticsearch**: For indexing and searching API specs and logs.
- **MongoDB**: Flexible NoSQL database for storing user interactions and configurations.
- **Redis**: For quick access to session and context data.
- **SQLite**: Lightweight local database for storing interaction history.

#### 5. **Logging & Feedback**
- **Sentry**: For error monitoring and real-time feedback.
- **LogRocket**: For frontend user-session logging (Flutter can integrate via a native bridge).
- **Google Analytics / Mixpanel**: For usage analytics and tracking.

#### 6. **UI & Frontend (Flutter)**
- **Flutter**: Cross-platform UI framework for the DashBot integration.
- **Riverpod** or **Provider**: For state management in Flutter.
- **Tailwind CSS**: If required in Flutter web apps (via Flutter Web integration).
- **Chart.js / Plotly**: For generating visualizations.

#### 7. **Code Integration & Test Generation**
- **Swagger/OpenAPI**: For API documentation generation.
- **Jest/Mocha**: For generating and running unit tests (via a Node.js backend, if any).
- **Yeoman**: For scaffolding code templates (if necessary).

#### 8. **Other**
- **Redis**: For quick caching and session data.
- **PostgreSQL**: If relational data is needed.