JavaGroom leverages Large Language Models (LLMs) to assist in grooming development tasks. By 'gisting' Java files and packages within a large Java codebase, it provides context-aware assistance for code navigation, task grooming, and project understanding.

Groom4J: LLM-Powered Task Planning for Java Projects

🎯 Bridging the Gap in Enterprise Java Development

Groom4J is designed to address a critical challenge in enterprise Java development: helping new or entry-level developers navigate and understand complex, established codebases. Unlike general-purpose coding assistants, Groom4J is tailored specifically for Java projects in enterprise environments, where the primary hurdles are often not in writing code, but in:

  • Understanding the big picture: Grasping how different components of a large Java application interact.
  • Connecting the dots: Identifying relationships between various packages, classes, and services.
  • Focusing on the development process: Tailored to the unique challenges of large-scale, complex Java projects, where coding is just one step in the middle of the enterprise development process.
  • Navigating internal complexity: Making sense of company-specific architectures, patterns, and legacy code.

Techniques:

  • Gisting Files and Packages: Provides a hierarchical understanding of the project structure.
  • Cached Prompts: Use prompt caching (with Anthropic models) to reduce cost and improve response time.
  • Decomposing Questions: Break the question or task down into smaller questions.
  • Reviewing the Conversation: After providing an answer, the LLM is asked to review the conversation and decide whether it is time to stop.
  • Tiered LLM Models: Use different model tiers for different purposes; for example, a more powerful model for answering questions and a cheaper model for reviewing the conversation.
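The tiered-model idea can be sketched as a small dispatch table. This is an illustrative sketch only; the model names and role labels below are hypothetical, not the project's actual configuration:

```python
# Hypothetical tier table: a stronger (more expensive) model for answering
# questions, a cheaper one for reviewing the conversation.
TIERS = {
    "answer": "claude-3-5-sonnet-20240620",  # powerful model for Q&A
    "review": "claude-3-haiku-20240307",     # cheap model for review passes
}

def pick_model(role: str) -> str:
    """Return the model configured for a role, falling back to the answer tier."""
    return TIERS.get(role, TIERS["answer"])
```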

Inspiration

More background on this project can be found in what inspired this project

Getting Started

This project uses Python 3.11+ and Poetry to manage dependencies. You can run it directly on your system or use Docker for easier setup, especially if you're not familiar with Python environments.

Set LLM and API Key

You can choose to use OpenAI, Gemini, or Anthropic LLMs.

cp application_example.yml application.yml

Depending on the LLM provider, you need to set the corresponding API key in the application.yml file.

llm:
  use: anthropic
  
anthropic:
  api_key: ...
  model: claude-3-5-sonnet-20240620

Running with Poetry (for Python developers)

If you have Python and Poetry installed:

  1. Install dependencies:

    poetry install
  2. Run the tool:

poetry run python gist_files.py path/to/the/Java/Project/Repo
poetry run python gist_packages.py path/to/the/Java/Project/Repo
poetry run python grooming_task.py path/to/the/Java/Project/Repo --task="Your task description"

Running with Docker (recommended for Java developers)

If you prefer using Docker or are not familiar with Python environments:

Install Docker on your system if you haven't already.

Use the provided run-read-agent.sh script to run the tool:

# To gist files
./run-read-agent.sh gist-files /path/to/your/java/project

# To gist packages
./run-read-agent.sh gist-packages /path/to/your/java/project

# To groom a task
./run-read-agent.sh groom-task --task="Your task description" /path/to/your/java/project

The script will automatically build the Docker image if needed and run the tool inside a container using Poetry. The Java project directory is mounted into the container, allowing the tool to access and analyze the project files.

Note: Make sure you have read access to the Java project directory you're trying to analyze.

About Tracing (Optional)

This project supports the open-source tracing tool Langfuse (https://github.com/langfuse/langfuse):

git clone https://github.com/langfuse/langfuse
docker compose up -d

Visit http://localhost:3000 to sign up, create a project, and create an API key. Then update the application.yml file with the API key.

langfuse:
  secret_key: ...
  public_key: ...
  host: http://localhost:3000

Example Java Project

For testing purposes, a sample Java project "travel-service-dev" is included in the "data" folder. It is an open-source project available on GitHub at https://github.com/ilkeratik/travel-service.

Travel Service Java Project Structure

Try the Example Java Project

The "gist" files are already created for the "data/travel-service-dev" project, under the ".gist" folder. You can test grooming with the commands below:

poetry run python grooming_task.py ./data/travel-service-dev --task="add a new field 'mayor' to city, for the name of the mayor of the city"

poetry run python grooming_task.py ./data/travel-service-dev --task="add a new feature to search city by name"

poetry run python grooming_task.py ./data/travel-service-dev --task="refactor the Rest API to GraphQL"

or if you have a specific question to ask

poetry run python tell_me_about.py ./data/travel-service-dev/ --question="how data flow from database to the API"

or use a dedicated script to find information in API projects; it generates a markdown file (api_note.md) under the ".gist" folder.

For example, the generated answer to the question above ("how data flow from database to the API") can be found at data/travel-service-dev/.gist/tell_me_about/how_data_flow_from_database_to_the_api.md

If you want to summarize all the API endpoints:

poetry run python summarize_api.py ./data/travel-service-dev

The summaries can be found at ./data/travel-service-dev/.gist/tell_me_about/api_notes.md

If you want to trace an API endpoint from end to end, run this command

poetry run python trace_api_request.py data/travel-service-dev --api-request=/api/v1/city/New%20York

The details of the API endpoint will be saved to ./data/travel-service-dev/.gist/tell_me_about/endpoint_api_v1_city_new_20york.md

Use on Your Project

Step One: Gist the Code Files

poetry run python gist_files.py path/to/the/Java/Project/Repo

It will take a while for all the Java files to be gisted. Afterwards, you will see a text file "code_files.txt" generated under the ".gist" folder within the Java project.
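Conceptually, this step first enumerates the Java sources and then asks the LLM to summarize each one. The enumeration part can be sketched as follows (the function name is ours; the actual script may differ):

```python
import os

def collect_java_files(repo_root: str) -> list[str]:
    """Walk the repository and return relative paths of all .java files.
    This sketches only the enumeration step; the gist text for each file
    is produced separately by an LLM."""
    java_files = []
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        for name in filenames:
            if name.endswith(".java"):
                full = os.path.join(dirpath, name)
                java_files.append(os.path.relpath(full, repo_root))
    return sorted(java_files)
```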

Step Two: Gist the Packages

poetry run python gist_packages.py path/to/the/Java/Project/Repo

After the process is done, you will see a file "package_notes.txt" created in the ".gist" folder.

Optional: Gist the API

If your project is an API project, there is a dedicated script that creates a markdown file describing the endpoints of the API.

poetry run python gist_api.py path/to/the/Java/Project/Repo

After the process is done, you will see a markdown file "api_notes.md" created in the ".gist" folder.

Q&A

If you have a specific question about the codebase, you can use the command below to inspect it:

poetry run python tell_me_about.py path/to/the/Java/Project/Repo --question="..."

More info can be found in tell_me_about

Groom Coding Task

Development tasks and stories are often bigger than a single Q&A; you can use the command below for "grooming" purposes.

poetry run python grooming_task.py path/to/the/Java/Project/Repo --task="..."

Groom A JIRA issue

In practice, developers often work on development stories from Jira. In this case, you can set up the necessary credentials so the tool can read the Jira story directly.

Make sure to set the JIRA properties in the .env file first.

jira:
  server: https://[host].atlassian.net
  username: [email protected]
  api_token: ...

poetry run python grooming_task.py path/to/the/Java/Project/Repo --jira=[issue key]
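Once the credentials are in place, turning a Jira issue into a task description amounts to reading its summary and description fields. A sketch, assuming an issue object shaped like those returned by the Python `jira` client (the helper name is ours, not the project's):

```python
def issue_to_task(issue) -> str:
    """Format a Jira issue into a grooming task string.
    `issue` is expected to expose .fields.summary and .fields.description,
    as issue objects returned by the Python `jira` client do."""
    summary = issue.fields.summary or ""
    description = issue.fields.description or ""
    return f"{summary}\n\n{description}".strip()
```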

Examples

[Tracing screenshot: gisting files]

[Tracing screenshot: gisting packages]

[Tracing screenshot: asking a question]

Common Questions

Is this similar to the "planning" phase in agentic coding assistants?

Yes, this project essentially covers the "planning" phase found in AI coding assistants like Open-Devin. We use the term "grooming" to align with common development team workflows, potentially integrating with Sprint planning or JIRA. Our focus is on supporting entry-level or junior engineers often assigned maintenance tasks in enterprise environments, where challenges frequently relate to domain knowledge, edge cases, integration testing, and dependency management.

How does this compare to RAG or GraphRAG?

While similar to Retrieval-Augmented Generation (RAG) in that it indexes documents first, our approach is specifically tailored for Java projects. It leverages the intrinsic structure of Java codebases, similar to GraphRAG's concept of connecting information nodes. We create a graph of Java programs, packages, and projects to provide context-aware assistance.

Doesn't this reinvent existing Java code indexing tools?

While Java IDEs have long had powerful code indexing capabilities (e.g., Eclipse's JDT Core Index), our approach leverages the natural language understanding of LLMs. This allows for more flexible and intuitive interactions with the codebase, potentially reducing the need for complex integration with legacy indexers.

Why not use function calling for file and package requests?

Function calling is on our roadmap for future improvements. The current approach using prompts and parsing works well across various LLMs (except for some limitations with Gemini). It provides a simple, consistent interface for requesting file and package information:

[I need access files: <file>file1 name</file>,<file>file2 name</file>]
[I need info about packages: <package>package name</package>]
[I need to search <keyword>keyword</keyword> in the project]
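Extracting these tagged requests from an LLM response is plain pattern matching. A sketch (the function name is ours, not the project's):

```python
import re

def parse_requests(response: str) -> dict[str, list[str]]:
    """Extract file, package, and keyword requests from an LLM response,
    using the tag format shown above."""
    return {
        "files": re.findall(r"<file>(.*?)</file>", response),
        "packages": re.findall(r"<package>(.*?)</package>", response),
        "keywords": re.findall(r"<keyword>(.*?)</keyword>", response),
    }
```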

Will this be necessary when LLMs can process entire repositories?

While LLMs are evolving rapidly, handling private, enterprise-scale codebases with full context remains challenging. Until we can effectively fine-tune LLMs on internal repositories (considering security and privacy concerns), systems like this serve as crucial "assistants to LLMs." They provide structured, context-aware information to help LLMs navigate and understand complex project structures more effectively.

How does this tool benefit development teams?

  1. Context-Aware Assistance: By gisting files and packages, it provides LLMs with a hierarchical understanding of the project structure.
  2. Efficient Onboarding: Helps new team members quickly grasp project architecture and dependencies.
  3. Consistent Approach: Encourages a standardized method for task planning and code navigation across the team.
  4. Integration Potential: Designed to work alongside existing tools and processes, enhancing rather than replacing current workflows.

To Do

Handle Code Changes

Instead of re-indexing (gisting) the whole project, we need to find the diff between commits and update only the gists of files changed since the last successful indexing.
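One way to find the affected gists is to ask git which files changed since the last successfully indexed commit. A sketch, assuming the last indexed commit hash is tracked somewhere (our assumption, not an existing feature):

```python
import subprocess

def java_files_from_diff(diff_output: str) -> list[str]:
    """Filter `git diff --name-only` output down to Java sources."""
    return [p for p in diff_output.splitlines() if p.endswith(".java")]

def changed_java_files(repo_root: str, last_indexed_commit: str) -> list[str]:
    """Return .java files changed between the last indexed commit and HEAD,
    so only their gists need regenerating."""
    out = subprocess.run(
        ["git", "-C", repo_root, "diff", "--name-only",
         f"{last_indexed_commit}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return java_files_from_diff(out)
```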

Multi-Agent

Implement a "critical thinking" phase

Before proposing solutions, prompt the LLM to critically evaluate its own assumptions and initial ideas. Ask it to consider alternative explanations for the observed behavior.

Implement a review and refinement stage

After the initial analysis, prompt the LLM to review its own work. Ask it to identify potential oversights or areas that need more investigation.

Integration with Internal Knowledge Bases

Large organizations often have extensive internal documentation, wikis, and presentations that explain various aspects of their systems. For example, set up a RAG (Retrieval-Augmented Generation) based system, potentially using GraphRAG, to index those documents and provide a query interface. Then integrate this tool with that RAG system for Q&A.

Leveraging Approved Pull Requests (PRs)

Pull requests are a gold mine of information on how to implement new features or fix issues. We can apply a similar strategy to index those PRs and their code changes.

Leverage Integration Testing

Integration tests are a valuable resource for understanding how a project interacts with its dependencies and how data flows between systems. In today's microservices-oriented world, these tests often encapsulate critical knowledge about system interactions.
