The WS Ai Research Agent is designed to simulate a world-class researcher. It conducts detailed research on any topic and produces factual results using the tools available to it, combining web search, web scraping, and AI-driven summarization.
- Search Tool: Searches for a given query and returns relevant results.
- Web Scraping Tool: Scrapes content from a given URL and, if the content is lengthy, summarizes it based on the given objective.
- AI-Driven Summarization: Uses the GPT-3.5 model to produce concise summaries of long texts (a minimal sketch of these tools follows this list).
- Agent Initialization: Configures the agent with the aforementioned tools and rules to ensure factual and reliable results.
- Streamlit GUI: Users can interact with the agent through a Streamlit GUI, or comment it out and use the FastAPI code to expose the agent as a web service.
- FastAPI Integration: Provides an API endpoint where users can POST research queries and receive researched content in return, creating a web service that can be integrated with Zapier or Zapier-like tools to build workflows.
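
Here is a minimal sketch of what the search, scraping, and summarization tools might look like. It assumes the Serper search API and the Browserless content API (matching the `SERP_API_KEY` and `BROWSERLESS_API_KEY` variables below) and the OpenAI Python client; the endpoints, length threshold, and summarization prompt are illustrative, not the project's actual code:

```python
import json
import os

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search(query: str) -> str:
    """Search the web via the Serper API and return the raw JSON results."""
    response = requests.post(
        "https://google.serper.dev/search",
        headers={
            "X-API-KEY": os.environ["SERP_API_KEY"],
            "Content-Type": "application/json",
        },
        data=json.dumps({"q": query}),
    )
    return response.text


def scrape_website(objective: str, url: str) -> str:
    """Scrape a URL via Browserless; summarize if the content is lengthy."""
    response = requests.post(
        "https://chrome.browserless.io/content"
        f"?token={os.environ['BROWSERLESS_API_KEY']}",
        headers={"Content-Type": "application/json"},
        data=json.dumps({"url": url}),
    )
    text = response.text
    if len(text) > 10_000:  # illustrative length threshold
        return summarize(objective, text)
    return text


def summarize(objective: str, content: str) -> str:
    """Produce a concise, objective-focused summary with GPT-3.5."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Summarize the following text for the objective "
                f"'{objective}':\n\n{content[:12000]}"  # truncate long input
            ),
        }],
    )
    return completion.choices[0].message.content
```
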
- Python 3.x
- Required environment variables: `BROWSERLESS_API_KEY`, `OPENAI_API_KEY`, and `SERP_API_KEY`.
- Clone the repository.

  ```bash
  git clone <repository_url>
  ```
- Navigate to the project directory.

  ```bash
  cd path/to/project
  ```
- Install the required packages.

  ```bash
  pip install -r requirements.txt
  ```

  (Note: You may want to create a virtual environment before installing the packages.)
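  For example, with the standard library's venv module (standard Python tooling, not specific to this project):

  ```bash
  python -m venv venv
  source venv/bin/activate   # on Windows: venv\Scripts\activate
  ```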
- Create a `.env` file in the root directory and add the following content:

  ```
  BROWSERLESS_API_KEY=<Your_Browserless_API_Key>
  OPENAI_API_KEY=<Your_OpenAI_API_Key>
  SERP_API_KEY=<Your_SERP_API_Key>
  ```

  Replace the placeholder values with your actual API keys.
- Run the FastAPI application.

  ```bash
  uvicorn <filename>:app --reload
  ```

  Replace `<filename>` with the name of the Python file containing the FastAPI app; a minimal sketch of such a file follows.
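
For orientation, here is a minimal sketch of the structure such a file might have, assuming a single root POST endpoint as described in the usage section below; the `Query` model and the `run_agent` helper are illustrative, not the project's actual code:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Query(BaseModel):
    query: str


def run_agent(query: str) -> str:
    # Hypothetical placeholder for the search/scrape/summarize agent call.
    return f"Research results for: {query}"


@app.post("/")
def research(body: Query) -> str:
    # Accept a research query in the request body and return the result.
    return run_agent(body.query)
```

If this sketch lived in a file named `app.py`, the run command would be `uvicorn app:app --reload`.
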
To use the WS Ai Research Agent API:
- Start the FastAPI server (as described above).
- Send a POST request to the root endpoint (`/`) with the research query in the body. For example:

  ```json
  {
    "query": "Your research topic here"
  }
  ```
- The API will respond with the researched content (an example request follows).
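
For example, using Python's requests library (assuming the server is running locally on FastAPI's default port, 8000):

```python
import requests

# POST a research query to the root endpoint and print the response.
response = requests.post(
    "http://127.0.0.1:8000/",
    json={"query": "Your research topic here"},
)
print(response.json())
```
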
The agent follows these rules (collected into a prompt sketch after this list):
- Do enough research to gather as much information as possible about the objective.
- If there are URLs of relevant links & articles, scrape them to gather more information.
- After scraping & searching, think: "is there anything new I should research & scrape based on the data I've found to improve my research quality?" If the answer is yes, continue. However, don't perform more than 3 iterations.
- Only present facts & data gathered. Do not make things up.
- Include all reference data & links in the final output to back up your research.
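
These rules are what keep the results factual and reliable; one common way to hand them to the model is as a system prompt. A sketch, with the rules taken verbatim from this list (how the prompt is attached to the agent depends on the framework used):

```python
# The agent's research rules expressed as a system prompt. The exact
# wiring into the agent is project-specific; this constant only
# restates the rules above in promptable form.
RESEARCHER_SYSTEM_PROMPT = """\
You are a world-class researcher. Follow these rules:
1/ Do enough research to gather as much information as possible about the objective.
2/ If there are URLs of relevant links & articles, scrape them to gather more information.
3/ After scraping & searching, think: "is there anything new I should research & scrape based on the data I've found to improve my research quality?" If the answer is yes, continue; but don't perform more than 3 iterations.
4/ Only present facts & data gathered; do not make things up.
5/ Include all reference data & links in the final output to back up your research.
"""
```
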
If you'd like to contribute, please fork the repository and use a feature branch. Pull requests are warmly welcome.
The code in this project is licensed under the MIT license.