
docs: add comprehensive LlamaIndex integration documentation #577

Closed · wants to merge 7 commits
1 change: 1 addition & 0 deletions docs/mint.json
@@ -96,6 +96,7 @@
"v1/integrations/groq",
"v1/integrations/langchain",
"v1/integrations/llama_stack",
"v1/integrations/llamaindex",
"v1/integrations/litellm",
"v1/integrations/mistral",
"v1/integrations/multion",
91 changes: 91 additions & 0 deletions docs/v1/integrations/llamaindex.mdx
@@ -0,0 +1,91 @@
---
title: 'LlamaIndex'
description: 'LlamaIndex is a data framework for LLM applications.'
---

import CodeTooltip from '/snippets/add-code-tooltip.mdx'
import EnvTooltip from '/snippets/add-env-tooltip.mdx'

# LlamaIndex Integration

AgentOps provides seamless integration with LlamaIndex, enabling comprehensive monitoring and evaluation of your LlamaIndex applications.

## Installation

<CodeGroup>
```bash pip
pip install llama-index-instrumentation-agentops
```
```bash poetry
poetry add llama-index-instrumentation-agentops
```
</CodeGroup>

## Usage

<CodeGroup>
```python python
from llama_index.core import set_global_handler

# Initialize AgentOps
set_global_handler("agentops")
```
</CodeGroup>

## Features

### LLM Event Tracking
- Monitor all LLM interactions
- Track token usage and costs
- Analyze prompt effectiveness

### Tool Usage Analytics
- Monitor tool calls and performance
- Track success/failure rates
- Identify bottlenecks

### Error Tracking
- Capture and analyze errors
- Debug agent behavior
- Improve reliability

## Examples

For a complete example of using AgentOps with LlamaIndex, see our [example notebook](https://github.com/AgentOps-AI/agentops/blob/main/examples/llamaindex_examples/llamaindex_examples.ipynb).

Here's a simple example demonstrating AgentOps integration with LlamaIndex:

<CodeGroup>
```python python
import os
from llama_index.core import set_global_handler
from llama_index.llms.openai import OpenAI
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Set up AgentOps handler
set_global_handler("agentops")

# Initialize LLM
llm = OpenAI(model="gpt-3.5-turbo")

# Load and index documents
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)

# Create query engine
query_engine = index.as_query_engine()

# Run queries (AgentOps will automatically track these)
response = query_engine.query("What is the capital of France?")
print(response)
```
</CodeGroup>

For more examples and detailed usage, visit our [documentation](https://docs.agentops.ai/integrations/llamaindex).

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/link_to_api_button.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<link rel="stylesheet" href="/styles/styles.css" />
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>
3 changes: 2 additions & 1 deletion examples/README.md
@@ -22,12 +22,13 @@ At a high level, AgentOps gives you the ability to monitor LLM calls, costs, lat

## Integrations
- [AI21](./ai21_examples/ai21_examples.ipynb)
- [Anthropic](./anthropic_examples/)
- [Autogen](./autogen_examples/)
- [Cohere](./cohere_examples/cohere_example.ipynb)
- [Crew.ai](./crew_examples/)
- [Groq](./multi_agent_groq_example.ipynb)
- [Langchain](./langchain_examples/langchain_examples.ipynb)
- [LlamaIndex](./llamaindex_examples/llamaindex_examples.ipynb)
- [LiteLLM](./litlelm_examples/litlelm_example.ipynb)
- [Mistral](./mistral_examples/mistral_example.ipynb)
- [MultiOn](./multion_examples/)
5 changes: 5 additions & 0 deletions examples/llamaindex_examples/.env.tpl
@@ -0,0 +1,5 @@
# AgentOps API key for monitoring and observability
AGENTOPS_API_KEY=your_agentops_key

# OpenAI API key for LLM operations
OPENAI_API_KEY=your_openai_key
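
The example notebook checks that both of these keys are set before running anything. A minimal, stdlib-only sketch of that check (the helper names `missing_env_keys` and `check_env` are illustrative, not part of the AgentOps or LlamaIndex APIs):

```python
import os

REQUIRED_KEYS = ("AGENTOPS_API_KEY", "OPENAI_API_KEY")

def missing_env_keys(environ=os.environ):
    """Return the names of required keys that are unset or empty."""
    return [key for key in REQUIRED_KEYS if not environ.get(key)]

def check_env(environ=os.environ):
    """Fail fast with a clear message instead of a mid-run auth error."""
    missing = missing_env_keys(environ)
    if missing:
        raise ValueError(f"Please set {', '.join(missing)} in your .env file")
```

Calling `check_env()` right after `load_dotenv()` surfaces a misconfigured `.env` immediately, before any documents are loaded or LLM calls are made.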
22 changes: 22 additions & 0 deletions examples/llamaindex_examples/data/sample.txt
@@ -0,0 +1,22 @@
# AgentOps LlamaIndex Integration Example

This document demonstrates the integration between AgentOps and LlamaIndex for monitoring and observability of LLM applications.

## Key Features
1. Document Loading: LlamaIndex can efficiently load and process various document formats
2. Vector Indexing: Documents are indexed for quick retrieval and semantic search
3. Query Processing: Natural language queries are processed against the indexed documents
4. Observability: AgentOps provides comprehensive monitoring of all LLM operations

## Benefits
- Real-time monitoring of LLM interactions
- Cost tracking and optimization
- Performance metrics and analytics
- Error detection and debugging
- Session replay capabilities

## Use Cases
- Document question-answering systems
- Knowledge base search and retrieval
- Content summarization and analysis
- Information extraction pipelines
183 changes: 183 additions & 0 deletions examples/llamaindex_examples/llamaindex_examples.ipynb
@@ -0,0 +1,183 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "82aa4ae2",
"metadata": {},
"source": [
"# LlamaIndex Integration Example\n",
"\n",
"This notebook demonstrates how to use AgentOps with LlamaIndex for monitoring and observability of your LLM applications.\n",
"\n",
"## Setup\n",
"\n",
"First, ensure you have the required packages installed:\n",
"```bash\n",
"pip install agentops llama-index llama-index-instrumentation-agentops python-dotenv openai\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cc6137f4",
"metadata": {},
"outputs": [],
"source": [
"# Import required packages\n",
"from llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n",
"from llama_index.llms.openai import OpenAI\n",
"from llama_index.instrumentation.agentops import AgentOpsHandler\n",
"import os\n",
"from dotenv import load_dotenv"
]
},
{
"cell_type": "markdown",
"id": "1df168b3",
"metadata": {},
"source": [
"## Environment Setup\n",
"\n",
"Load environment variables from `.env` file. Make sure to copy `.env.tpl` to `.env` and fill in your API keys."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0dfcc445",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables\n",
"load_dotenv()\n",
"\n",
"# Configure API keys\n",
"AGENTOPS_API_KEY = os.getenv(\"AGENTOPS_API_KEY\")\n",
"OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n",
"\n",
"if not all([AGENTOPS_API_KEY, OPENAI_API_KEY]):\n",
" raise ValueError(\"Please set AGENTOPS_API_KEY and OPENAI_API_KEY in your .env file\")"
]
},
{
"cell_type": "markdown",
"id": "c14a16bc",
"metadata": {},
"source": [
"## Initialize AgentOps Handler\n",
"\n",
"Set up the AgentOps handler to track LlamaIndex operations."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b54b2d4",
"metadata": {},
"outputs": [],
"source": [
"# Initialize AgentOps handler\n",
"AgentOpsHandler.init(api_key=AGENTOPS_API_KEY)"
]
},
{
"cell_type": "markdown",
"id": "5b38d476",
"metadata": {},
"source": [
"## Document Processing Example\n",
"\n",
"Let's create a simple example using LlamaIndex to process and query documents while tracking the operations with AgentOps."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a3b44f4",
"metadata": {},
"outputs": [],
"source": [
"# Load documents from the data directory\n",
"documents = SimpleDirectoryReader('data').load_data()\n",
"\n",
"# Create an index from the documents\n",
"index = VectorStoreIndex.from_documents(documents)\n",
"\n",
"# Create a query engine\n",
"query_engine = index.as_query_engine()"
]
},
{
"cell_type": "markdown",
"id": "5c1c3a56",
"metadata": {},
"source": [
"## Query Example\n",
"\n",
"Now let's query our indexed documents. AgentOps will automatically track the LLM calls and their responses."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18a191f6",
"metadata": {},
"outputs": [],
"source": [
"# Query the index\n",
"response = query_engine.query(\"What is this document about?\")\n",
"print(f\"Response: {response}\")\n",
"\n",
"# Try another query\n",
"response = query_engine.query(\"What are the main topics covered?\")\n",
"print(f\"Response: {response}\")"
]
},
{
"cell_type": "markdown",
"id": "c03fdffa",
"metadata": {},
"source": [
"## Error Handling Example\n",
"\n",
"Let's demonstrate how AgentOps tracks errors in LlamaIndex operations."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "63187bb6",
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" # Attempt to load documents from a non-existent directory\n",
" documents = SimpleDirectoryReader('nonexistent_directory').load_data()\n",
"except Exception as e:\n",
" print(f\"Error caught and tracked by AgentOps: {str(e)}\")"
]
},
{
"cell_type": "markdown",
"id": "bf4de9ab",
"metadata": {},
"source": [
"## Viewing Results\n",
"\n",
"You can view the tracked operations, including LLM calls, errors, and performance metrics in the AgentOps dashboard:\n",
"https://app.agentops.ai/\n",
"\n",
"The dashboard will show:\n",
"- LLM interactions and their costs\n",
"- Query performance metrics\n",
"- Error tracking and debugging information\n",
"- Document processing statistics"
]
}
],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 5
}