From 1da9f22a8d66504f525e6347615e13a9461e3d2d Mon Sep 17 00:00:00 2001 From: Corneliu CROITORU Date: Wed, 25 Dec 2024 09:42:31 +0100 Subject: [PATCH] update classifier docs --- .../built-in/anthropic-classifier.mdx | 267 +++++++++++++----- .../built-in/bedrock-classifier.mdx | 231 ++++++++++----- 2 files changed, 356 insertions(+), 142 deletions(-) diff --git a/docs/src/content/docs/classifiers/built-in/anthropic-classifier.mdx b/docs/src/content/docs/classifiers/built-in/anthropic-classifier.mdx index 78e8957..3ac273c 100644 --- a/docs/src/content/docs/classifiers/built-in/anthropic-classifier.mdx +++ b/docs/src/content/docs/classifiers/built-in/anthropic-classifier.mdx @@ -14,7 +14,12 @@ The Anthropic Classifier extends the abstract `Classifier` class and uses the An - Supports custom system prompts and variables - Handles conversation history for context-aware classification -### Basic Usage +### Default Model + +The classifier uses Claude 3.5 Sonnet as its default model: +```typescript +ANTHROPIC_MODEL_ID_CLAUDE_3_5_SONNET = "claude-3-5-sonnet-20240620" +``` ### Python Package @@ -24,6 +29,8 @@ If you haven't already installed the Anthropic-related dependencies, make sure t pip install "multi-agent-orchestrator[anthropic]" ``` +### Basic Usage + To use the AnthropicClassifier, you need to create an instance with your Anthropic API key and pass it to the Multi-Agent Orchestrator: import { Tabs, TabItem } from '@astrojs/starlight/components'; @@ -55,74 +62,165 @@ import { Tabs, TabItem } from '@astrojs/starlight/components'; -### Custom Configuration +## System Prompt and Variables -You can customize the AnthropicClassifier by providing additional options: +### Full Default System Prompt - - - ```typescript - const customAnthropicClassifier = new AnthropicClassifier({ - apiKey: 'your-anthropic-api-key', - modelId: 'claude-3-sonnet-20240229', - inferenceConfig: { - maxTokens: 500, - temperature: 0.7, - topP: 0.9, - stopSequences: [''] - } - }); +The default system 
prompt used by the classifier is comprehensive and includes examples of both simple and complex interactions: - const orchestrator = new MultiAgentOrchestrator({ classifier: customAnthropicClassifier }); - ``` - - - ```python - from multi_agent_orchestrator.classifiers import AnthropicClassifier, AnthropicClassifierOptions - from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator +``` +You are AgentMatcher, an intelligent assistant designed to analyze user queries and match them with +the most suitable agent or department. Your task is to understand the user's request, +identify key entities and intents, and determine which agent or department would be best equipped +to handle the query. + +Important: The user's input may be a follow-up response to a previous interaction. +The conversation history, including the name of the previously selected agent, is provided. +If the user's input appears to be a continuation of the previous conversation +(e.g., "yes", "ok", "I want to know more", "1"), select the same agent as before. + +Analyze the user's input and categorize it into one of the following agent types: + +{{AGENT_DESCRIPTIONS}} + +If you are unable to select an agent put "unknown" + +Guidelines for classification: + + Agent Type: Choose the most appropriate agent type based on the nature of the query. + For follow-up responses, use the same agent type as the previous interaction. + Priority: Assign based on urgency and impact. + High: Issues affecting service, billing problems, or urgent technical issues + Medium: Non-urgent product inquiries, sales questions + Low: General information requests, feedback + Key Entities: Extract important nouns, product names, or specific issues mentioned. + For follow-up responses, include relevant entities from the previous interaction if applicable. + For follow-ups, relate the intent to the ongoing conversation. + Confidence: Indicate how confident you are in the classification. 
+ High: Clear, straightforward requests or clear follow-ups + Medium: Requests with some ambiguity but likely classification + Low: Vague or multi-faceted requests that could fit multiple categories + Is Followup: Indicate whether the input is a follow-up to a previous interaction. + +Handle variations in user input, including different phrasings, synonyms, +and potential spelling errors. +For short responses like "yes", "ok", "I want to know more", or numerical answers, +treat them as follow-ups and maintain the previous agent selection. + +Here is the conversation history that you need to take into account before answering: + +{{HISTORY}} + + +Skip any preamble and provide only the response in the specified format. +``` - custom_anthropic_classifier = AnthropicClassifier(AnthropicClassifierOptions( - api_key='your-anthropic-api-key', - model_id='claude-3-sonnet-20240229', - inference_config={ - 'max_tokens': 500, - 'temperature': 0.7, - 'top_p': 0.9, - 'stop_sequences': [''] - } - )) +### Variable Replacements - orchestrator = MultiAgentOrchestrator(classifier=custom_anthropic_classifier) - ``` - - +#### AGENT_DESCRIPTIONS Example +``` +tech-support-agent:Specializes in resolving technical issues, software problems, and system configurations +billing-agent:Handles all billing-related queries, payment processing, and subscription management +customer-service-agent:Manages general inquiries, account questions, and product information requests +sales-agent:Assists with product recommendations, pricing inquiries, and purchase decisions +``` -The AnthropicClassifier accepts the following configuration options: +### Extended HISTORY Examples -- `api_key` (required): Your Anthropic API key. -- `model_id` (optional): The ID of the Anthropic model to use. Defaults to Claude 3.5 Sonnet. -- `inference_config` (optional): A dictionary containing inference configuration parameters: - - `max_tokens` (optional): The maximum number of tokens to generate. 
Defaults to 1000 if not specified. - - `temperature` (optional): Controls randomness in output generation. - - `top_p` (optional): Controls diversity of output generation. - - `stop_sequences` (optional): A list of sequences that, when generated, will stop the generation process. +The conversation history is formatted to include agent names in the responses, allowing the classifier to track which agent handled each interaction. Each assistant response is prefixed with `[agent-name]` in the history, making it clear who provided each response: + +``` +user: I need help with my subscription +assistant: [billing-agent] I can help you with your subscription. What specific information do you need? +user: The premium features aren't working +assistant: [tech-support-agent] I'll help you troubleshoot the premium features. Could you tell me which specific features aren't working? +user: The cloud storage says I only have 5GB but I'm supposed to have 100GB +assistant: [tech-support-agent] Let's verify your subscription status and refresh your storage allocation. When did you last see the correct storage amount? +user: How much am I paying for this subscription? +assistant: [billing-agent] I'll check your subscription details. Your current plan is $29.99/month for the Premium tier with 100GB storage. Would you like me to review your billing history? +user: Yes please +``` -## Customizing the System Prompt +Here, the history shows the conversation moving between `billing-agent` and `tech-support-agent` as the topic shifts between billing and technical issues. + + +The agent prefixing (e.g., `[agent-name]`) is automatically handled by the Multi-Agent Orchestrator when formatting the conversation history. 
This helps the classifier understand: +- Which agent handled each part of the conversation +- The context of previous interactions +- When agent transitions occurred +- How to maintain continuity for follow-up responses + +## Tool-Based Response Structure + +The AnthropicClassifier uses a tool specification to enforce structured output from the model. This is a design pattern that ensures consistent and properly formatted responses. + +### The Tool Specification +```json +{ + "name": "analyzePrompt", + "description": "Analyze the user input and provide structured output", + "input_schema": { + "type": "object", + "properties": { + "userinput": {"type": "string"}, + "selected_agent": {"type": "string"}, + "confidence": {"type": "number"} + }, + "required": ["userinput", "selected_agent", "confidence"] + } +} +``` + +### Why Use Tools? + +1. **Structured Output**: Instead of free-form text, the model must provide exactly the data structure we need. +2. **Guaranteed Format**: The tool schema ensures we always get: + - A valid agent identifier + - A properly formatted confidence score + - All required fields +3. **Implementation Note**: The tool isn't actually executed - it's a pattern to force the model to structure its response in a specific way that maps directly to our `ClassifierResult` type. + +Example Response: +```json +{ + "userinput": "I need to reset my password", + "selected_agent": "tech-support-agent", + "confidence": 0.95 +} +``` -You can customize the system prompt used by the AnthropicClassifier: +### Customizing the System Prompt + +You can override the default system prompt while maintaining the required agent descriptions and history variables. Here's how to do it: ```typescript orchestrator.classifier.setSystemPrompt( - ` - Custom prompt template with placeholders: + `You are a specialized routing expert with deep knowledge of {{INDUSTRY}} operations. 
+ + Your available agents are: + {{AGENT_DESCRIPTIONS}} + + + Consider these key factors for {{INDUSTRY}} when routing: + {{INDUSTRY_RULES}} + + Recent conversation context: + {{HISTORY}} - {{CUSTOM_PLACEHOLDER}} - `, + + + Route based on industry best practices and conversation history.`, { - CUSTOM_PLACEHOLDER: "Value for custom placeholder" + INDUSTRY: "healthcare", + INDUSTRY_RULES: [ + "- HIPAA compliance requirements", + "- Patient data privacy protocols", + "- Emergency request prioritization", + "- Insurance verification processes" + ] } ); ``` @@ -130,44 +228,65 @@ You can customize the system prompt used by the AnthropicClassifier: ```python orchestrator.classifier.set_system_prompt( - """ - Custom prompt template with placeholders: + """You are a specialized routing expert with deep knowledge of {{INDUSTRY}} operations. + + Your available agents are: + {{AGENT_DESCRIPTIONS}} + + + Consider these key factors for {{INDUSTRY}} when routing: + {{INDUSTRY_RULES}} + + Recent conversation context: + {{HISTORY}} - {{CUSTOM_PLACEHOLDER}} - """, + + + Route based on industry best practices and conversation history.""", { - "CUSTOM_PLACEHOLDER": "Value for custom placeholder" + "INDUSTRY": "healthcare", + "INDUSTRY_RULES": [ + "- HIPAA compliance requirements", + "- Patient data privacy protocols", + "- Emergency request prioritization", + "- Insurance verification processes" + ] } ) ``` -## Processing Requests - -The AnthropicClassifier processes requests using the `process_request` method, which is called internally by the orchestrator. This method: +Note: When customizing the prompt, you must include: +- The `{{AGENT_DESCRIPTIONS}}` variable to list available agents +- The `{{HISTORY}}` variable for conversation context +- Clear instructions for agent selection +- Response format expectations -1. Prepares the user's message. -2. Constructs a request for the Anthropic API, including the system prompt and tool configurations. -3. 
Sends the request to the Anthropic API and processes the response. -4. Returns a `ClassifierResult` containing the selected agent and confidence score. +## Configuration Options -## Error Handling +The AnthropicClassifier accepts the following configuration options: -The AnthropicClassifier includes error handling to manage potential issues during the classification process. If an error occurs, it will log the error and raise an exception, which can be caught and handled by the orchestrator. +- `api_key` (required): Your Anthropic API key. +- `model_id` (optional): The ID of the Anthropic model to use. Defaults to Claude 3.5 Sonnet. +- `inference_config` (optional): A dictionary containing inference configuration parameters: + - `max_tokens` (optional): The maximum number of tokens to generate. Defaults to 1000. + - `temperature` (optional): Controls randomness in output generation. + - `top_p` (optional): Controls diversity of output generation. + - `stop_sequences` (optional): A list of sequences that will stop generation. ## Best Practices -1. **API Key Security**: Ensure your Anthropic API key is kept secure and not exposed in your codebase. -2. **Model Selection**: Choose an appropriate model based on your use case and performance requirements. -3. **Inference Configuration**: Experiment with different inference parameters to find the best balance between response quality and speed. -4. **System Prompt**: Craft a clear and comprehensive system prompt to guide the model's classification process effectively. +1. **API Key Security**: Keep your Anthropic API key secure and never expose it in your code. +2. **Model Selection**: Choose appropriate models based on your needs and performance requirements. +3. **Inference Configuration**: Experiment with different parameters to optimize classification accuracy. +4. **System Prompt**: Consider customizing the system prompt for your specific use case, while maintaining the core classification structure. 
## Limitations -- Requires an active Anthropic API key. -- Classification quality depends on the chosen model and the quality of your system prompt and agent descriptions. -- API usage is subject to Anthropic's pricing and rate limits. +- Requires an active Anthropic API key +- Subject to Anthropic's API pricing and rate limits +- Classification quality depends on the quality of agent descriptions and system prompt -For more information on using and customizing the Multi-Agent Orchestrator, refer to the [Classifier Overview](/multi-agent-orchestrator/classifier/overview) and [Agents](/multi-agent-orchestrator/agents/overview) documentation. \ No newline at end of file +For more information, see the [Classifier Overview](/multi-agent-orchestrator/classifier/overview) and [Agents](/multi-agent-orchestrator/agents/overview) documentation. \ No newline at end of file diff --git a/docs/src/content/docs/classifiers/built-in/bedrock-classifier.mdx b/docs/src/content/docs/classifiers/built-in/bedrock-classifier.mdx index cc5d94f..d76222b 100644 --- a/docs/src/content/docs/classifiers/built-in/bedrock-classifier.mdx +++ b/docs/src/content/docs/classifiers/built-in/bedrock-classifier.mdx @@ -3,13 +3,7 @@ title: Bedrock Classifier description: How to configure the Bedrock classifier --- -The Bedrock Classifier is the default classifier used in the Multi-Agent Orchestrator. - -It leverages Amazon Bedrock's models through Converse API providing powerful and flexible classification capabilities. - -## Overview - -The BedrockClassifier extends the abstract `Classifier` class and uses Amazon Bedrock's runtime client to process requests and classify user intents. It's designed to analyze user input, consider conversation history, and determine the most appropriate agent to handle the query. +The Bedrock Classifier is the default classifier used in the Multi-Agent Orchestrator. 
It leverages Amazon Bedrock's models through the Converse API, providing powerful and flexible classification capabilities.

## Features

@@ -18,9 +12,12 @@ The BedrockClassifier extends the abstract `Classifier` class and uses Amazon Be
- Utilizes Amazon Bedrock's Converse API
- Configurable model selection
- Supports custom system prompts and variables
- Handles conversation history for context-aware classification

-### Basic Usage
+### Default Model

-By default, the Multi-Agent Orchestrator uses the Bedrock Classifier, which in turn utilizes the `anthropic.claude-3-5-sonnet-20240620-v1:0` (Claude 3.5 Sonnet) model for classification tasks.
+The classifier uses Claude 3.5 Sonnet as its default model:
+```typescript
+BEDROCK_MODEL_ID_CLAUDE_3_5_SONNET = "anthropic.claude-3-5-sonnet-20240620-v1:0"
+```

### Python Package

@@ -30,6 +27,10 @@ If you haven't already installed the AWS-related dependencies, make sure to inst
pip install "multi-agent-orchestrator[aws]"
```

+### Basic Usage
+
+By default, the Multi-Agent Orchestrator uses the Bedrock Classifier:
+
import { Tabs, TabItem } from '@astrojs/starlight/components';

@@ -49,6 +50,145 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';

+## System Prompt and Variables
+
+### Full Default System Prompt
+
+The default system prompt used by the classifier is comprehensive and includes examples of both simple and complex interactions:
+
+```
+You are AgentMatcher, an intelligent assistant designed to analyze user queries and match them with
+the most suitable agent or department. Your task is to understand the user's request,
+identify key entities and intents, and determine which agent or department would be best equipped
+to handle the query.
+
+Important: The user's input may be a follow-up response to a previous interaction.
+The conversation history, including the name of the previously selected agent, is provided.
+If the user's input appears to be a continuation of the previous conversation
+(e.g., "yes", "ok", "I want to know more", "1"), select the same agent as before.
+ +Analyze the user's input and categorize it into one of the following agent types: + +{{AGENT_DESCRIPTIONS}} + +If you are unable to select an agent put "unknown" + +Guidelines for classification: + + Agent Type: Choose the most appropriate agent type based on the nature of the query. + For follow-up responses, use the same agent type as the previous interaction. + Priority: Assign based on urgency and impact. + High: Issues affecting service, billing problems, or urgent technical issues + Medium: Non-urgent product inquiries, sales questions + Low: General information requests, feedback + Key Entities: Extract important nouns, product names, or specific issues mentioned. + For follow-up responses, include relevant entities from the previous interaction if applicable. + For follow-ups, relate the intent to the ongoing conversation. + Confidence: Indicate how confident you are in the classification. + High: Clear, straightforward requests or clear follow-ups + Medium: Requests with some ambiguity but likely classification + Low: Vague or multi-faceted requests that could fit multiple categories + Is Followup: Indicate whether the input is a follow-up to a previous interaction. + +Handle variations in user input, including different phrasings, synonyms, +and potential spelling errors. +For short responses like "yes", "ok", "I want to know more", or numerical answers, +treat them as follow-ups and maintain the previous agent selection. + +Here is the conversation history that you need to take into account before answering: + +{{HISTORY}} + + +Skip any preamble and provide only the response in the specified format. 
+``` + +### Example Variable Replacements + +#### AGENT_DESCRIPTIONS Example +``` +aws-services-agent:Expert in AWS services configuration, troubleshooting, and best practices +security-agent:Handles security-related inquiries, compliance questions, and access management +billing-agent:Manages AWS billing inquiries, cost optimization, and pricing details +developer-support:Assists with code, SDK issues, and AWS service implementation +``` + +### Extended HISTORY Examples + +The conversation history is formatted to include agent names in the responses, allowing the classifier to track which agent handled each interaction. Each assistant response is prefixed with `[agent-name]` in the history, making it clear who provided each response: + +#### Example 1: AWS Services to Billing Transition +``` +user: How do I set up an S3 bucket with versioning? +assistant: [aws-services-agent] I'll help you set up S3 versioning. First, do you already have an S3 bucket created? +user: No, but how much will it cost? +assistant: [billing-agent] I can help explain S3 pricing. The cost depends on storage usage, versioning, and access patterns. What's your expected storage volume? +user: Around 500GB of files that change frequently +``` + +In this example, you can see how the history shows the transition from `aws-services-agent` to `billing-agent`, with each response clearly marked. + +#### Example 2: Multiple Department Interaction +``` +user: I can't access my EC2 instance +assistant: [aws-services-agent] Let's troubleshoot your EC2 access. Can you see the instance in your console? +user: Yes, but SSH keeps saying permission denied +assistant: [security-agent] This sounds like a security group or key pair issue. When was the last time you successfully connected to this instance? +user: Yesterday, but we rotated some keys +assistant: [security-agent] I'll help you verify your key configuration and permissions. Which key pair are you currently using to connect? 
+``` + +Here, the history shows the conversation moving between `aws-services-agent` and `security-agent` as the topic shifts from general access to security configuration. + +The agent prefixing (e.g., `[agent-name]`) is automatically handled by the Multi-Agent Orchestrator when formatting the conversation history. This helps the classifier understand: +- Which agent handled each part of the conversation +- The context of previous interactions +- When agent transitions occurred +- How to maintain continuity for follow-up responses + +## Tool-Based Response Structure + +Like the Anthropic Classifier, the BedrockClassifier uses a tool specification to enforce structured output from the model. This is a design pattern that ensures consistent and properly formatted responses. + +### The Tool Specification +```json +{ + "toolSpec": { + "name": "analyzePrompt", + "description": "Analyze the user input and provide structured output", + "inputSchema": { + "json": { + "type": "object", + "properties": { + "userinput": {"type": "string"}, + "selected_agent": {"type": "string"}, + "confidence": {"type": "number"} + }, + "required": ["userinput", "selected_agent", "confidence"] + } + } + } +} +``` + +### Why Use Tools? + +1. **Structured Output**: Instead of free-form text, the model must provide exactly the data structure we need. +2. **Guaranteed Format**: The tool schema ensures we always get: + - A valid agent identifier + - A properly formatted confidence score + - All required fields +3. **Implementation Note**: The tool isn't actually executed - it's a pattern to force the model to structure its response in a specific way that maps directly to our `ClassifierResult` type. 
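Because the tool is never executed, the classifier simply reads the structured arguments back out of the model's response. The sketch below illustrates that extraction step against the documented shape of a Converse API response (`output.message.content[].toolUse.input`); the sample response is hand-written for the example rather than produced by a real API call:

```python
def extract_tool_input(response: dict) -> dict:
    """Pull the analyzePrompt tool arguments out of a Converse API response."""
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            return block["toolUse"]["input"]
    raise ValueError("model response contained no toolUse block")

# Hand-crafted sample mimicking a Converse response; no API call is made.
sample_response = {
    "output": {
        "message": {
            "role": "assistant",
            "content": [
                {
                    "toolUse": {
                        "toolUseId": "tooluse_example",
                        "name": "analyzePrompt",
                        "input": {
                            "userinput": "How do I configure VPC endpoints?",
                            "selected_agent": "aws-services-agent",
                            "confidence": 0.95,
                        },
                    }
                }
            ],
        }
    }
}

result = extract_tool_input(sample_response)
print(result["selected_agent"], result["confidence"])
```

The dictionary returned here maps directly onto the `ClassifierResult` fields (`selected_agent`, `confidence`).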
+ +Example Response: +```json +{ + "userinput": "How do I configure VPC endpoints?", + "selected_agent": "aws-services-agent", + "confidence": 0.95 +} +``` + ### Custom Configuration You can customize the BedrockClassifier by creating an instance with specific options: @@ -59,7 +199,8 @@ You can customize the BedrockClassifier by creating an instance with specific op import { BedrockClassifier, MultiAgentOrchestrator } from "multi-agent-orchestrator"; const customBedrockClassifier = new BedrockClassifier({ - modelId: 'anthropic.claude-v2', + modelId: 'anthropic.claude-3-sonnet-20240229-v1:0', + region: 'us-west-2', inferenceConfig: { maxTokens: 500, temperature: 0.7, @@ -76,7 +217,8 @@ You can customize the BedrockClassifier by creating an instance with specific op from multi_agent_orchestrator.classifiers import BedrockClassifier, BedrockClassifierOptions custom_bedrock_classifier = BedrockClassifier(BedrockClassifierOptions( - model_id='anthropic.claude-v2', + model_id='anthropic.claude-3-sonnet-20240229-v1:0', + region='us-west-2', inference_config={ 'maxTokens': 500, 'temperature': 0.7, @@ -97,67 +239,20 @@ The BedrockClassifier accepts the following configuration options: - `maxTokens` (optional): The maximum number of tokens to generate. - `temperature` (optional): Controls randomness in output generation. - `topP` (optional): Controls diversity of output generation. - - `stopSequences` (optional): A list of sequences that, when generated, will stop the generation process. 
- -## Customizing the System Prompt - -You can customize the system prompt used by the BedrockClassifier: - - - - ```typescript - orchestrator.classifier.setSystemPrompt( - ` - Custom prompt template with placeholders: - {{AGENT_DESCRIPTIONS}} - {{HISTORY}} - {{CUSTOM_PLACEHOLDER}} - `, - { - CUSTOM_PLACEHOLDER: "Value for custom placeholder" - } - ); - ``` - - - ```python - orchestrator.classifier.set_system_prompt( - """ - Custom prompt template with placeholders: - {{AGENT_DESCRIPTIONS}} - {{HISTORY}} - {{CUSTOM_PLACEHOLDER}} - """, - { - "CUSTOM_PLACEHOLDER": "Value for custom placeholder" - } - ) - ``` - - - -## Processing Requests - -The BedrockClassifier processes requests using the `process_request` method, which is called internally by the orchestrator. This method: - -1. Prepares the user's message and conversation history. -2. Constructs a command for the Bedrock API, including the system prompt and tool configurations. -3. Sends the request to the Bedrock API and processes the response. -4. Returns a `ClassifierResult` containing the selected agent and confidence score. - -## Error Handling - -The BedrockClassifier includes error handling to manage potential issues during the classification process. If an error occurs, it will log the error and raise an exception, which can be caught and handled by the orchestrator. + - `stopSequences` (optional): A list of sequences that will stop generation. ## Best Practices -1. **Model Selection**: Choose an appropriate model based on your use case and performance requirements. -2. **Inference Configuration**: Experiment with different inference parameters to find the best balance between response quality and speed. -3. **System Prompt**: Craft a clear and comprehensive system prompt to guide the model's classification process effectively. +1. **AWS Configuration**: Ensure proper AWS credentials and Bedrock access are configured. +2. **Model Selection**: Choose appropriate models based on your use case requirements. 
+3. **Region Selection**: Consider using the region closest to your application for optimal latency. +4. **Inference Configuration**: Experiment with different parameters to optimize classification accuracy. +5. **System Prompt**: Consider customizing the system prompt for your specific use case, while maintaining the core classification structure. ## Limitations -- Requires an active AWS account with access to Amazon Bedrock. -- Classification quality depends on the chosen model and the quality of your system prompt and agent descriptions. +- Requires an active AWS account with access to Amazon Bedrock +- Classification quality depends on the chosen model and the quality of agent descriptions +- Subject to Amazon Bedrock service quotas and pricing -For more information on using and customizing the Multi-Agent Orchestrator, refer to the [Classifier Overview](/multi-agent-orchestrator/classifier/overview) and [Agents](/multi-agent-orchestrator/agents/overview) documentation. \ No newline at end of file +For more information, see the [Classifier Overview](/multi-agent-orchestrator/classifier/overview) and [Agents](/multi-agent-orchestrator/agents/overview) documentation. \ No newline at end of file