Anthropic Agents - Only shows intermediate steps #47

Open
chadvavra opened this issue Jul 6, 2024 · 0 comments
chadvavra commented Jul 6, 2024

I moved over to Anthropic as the LLM for the Agent example and find that it only returns messages if 'intermediate steps' is turned on. Below is my updated code for api/agents/route.ts. I think an event needs to change in the const textEncoder block, but I can't figure out what.

More Anthropic examples would be appreciated.

```ts
import { NextRequest, NextResponse } from "next/server";
import { Message as VercelChatMessage, StreamingTextResponse } from "ai";

import { createReactAgent } from "@langchain/langgraph/prebuilt";
// import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropicMessages } from "@langchain/anthropic";
import { SerpAPI } from "@langchain/community/tools/serpapi";
import { Calculator } from "@langchain/community/tools/calculator";
import {
  AIMessage,
  BaseMessage,
  ChatMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

export const runtime = "edge";

const convertVercelMessageToLangChainMessage = (message: VercelChatMessage) => {
  if (message.role === "user") {
    return new HumanMessage(message.content);
  } else if (message.role === "assistant") {
    return new AIMessage(message.content);
  } else {
    return new ChatMessage(message.content, message.role);
  }
};

const convertLangChainMessageToVercelMessage = (message: BaseMessage) => {
  if (message._getType() === "human") {
    return { content: message.content, role: "user" };
  } else if (message._getType() === "ai") {
    return {
      content: message.content,
      role: "assistant",
      tool_calls: (message as AIMessage).tool_calls,
    };
  } else {
    return { content: message.content, role: message._getType() };
  }
};

const AGENT_SYSTEM_TEMPLATE = `You have a degree in Business Analysis (MBA). Your responses should reflect your education and research into business success.`;

/**
 * This handler initializes and calls a tool calling ReAct agent.
 * See the docs for more information:
 * https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/
 */
export async function POST(req: NextRequest) {
  try {
    const body = await req.json();
    const returnIntermediateSteps = body.show_intermediate_steps;
    /**
     * We represent intermediate steps as system messages for display purposes,
     * but don't want them in the chat history.
     */
    const messages = (body.messages ?? [])
      .filter(
        (message: VercelChatMessage) =>
          message.role === "user" || message.role === "assistant",
      )
      .map(convertVercelMessageToLangChainMessage);

    // Requires process.env.SERPAPI_API_KEY to be set: https://serpapi.com/
    // You can remove this or use a different tool instead.
    const tools = [new Calculator(), new SerpAPI()];

    const chat = new ChatAnthropicMessages({
      model: "claude-3-5-sonnet-20240620",
    });

    // Use a prebuilt LangGraph agent.
    const agent = createReactAgent({
      llm: chat,
      tools,
      messageModifier: new SystemMessage(AGENT_SYSTEM_TEMPLATE),
    });

    if (!returnIntermediateSteps) {
      /**
       * Stream back all generated tokens and steps from their runs.
       *
       * We do some filtering of the generated events and only stream back
       * the final response as a string.
       *
       * For this specific type of tool calling ReAct agents with OpenAI, we can tell when
       * the agent is ready to stream back final output when it no longer calls
       * a tool and instead streams back content.
       *
       * See: https://langchain-ai.github.io/langgraphjs/how-tos/stream-tokens/
       */
      const eventStream = await agent.streamEvents(
        { messages },
        { version: "v2" },
      );

      const textEncoder = new TextEncoder();
      const transformStream = new ReadableStream({
        async start(controller) {
          for await (const { event, data } of eventStream) {
            if (event === "on_chat_model_stream") {
              // Intermediate chat model generations will contain tool calls and no content
              if (!!data.chunk.content) {
                controller.enqueue(textEncoder.encode(data.chunk.content));
              }
            }
          }
          controller.close();
        },
      });

      return new StreamingTextResponse(transformStream);
    } else {
      /**
       * We could also pick intermediate steps out from streamEvents chunks, but
       * they are generated as JSON objects, so streaming and displaying them with
       * the AI SDK is more complicated.
       */
      const result = await agent.invoke({ messages });

      return NextResponse.json(
        {
          messages: result.messages.map(convertLangChainMessageToVercelMessage),
        },
        { status: 200 },
      );
    }
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}
```
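One thing I suspect, though I haven't verified it: Anthropic chat model chunks in LangChain can carry `content` as an array of content blocks rather than a plain string, so the `!!data.chunk.content` check passes while `textEncoder.encode(...)` receives something that isn't displayable text. A minimal sketch of a normalizing helper (`ContentBlock` and `extractText` are hypothetical names, not part of the template):

```typescript
// Hypothetical helper: a chat model chunk's `content` may be a plain string
// (OpenAI-style) or an array of content blocks (Anthropic-style).
type ContentBlock = { type: string; text?: string };

function extractText(content: string | ContentBlock[]): string {
  if (typeof content === "string") {
    return content;
  }
  // Keep only text blocks; tool-use blocks carry no displayable text.
  return content
    .filter((block) => block.type === "text" && typeof block.text === "string")
    .map((block) => block.text as string)
    .join("");
}
```

In the `on_chat_model_stream` branch, one could then compute `const text = extractText(data.chunk.content)` and only enqueue `textEncoder.encode(text)` when `text` is non-empty.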
