Feature/chat deployment (#40)
* Rename auth method in docs

* fix(core): Fix trim messages mutation bug (langchain-ai#7547)

* release(core): 0.3.31 (langchain-ai#7548)

* fix(community): Updated Embeddings URL (langchain-ai#7545)

* fix(community): make sure guardrailConfig can be added even with anthropic models (langchain-ai#7542)

* docs: Fix PGVectorStore import in install dependencies (TypeScript) example (langchain-ai#7533)

* fix(community): Airtable url (langchain-ai#7532)

* docs: Fix typo in OpenAIModerationChain example (langchain-ai#7528)

* docs: Resolves langchain-ai#7483, resolves langchain-ai#7274 (langchain-ai#7505)

Co-authored-by: jacoblee93 <[email protected]>

* docs: Rename auth method in IBM docs (langchain-ai#7524)

* docs: correct misspelling (langchain-ai#7522)

Co-authored-by: jacoblee93 <[email protected]>

* release(community): 0.3.25 (langchain-ai#7549)

* feat(azure-cosmosdb): add session context for a user mongodb (langchain-ai#7436)

Co-authored-by: jacoblee93 <[email protected]>

* release(azure-cosmosdb): 0.2.7 (langchain-ai#7550)

* fix(ci): Fix build (langchain-ai#7551)

* feat(anthropic): Add Anthropic PDF support (document type) in invoke (langchain-ai#7496)

Co-authored-by: jacoblee93 <[email protected]>

* release(anthropic): 0.3.12 (langchain-ai#7552)

* chore(core,langchain,community): Relax langsmith deps (langchain-ai#7556)

* release(community): 0.3.26 (langchain-ai#7557)

* release(core): 0.3.32 (langchain-ai#7558)

* Release 0.3.12 (langchain-ai#7559)

* fix(core): Prevent cache misses from triggering model start callback runs twice (langchain-ai#7565)

* fix(core): Ensure that cached flag in run extras is only set for cache hits (langchain-ai#7566)

* release(core): 0.3.33 (langchain-ai#7567)

* feat(community): Adds graph_document to export list (langchain-ai#7555)

Co-authored-by: quantropi-minh <[email protected]>
Co-authored-by: jacoblee93 <[email protected]>

* fix(langchain): Fix ZeroShotAgent createPrompt with correct formatted tool names (langchain-ai#7510)

* docs: Add document for AzureCosmosDBMongoChatMessageHistory (langchain-ai#7519)

Co-authored-by: root <root@CPC-yangq-FRSGK>

* fix(langchain): Allow pulling hub prompts with associated models (langchain-ai#7569)

* fix(community,aws): Update handleLLMNewToken to include chunk metadata (langchain-ai#7568)

Co-authored-by: jacoblee93 <[email protected]>

* feat(community): Provide fallback relationshipType in case it is not present in graph_transformer (langchain-ai#7521)

Co-authored-by: quantropi-minh <[email protected]>
Co-authored-by: jacoblee93 <[email protected]>

* docs: Add redirect (langchain-ai#7570)

* fix(langchain,core): Add shim for hub mustache templates with nested input variables (langchain-ai#7581)

* fix(chat-models): honor disableStreaming even for `generateUncached` (langchain-ai#7575)

* release(core): 0.3.34 (langchain-ai#7584)

* feat(langchain): Add hub entrypoint with automatic dynamic entrypoint of models (langchain-ai#7583)

* chore(ollama): Export `OllamaEmbeddingsParams` interface (langchain-ai#7574)

* docs: Clarify tool creation process in structured outputs documentation (langchain-ai#7578)

Co-authored-by: Sahar Shemesh <[email protected]>
Co-authored-by: jacoblee93 <[email protected]>

* fix(community): Set awaitHandlers to true in upstash ratelimit (langchain-ai#7571)

Co-authored-by: Jacob Lee <[email protected]>

* fix(core): Fix trim messages mutation (langchain-ai#7585)

* feat(openai): Make only AzureOpenAI respect Azure env vars, remove class defaults, update withStructuredOutput defaults (langchain-ai#7535)

* fix(community): Make postgresConnectionOptions optional in PostgresRecordManager (langchain-ai#7580)

Co-authored-by: jacoblee93 <[email protected]>

* release(community): 0.3.27 (langchain-ai#7586)

* release(ollama): 0.1.5 (langchain-ai#7587)

* Release 0.3.13 (langchain-ai#7588)

* release(openai): 0.4.0 (langchain-ai#7589)

* release(core): 0.3.35 (langchain-ai#7590)

* fix(ci): Update lock (langchain-ai#7591)

* feat(core): Allow passing returnDirect in tool wrapper params (langchain-ai#7594)

* release(core): 0.3.36 (langchain-ai#7595)

* fix(openai): Revert Azure default withStructuredOutput changes (langchain-ai#7596)

* release(openai): 0.4.1 (langchain-ai#7597)

* feat(openai): Refactor to allow easier subclassing (langchain-ai#7598)

* release(openai): 0.4.2 (langchain-ai#7599)

* feat(deepseek): Adds Deepseek integration (langchain-ai#7604)

* release(deepseek): 0.0.1 (langchain-ai#7608)

* feat: update Novita AI doc (langchain-ai#7602)

* Add deployment chat to chat class

* feat(langchain): Add DeepSeek to initChatModel (langchain-ai#7609)

* Release 0.3.14 (langchain-ai#7611)

* fix: Add test for pdf uploads anthropic (langchain-ai#7613)

* feat: Update google genai to support file uploads (langchain-ai#7612)

* chore(google-genai): Drop .only in test (langchain-ai#7614)

* release(google-genai): 0.1.7 (langchain-ai#7615)

* Update Watsonx sdk

* fix(core): Fix stream events bug when errors are thrown too quickly during iteration (langchain-ai#7617)

* release(core): 0.3.37 (langchain-ai#7619)

* fix(langchain): Fix Groq import for hub (langchain-ai#7620)

* docs: update README/intro

* Release 0.3.15

* feat(community): improve support for Tavily search tool args (langchain-ai#7561)

* feat(community): Add boolean metadata type support in Supabase structured query translator (langchain-ai#7601)

* feat(google-genai): Add support for fileUri in media type in Google GenAI (langchain-ai#7621)

Co-authored-by: Jacob Lee <[email protected]>

* release(google-genai): 0.1.8 (langchain-ai#7628)

* release(community): 0.3.28 (langchain-ai#7629)

* Rework interfaces in llms as well

* Bump watsonx-ai sdk version

* Remove unused code

* Add fake auth

* Fix broken changes

---------

Co-authored-by: Jacob Lee <[email protected]>
Co-authored-by: Jacky Chen <[email protected]>
Co-authored-by: Mohamed Belhadj <[email protected]>
Co-authored-by: Brian Ploetz <[email protected]>
Co-authored-by: Eduard-Constantin Ibinceanu <[email protected]>
Co-authored-by: Jonathan V <[email protected]>
Co-authored-by: ucev <[email protected]>
Co-authored-by: crisjy <[email protected]>
Co-authored-by: Adham Badr <[email protected]>
Co-authored-by: Minh Ha <[email protected]>
Co-authored-by: quantropi-minh <[email protected]>
Co-authored-by: Chi Thu Le <[email protected]>
Co-authored-by: fatmelon <[email protected]>
Co-authored-by: root <root@CPC-yangq-FRSGK>
Co-authored-by: Mohamad Mohebifar <[email protected]>
Co-authored-by: David Duong <[email protected]>
Co-authored-by: Brace Sproul <[email protected]>
Co-authored-by: Matus Gura <[email protected]>
Co-authored-by: Sahar Shemesh <[email protected]>
Co-authored-by: Sahar Shemesh <[email protected]>
Co-authored-by: Cahid Arda Öz <[email protected]>
Co-authored-by: Jason <[email protected]>
Co-authored-by: vbarda <[email protected]>
Co-authored-by: Vadym Barda <[email protected]>
Co-authored-by: Hugo Borsoni <[email protected]>
Co-authored-by: Arman Ghazaryan <[email protected]>
Co-authored-by: Andy <[email protected]>
1 parent 82a239e commit 185f221
Showing 137 changed files with 3,956 additions and 2,026 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -43,7 +43,7 @@ The LangChain libraries themselves are made up of several different packages.
- **[`@langchain/core`](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core)**: Base abstractions and LangChain Expression Language.
- **[`@langchain/community`](https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-community)**: Third party integrations.
- **[`langchain`](https://github.com/langchain-ai/langchainjs/blob/main/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[LangGraph.js](https://langchain-ai.github.io/langgraphjs/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
- **[LangGraph.js](https://langchain-ai.github.io/langgraphjs/)**: LangGraph powers production-grade agents, trusted by LinkedIn, Uber, Klarna, GitLab, and many more. Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.

Integrations may also be split into their own compatible packages.

12 changes: 9 additions & 3 deletions docs/core_docs/docs/concepts/structured_outputs.mdx
@@ -79,7 +79,7 @@ Several more powerful methods that utilizes native features in the model provide

Many [model providers support](/docs/integrations/chat/) tool calling, a concept discussed in more detail in our [tool calling guide](/docs/concepts/tool_calling/).
In short, tool calling involves binding a tool to a model and, when appropriate, the model can _decide_ to call this tool and ensure its response conforms to the tool's schema.
With this in mind, the central concept is straightforward: _simply bind our schema to a model as a tool!_
With this in mind, the central concept is straightforward: _create a tool with our schema and bind it to the model!_
Here is an example using the `ResponseFormatter` schema defined above:

```typescript
@@ -90,8 +90,14 @@ const model = new ChatOpenAI({
temperature: 0,
});

// Bind ResponseFormatter schema as a tool to the model
const modelWithTools = model.bindTools([ResponseFormatter]);
// Create a tool with ResponseFormatter as its schema.
const responseFormatterTool = tool(async () => {}, {
name: "responseFormatter",
schema: ResponseFormatter,
});

// Bind the created tool to the model
const modelWithTools = model.bindTools([responseFormatterTool]);

// Invoke the model
const aiMsg = await modelWithTools.invoke(
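// A minimal, self-contained sketch of the pattern the updated docs describe:
// create a tool whose schema is the desired output shape, bind it to the model,
// and read the structured arguments back off the resulting tool call. The
// schema fields, model name, and prompt below are illustrative assumptions,
// not the guide's exact values.
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";

// Stand-in for the guide's ResponseFormatter Zod schema.
const ResponseFormatter = z.object({
  answer: z.string().describe("The answer to the user's question"),
  followup_question: z.string().describe("A follow-up question the user could ask"),
});

const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

// Create a tool with ResponseFormatter as its schema. The handler is a no-op:
// we only care about the structured arguments the model produces for the call.
const responseFormatterTool = tool(async () => {}, {
  name: "responseFormatter",
  schema: ResponseFormatter,
});

// Bind the created tool and invoke; the structured payload arrives as a tool call.
const modelWithTools = model.bindTools([responseFormatterTool]);
const aiMsg = await modelWithTools.invoke("What is the powerhouse of the cell?");
console.log(aiMsg.tool_calls?.[0]?.args);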
2 changes: 1 addition & 1 deletion docs/core_docs/docs/how_to/graph_constructing.ipynb
@@ -102,7 +102,7 @@
"\n",
"const model = new ChatOpenAI({\n",
" temperature: 0,\n",
" model: \"gpt-4-turbo-preview\",\n",
" model: \"gpt-4o-mini\",\n",
"});\n",
"\n",
"const llmGraphTransformer = new LLMGraphTransformer({\n",
4 changes: 2 additions & 2 deletions docs/core_docs/docs/how_to/query_high_cardinality.ipynb
@@ -392,7 +392,7 @@
"metadata": {},
"source": [
"```{=mdx}\n",
"<ChatModelTabs customVarName=\"llmLong\" openaiParams={`{ model: \"gpt-4-turbo-preview\" }`} />\n",
"<ChatModelTabs customVarName=\"llmLong\" openaiParams={`{ model: \"gpt-4o-mini\" }`} />\n",
"```"
]
},
@@ -635,4 +635,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
312 changes: 312 additions & 0 deletions docs/core_docs/docs/integrations/chat/deepseek.ipynb
@@ -0,0 +1,312 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: DeepSeek\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatDeepSeek\n",
"\n",
"This will help you getting started with DeepSeek [chat models](/docs/concepts/#chat-models). For detailed documentation of all `ChatDeepSeek` features and configurations head to the [API reference](https://api.js.langchain.com/classes/_langchain_deepseek.ChatDeepSeek.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/chat/deepseek) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [`ChatDeepSeek`](https://api.js.langchain.com/classes/_langchain_deepseek.ChatDeepSeek.html) | [`@langchain/deepseek`](https://npmjs.com/@langchain/deepseek) | ❌ (see [Ollama](/docs/integrations/chat/ollama)) | beta | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/deepseek?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/deepseek?style=flat-square&label=%20&) |\n",
"\n",
"### Model features\n",
"\n",
"See the links in the table headers below for guides on how to use specific features.\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | \n",
"\n",
"Note that as of 1/27/25, tool calling and structured output are not currently supported for `deepseek-reasoner`.\n",
"\n",
"## Setup\n",
"\n",
"To access DeepSeek models you'll need to create a DeepSeek account, get an API key, and install the `@langchain/deepseek` integration package.\n",
"\n",
"You can also access the DeepSeek API through providers like [Together AI](/docs/integrations/chat/togetherai) or [Ollama](/docs/integrations/chat/ollama).\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://deepseek.com/ to sign up to DeepSeek and generate an API key. Once you've done this set the `DEEPSEEK_API_KEY` environment variable:\n",
"\n",
"```bash\n",
"export DEEPSEEK_API_KEY=\"your-api-key\"\n",
"```\n",
"\n",
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
"\n",
"```bash\n",
"# export LANGSMITH_TRACING=\"true\"\n",
"# export LANGSMITH_API_KEY=\"your-api-key\"\n",
"```\n",
"\n",
"### Installation\n",
"\n",
"The LangChain ChatDeepSeek integration lives in the `@langchain/deepseek` package:\n",
"\n",
"```{=mdx}\n",
"import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
"import Npm2Yarn from \"@theme/Npm2Yarn\";\n",
"\n",
"<IntegrationInstallTooltip></IntegrationInstallTooltip>\n",
"\n",
"<Npm2Yarn>\n",
" @langchain/deepseek @langchain/core\n",
"</Npm2Yarn>\n",
"\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"import { ChatDeepSeek } from \"@langchain/deepseek\";\n",
"\n",
"const llm = new ChatDeepSeek({\n",
" model: \"deepseek-reasoner\",\n",
" temperature: 0,\n",
" // other params...\n",
"})"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"<!-- ## Invocation -->"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"AIMessage {\n",
" \"id\": \"e2874482-68a7-4552-8154-b6a245bab429\",\n",
" \"content\": \"J'adore la programmation.\",\n",
" \"additional_kwargs\": {,\n",
" \"reasoning_content\": \"...\",\n",
" },\n",
" \"response_metadata\": {\n",
" \"tokenUsage\": {\n",
" \"promptTokens\": 23,\n",
" \"completionTokens\": 7,\n",
" \"totalTokens\": 30\n",
" },\n",
" \"finish_reason\": \"stop\",\n",
" \"model_name\": \"deepseek-reasoner\",\n",
" \"usage\": {\n",
" \"prompt_tokens\": 23,\n",
" \"completion_tokens\": 7,\n",
" \"total_tokens\": 30,\n",
" \"prompt_tokens_details\": {\n",
" \"cached_tokens\": 0\n",
" },\n",
" \"prompt_cache_hit_tokens\": 0,\n",
" \"prompt_cache_miss_tokens\": 23\n",
" },\n",
" \"system_fingerprint\": \"fp_3a5770e1b4\"\n",
" },\n",
" \"tool_calls\": [],\n",
" \"invalid_tool_calls\": [],\n",
" \"usage_metadata\": {\n",
" \"output_tokens\": 7,\n",
" \"input_tokens\": 23,\n",
" \"total_tokens\": 30,\n",
" \"input_token_details\": {\n",
" \"cache_read\": 0\n",
" },\n",
" \"output_token_details\": {}\n",
" }\n",
"}\n"
]
}
],
"source": [
"const aiMsg = await llm.invoke([\n",
" [\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ],\n",
" [\"human\", \"I love programming.\"],\n",
"])\n",
"aiMsg"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"console.log(aiMsg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"AIMessage {\n",
" \"id\": \"6e7f6f8c-8d7a-4dad-be07-425384038fd4\",\n",
" \"content\": \"Ich liebe es zu programmieren.\",\n",
" \"additional_kwargs\": {,\n",
" \"reasoning_content\": \"...\",\n",
" },\n",
" \"response_metadata\": {\n",
" \"tokenUsage\": {\n",
" \"promptTokens\": 18,\n",
" \"completionTokens\": 9,\n",
" \"totalTokens\": 27\n",
" },\n",
" \"finish_reason\": \"stop\",\n",
" \"model_name\": \"deepseek-reasoner\",\n",
" \"usage\": {\n",
" \"prompt_tokens\": 18,\n",
" \"completion_tokens\": 9,\n",
" \"total_tokens\": 27,\n",
" \"prompt_tokens_details\": {\n",
" \"cached_tokens\": 0\n",
" },\n",
" \"prompt_cache_hit_tokens\": 0,\n",
" \"prompt_cache_miss_tokens\": 18\n",
" },\n",
" \"system_fingerprint\": \"fp_3a5770e1b4\"\n",
" },\n",
" \"tool_calls\": [],\n",
" \"invalid_tool_calls\": [],\n",
" \"usage_metadata\": {\n",
" \"output_tokens\": 9,\n",
" \"input_tokens\": 18,\n",
" \"total_tokens\": 27,\n",
" \"input_token_details\": {\n",
" \"cache_read\": 0\n",
" },\n",
" \"output_token_details\": {}\n",
" }\n",
"}\n"
]
}
],
"source": [
"import { ChatPromptTemplate } from \"@langchain/core/prompts\"\n",
"\n",
"const prompt = ChatPromptTemplate.fromMessages(\n",
" [\n",
" [\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ],\n",
" [\"human\", \"{input}\"],\n",
" ]\n",
")\n",
"\n",
"const chain = prompt.pipe(llm);\n",
"await chain.invoke(\n",
" {\n",
" input_language: \"English\",\n",
" output_language: \"German\",\n",
" input: \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatDeepSeek features and configurations head to the API reference: https://api.js.langchain.com/classes/_langchain_deepseek.ChatDeepSeek.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "TypeScript",
"language": "typescript",
"name": "tslab"
},
"language_info": {
"codemirror_mode": {
"mode": "typescript",
"name": "javascript",
"typescript": true
},
"file_extension": ".ts",
"mimetype": "text/typescript",
"name": "typescript",
"version": "3.7.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
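The feature table in the new notebook advertises token-level streaming, which the cells above don't exercise. A minimal sketch of a streaming call through the standard `.stream()` Runnable interface, assuming the same `ChatDeepSeek` setup as in the notebook (the prompt is illustrative):

```typescript
import { ChatDeepSeek } from "@langchain/deepseek";

const llm = new ChatDeepSeek({
  model: "deepseek-reasoner",
  temperature: 0,
});

// Stream the response token by token via the standard Runnable interface.
const stream = await llm.stream("Write a haiku about recursion.");
for await (const chunk of stream) {
  // Each chunk is an AIMessageChunk; `content` holds the incremental text.
  process.stdout.write(String(chunk.content));
}
```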