
Commit bfcd056

quantstruct-bot, dev-docs-github-app[bot], dsinghvi, arthurchi93, and stevenbdf authored

Recover commits on main (#313)

* Create file
* chore: upgrade python generator to 4.14.2
* Add AI Condition page and more details on Logic Condition (#308)
* add AI condition page and more details on logic condition
* info
* add Assistant node page
* feat: add langfuse metadata and tags
* feat: add test results
* feat: add more context about dtmf tool
* feat: add explicitly instructions on silent transfers
* minor tweak
* updated mcp server examples

---------

Co-authored-by: dev-docs-github-app[bot] <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Co-authored-by: Deep Singhvi <[email protected]>
Co-authored-by: arthurchi93 <[email protected]>
Co-authored-by: stevenbdf <[email protected]>
Co-authored-by: Sri <[email protected]>
1 parent ac0a806 commit bfcd056

File tree

13 files changed: +207 -47 lines

dev-docs.json

+9
@@ -0,0 +1,9 @@
+{
+  "gitHubApp": {
+    "approvalWorkflow": true,
+    "userDocsWorkflows": [
+      "generateUserDocs"
+    ],
+    "issues": true
+  }
+}

fern/apis/api/generators.yml

+1-1
@@ -10,7 +10,7 @@ groups:
   python-sdk:
     generators:
       - name: fernapi/fern-python-sdk
-        version: 4.3.15
+        version: 4.14.2
         disable-examples: true
 api:
   settings:

fern/docs.yml

+4-2
@@ -239,16 +239,18 @@ navigation:
            path: workflows/nodes/gather.mdx
          - page: API Request
            path: workflows/nodes/api-request.mdx
+         - page: Assistant
+           path: workflows/nodes/assistant.mdx
          - page: Transfer
            path: workflows/nodes/transfer.mdx
          - page: Hangup
            path: workflows/nodes/hangup.mdx
-
      - section: Edges
        contents:
          - page: Logical Conditions
            path: workflows/edges/logical-conditions.mdx
-
+         - page: AI Conditions
+           path: workflows/edges/ai-conditions.mdx
      - section: Squads
        path: squads.mdx
        contents:

fern/providers/observability/langfuse.mdx

+36
@@ -76,3 +76,39 @@ To make the most out of this integration, you can now use Langfuse's [evaluation

 </Step>
 </Steps>
+
+## Enrich Traces
+Vapi allows you to enrich Langfuse traces by integrating [Metadata](https://langfuse.com/docs/tracing-features/metadata) and [Tags](https://langfuse.com/docs/tracing-features/tags).
+
+By default, we will add the following values to the metadata of each trace:
+
+- `call.metadata`
+- `assistant.metadata`
+- `assistantOverrides.metadata`
+- `assistantOverrides.variableValues`
+
+### Usage
+You can enhance your observability in Langfuse by adding metadata and tags:
+
+**Metadata**
+
+Use the [`assistant.observabilityPlan.metadata`](https://docs.vapi.ai/api-reference/assistants/create#request.body.observabilityPlan.metadata) field to attach custom key-value pairs.
+
+Examples:
+- Track experiment versions (`"experiment": "v2.1"`)
+- Store user segments (`"user_type": "beta_tester"`)
+- Log environment details (`"env": "production"`)
+
+**Tags**
+
+Use the [`assistant.observabilityPlan.tags`](https://docs.vapi.ai/api-reference/assistants/create#request.body.observabilityPlan.tags) field to add searchable labels.
+
+Examples:
+- Mark important runs (`"priority"`)
+- Group related sessions (`"onboarding"`, `"A/B_test"`)
+- Filter by feature (`"voice_assistant"`)
+
+Adding metadata and tags makes it easier to filter, analyze, and monitor your assistant's activity in Langfuse.
+
+### Example
+![Langfuse Metadata Example](../../static/images/providers/langfuse-example.png)
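
For context, a minimal sketch of how these fields might be set when creating an assistant over the REST API. The endpoint URL, auth header, and surrounding fields are assumptions for illustration; only `observabilityPlan.metadata` and `observabilityPlan.tags` come from the documentation added above.

```typescript
// Illustrative sketch: only observabilityPlan.metadata and observabilityPlan.tags
// are documented above; the endpoint, auth header, and other fields are assumptions.
// A provider-selection field may also be required; see the Vapi API reference.
const response = await fetch('https://api.vapi.ai/assistant', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'Support Assistant',
    observabilityPlan: {
      metadata: { experiment: 'v2.1', user_type: 'beta_tester', env: 'production' },
      tags: ['priority', 'onboarding', 'voice_assistant'],
    },
  }),
});
console.log(await response.json());
```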

fern/sdk/mcp-server.mdx

+37-9
@@ -163,16 +163,43 @@ async function main() {
   });

   // Create SSE transport for connection to remote Vapi MCP server
-  const transport = new SSEClientTransport({
-    url: 'https://mcp.vapi.ai/sse',
-    headers: {
-      'Authorization': `Bearer ${process.env.VAPI_TOKEN}`
-    }
-  });
+  const serverUrl = 'https://mcp.vapi.ai/sse';
+  const headers = {
+    Authorization: `Bearer ${process.env.VAPI_TOKEN}`,
+  };
+  const options = {
+    requestInit: { headers: headers },
+    eventSourceInit: {
+      fetch: (url, init) => {
+        return fetch(url, {
+          ...(init || {}),
+          headers: {
+            ...(init?.headers || {}),
+            ...headers,
+          },
+        });
+      },
+    },
+  };
+  const transport = new SSEClientTransport(new URL(serverUrl), options);

   console.log('Connecting to Vapi MCP server via SSE...');
   await mcpClient.connect(transport);
   console.log('Connected successfully');
+
+  // Helper function to parse tool responses
+  function parseToolResponse(response) {
+    if (!response?.content) return response;
+    const textItem = response.content.find(item => item.type === 'text');
+    if (textItem?.text) {
+      try {
+        return JSON.parse(textItem.text);
+      } catch {
+        return textItem.text;
+      }
+    }
+    return response;
+  }

   try {
     // List available tools
@@ -189,7 +216,7 @@ async function main() {
       arguments: {},
     });

-    const assistants = assistantsResponse.content;
+    const assistants = parseToolResponse(assistantsResponse);
     if (!(Array.isArray(assistants) && assistants.length > 0)) {
       console.log('No assistants found. Please create an assistant in the Vapi dashboard first.');
       return;
@@ -207,7 +234,7 @@ async function main() {
       arguments: {},
     });

-    const phoneNumbers = phoneNumbersResponse.content;
+    const phoneNumbers = parseToolResponse(phoneNumbersResponse);
     if (!(Array.isArray(phoneNumbers) && phoneNumbers.length > 0)) {
       console.log('No phone numbers found. Please add a phone number in the Vapi dashboard first.');
       return;
@@ -236,7 +263,8 @@ async function main() {
      },
    });

-    console.log('Call created:', JSON.stringify(createCallResponse.content, null, 2));
+    const createdCall = parseToolResponse(createCallResponse);
+    console.log('Call created:', JSON.stringify(createdCall, null, 2));
  } finally {
    console.log('\nDisconnecting from server...');
    await mcpClient.close();
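
The hunks above reference an `mcpClient` created earlier in the example file. A minimal sketch of the setup they assume, using the standard `@modelcontextprotocol/sdk` client imports; the client name and version strings are placeholders, not taken from the commit.

```typescript
// Assumed setup for the diff above; import paths follow the public
// @modelcontextprotocol/sdk package layout, and name/version are placeholders.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js'; // used for the transport shown above

const mcpClient = new Client(
  { name: 'vapi-mcp-example', version: '1.0.0' },
  { capabilities: {} }
);
// The transport construction and tool calls from the diff follow from here.
```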

fern/squads/silent-transfers.mdx

+13-4
@@ -8,9 +8,13 @@ slug: squads/silent-transfers
 If you want to allow your call flow to move seamlessly from one assistant to another _without_ the caller hearing `Please hold while we transfer you` here’s what to do:

 1. **Update the Destination Assistant’s First Message**
-   - Set the assistant’s `firstMessage` to an _empty string_.
-   - Make sure the `firstMessageMode` is set to `assistant-speaks-first-with-model-generated-message`.
-2. **Trigger the Transfer from the Source Assistant**
+   - Set the assistant's `firstMessage` to an _empty string_.
+   - Set the assistant's `firstMessageMode` to `assistant-speaks-first-with-model-generated-message`.
+
+2. **Update the Squad's assistant destinations messages**
+   - For every `members[*].assistantDestinations[*]`, set the `message` property to an _empty string_.
+
+3. **Trigger the Transfer from the Source Assistant**

    - In that assistant’s prompt, include a line instructing it to transfer to the desired assistant:

@@ -20,7 +24,7 @@ If you want to allow your call flow to move seamlessly from one assistant to ano

    - Replace `'assistantName'` with the exact name of the next assistant.

-3. **Direct the Destination Assistant’s Behavior**
+4. **Direct the Destination Assistant’s Behavior**
    - In that assistant’s prompt, include a line instructing it to _`Proceed directly to the Task section without any greetings or small talk.`_
    - This ensures there’s no awkward greeting or “Hello!” when the next assistant begins speaking.

@@ -38,6 +42,10 @@ Below are the key JSON examples you’ll need. These show how to structure your

 ### **HP Payment Squad With SubAgent**

+<Warning>
+  Make sure the `members[*].assistantDestinations[*].message` properties are set to an _empty string_.
+</Warning>
+
 ```json
 {
   "members": [
@@ -97,6 +105,7 @@ Below are the key JSON examples you’ll need. These show how to structure your
         "temperature": 0.3
       },
       "firstMessage": "",
+      "firstMessageMode": "assistant-speaks-first-with-model-generated-message",
       "transcriber": {
         "model": "nova-2",
         "language": "en",
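Putting the silent-transfer steps above together, a trimmed sketch of the squad payload shape they describe; the assistant names, descriptions, and omitted fields are illustrative, not taken from the full example in the file.

```typescript
// Illustrative only: names and descriptions are made up; the fields shown
// mirror the silent-transfer settings described in the steps above.
const squadPayload = {
  members: [
    {
      assistant: {
        name: 'triageAssistant',
        model: { /* provider, model, messages, ... */ },
      },
      assistantDestinations: [
        {
          type: 'assistant',
          assistantName: 'paymentAssistant',
          message: '', // empty string, so no "Please hold..." announcement is spoken
          description: 'Transfer when the caller wants to make a payment',
        },
      ],
    },
    {
      assistant: {
        name: 'paymentAssistant',
        firstMessage: '',
        firstMessageMode: 'assistant-speaks-first-with-model-generated-message',
        model: { /* provider, model, messages, ... */ },
      },
      assistantDestinations: [],
    },
  ],
};
```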

fern/tools/default-tools.mdx

+53-11
@@ -62,9 +62,9 @@ This function is provided when `endCall` is included in the assistant's list of
 }
 ```

-#### Dial Keypad (DTMF)
+#### Send Text

-This function is provided when `dtmf` is included in the assistant's list of available tools (see configuration options [here](/api-reference/assistants/create#request.body.model.openai.tools.dtmf)). The assistant will be able to enter digits on the keypad.
+This function is provided when `sms` is included in the assistant's list of available tools (see configuration options [here](/api-reference/assistants/create#request.body.model.openai.tools.sms)). The assistant can use this function to send SMS messages using a configured Twilio account.

 ```json
 {
@@ -74,21 +74,24 @@ This function is provided when `dtmf` is included in the assistant's list of ava
     "messages": [
       {
         "role": "system",
-        "content": "You are an assistant at a law firm. When you hit a menu, use the dtmf function to enter the digits."
+        "content": "You are an assistant. When the user asks you to send a text message, use the sms function."
       }
     ],
     "tools": [
       {
-        "type": "dtmf"
+        "type": "sms",
+        "metadata": {
+          "from": "+15551234567"
+        }
       }
     ]
   }
 }
 ```

-#### Send Text
+#### Dial Keypad (DTMF)

-This function is provided when `sms` is included in the assistants list of available tool (see configuration options [here](/api-reference/assistants/create#request.body.model.openai.tools.sms)). The assistant can use this function to send SMS messages using a configured Twilio account.
+This function is provided when `dtmf` is included in the assistant's list of available tools (see configuration options [here](/api-reference/assistants/create#request.body.model.openai.tools.dtmf)). The assistant will be able to enter digits on the keypad.

 ```json
 {
@@ -98,21 +101,60 @@ This function is provided when `sms` is included in the assistant's list of av
     "messages": [
       {
         "role": "system",
-        "content": "You are an assistant. When the user asks you to send a text message, use the sms function."
+        "content": "You are an assistant at a law firm. When you hit a menu, use the dtmf function to enter the digits."
       }
     ],
     "tools": [
       {
-        "type": "sms",
-        "metadata": {
-          "from": "+15551234567"
-        }
+        "type": "dtmf"
       }
     ]
   }
 }
 ```

+<Note>
+There are three methods for sending DTMF in a phone call:
+
+1. **In-band DTMF**: DTMF tones are transmitted as part of the regular audio stream. This is the simplest method, but it can suffer from quality issues if the audio stream is compressed or degraded.
+2. **Out-of-band DTMF via RFC 2833**: This method sends DTMF tones separately from the audio stream, within RTP (Real-time Transport Protocol) packets. It's typically more reliable than in-band DTMF, particularly for VoIP applications where the audio stream might be compressed. RFC 2833 is the standard that initially defined this method; it has since been superseded by RFC 4733, but the method is still commonly referred to as RFC 2833.
+3. **Out-of-band DTMF via SIP INFO messages**: In this approach, DTMF tones are sent as separate SIP INFO messages. While this can be more reliable than in-band DTMF, it's not as widely supported as the RFC 2833 method.
+
+As of writing, Vapi's DTMF tool uses in-band DTMF. Please note that this method may not work with certain IVRs. If you are running into this issue, the recommended approach is to have your assistant say the options out loud if available. For example, when an IVR says "Press 1 or say Sales for the Sales department," prefer having the assistant say "Sales."
+</Note>
+
+##### Tool Effectiveness
+
+To evaluate this tool, we set up a Vapi assistant with the DTMF tool enabled and conducted calls to a range of IVR systems, including a Twilio IVR (configured via Studio Flows) and several third-party IVRs such as pharmacies and insurance companies.
+
+**Testing Methodology**
+
+We called and navigated through the IVRs using three different strategies:
+
+1. **Direct Dialpad**: calling from a personal phone and dialing options using the dialpad.
+2. **Vapi DTMF Tool**: an assistant configured with the DTMF tool.
+3. **Manual DTMF Sound**: calling from a personal phone and playing DTMF tones generated by software. _(similar approach to the Vapi DTMF Tool)_
+
+**Key Findings**
+
+- The assistant successfully navigated some of the third-party IVRs.
+- The assistant encountered issues with Twilio IVRs, likely due to Twilio's preference for RFC 2833.
+- We observed occasional delays in DTMF tone transmission, which may affect effectiveness with IVRs that have short timeouts.
+
+**Conclusion**
+
+The tool's effectiveness depends on the IVR system's configuration and DTMF capturing method. We are working to improve compatibility and reduce transmission delays for broader and more reliable support.
+

 <Accordion title="Custom Functions: Deprecated">
 ### Custom Functions
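
As a usage sketch, the two tool configurations above could be combined in a single assistant definition. The request shape outside `model.tools` is an assumption for illustration; the `tools` entries mirror the JSON shown in the diff.

```typescript
// Hypothetical request sketch: only the tools array mirrors the docs above;
// the URL, auth header, and other assistant fields are assumptions.
const assistant = {
  model: {
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content:
          'When you reach a phone menu, use the dtmf function; when asked to text, use the sms function.',
      },
    ],
    tools: [
      { type: 'dtmf' },
      { type: 'sms', metadata: { from: '+15551234567' } },
    ],
  },
};

await fetch('https://api.vapi.ai/assistant', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(assistant),
});
```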
fern/workflows/edges/ai-conditions.mdx

+14-17
@@ -1,32 +1,29 @@
 ---
 title: AI Conditions
-subtitle: Dynamic AI-driven branching in workflows
+subtitle: Smart workflow branching powered by AI
 slug: /workflows/edges/ai-conditions
 ---

 ## Overview

-The **AI Conditions** feature leverages artificial intelligence to determine the next step in your workflow based on conversation context. Unlike traditional logical conditions—which rely on explicit rules—AI Conditions allow your voice agent to evaluate complex or ambiguous scenarios, making branching decisions dynamically.
+AI Conditions use artificial intelligence to decide the next step in your workflow based on the conversation. Instead of using fixed rules, they can understand complex situations and make smart decisions in real time.

 ## How It Works

-- **Contextual Evaluation:** The AI considers data from previous steps (e.g., user input, API responses) to gauge the conversation context.
-- **Adaptive Decision-Making:** It uses its judgment to choose the most appropriate branch without relying solely on fixed comparisons.
-- **Seamless Integration:** AI Conditions can complement existing logical conditions, offering a balance between predictable rules and adaptive behavior.
+1. The AI looks at the conversation history and context.
+2. It makes a smart decision about which path to take, based on variables collected from Gather verbs and data returned from API requests.
+3. It works alongside your existing rules for maximum flexibility.

-## Configuration

-- **Activation:** Enable AI Conditions on a condition node where you want the AI to drive the branching logic.
-- **Context Input:** The AI will utilize variables collected from Gather verbs and data returned from API requests.
-- **Decision Logic:** No manual rules are required—the AI interprets context in real time to select the optimal branch.
-- **Fallback:** You can combine AI Conditions with traditional logical conditions for added control.
+## Configuration
+- **Condition Node:** Start by inserting a condition node into your workflow.
+- **Branch Setup:** Attach one or more nodes to the condition node.
+- **AI Tag:** Click on the connecting edge and choose `AI` from the `Condition Type` dropdown.
+- **AI Condition:** Use the input to define when the chosen branch should be taken.

 ## Usage

-Deploy AI Conditions when your workflow requires flexibility and context-sensitive decision-making, such as:
-
-- Handling ambiguous or multi-faceted user responses.
-- Addressing scenarios where strict rules may not capture the conversation's nuances.
-- Enhancing the user experience by providing more natural, human-like interactions.
-
-For detailed configuration instructions and best practices, please refer to our dedicated documentation on AI-driven workflows.
+Use AI Conditions when you need:
+- To handle unclear or complex user responses
+- More flexibility than traditional rules can provide
+- More natural, human-like conversations

fern/workflows/edges/logical-conditions.mdx

+14-2
@@ -12,10 +12,22 @@ Logical Conditions enable you to create branching paths within your workflow. Th

 - **Condition Node:** Start by inserting a condition node into your workflow.
 - **Branch Setup:** Attach one or more nodes to the condition node.
-- **Logic Tag:** Click the "Logic" tag on each connecting edge to define rules or comparisons (e.g., equals, greater than) using variables collected from previous steps.
+- **Logic Tag:** Click the "Logic" tag on each connecting edge and select `Logic` from the `Condition Type` dropdown.
+- **Condition Type:** Choose between requiring ALL conditions to be met (AND logic) or ANY condition to be met (OR logic).
+- **Logic Conditions:** Use the panel to define one or more rules or comparisons (e.g., equals, greater than) using variables collected from previous steps.
+
+<Note>
+To remove a comparison, click the Trash icon to the right of the comparison.
+</Note>
+
+<Frame>
+  <img src="../../static/images/workflows/logic-condition.png" />
+</Frame>

 ## Usage

 Implement Logical Conditions to guide your conversation dynamically. They allow your workflow to adjust its path based on real-time data, ensuring more personalized and responsive interactions.

-For detailed configuration instructions and advanced usage, please refer to our dedicated documentation on condition configuration.
+<Note>
+When [`Gathering`](/workflows/nodes/gather) string values that will be used in conditions, consider using `enum` types to ensure consistent value comparison. This helps prevent issues with case sensitivity, whitespace, or formatting differences that could affect condition evaluation.
+</Note>
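
To illustrate the `enum` suggestion in that note, a hypothetical sketch of a gathered value constrained to a fixed set of strings; the field names are illustrative and may not match Vapi's exact Gather schema.

```typescript
// Hypothetical Gather output schema: field names are illustrative, not the
// authoritative Vapi format; the point is the enum constraint on the value.
const gatherOutput = {
  name: 'department',
  type: 'string',
  // Constraining the value keeps later Logic comparisons exact
  // (no case or whitespace drift like "Sales " vs "sales").
  enum: ['sales', 'support', 'billing'],
  description: 'Which department does the caller need?',
};
```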
