`docs/open-source/sentry.mdx`

Integrating Sentry into your application allows you to:

- Gain insights into the user experience and identify bottlenecks.
- Improve the overall reliability and stability of your application.

## Configuring Sentry

To integrate Sentry into your application, you need to initialize the Sentry SDK at the earliest instantiation point in your code. This ensures that Sentry starts capturing errors and performance data as soon as possible.
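
A typical initialization might look like the sketch below; the DSN value is a placeholder, and the sample rates, `max_request_body_size`, and integrations shown are illustrative choices rather than required settings:

```python
import sentry_sdk
from sentry_sdk.integrations.asyncio import AsyncioIntegration
from sentry_sdk.integrations.loguru import LoguruIntegration

sentry_sdk.init(
    # The Data Source Name (DSN) uniquely identifies your Sentry project and
    # tells the SDK where to send captured data (placeholder value below).
    dsn="https://<key>@<org>.ingest.sentry.io/<project>",
    # The environment your application is running in (e.g. production, staging, development).
    environment="production",
    # Sample rate for performance data (transactions); 1.0 captures 100% of transactions.
    traces_sample_rate=1.0,
    # Sample rate for profiling data.
    profiles_sample_rate=1.0,
    # Sample rate for errors and exceptions; 1.0 captures 100% of errors.
    sample_rate=1.0,
    # "always" captures request bodies regardless of size.
    max_request_body_size="always",
    # Integrations to enable: asyncio support and Loguru logging support.
    integrations=[
        AsyncioIntegration(),
        LoguruIntegration(),
    ],
)
```
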
Head to [sentry.io](https://sentry.io) to get your free DSN! See https://docs.sentry.io/platforms/python/configuration/options/ for more info about the above options.

## Instrumenting your application

Vocode exposes a set of custom spans that get automatically sent to Sentry during Vocode conversations. To use these spans, you'll need to manually attach a transaction to the current scope.

### Example 1: Streaming Conversation

Update `quickstarts/streaming_conversation.py`, replacing the `main` function with the following code:

```python
import sentry_sdk
from sentry_sdk.integrations.asyncio import AsyncioIntegration
from sentry_sdk.integrations.loguru import LoguruIntegration

from vocode import sentry_transaction

sentry_sdk.init(
    ...,
    integrations=[
        AsyncioIntegration(),
        LoguruIntegration(),
    ],
)


async def main():
    ...
    await conversation.start()
    ...
```
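
The example above imports `sentry_transaction` without showing how the transaction gets attached; a minimal sketch, assuming `sentry_transaction` is a context variable that Vocode reads to nest its custom spans, might look like:

```python
import asyncio

if __name__ == "__main__":
    # Start a Sentry transaction and attach it before running the conversation
    # so the custom spans are recorded under it. The op/name values are
    # illustrative, and sentry_transaction being a ContextVar is an assumption.
    transaction = sentry_sdk.start_transaction(op="conversation", name="streaming_conversation")
    sentry_transaction.set(transaction)
    asyncio.run(main())
    transaction.finish()
```
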

Head to the Performance pane in Sentry and click into the trace; you should see something that looks like this:



### Example 2: Telephony Server

Simply instantiate the Sentry SDK at the top of the file, e.g. in `app/telephony_app/main.py`:

```python
sentry_sdk.init(
    ...
)

app = FastAPI(docs_url=None)
```

## Custom Spans Overview

### Latency of Conversation

**Latency of Conversation** _(`LATENCY_OF_CONVERSATION`)_ measures the overall latency of a conversation, from when the user finishes their utterance to when the agent begins its response. It is broken up into the following sub-spans:

- **[Deepgram Only] Endpointing Latency** _(`ENDPOINTING_LATENCY`)_: Captures the extra latency involved in retrieving finalized transcripts from Deepgram before deciding to invoke the agent.
- **Language Model Time to First Token** _(`LANGUAGE_MODEL_TIME_TO_FIRST_TOKEN`)_: Tracks the time taken by the language model to generate the first token (word or character) in its response.
- **Synthesis Time to First Token** _(`SYNTHESIS_TIME_TO_FIRST_TOKEN`)_: Measures the time taken by the synthesizer to generate the first token in the synthesized speech. This is useful for evaluating the initial response time of the synthesizer.
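
For example, under this breakdown a turn where endpointing adds 300 ms, the language model takes 400 ms to produce its first token, and synthesis takes 200 ms to produce its first audio would show a `LATENCY_OF_CONVERSATION` of roughly 900 ms (illustrative numbers, not benchmarks).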

We capture the following spans in our Deepgram integration:

- **Connected to First Send** _(`CONNECTED_TO_FIRST_SEND`)_: Measures the time from when the Deepgram websocket connection is established to when the first data is sent.
- **[Deepgram Only] First Send to First Receive** _(`FIRST_SEND_TO_FIRST_RECEIVE`)_: Measures the time from when the first data is sent to Deepgram to when the first response is received.
- **[Deepgram Only] Start to Connection** _(`START_TO_CONNECTION`)_: Tracks the time it takes to establish the websocket connection with Deepgram.

### LLM

For our OpenAI and Anthropic integrations, we capture:

- **Time to First Token** _(`TIME_TO_FIRST_TOKEN`)_: Measures the time taken by the language model to generate the first token (word or character) in its response.
- **LLM First Sentence Total** _(`LLM_FIRST_SENTENCE_TOTAL`)_: Measures the total time taken by the language model to generate the first complete sentence.

### Synthesizer

For most of our synthesizer integrations, we capture:

- **Synthesis Generate First Chunk** _(`SYNTHESIS_GENERATE_FIRST_CHUNK`)_: Measures the time taken to generate the first chunk of synthesized speech.
- **Synthesizer Synthesis Total** _(`SYNTHESIZER_SYNTHESIS_TOTAL`)_: Tracks the total time taken for the entire speech synthesis process. This span helps in understanding the overall performance of the synthesizer.

These spans will have the actual synthesizer's name prepended to them. For example, if the synthesizer is `ElevenLabsSynthesizer`, the span `SYNTHESIZER_SYNTHESIS_TOTAL` will be recorded as `ElevenLabsSynthesizer.synthesis_total`.