feat: add WebSocket support for workflow ID transmission #189
Conversation
**Walkthrough**

The changes introduce a new `workflow_id` parameter to the WebSocket chat clients in `cozepy/websockets/chat/__init__.py` and add benchmark example scripts under `examples/`.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
participant Main
participant ArkClient
participant ChatService
Main->>Main: Retrieve environment variables
Main->>Main: Start 50 test iterations
Main->>ArkClient: Initialize with endpoint and token
ArkClient->>ChatService: Send chat request with text input
ChatService-->>ArkClient: Return chat response
ArkClient-->>Main: Provide latency data
Main->>Main: Calculate & log average latency
```
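Read as code, the diagram describes a simple timing loop. Here is a minimal sketch under assumed names (the real script `examples/benchmark_ark_text.py` is reviewed below; `send_chat_request` is a stand-in for the Ark client call):

```python
import os
import time

def run_benchmark(send_chat_request, iterations: int = 50) -> None:
    # Retrieve environment variables, as in the example script
    ep = os.getenv("ARK_EP")
    token = os.getenv("ARK_TOKEN")

    latencies = []
    for _ in range(iterations):
        start = time.monotonic()
        send_chat_request(ep, token, "hello")  # chat request with text input
        latencies.append((time.monotonic() - start) * 1000)  # latency in ms

    # Calculate & log average latency
    print(f"average latency over {iterations} runs: {sum(latencies) / len(latencies):.2f} ms")
```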
**Codecov Report**

All modified and coverable lines are covered by tests ✅
```
@@           Coverage Diff           @@
##             main     #189   +/-   ##
=======================================
  Coverage   89.94%   89.94%
=======================================
  Files          65       65
  Lines        5858     5858
=======================================
  Hits         5269     5269
  Misses        589      589
```
Actionable comments posted: 2
🧹 Nitpick comments (4)
examples/benchmark_ark_text.py (2)
**Lines 11-18**: Consider improving the latency calculation function.

The `cal_latency` function has a few issues:

- The name is abbreviated; consider renaming it to `calculate_latency` for clarity.
- It removes the last value from the sorted list when calculating the average, which may be intentional (a trimmed mean that drops the highest value to reduce outlier impact) but isn't explained. For example, for latencies `[100, 110, 500]` the trimmed average is (100 + 110) / 2 = 105 rather than 236.
- It uses `"%2d"` for formatting, which forces integer output and may truncate decimal values.

```diff
-def cal_latency(latency_list: List[int]) -> str:
+def calculate_latency(latency_list: List[int]) -> str:
     if latency_list is None or len(latency_list) == 0:
         return "0"
     if len(latency_list) == 1:
         return f"{latency_list[0]}"
     res = latency_list.copy()
     res.sort()
-    return "%2d" % ((sum(res[:-1]) * 1.0) / (len(res) - 1))
+    # Calculate average excluding the highest value to reduce outlier impact
+    return "%.2f" % ((sum(res[:-1]) * 1.0) / (len(res) - 1))
```
**Lines 21-39**: Consider parameterizing the base URL and enhancing latency measurement.

The function hardcodes the base URL and only measures latency until the first token arrives rather than the complete response time.

```diff
-def test_latency(ep: str, token: str, text: str):
+def test_latency(ep: str, token: str, text: str, base_url: str = "https://ark.cn-beijing.volces.com/api/v3"):
     from volcenginesdkarkruntime import Ark

-    client = Ark(base_url="https://ark.cn-beijing.volces.com/api/v3", api_key=token)
+    client = Ark(base_url=base_url, api_key=token)
     start = get_current_time_ms()
     stream = client.chat.completions.create(
         model=ep,
         messages=[
             {"role": "user", "content": text},
         ],
         stream=True,
     )
+    first_token_time = None
+    total_tokens = 0
     for chunk in stream:
         if not chunk.choices:
             continue
         if chunk.choices[0].delta.content:
-            return "", chunk.choices[0].delta.content, get_current_time_ms() - start
+            if first_token_time is None:
+                first_token_time = get_current_time_ms()
+            total_tokens += 1
+    end_time = get_current_time_ms()
+    first_token_latency = first_token_time - start if first_token_time else 0
+    total_latency = end_time - start
+    return "", chunk.choices[0].delta.content, first_token_latency, total_latency, total_tokens
```

cozepy/websockets/chat/__init__.py (2)
**Line 339**: Ensure backward compatibility in create methods.

The `workflow_id` parameter is passed without checking whether it exists in kwargs; ensure backward compatibility by checking that the parameter exists before passing it.

```diff
-            workflow_id=workflow_id,
+            workflow_id=workflow_id,  # Can be None
```

Also applies to: 575-575
**Lines 163-579**: Consider adding documentation for the workflow_id parameter.

The purpose of the `workflow_id` parameter and when it should be used is not explained in the code. Add docstrings to the relevant methods, for example:

```python
def __init__(
    self,
    base_url: str,
    auth: Auth,
    requester: Requester,
    bot_id: str,
    on_event: Union[WebsocketsChatEventHandler, Dict[WebsocketsEventType, Callable]],
    workflow_id: Optional[str] = None,
    **kwargs,
):
    """Initialize a WebSocket chat client.

    Args:
        base_url: The base URL for the WebSocket connection.
        auth: Authentication details.
        requester: The requester to use for the WebSocket connection.
        bot_id: The ID of the bot to connect to.
        on_event: Event handler or dictionary mapping event types to handler functions.
        workflow_id: Optional ID of the workflow to use with this chat connection.
        **kwargs: Additional arguments to pass to the WebSocket connection.
    """
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cozepy/websockets/chat/__init__.py (9 hunks)
- examples/benchmark_ark_text.py (1 hunks)
- examples/benchmark_websockets_chat.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- examples/benchmark_websockets_chat.py
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (Python 3.8 on macOS)
- GitHub Check: test (Python 3.8 on Windows)
🔇 Additional comments (4)
examples/benchmark_ark_text.py (1)
**Lines 54-55**: LGTM!

The usage of `asyncio.run(main())` is correct for initiating the async main function.

cozepy/websockets/chat/__init__.py (3)
**Line 9**: LGTM!

Adding the import for `remove_none_values` is appropriate for the changes being made.
**Lines 196-201**: LGTM! Good use of remove_none_values for query parameters.

The implementation correctly uses `remove_none_values` to filter out None values from the query parameters dictionary, which is a good practice for handling optional parameters in URL construction.

Also applies to: 432-437
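As a rough sketch of the pattern being praised (the actual helper shipped in cozepy may differ in details), dropping None values before building the query string means optional parameters simply never appear in the URL:

```python
# Hypothetical re-implementation for illustration only; cozepy ships its own helper.
def remove_none_values(d: dict) -> dict:
    """Return a copy of d with all None-valued entries dropped."""
    return {k: v for k, v in d.items() if v is not None}

query = remove_none_values({"bot_id": "bot_123", "workflow_id": None})
print(query)  # {'bot_id': 'bot_123'}; workflow_id is omitted from the query string
```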
**Lines 163-205**: ❓ Verification inconclusive
Verify the impact of the workflow_id addition on existing WebSocket connections.
Adding a new parameter to the WebSocket URL query could affect the server's response. Ensure the server properly handles this new parameter.
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check if any tests or documentation exist that verify WebSocket connection with workflow_id

# Look for tests related to WebSocketsChatClient with workflow_id
rg -t py "WebsocketsChatClient.*workflow_id" --glob "!cozepy/websockets/chat/__init__.py"

# Check for other references to workflow_id in the codebase that might provide context
rg -t py "workflow_id" --glob "!cozepy/websockets/chat/__init__.py"
```

Length of output: 6649
**Review Verification: Confirm Server Handling of the `workflow_id` Query Parameter**

The changes introduce the `workflow_id` parameter into the WebSocket URL query for the chat client. While similar usage exists throughout the codebase (e.g., in workflows modules and tests in `tests/test_workflows_chat.py`), please double-check that this addition does not adversely affect existing WebSocket connections and that the server's endpoint (`v1/chat`) properly handles the new parameter. Specifically:

- Verify (via tests or direct server inspection) that the server does not reject or misinterpret connections that include `workflow_id`.
- Ensure that any changes in query parameters do not lead to unexpected responses or connection instability.
```python
async def main():
    ep = os.getenv("ARK_EP")
    token = os.getenv("ARK_TOKEN")
    text = os.getenv("COZE_TEXT") or "讲个笑话"  # "tell a joke"

    times = 50
    text_latency = []
    for i in range(times):
        logid, text, latency = test_latency(ep, token, text)
        text_latency.append(latency)
        print(f"[latency.ark.text] {i}, latency: {cal_latency(text_latency)} ms, log: {logid}, text: {text}")
```
🛠️ Refactor suggestion
Refactor main function to properly utilize async functionality and improve benchmark quality.
The function is declared as async but doesn't use await statements. Also, running 50 iterations back-to-back might skew results.
```diff
 async def main():
     ep = os.getenv("ARK_EP")
     token = os.getenv("ARK_TOKEN")
     text = os.getenv("COZE_TEXT") or "讲个笑话"

     times = 50
     text_latency = []
+    ttft_latency = []  # Time to first token
+    ttr_latency = []   # Time to complete response
+    token_counts = []
     for i in range(times):
-        logid, text, latency = test_latency(ep, token, text)
-        text_latency.append(latency)
-        print(f"[latency.ark.text] {i}, latency: {cal_latency(text_latency)} ms, log: {logid}, text: {text}")
+        # Add a small pause between requests to avoid rate limiting and network congestion
+        if i > 0:
+            await asyncio.sleep(0.5)
+
+        logid, content, first_token_latency, total_latency, tokens = test_latency(ep, token, text)
+        ttft_latency.append(first_token_latency)
+        ttr_latency.append(total_latency)
+        token_counts.append(tokens)
+
+        print(f"[latency.ark.text] {i}, TTFT: {calculate_latency(ttft_latency)} ms, " +
+              f"TTR: {calculate_latency(ttr_latency)} ms, tokens: {tokens}, " +
+              f"log: {logid}, first token: {content[:20]}...")
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
async def main():
    ep = os.getenv("ARK_EP")
    token = os.getenv("ARK_TOKEN")
    text = os.getenv("COZE_TEXT") or "讲个笑话"  # "tell a joke"

    times = 50
    text_latency = []
    ttft_latency = []  # Time to first token
    ttr_latency = []   # Time to complete response
    token_counts = []
    for i in range(times):
        # Add a small pause between requests to avoid rate limiting and network congestion
        if i > 0:
            await asyncio.sleep(0.5)

        logid, content, first_token_latency, total_latency, tokens = test_latency(ep, token, text)
        ttft_latency.append(first_token_latency)
        ttr_latency.append(total_latency)
        token_counts.append(tokens)

        print(f"[latency.ark.text] {i}, TTFT: {calculate_latency(ttft_latency)} ms, " +
              f"TTR: {calculate_latency(ttr_latency)} ms, tokens: {tokens}, " +
              f"log: {logid}, first token: {content[:20]}...")
```
```diff
@@ -167,6 +167,7 @@ def __init__(
         auth: Auth,
         requester: Requester,
         bot_id: str,
+        workflow_id: str,
```
🛠️ Refactor suggestion
Make workflow_id parameter optional to maintain backward compatibility.
The `workflow_id` parameter should be optional with a default value of `None` to prevent breaking existing code. This is especially important for public APIs.
```diff
-    workflow_id: str,
+    workflow_id: Optional[str] = None,
```
This change should be applied to all four instances where workflow_id is being added as a parameter.
Also applies to: 330-330, 406-406, 566-566
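To see why the default matters, here is a minimal, self-contained sketch of the compatibility concern (the function and parameter names below are illustrative, not the exact cozepy API):

```python
from typing import Optional

def build_query(bot_id: str, workflow_id: Optional[str] = None) -> str:
    """Build a connection query string; workflow_id defaults to None so existing callers keep working."""
    params = {"bot_id": bot_id, "workflow_id": workflow_id}
    filtered = {k: v for k, v in params.items() if v is not None}
    return "&".join(f"{k}={v}" for k, v in filtered.items())

print(build_query("bot_123"))                        # bot_id=bot_123 (old call sites unaffected)
print(build_query("bot_123", workflow_id="wf_456"))  # bot_id=bot_123&workflow_id=wf_456
```

With a required `workflow_id: str`, the first call above would raise a `TypeError`, which is exactly the breakage this suggestion guards against.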
**Summary by CodeRabbit**

New Features

- WebSocket chat clients can now transmit a workflow ID (`workflow_id`) with the connection.
- New benchmark example scripts (`examples/benchmark_ark_text.py`, `examples/benchmark_websockets_chat.py`).

Bug Fixes