
feat: add WebSocket support for workflow ID transmission #189

Merged
merged 1 commit into main from ws-workflow-id
Mar 3, 2025

Conversation

chyroc
Collaborator

@chyroc chyroc commented Mar 3, 2025

Summary by CodeRabbit

  • New Features

    • Enhanced chat client setup by adding an extra configuration option to enable more flexible workflow interactions.
    • Introduced a new benchmarking tool that measures chat response latency using asynchronous tests.
  • Bug Fixes

    • Adjusted the chat demonstration by disabling automatic audio file saving, reducing unintended file creation.

@chyroc chyroc added the feature label Mar 3, 2025

coderabbitai bot commented Mar 3, 2025

Walkthrough

The changes introduce a new workflow_id parameter into several chat client classes and their corresponding factory methods. This parameter is forwarded in the constructor to be included in the query parameters using a helper function that removes None values. Additionally, a new benchmark script for measuring chat completion latency via the Ark SDK has been added, along with utility functions for time measurement and latency calculation. A minor modification was made in another benchmark script by commenting out the line that writes audio content to a file.
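The forwarding described above can be sketched roughly as follows. The class shape and helper body here are illustrative assumptions modeled on this summary, not the actual cozepy implementation (the real constructor also takes auth, requester, and event-handler arguments):

```python
from typing import Optional


def remove_none_values(d: dict) -> dict:
    # Assumed behavior of the helper named in the walkthrough:
    # drop entries whose value is None before building the query string.
    return {k: v for k, v in d.items() if v is not None}


class WebsocketsChatClient:
    # Hypothetical stand-in for the real client class.
    def __init__(self, bot_id: str, workflow_id: Optional[str] = None):
        self.query = remove_none_values({
            "bot_id": bot_id,
            "workflow_id": workflow_id,
        })


# With workflow_id omitted, the query contains no workflow_id key at all,
# so existing callers that never pass it see unchanged behavior.
print(WebsocketsChatClient("bot-1").query)
print(WebsocketsChatClient("bot-1", workflow_id="wf-9").query)
```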

Changes

File Change Summary
cozepy/websockets/chat/__init__.py Updated constructors and create methods in WebsocketsChatClient, WebsocketsChatBuildClient, AsyncWebsocketsChatClient, and AsyncWebsocketsChatBuildClient to include a new workflow_id parameter, which is incorporated into the query dictionary using remove_none_values.
examples/benchmark_ark_text.py Introduced a new script for benchmarking the latency of a chat completion service via the Ark SDK. Added helper functions (get_current_time_ms, cal_latency, test_latency) and an asynchronous main function to drive the testing loop.
examples/benchmark_websockets_chat.py Commented out the line responsible for writing audio content to a file (content.write_to_file("test.wav")), thus disabling the file output functionality.
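Based on the summary above, the benchmark's timing helpers presumably look something like the following sketch (the exact script may differ; the outlier-dropping behavior matches what the review comments below describe):

```python
import time
from typing import List


def get_current_time_ms() -> int:
    # Wall-clock time in integer milliseconds, for simple latency deltas.
    return int(time.time() * 1000)


def cal_latency(latency_list: List[int]) -> str:
    # Average latency over the samples, dropping the single highest value
    # to soften one-off outliers (e.g. a cold first request).
    if not latency_list:
        return "0"
    if len(latency_list) == 1:
        return f"{latency_list[0]}"
    res = sorted(latency_list)
    return "%.2f" % (sum(res[:-1]) / (len(res) - 1))


print(cal_latency([100, 120, 900]))  # the 900 ms outlier is excluded -> "110.00"
```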

Sequence Diagram(s)

sequenceDiagram
    participant Main
    participant ArkClient
    participant ChatService

    Main->>Main: Retrieve environment variables
    Main->>Main: Start 50 test iterations
    Main->>ArkClient: Initialize with endpoint and token
    ArkClient->>ChatService: Send chat request with text input
    ChatService-->>ArkClient: Return chat response
    ArkClient-->>Main: Provide latency data
    Main->>Main: Calculate & log average latency

Poem

I'm Bunny, hopping through the code so clear,
Adding workflow_id to make our chats adhere.
Benchmarking speeds, latencies in delight,
With Ark and websockets, our scripts take flight.
In lines of code and binkies so bright,
I celebrate these changes from morning till night!
🥕🐇 Happy coding, hopping into the light!


codecov bot commented Mar 3, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.94%. Comparing base (b1127b9) to head (2f18e05).
Report is 1 commit behind head on main.

@@           Coverage Diff           @@
##             main     #189   +/-   ##
=======================================
  Coverage   89.94%   89.94%           
=======================================
  Files          65       65           
  Lines        5858     5858           
=======================================
  Hits         5269     5269           
  Misses        589      589           
Files with missing lines Coverage Δ
cozepy/websockets/chat/__init__.py 50.00% <100.00%> (ø)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (4)
examples/benchmark_ark_text.py (2)

11-18: Consider improving the latency calculation function.

The cal_latency function has a few issues:

  1. The name is abbreviated - consider renaming to calculate_latency for clarity
  2. It removes the last value from the sorted list when calculating the average, which may be intentional but isn't explained
  3. It uses "%2d" for formatting, which forces integer output and may truncate decimal values
-def cal_latency(latency_list: List[int]) -> str:
+def calculate_latency(latency_list: List[int]) -> str:
    if latency_list is None or len(latency_list) == 0:
        return "0"
    if len(latency_list) == 1:
        return f"{latency_list[0]}"
    res = latency_list.copy()
    res.sort()
-    return "%2d" % ((sum(res[:-1]) * 1.0) / (len(res) - 1))
+    # Calculate average excluding highest value to reduce outlier impact
+    return "%.2f" % ((sum(res[:-1]) * 1.0) / (len(res) - 1))

21-39: Consider parameterizing the base URL and enhancing latency measurement.

The function has hardcoded the base URL and only measures latency until the first token arrives rather than complete response time.

-def test_latency(ep: str, token: str, text: str):
+def test_latency(ep: str, token: str, text: str, base_url: str = "https://ark.cn-beijing.volces.com/api/v3"):
    from volcenginesdkarkruntime import Ark

-    client = Ark(base_url="https://ark.cn-beijing.volces.com/api/v3", api_key=token)
+    client = Ark(base_url=base_url, api_key=token)
    start = get_current_time_ms()
    stream = client.chat.completions.create(
        model=ep,
        messages=[
            {"role": "user", "content": text},
        ],
        stream=True,
    )
+    first_token_time = None
+    total_tokens = 0
    for chunk in stream:
        if not chunk.choices:
            continue

        if chunk.choices[0].delta.content:
+            if first_token_time is None:
+                first_token_time = get_current_time_ms()
+            total_tokens += 1
+    end_time = get_current_time_ms()
+    first_token_latency = first_token_time - start if first_token_time else 0
+    total_latency = end_time - start
+    return "", chunk.choices[0].delta.content, first_token_latency, total_latency, total_tokens
-            return "", chunk.choices[0].delta.content, get_current_time_ms() - start
cozepy/websockets/chat/__init__.py (2)

339-339: Ensure backward compatibility in create methods.

The workflow_id parameter is passed without checking if it exists in kwargs. Ensure backward compatibility by checking if the parameter exists before passing it.

-            workflow_id=workflow_id,
+            workflow_id=workflow_id,  # Can be None

Also applies to: 575-575


163-579: Consider adding documentation for the workflow_id parameter.

The purpose of the workflow_id parameter and when it should be used is not explained in the code. Consider adding docstrings to the relevant methods to explain its purpose and usage.

Add docstrings to the relevant methods, for example:

def __init__(
    self,
    base_url: str,
    auth: Auth,
    requester: Requester,
    bot_id: str,
    workflow_id: Optional[str] = None,
    on_event: Union[WebsocketsChatEventHandler, Dict[WebsocketsEventType, Callable]],
    **kwargs,
):
    """Initialize a WebSocket chat client.
    
    Args:
        base_url: The base URL for the WebSocket connection.
        auth: Authentication details.
        requester: The requester to use for the WebSocket connection.
        bot_id: The ID of the bot to connect to.
        workflow_id: Optional ID of the workflow to use with this chat connection.
        on_event: Event handler or dictionary mapping event types to handler functions.
        **kwargs: Additional arguments to pass to the WebSocket connection.
    """
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b1127b9 and 2f18e05.

📒 Files selected for processing (3)
  • cozepy/websockets/chat/__init__.py (9 hunks)
  • examples/benchmark_ark_text.py (1 hunks)
  • examples/benchmark_websockets_chat.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • examples/benchmark_websockets_chat.py
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test (Python 3.8 on macOS)
  • GitHub Check: test (Python 3.8 on Windows)
🔇 Additional comments (4)
examples/benchmark_ark_text.py (1)

54-55: LGTM!

The usage of asyncio.run(main()) is correct for initiating the async main function.

cozepy/websockets/chat/__init__.py (3)

9-9: LGTM!

Adding the import for remove_none_values is appropriate for the changes being made.


196-201: LGTM! Good use of remove_none_values for query parameters.

The implementation correctly uses remove_none_values to filter out None values from the query parameters dictionary, which is a good practice for handling optional parameters in URL construction.

Also applies to: 432-437
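For reference, the filtering pattern praised here amounts to the following; this is an assumed minimal form of the helper, not the verbatim cozepy source:

```python
from urllib.parse import urlencode


def remove_none_values(d: dict) -> dict:
    # Assumed shape of cozepy's helper: keep only non-None entries,
    # so optional parameters never leak into the URL as "key=None".
    return {k: v for k, v in d.items() if v is not None}


query = remove_none_values({"bot_id": "bot-1", "workflow_id": None})
print(urlencode(query))  # -> "bot_id=bot-1", no stray workflow_id in the URL
```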


163-205:

❓ Verification inconclusive

Verify the impact of the workflow_id addition on existing WebSocket connections.

Adding a new parameter to the WebSocket URL query could affect the server's response. Ensure the server properly handles this new parameter.


🏁 Script executed:

#!/bin/bash
# Description: Check if any tests or documentation exist that verify WebSocket connection with workflow_id

# Look for tests related to WebSocketsChatClient with workflow_id
rg -t py "WebsocketsChatClient.*workflow_id" --glob "!cozepy/websockets/chat/__init__.py"

# Check for other references to workflow_id in the codebase that might provide context
rg -t py "workflow_id" --glob "!cozepy/websockets/chat/__init__.py"

Length of output: 6649


Review Verification: Confirm Server Handling of workflow_id Query Parameter

The changes introduce the workflow_id parameter into the WebSocket URL query for the chat client. While similar usage exists throughout the codebase (e.g., in workflows modules and tests in tests/test_workflows_chat.py), please double-check that this addition does not adversely affect existing WebSocket connections and that the server’s endpoint (v1/chat) properly handles the new parameter. Specifically:

  • Verify (via tests or direct server inspection) that the server does not reject or misinterpret connections that include workflow_id.
  • Ensure that any changes in query parameters do not lead to unexpected responses or connection instability.

Comment on lines +41 to +51
async def main():
    ep = os.getenv("ARK_EP")
    token = os.getenv("ARK_TOKEN")
    text = os.getenv("COZE_TEXT") or "讲个笑话"

    times = 50
    text_latency = []
    for i in range(times):
        logid, text, latency = test_latency(ep, token, text)
        text_latency.append(latency)
        print(f"[latency.ark.text] {i}, latency: {cal_latency(text_latency)} ms, log: {logid}, text: {text}")
🛠️ Refactor suggestion

Refactor main function to properly utilize async functionality and improve benchmark quality.

The function is declared as async but doesn't use await statements. Also, running 50 iterations back-to-back might skew results.

 async def main():
     ep = os.getenv("ARK_EP")
     token = os.getenv("ARK_TOKEN")
     text = os.getenv("COZE_TEXT") or "讲个笑话"
 
     times = 50
     text_latency = []
+    ttft_latency = []  # Time to first token
+    ttr_latency = []   # Time to complete response
+    token_counts = []
     for i in range(times):
-        logid, text, latency = test_latency(ep, token, text)
-        text_latency.append(latency)
-        print(f"[latency.ark.text] {i}, latency: {cal_latency(text_latency)} ms, log: {logid}, text: {text}")
+        # Add a small pause between requests to avoid rate limiting and network congestion
+        if i > 0:
+            await asyncio.sleep(0.5)
+            
+        logid, content, first_token_latency, total_latency, tokens = test_latency(ep, token, text)
+        ttft_latency.append(first_token_latency)
+        ttr_latency.append(total_latency)
+        token_counts.append(tokens)
+        
+        print(f"[latency.ark.text] {i}, TTFT: {calculate_latency(ttft_latency)} ms, " +
+              f"TTR: {calculate_latency(ttr_latency)} ms, tokens: {tokens}, " +
+              f"log: {logid}, first token: {content[:20]}...")

@@ -167,6 +167,7 @@ def __init__(
         auth: Auth,
         requester: Requester,
         bot_id: str,
+        workflow_id: str,
🛠️ Refactor suggestion

Make workflow_id parameter optional to maintain backward compatibility.

The workflow_id parameter should be optional with a default value of None to prevent breaking existing code. This is especially important for public APIs.

-        workflow_id: str,
+        workflow_id: Optional[str] = None,

This change should be applied to all four instances where workflow_id is being added as a parameter.

Also applies to: 330-330, 406-406, 566-566

@chyroc chyroc merged commit 0b231c1 into main Mar 3, 2025
18 checks passed
@chyroc chyroc deleted the ws-workflow-id branch March 3, 2025 11:32