[ECO-4952] Start implementing room lifecycle spec #1

Open
wants to merge 1 commit into base: base-sha/7ef0a0ee1e16097b38632008577e1544e49d6715

Conversation

ruancomelli
Collaborator

Note: This is based on top of ably-labs#60; please review that one first.

This is based on ably/specification#200 at b4a495e. It’s generated some questions, which I’ve asked on that PR.

Note that the spec has been through a few revisions since I started implementing this; I’ve tried to keep everything in sync but some older stuff might still be accidentally there.

I’ve implemented most of the specified behaviour for the ATTACH, DETACH, and RELEASE operations. I have not yet implemented RETRY since it’s quite different to those three and I wanted to get eyes on this first chunk of work; I’ve created ably-labs#51 for implementing it.

Of the operations that I have implemented, I have not implemented the following (this is accurately reflected by the @spec… tags in the tests):

  • Lifecycle behaviour that relates to another ongoing operation (created ably-labs#52)
  • Lifecycle behaviour relating to “transient disconnect timeouts” (created ably-labs#48)
  • Lifecycle behaviour that occurs “asynchronously” outside an operation (created ably-labs#50)
  • CHA-RL1g2, which refers to emitting “discontinuity events”, a concept not yet implemented; this will come in ably-labs#53.

The room lifecycle manager introduced by this commit is currently a standalone object, which is not integrated with the rest of the SDK. I’ve created ably-labs#47 for doing this integration work.

The @spec… tags are based on the JS rules (https://github.com/ably/ably-js/blob/main/CONTRIBUTING.md#tests-alignment-with-the-ably-features-specification) @ 8c9ce8b, but I camel-cased them and also decided that (see the illustrative sketch after this list):

  • if a test doesn't relate to a spec point, it doesn't need any markers
  • there should be a way to know that a spec point is tested across multiple tests
  • there should be a way of marking a spec point as implemented but not tested (so that it can show up differently in reporting; important given that we’re also going to use this report as a to-do list of what still needs to be implemented)
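
For illustration, a marker might look something like this in a Swift Testing test. This is a hypothetical sketch — the exact tag syntax and the test names shown here are not taken from the PR; the real rules live in CONTRIBUTING.md:

import Testing

// @spec CHA-RL1h2 — this test covers the spec point in full
@Test
func attach_whenContributorAttachFails_transitionsRoomToSuspended() async throws {
    // …
}

// The other markers described above might be written along the same lines, e.g.:
// @specOneOf(1/2) <spec point> — the spec point is verified across multiple tests
// @specUntested <spec point>   — implemented but deliberately not tested (with a reason)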

Part of ably-labs#28.

Summary by CodeRabbit

  • New Features

    • Introduced new error codes for improved error handling in the chat SDK.
    • Added a new section in the contribution guidelines for better test documentation related to SDK features.
    • Implemented a RoomLifecycleManager to manage chat room contributor states effectively.
    • Added a SimpleClock protocol to facilitate asynchronous task management in testing.
    • Introduced a mock implementation of RoomLifecycleContributorChannel for testing purposes.
  • Bug Fixes

    • Enhanced error reporting for attachment and detachment failures within the chat SDK.
  • Tests

    • Added comprehensive unit tests for the RoomLifecycleManager to ensure robust lifecycle management of chat rooms.
    • Introduced mock implementations to facilitate controlled testing scenarios.
    • Improved assertion capabilities in the testing framework for better error handling validation.

@ruancomelli
Collaborator Author

This is a benchmark review for experiment mermaid_diagrams.
Run ID: mermaid_diagrams/benchmark_2024-09-20T14-09-29_v1-22-0-140-ga8d2a8b27-dirty.

This pull request was cloned from https://github.com/ably-labs/ably-chat-swift/pull/54. (Note: the URL is not a link to avoid triggering a notification on the original pull request.)

Experiment configuration
review_config:
  # User configuration for the review
  # - benchmark - use the user config from the benchmark reviews
  # - <value> - use the value directly
  user_review_config:
    enable_ai_review: true
    enable_rule_comments: false

    enable_complexity_comments: false
    enable_security_comments: false
    enable_tests_comments: false
    enable_comment_suggestions: false
    enable_functionality_review: false

    enable_pull_request_summary: false
    enable_review_guide: true

    enable_approvals: false

  ai_review_config:
    # The model responses to use for the experiment
    # - benchmark - use the model responses from the benchmark reviews
    # - llm - call the language model to generate responses
    model_responses:
      comments_model: benchmark
      comment_area_model: benchmark
      comment_validation_model: benchmark
      comment_suggestion_model: benchmark
      complexity_model: benchmark
      functionality_model: benchmark
      security_model: benchmark
      tests_model: benchmark
      pull_request_summary_model: benchmark
      review_guide_model: llm
      overall_comments_model: benchmark

# The pull request dataset to run the experiment on
pull_request_dataset:
- https://github.com/sourcery-ai/core/pull/4607
- https://github.com/sourcery-ai/core/pull/4631
- https://github.com/sourcery-ai/core/pull/4647
    # CodeRabbit examples:
- https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
- https://github.com/2lambda123/galaxyproject-galaxy/pull/12
- https://github.com/a0v0/avtoolz/pull/79
- https://github.com/adityask98/Hotaru/pull/10
- https://github.com/agdas/vscode/pull/2
- https://github.com/agluszak/hirschgarten/pull/2
- https://github.com/alexsnow348/insightface/pull/46
- https://github.com/alikuxac/utilities/pull/10
- https://github.com/AlphaDev87/timba-api/pull/49
- https://github.com/AngeloTadeucci/Maple2/pull/239
- https://github.com/AngeloTadeucci/Maple2.File/pull/36
- https://github.com/AngeloTadeucci/Maple2/pull/233
    # Examples where CodeRabbit does not generate diagrams
- https://github.com/baptisteArno/typebot.io/pull/1778
- https://github.com/btipling/foundations/pull/33
- https://github.com/btipling/foundations/pull/31
- https://github.com/chintu-777/jaeger/pull/1
- https://github.com/coji/remix-docs-ja/pull/55
- https://github.com/DaveMBush/SmartNgRX/pull/622
- https://github.com/DaveMBush/SmartNgRX/pull/481
- https://github.com/dkittle/party-connections/pull/6
- https://github.com/Drajad-Kusuma-Adi/onstudy-backend/pull/6
- https://github.com/imaami/libcanth/pull/2
  # Requested by Tim
- https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
- https://github.com/2lambda123/galaxyproject-galaxy/pull/12
- https://github.com/a0v0/avtoolz/pull/79
- https://github.com/adityask98/Hotaru/pull/10
- https://github.com/agdas/vscode/pull/2
- https://github.com/agluszak/hirschgarten/pull/2
- https://github.com/4DNucleome/PartSeg/pull/1183
- https://github.com/ably/ably-cocoa/pull/1973
- https://github.com/ably-labs/ably-chat-swift/pull/54
- https://github.com/ably-labs/ably-chat-swift/pull/35

# Questions to ask to label the review comments
review_comment_labels: []
# - label: correct
#   question: Is this comment correct?

# Benchmark reviews generated by running
#   python -m scripts.experiment benchmark <experiment_name>
benchmark_reviews:
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4607
  review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/338
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4631
  review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/339
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4647
  review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/340
- dataset_pull_request: https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
  review_pull_request: https://github.com/sourcery-ai-experiments/-Orange-OpenSource-oorobot/pull/2
- dataset_pull_request: https://github.com/2lambda123/galaxyproject-galaxy/pull/12
  review_pull_request: https://github.com/sourcery-ai-experiments/galaxyproject-galaxy/pull/1
- dataset_pull_request: https://github.com/adityask98/Hotaru/pull/10
  review_pull_request: https://github.com/sourcery-ai-experiments/Hotaru/pull/2
- dataset_pull_request: https://github.com/agdas/vscode/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/vscode/pull/3
- dataset_pull_request: https://github.com/agluszak/hirschgarten/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/hirschgarten/pull/2
- dataset_pull_request: https://github.com/alikuxac/utilities/pull/10
  review_pull_request: https://github.com/sourcery-ai-experiments/utilities/pull/2
- dataset_pull_request: https://github.com/AlphaDev87/timba-api/pull/49
  review_pull_request: https://github.com/sourcery-ai-experiments/timba-api/pull/2
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/239
  review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/3
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2.File/pull/36
  review_pull_request: https://github.com/sourcery-ai-experiments/Maple2.File/pull/2
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/233
  review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/4
- dataset_pull_request: https://github.com/baptisteArno/typebot.io/pull/1778
  review_pull_request: https://github.com/sourcery-ai-experiments/typebot.io/pull/1
- dataset_pull_request: https://github.com/btipling/foundations/pull/33
  review_pull_request: https://github.com/sourcery-ai-experiments/foundations/pull/1
- dataset_pull_request: https://github.com/btipling/foundations/pull/31
  review_pull_request: https://github.com/sourcery-ai-experiments/foundations/pull/2
- dataset_pull_request: https://github.com/chintu-777/jaeger/pull/1
  review_pull_request: https://github.com/sourcery-ai-experiments/jaeger/pull/1
- dataset_pull_request: https://github.com/coji/remix-docs-ja/pull/55
  review_pull_request: https://github.com/sourcery-ai-experiments/remix-docs-ja/pull/1
- dataset_pull_request: https://github.com/DaveMBush/SmartNgRX/pull/622
  review_pull_request: https://github.com/sourcery-ai-experiments/SmartNgRX/pull/1
- dataset_pull_request: https://github.com/DaveMBush/SmartNgRX/pull/481
  review_pull_request: https://github.com/sourcery-ai-experiments/SmartNgRX/pull/2
- dataset_pull_request: https://github.com/dkittle/party-connections/pull/6
  review_pull_request: https://github.com/sourcery-ai-experiments/party-connections/pull/1
- dataset_pull_request: https://github.com/Drajad-Kusuma-Adi/onstudy-backend/pull/6
  review_pull_request: https://github.com/sourcery-ai-experiments/onstudy-backend/pull/1
- dataset_pull_request: https://github.com/imaami/libcanth/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/libcanth/pull/1

@SourceryAI

SourceryAI commented Sep 20, 2024

Reviewer's Guide by Sourcery

This pull request implements a significant portion of the room lifecycle specification for the Ably Chat SDK. It introduces a new RoomLifecycleManager class that manages the lifecycle of chat room contributors, handling operations such as ATTACH, DETACH, and RELEASE. The implementation includes error handling, state transitions, and retry mechanisms as specified in the Chat SDK features spec.
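
To make that concrete, here is a rough sketch of the manager’s shape. Only RoomLifecycleManager, performAttachOperation(), performDetachOperation() and changeStatus(to:) appear in this guide and the diagrams below; every other name (the ContributorChannel protocol, the RoomStatus cases, the actor-based design) is an assumption for illustration, not the PR’s actual API.

// Illustrative sketch only, not the PR's implementation.
enum RoomStatus {
    case initialized, attaching, attached, detaching, detached, suspended, failed
}

protocol ContributorChannel: Sendable {
    func attach() async throws
    func detach() async throws
}

actor RoomLifecycleManager {
    private let contributors: [any ContributorChannel]
    private(set) var status: RoomStatus = .initialized

    init(contributors: [any ContributorChannel]) {
        self.contributors = contributors
    }

    private func changeStatus(to new: RoomStatus) {
        status = new
    }

    // CHA-RL1: attach every contributor in turn, mapping the first failure to a
    // SUSPENDED or FAILED room status and rethrowing it (see the ATTACH diagram).
    func performAttachOperation() async throws {
        changeStatus(to: .attaching)
        for contributor in contributors {
            do {
                try await contributor.attach()
            } catch {
                changeStatus(to: .suspended) // or .failed, per the spec's error mapping
                throw error
            }
        }
        changeStatus(to: .attached)
    }
}

performDetachOperation() and the RELEASE operation would follow the second diagram and the spec’s retry rules in the same vein.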

File-Level Changes

Change: Implemented RoomLifecycleManager to handle room lifecycle operations
Details:
  • Created RoomLifecycleManager class to manage room lifecycle
  • Implemented ATTACH operation with error handling and state transitions
  • Implemented DETACH operation with error handling and retry mechanism
  • Implemented RELEASE operation with contributor detachment logic
  • Added support for multiple contributors in lifecycle management
Files:
  Sources/AblyChat/RoomLifecycleManager.swift

Change: Added new error codes and types for room lifecycle operations
Details:
  • Introduced new error codes for attachment and detachment failures
  • Added error codes for room state-related errors (e.g., room in failed state, releasing, or released)
  • Implemented ChatError enum for more detailed error information
Files:
  Sources/AblyChat/Errors.swift

Change: Created mock implementations and test helpers for room lifecycle testing
Details:
  • Implemented MockRoomLifecycleContributorChannel for simulating contributor behavior in tests
  • Created MockSimpleClock for controlled time management in tests
  • Added test helpers for asserting chat errors and async operations
Files:
  Tests/AblyChatTests/Mocks/MockRoomLifecycleContributorChannel.swift
  Tests/AblyChatTests/Mocks/MockSimpleClock.swift
  Tests/AblyChatTests/Helpers/Helpers.swift

Change: Implemented comprehensive test suite for RoomLifecycleManager
Details:
  • Added tests for ATTACH operation scenarios
  • Added tests for DETACH operation scenarios
  • Added tests for RELEASE operation scenarios
  • Implemented tests for error handling and state transitions
Files:
  Tests/AblyChatTests/RoomLifecycleManagerTests.swift

Change: Updated contribution guidelines for spec-related testing
Details:
  • Added guidelines for attributing tests to spec points using @spec tags
  • Introduced @specOneOf and @specPartial tags for partial spec coverage
  • Added @specUntested tag for marking implemented but untested spec points
Files:
  CONTRIBUTING.md

Change: Added supporting types and protocols for room lifecycle management
Details:
  • Created RoomFeature enum to represent different features of a chat room
  • Implemented SimpleClock protocol for abstracting time-related operations
Files:
  Sources/AblyChat/RoomFeature.swift
  Sources/AblyChat/SimpleClock.swift
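
As a rough illustration of the SimpleClock abstraction mentioned above (the real protocol’s exact requirements may differ; DefaultSimpleClock and the sleep(timeInterval:) signature are assumptions, not the PR’s API):

import Foundation

// A clock protocol lets production code sleep for real while tests
// (cf. MockSimpleClock) record the requested delays and return immediately.
protocol SimpleClock: Sendable {
    func sleep(timeInterval: TimeInterval) async throws
}

struct DefaultSimpleClock: SimpleClock {
    func sleep(timeInterval: TimeInterval) async throws {
        try await Task.sleep(nanoseconds: UInt64(timeInterval * 1_000_000_000))
    }
}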

Sequence Diagrams

ATTACH Operation Sequence

sequenceDiagram
    participant Client
    participant RLM as RoomLifecycleManager
    participant Contributor
    Client->>RLM: performAttachOperation()
    RLM->>RLM: changeStatus(to: .attaching)
    loop For each contributor
        RLM->>Contributor: attach()
        alt Attach succeeds
            Contributor-->>RLM: Success
        else Attach fails
            Contributor-->>RLM: Error
            RLM->>RLM: changeStatus(to: .suspended or .failed)
            RLM-->>Client: Throw error
        end
    end
    RLM->>RLM: changeStatus(to: .attached)
    RLM-->>Client: Operation complete

DETACH Operation Sequence

sequenceDiagram
    participant Client
    participant RLM as RoomLifecycleManager
    participant Contributor
    Client->>RLM: performDetachOperation()
    RLM->>RLM: changeStatus(to: .detaching)
    loop For each contributor
        RLM->>Contributor: detach()
        alt Detach succeeds
            Contributor-->>RLM: Success
        else Detach fails
            Contributor-->>RLM: Error
            RLM->>RLM: Retry logic
        end
    end
    alt All detaches successful
        RLM->>RLM: changeStatus(to: .detached)
        RLM-->>Client: Operation complete
    else Any detach failed
        RLM->>RLM: changeStatus(to: .failed)
        RLM-->>Client: Throw error
    end

Tips
  • Trigger a new Sourcery review by commenting @sourcery-ai review on the pull request.
  • Continue your discussion with Sourcery by replying directly to review comments.
  • You can change your review settings at any time by accessing your dashboard:
    • Enable or disable the Sourcery-generated pull request summary or reviewer's guide;
    • Change the review language;
  • You can always contact us if you have any questions or feedback.


@SourceryAI left a comment


Hey @ruancomelli - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Consider breaking down some of the larger functions in RoomLifecycleManager (e.g., performAttachOperation, performDetachOperation) into smaller, more focused functions to improve readability and maintainability.
  • The error handling approach with custom error types and extensive mapping is complex. Consider exploring a more unified error handling strategy in the future to reduce potential inconsistencies.
Here's what I looked at during the review
  • 🟡 General issues: 3 issues found
  • 🟡 Documentation: 1 issue found

LangSmith trace

Help me be more useful! Please click 👍 or 👎 on each comment to tell me if it was helpful.

Comment on lines +98 to +107
switch contributorState {
case .suspended:
    // CHA-RL1h2
    let error = ARTErrorInfo(chatError: .attachmentFailed(feature: contributor.feature, underlyingError: contributorAttachError))
    changeStatus(to: .suspended, error: error)

    // CHA-RL1h3
    throw error
case .failed:
    // CHA-RL1h4


suggestion: Consider creating specific error types for contributor failures

The current approach of catching all errors and then checking the contributor's state could potentially mask important error information. Consider creating specific error types that encapsulate both the error and the resulting state. This would allow for more precise error handling without relying on checking the state after the fact.
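
One possible shape of that suggestion, purely as a sketch (the type, case names and payloads are hypothetical, not part of the PR):

// The failure carries both the underlying error and the state the contributor
// ended up in, so callers don't need to re-inspect the state after the fact.
enum ContributorAttachFailure: Error {
    case suspended(underlying: Error) // maps to CHA-RL1h2 / CHA-RL1h3
    case failed(underlying: Error)    // maps to CHA-RL1h4
}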

}
default:
    // CHA-RL2h3: Retry until detach succeeds, with a pause before each attempt
    while true {


suggestion: Add a retry limit or timeout for detach operations

The current implementation of retrying detach operations has no upper bound, which could lead to infinite loops in certain edge cases. Consider adding a maximum retry count or a timeout mechanism to ensure the operation eventually terminates, even in failure scenarios.
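
A bounded variant might look roughly like the sketch below. This is hypothetical — the spec’s CHA-RL2h3 currently calls for retrying until detach succeeds — and it reuses the illustrative ContributorChannel and SimpleClock protocols sketched earlier; the retry count and pause duration are placeholders.

func detachWithRetryLimit(
    channel: some ContributorChannel,
    clock: some SimpleClock,
    maxAttempts: Int = 5
) async throws {
    for attempt in 1 ... maxAttempts {
        do {
            try await channel.detach()
            return
        } catch {
            // Give up after the final attempt rather than looping forever.
            guard attempt < maxAttempts else { throw error }
            // Pause before the next attempt (CHA-RL2h3's "pause" step).
            try await clock.sleep(timeInterval: 0.25)
        }
    }
}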

// CHA-RL2h1
guard let contributorError = await contributor.channel.errorReason else {
    // TODO: The spec assumes this will be populated, but working in a multi-threaded environment means it might not be (https://github.com/ably-labs/ably-chat-swift/issues/49)
    preconditionFailure("Contributor entered FAILED but its errorReason is not set")


suggestion: Consider more graceful error handling in release builds

While using preconditionFailure is appropriate for catching programmer errors in debug builds, consider implementing a more graceful handling of these cases in release builds. This could involve falling back to a more general error type or providing a default error reason, ensuring that the application can continue functioning even in unexpected scenarios.
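
One hypothetical way to do that, mirroring the excerpt above: assert in debug builds but fall back to a generic error in release builds. The helper name and error code are placeholders; ARTErrorInfo.create(withCode:message:) is existing ably-cocoa API, and the async optional errorReason on RoomLifecycleContributorChannel is assumed from the excerpt.

import Ably

func errorReasonOrFallback(for channel: some RoomLifecycleContributorChannel) async -> ARTErrorInfo {
    if let reason = await channel.errorReason {
        return reason
    }
    // Catch the programmer error in debug builds…
    assertionFailure("Contributor entered FAILED but its errorReason is not set")
    // …but keep release builds running with a generic fallback error.
    return ARTErrorInfo.create(withCode: 50000, message: "Contributor entered FAILED with no errorReason")
}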

@@ -28,3 +28,46 @@ To check formatting and code quality, run `swift run BuildTool lint`. Run with `
- We should aim to make it easy for consumers of the SDK to be able to mock out the SDK in the tests for their own code. A couple of things that will aid with this:
- Describe the SDK’s functionality via protocols (when doing so would still be sufficiently idiomatic to Swift).
- When defining a `struct` that is emitted by the public API of the library, make sure to define a public memberwise initializer so that users can create one to be emitted by their mocks. (There is no way to make Swift’s autogenerated memberwise initializer public, so you will need to write one yourself. In Xcode, you can do this by clicking at the start of the type declaration and doing Editor → Refactor → Generate Memberwise Initializer.)
- When writing code that implements behaviour specified by the Chat SDK features spec, add a comment that references the identifier of the relevant spec item.

### Testing guidelines


suggestion (documentation): Consider adding a brief explanation of the purpose of spec tags

It might be helpful to briefly explain why these spec tags are used at the beginning of the 'Attributing tests to a spec point' section. This would provide immediate context for developers.
