
Update Cargo.toml, add Gemini example, refine error handling #6


Open
wants to merge 1 commit into
base: master

Conversation

retrokiller543
Owner

@retrokiller543 retrokiller543 commented Feb 11, 2025

Resolves #5

Summary by CodeRabbit

  • New Features

    • Introduced a new example demonstrating asynchronous AI interaction with streaming responses, providing a practical showcase for users.
    • Added a mock implementation of a language model client for testing purposes.
  • Refactor

    • Enhanced error reporting by incorporating additional context for clearer troubleshooting and improved guidance.
  • Chores

    • Updated configuration to support improved documentation generation and streamlined dependency management, contributing to a more robust and intuitive system.
    • Made the Result type alias public for better accessibility across modules.

@retrokiller543 retrokiller543 added the enhancement New feature or request label Feb 11, 2025
@retrokiller543 retrokiller543 linked an issue Feb 11, 2025 that may be closed by this pull request

coderabbitai bot commented Feb 11, 2025

Walkthrough

The pull request updates dependency configurations in Cargo.toml by adding new metadata sections for documentation and testing, revising dependency versions, and introducing development dependencies and feature flags. It also introduces a new Gemini example that implements an asynchronous AI interaction flow with streaming responses and JSON parsing. Enhancements in error handling have been made by refining error types, contexts, and macros, and public interfaces have been adjusted by exposing new constructors and type aliases. A new macro for mocking language model clients and a documentation module have been added.

Changes

File(s) Change Summary
Cargo.toml Added [package.metadata.docs.rs] and [lib] sections; updated dependency versions (tosic-utils, tokio, serde), removed futures; added [dev-dependencies] and [features] sections.
examples/gemini.rs New file introducing GeminiResponseParser, an asynchronous ask_ai function with streaming response processing, and a main entry for AI interaction with error handling.
src/error.rs, src/gemini/mod.rs Enhanced error handling: replaced LlmError with LlmErrorType, added ErrorContext and WithContext trait, new error macros; applied error context in parsing URLs and sending requests in the Gemini client.
src/lib.rs, src/provider.rs Made the Result type alias public; added LlmProvider::new constructor with inline optimization and updated example usage with the new macro.
src/utils/doc.rs, src/utils/mod.rs Added the mocked_llm_client! macro for testing a language model client and introduced a new public doc module for documentation utilities.

Sequence Diagram(s)

sequenceDiagram
    participant Main as Main (main)
    participant AskAI as ask_ai()
    participant Client as GeminiClient
    participant Provider as LlmProvider
    participant Parser as GeminiResponseParser
    participant Out as StdOut

    Main->>AskAI: Invoke ask_ai()
    AskAI->>Client: Initialize and send request
    Client-->>AskAI: Return streaming response chunks
    loop For each chunk
        AskAI->>Parser: Call parse_chunk(chunk)
        Parser-->>AskAI: Return parsed text (if available)
    end
    AskAI->>Out: Write parsed text to stdout
    Note right of AskAI: Errors are handled with context

Assessment against linked issues

Objective Addressed Explanation
Improve error handling (#5)

Poem

I hopped through lines of code so fine,
Mending errors, one happy little sign.
With context and macros to light the way,
My nimble paws made the bugs all sway.
A joyful rabbit sings—a coding day!
🐇💻


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR. (Beta)
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@retrokiller543 retrokiller543 self-assigned this Feb 11, 2025
@coderabbitai coderabbitai bot changed the title @coderabbitai Update Cargo.toml, add Gemini example, refine error handling Feb 11, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (8)
examples/gemini.rs (3)

9-15: Consider unifying error types
Defining a separate local Error enum is perfectly valid for this example. However, if you want to reduce duplication, consider merging with or extending the main LlmError type to maintain consistency throughout the codebase.


33-69: Potential partial JSON chunk handling
The method assumes each chunk is a valid JSON object or bracket token. If the remote endpoint sends partial JSON objects across chunk boundaries, this could cause parse errors. Consider buffering incomplete data between calls to parse_chunk so you can handle streaming splits more robustly.
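The buffering approach suggested above could be sketched roughly as follows. This is an illustrative, self-contained example, not the crate's API: `ChunkBuffer` is a hypothetical name, and it uses plain brace counting instead of serde_json so that incomplete objects are simply held until the closing brace arrives.

```rust
/// Hypothetical sketch: accumulate bytes across calls and only emit
/// complete top-level JSON objects. Tracks brace depth and string state
/// so a '{' inside a string literal does not count as nesting.
#[derive(Default)]
pub struct ChunkBuffer {
    buf: String,
    depth: usize,
    in_string: bool,
    escaped: bool,
}

impl ChunkBuffer {
    /// Feed one network chunk; return any complete objects found so far.
    pub fn push(&mut self, chunk: &str) -> Vec<String> {
        let mut complete = Vec::new();
        for c in chunk.chars() {
            // Skip array framing and separators between top-level objects.
            if self.depth == 0 && (c == '[' || c == ']' || c == ',' || c.is_whitespace()) {
                continue;
            }
            self.buf.push(c);
            if self.in_string {
                match c {
                    '\\' if !self.escaped => self.escaped = true,
                    '"' if !self.escaped => self.in_string = false,
                    _ => self.escaped = false,
                }
                continue;
            }
            match c {
                '"' => self.in_string = true,
                '{' => self.depth += 1,
                '}' => {
                    self.depth -= 1;
                    if self.depth == 0 {
                        // A full object has accumulated; emit it and reset.
                        complete.push(std::mem::take(&mut self.buf));
                    }
                }
                _ => {}
            }
        }
        complete
    }
}

fn main() {
    let mut b = ChunkBuffer::default();
    // Simulate one JSON object split across two stream chunks.
    assert!(b.push("[{\"text\": \"he").is_empty());
    let done = b.push("llo\"},");
    assert_eq!(done, vec!["{\"text\": \"hello\"}".to_string()]);
}
```

Each emitted string can then be handed to serde_json as before; only the reassembly concern moves into the buffer.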


107-110: Handle top-level errors
Your main function is succinct. You may want to add user-friendly logging or retry logic here, in case of unhandled errors.

src/error.rs (2)

33-44: Custom context vs. backtrace
ErrorContext storing file and line data is useful. Depending on needs, you could consider Rust’s Backtrace or other introspection features for more detailed debugging information.
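The file/line capture pattern the comment refers to can be sketched like this. The struct name mirrors the crate's `ErrorContext`, but the fields and the `ctx!` macro here are illustrative, not the crate's exact definitions.

```rust
/// Illustrative re-creation of a file/line-carrying error context.
#[derive(Debug, PartialEq)]
pub struct ErrorContext {
    pub message: String,
    pub file: &'static str,
    pub line: u32,
}

/// `file!()` and `line!()` expand where the macro is *invoked*,
/// so the captured location points at the caller, not this module.
macro_rules! ctx {
    ($msg:expr) => {
        ErrorContext {
            message: $msg.to_string(),
            file: file!(),
            line: line!(),
        }
    };
}

fn main() {
    let c = ctx!("failed to parse URL");
    // Prints something like: failed to parse URL (src/main.rs:23)
    println!("{} ({}:{})", c.message, c.file, c.line);
}
```

Compared with `std::backtrace::Backtrace`, this captures only a single frame, but it is zero-cost, always available, and renders deterministically in error messages.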


210-226: error_context! macro
Excellent convenience macro for creating structured error contexts. Future expansions might include more metadata or specialized formatting without complicating calling patterns.

src/utils/doc.rs (2)

32-34: Consider making the mock response configurable.

The current implementation returns a static "response" string. Consider making this configurable to support different test scenarios.

 async fn chat_completion(&self, input: Self::Input) -> Result<Self::Output, Self::Error> {
-    Ok(SimpleOutput("response".to_string()))
+    Ok(SimpleOutput(format!("Mock response for input: {}", input.0)))
 }

36-39: Consider implementing streaming functionality.

The current implementation returns an empty stream. Consider implementing basic streaming functionality to better simulate real LLM behavior.

 async fn stream_chat_completion(&self, input: Self::Input)
 -> Result<impl ::futures_util::Stream<Item = Result<Self::StreamedOutput, Self::Error>>, Self::Error> {
-    Ok(::futures_util::stream::empty())
+    use ::futures_util::stream;
+    Ok(stream::iter(vec![
+        Ok("Mock ".to_string()),
+        Ok("streaming ".to_string()),
+        Ok(format!("response for: {}", input.0))
+    ]))
 }
Cargo.toml (1)

37-39: Consider pinning the exact version for self-dependency.

When using path dependencies in dev-dependencies, it's recommended to pin the exact version to ensure reproducible builds.

 [dev-dependencies]
-tosic-llm = { path = ".", features = ["doc-utils"] }
+tosic-llm = { path = ".", version = "=0.1.0", features = ["doc-utils"] }
 tokio = { version = "1.43.0", features = ["full"] }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d48929a and 4d6eae8.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • Cargo.toml (2 hunks)
  • examples/gemini.rs (1 hunks)
  • src/error.rs (1 hunks)
  • src/gemini/mod.rs (4 hunks)
  • src/lib.rs (1 hunks)
  • src/provider.rs (2 hunks)
  • src/utils/doc.rs (1 hunks)
  • src/utils/mod.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/utils/mod.rs
🔇 Additional comments (14)
examples/gemini.rs (3)

1-8: Imports look good
No issues found with these import statements.


20-23: Struct definition is clear
Using flags like has_started and has_finished is a straightforward approach for controlling partial parsing flow.


71-99: Good usage of streaming
The logic for streaming responses is clear and well-structured. If the response isn't a stream, you currently return an error. Consider logging or an alternate fallback for a better user experience in production.

src/error.rs (6)

8-31: Broadened error coverage
Supporting multiple error variants such as Reqwest, Parse, Json, Io, and Generic makes error handling more precise. The Context variant further enriches diagnostic capabilities.


46-78: Layered context approach
add_context and new_context build upon existing errors nicely, but be cautious about repeatedly nesting contexts. Excessive recursion can make errors more verbose and harder to parse.


80-123: Strong implementation details
Combining From<E> conversions, standard traits (Display, Error), and a consistent source() method ensures a clean error chain. This design will yield clear error messages and enable easy debugging.


125-208: Error context extensions
Your WithContext trait provides a convenient interface for attaching contextual information to both Result and Option. This is a powerful and clean approach to error reporting.
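The dual `Result`/`Option` extension described above can be illustrated with a minimal re-creation (names and error shape here are simplified stand-ins, not the crate's exact types):

```rust
use std::fmt;

/// Simplified message-carrying error for the sketch.
#[derive(Debug)]
pub struct CtxError {
    message: String,
    #[allow(dead_code)]
    source: Option<Box<dyn std::error::Error + Send>>,
}

impl fmt::Display for CtxError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.message)
    }
}

/// Both Result and Option gain a `.context(...)` method.
pub trait WithContext<T> {
    fn context(self, msg: &str) -> Result<T, CtxError>;
}

impl<T, E: std::error::Error + Send + 'static> WithContext<T> for Result<T, E> {
    fn context(self, msg: &str) -> Result<T, CtxError> {
        // Wrap the original error as the source, preserving the chain.
        self.map_err(|e| CtxError {
            message: msg.to_string(),
            source: Some(Box::new(e)),
        })
    }
}

impl<T> WithContext<T> for Option<T> {
    fn context(self, msg: &str) -> Result<T, CtxError> {
        // `None` becomes an error carrying only the message, analogous
        // to the `Empty` variant discussed in this review.
        self.ok_or_else(|| CtxError { message: msg.to_string(), source: None })
    }
}

fn main() {
    let n = "42".parse::<i32>().context("parsing count");
    assert_eq!(n.unwrap(), 42);

    let missing: Option<i32> = None;
    let err = missing.context("expected a candidate").unwrap_err();
    assert_eq!(err.to_string(), "expected a candidate");
}
```

The key design point is the blanket impl over any `E: std::error::Error + Send + 'static`, which is what lets one `.context(...)` call site work for reqwest, serde_json, and io errors alike.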


228-234: bail! macro
A straightforward way to short-circuit on errors. This aligns well with typical early-return patterns in Rust.


235-242: ensure! macro
Enforcing runtime invariants with ensure! is beneficial. It keeps error handling consistent and self-documenting.
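The early-return pattern behind these two macros can be shown with minimal stand-ins (illustrative only; the crate's real macros produce `LlmError` values, while this sketch uses plain `String` errors to stay self-contained):

```rust
/// Short-circuit the enclosing function with a formatted error.
macro_rules! bail {
    ($($arg:tt)*) => {
        return Err(format!($($arg)*))
    };
}

/// Enforce an invariant; on failure, delegate to `bail!`.
macro_rules! ensure {
    ($cond:expr, $($arg:tt)*) => {
        if !$cond {
            bail!($($arg)*);
        }
    };
}

fn check_temperature(t: f64) -> Result<f64, String> {
    ensure!((0.0..=2.0).contains(&t), "temperature {} out of range", t);
    Ok(t)
}

fn main() {
    assert_eq!(check_temperature(0.7), Ok(0.7));
    assert!(check_temperature(3.0).is_err());
}
```

Because `bail!` expands to a `return`, both macros are only usable inside functions whose return type matches the error being produced, which is exactly why they pair well with a crate-wide `Result` alias like the one made public in this PR.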

src/lib.rs (1)

12-12: Exposing the library-specific Result
Making this Result type public eases cross-crate usage and keeps error types more consistent throughout your ecosystem.

src/provider.rs (2)

52-56: LGTM! Clean and efficient constructor implementation.

The #[inline(always)] attribute is appropriate here as this is a simple constructor.


78-94: LGTM! Clear and comprehensive example usage.

The example effectively demonstrates both static and streaming response handling using the mock client.

src/gemini/mod.rs (1)

71-71: LGTM! Improved error handling with context.

The addition of error context using the context method and error_context! macro improves error messages and debugging capabilities.

Also applies to: 86-86

Cargo.toml (1)

6-9: Verify the docs.rs cargo arguments.

The -Zunstable-options and -Zrustdoc-scrape-examples are unstable features. Ensure they are supported by the current docs.rs build environment.

✅ Verification successful

🌐 Web query:

Are -Zunstable-options and -Zrustdoc-scrape-examples supported by docs.rs?

💡 Result:

Yes, -Zunstable-options and -Zrustdoc-scrape-examples are supported by Docs.rs when properly configured. Here's how it works:

  1. Enabling through Cargo.toml metadata
    You must add these flags to your crate's Cargo.toml under [package.metadata.docs.rs]:

    [package.metadata.docs.rs]
    cargo-args = ["-Zunstable-options", "-Zrustdoc-scrape-examples"]

    This tells Docs.rs to pass these flags during documentation builds[2][3].

  2. Implementation details

    • Docs.rs uses nightly Rust and accepts unstable features through explicit configuration[4]
    • Examples are scraped from examples/ directory using Rustdoc's code analysis[2][5]
    • Shows up to 5 examples per item with toggleable visibility[2]
  3. Handling dev-dependencies
    If your examples require dev-dependencies, you must explicitly enable scraping per example:

    [[example]]
    name = "your_example"
    doc-scrape-examples = true

    This signals acceptance of dev-dependency builds[8].

A real-world example can be seen in the tello-edu crate where scraped examples appear in documentation after proper configuration[8]. The feature works through Docs.rs' nightly toolchain but requires explicit opt-in due to its unstable nature[2][5].



Docs.rs Flag Verification

Both the -Zunstable-options and -Zrustdoc-scrape-examples flags are supported by docs.rs when configured as shown. The current [package.metadata.docs.rs] configuration in your Cargo.toml is valid and aligns with docs.rs requirements.

@retrokiller543
Owner Author

@coderabbitai full review


coderabbitai bot commented Feb 11, 2025

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d48929a and 4d6eae8.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • Cargo.toml (2 hunks)
  • examples/gemini.rs (1 hunks)
  • src/error.rs (1 hunks)
  • src/gemini/mod.rs (4 hunks)
  • src/lib.rs (1 hunks)
  • src/provider.rs (2 hunks)
  • src/utils/doc.rs (1 hunks)
  • src/utils/mod.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/utils/doc.rs (1)
Learnt from: retrokiller543
PR: retrokiller543/tosic-llm#6
File: src/utils/doc.rs:12-16
Timestamp: 2025-02-11T20:50:08.362Z
Learning: The `mocked_llm_client!` macro in `src/utils/doc.rs` is a mock implementation specifically for doctests and is not used in production code. It's conditionally compiled only for documentation and testing purposes.
🔇 Additional comments (18)
examples/gemini.rs (4)

1-8: Imports look fine.
They correctly bring in the necessary dependencies and traits (like thiserror::Error and StreamExt).


25-32: Constructor logic is straightforward and clear.
Self-explanatory usage of booleans. No issues here.


71-105: Consider robust upstream data validation.
The function ensures res.is_stream(), but if the stream contains corrupted JSON or unexpected chunk formatting, parse_chunk will raise errors. This is acceptable for a sample, but you might want fallback or logging for chunk parse failures in a production scenario.


107-110: Main function usage is appropriate.
The top-level async function returns Result; error handling is concise. Overall, it’s a clear entry point example.

src/error.rs (3)

46-78: Custom LlmError struct is consistent, but be mindful of recursive context.
Storing Box<LlmErrorType> within a Context could lead to deep recursion if chaining becomes extensive. Currently, it seems safe, but be aware of potential complexity in repeated nested contexts.


176-208: Helpful handling of Option<T> with Empty variant.
This extension via WithContext is handy for situations where None arises. Good approach for encoding expectation failures into typed errors.


210-242: Macros are well-structured and convenient.
They cover common error patterns (error_context!, bail!, ensure!) with minimal ceremony. Great job keeping them flexible for usage across the codebase.

src/lib.rs (1)

12-12: Inconsistency with the AI-generated summary.
The summary states that Result uses LlmErrorType by default, but the code still references LlmError. Verify that tying Result to LlmError here is intentional. If you intended to reference LlmErrorType, this line needs updating.

Likely an incorrect or invalid review comment.

src/utils/mod.rs (1)

3-3: LGTM!

The addition of the doc module is clean and doesn't interfere with existing functionality.

src/utils/doc.rs (2)

1-2: LGTM! Well-structured conditional compilation.

The mock implementation is properly gated behind documentation and test features, and hidden from documentation.


12-16: Implementation uses todo!() as discussed.

Based on the retrieved learning, this is intentional as the mock is specifically for doctests and won't be used in production.

src/provider.rs (2)

52-56: LGTM! Clean constructor implementation.

The new constructor is well-documented and properly marked with inline optimization.


78-80: LGTM! Clear example usage.

The example effectively demonstrates the usage of the mock client for both static and streaming responses.

src/gemini/mod.rs (2)

6-6: LGTM! Enhanced error handling.

Good addition of WithContext and error_context macro for more informative error messages.

Also applies to: 19-19


71-71: LGTM! Improved error context.

The error messages now provide clear context about where and why errors occur.

Also applies to: 86-86

Cargo.toml (3)

17-17: Verify the custom registry configuration.

The tosic-utils dependency uses a custom registry "gitea". Ensure this registry is properly configured in your environment.

❌ Verification inconclusive

🏁 Script executed:

#!/bin/bash
# Check if the gitea registry is configured
cargo config get registries.gitea

Length of output: 131


It appears that the cargo command isn’t available in the current environment (as indicated by the “cargo: command not found” error), so we couldn’t automatically retrieve the custom registry configuration. Please manually verify that your custom registry "gitea" is correctly configured—typically by checking your local .cargo/config or .cargo/config.toml (or any relevant global Cargo configuration) to ensure that the registry details for "gitea" are properly set up.


6-9: Verify the unstable docs.rs options.

The docs.rs metadata uses unstable cargo options. Please ensure these are required and work as expected.

✅ Verification successful

🌐 Web query:

What are the current stable options for docs.rs metadata in Cargo.toml?

💡 Result:

The current stable configuration options for customizing docs.rs builds through [package.metadata.docs.rs] in Cargo.toml include:

Core Configuration Flags

  • features: Array of features to enable (default: empty)
    features = ["serde", "async"]
  • all-features: Boolean to enable all features (default: false)
  • no-default-features: Boolean to disable default features (default: false)[3][7]

Target Configuration

  • default-target: Primary target for documentation landing page
    default-target = "x86_64-pc-windows-msvc"
  • targets: Additional targets to build documentation for[3][8]

Build Flags

  • rustc-args: Extra compiler flags
    rustc-args = ["--cfg", "special_feature"]
  • rustdocflags: Custom rustdoc arguments
    rustdocflags = ["--cfg", "docsrs"]

Advanced Configuration

  • cargo-args: Unstable cargo flags (requires nightly toolchain)
    cargo-args = ["-Zbuild-std"]

While some configurations like cargo-args with -Z flags require nightly Rust internally on docs.rs[1][4], these metadata keys themselves are part of stable Rust's Cargo manifest format[7]. Recent changes have stabilized common patterns like #[cfg(docsrs)] detection without manual configuration[5].

For full compatibility, limit usage to non-nightly-dependent options unless explicitly requiring unstable features through cargo args[10].



Re: Unstable docs.rs options in Cargo.toml
The current stable docs.rs configuration supports the usage of unstable flags via the advanced cargo-args option—but only when a nightly toolchain is used on docs.rs. Since your configuration specifies cargo-args with -Zunstable-options and -Zrustdoc-scrape-examples, please confirm that your crate intentionally depends on these unstable features and that you are prepared to use the nightly environment on docs.rs.

  • Note: The unstable flags are valid when using nightly, but for maximum compatibility, consider if stable alternatives (like rustc-args or rustdocflags) might suffice if nightly isn’t required.

24-24: Verify reqwest version for security updates.

Since reqwest is used for HTTP requests, it's critical to ensure you're using a secure version.

✅ Verification successful

🌐 Web query:

What is the latest version of reqwest crate and are there any security advisories?

💡 Result:

The latest stable version of the reqwest crate as of February 2025 is 0.12.9, based on its changelog entries showing recent updates to decompression handling and TLS revocation list support[3][4].

Security Advisories

  1. RUSTSEC-2025-0001 (Critical):
    Affects rust-gst-plugin-reqwest through its OpenSSL dependency:

    • Requires updating openssl to v0.10.70 and openssl-sys to v0.9.105[9]
    • Impacts Fedora 41 packages until patched
  2. Indirect Vulnerabilities:

    • Reqwest depends on hyper, tokio, and openssl, which occasionally have security updates[9]
    • No direct vulnerabilities reported for reqwest itself in the RustSec database[2][8]

Recommended Actions

  • Update using:
    [dependencies]
    reqwest = "0.12"
  • Run cargo update -p reqwest and rebuild[1]
  • Monitor RustSec Advisory Database for updates[2][8]

The crate maintains active development with recent fixes for connection reuse and TLS enhancements[3]. For WASM targets, use reqwest_wasi (latest: 0.1.0)[4].



Reqwest Dependency Security Verified

  • The dependency is set to version 0.12.12 with default-features = false and uses the rustls-tls feature, ensuring it does not pull in the OpenSSL stack and its associated vulnerabilities.
  • Although the web query indicates that the latest stable version as of February 2025 is 0.12.9, no direct vulnerabilities for reqwest have been reported, and using a version above that baseline (0.12.12) appears to be safe.
  • Continue monitoring indirect advisories (such as those affecting hyper, tokio, and other dependencies) via the RustSec Advisory Database.

Comment on lines +9 to +15
#[derive(Debug, Error)]
enum Error {
    #[error(transparent)]
    Llm(#[from] LlmError),
    #[error("{0}")]
    Generic(String),
}


🧹 Nitpick (assertive)

Consider making the Error enum public if used across modules.
Currently, the Error enum is private to this file. If you anticipate needing it elsewhere, adding pub here can help maintain clarity and consistency across your project and keep your error types consolidated.

Comment on lines +19 to +23
#[derive(Debug)]
pub struct GeminiResponseParser {
    has_started: bool,
    has_finished: bool,
}


🧹 Nitpick (assertive)

Consider adding comments or docstrings.
GeminiResponseParser is self-explanatory, but docstrings can clarify the reasoning and usage patterns.

Comment on lines +33 to +69
    pub fn parse_chunk(&mut self, chunk: &[u8]) -> JsonResult<Option<String>> {
        // Convert bytes to string
        let chunk_str = String::from_utf8_lossy(chunk);

        // Handle the start and end markers
        if chunk_str == "[" {
            self.has_started = true;
            return Ok(None);
        } else if chunk_str == "]" || chunk_str.is_empty() {
            self.has_finished = true;
            return Ok(None);
        }

        // Remove leading comma if present (subsequent chunks start with ,\r\n)
        let cleaned_chunk = if chunk_str.starts_with(",") {
            chunk_str.trim_start_matches(",").trim_start()
        } else {
            &chunk_str
        };

        // Parse the JSON object
        let v: Value = serde_json::from_str(cleaned_chunk)?;

        // Extract the text from the nested structure
        let text = v
            .get("candidates")
            .and_then(|c| c.get(0))
            .and_then(|c| c.get("content"))
            .and_then(|c| c.get("parts"))
            .and_then(|p| p.get(0))
            .and_then(|p| p.get("text"))
            .and_then(|t| t.as_str())
            .map(String::from);

        Ok(text)
    }
}


🧹 Nitpick (assertive)

Verify handling of chunked JSON.
Because the code handles [, ], and partial data, confirm you never receive a chunk that splits a JSON object across boundaries. Otherwise, serde_json::from_str will fail. If partial objects are possible, buffering logic might be necessary to reassemble JSON.

Could you confirm how GeminiClient streams JSON to ensure each chunk is a complete object (or bracket) for this parser?

Comment on lines +8 to +31
#[derive(Debug, Error, IsVariant)]
pub enum LlmErrorType {
    #[error(transparent)]
    Reqwest(#[from] reqwest::Error),
    #[error(transparent)]
    Parse(#[from] ParseError),
    #[error(transparent)]
    Json(#[from] serde_json::Error),
    #[error(transparent)]
    Io(#[from] std::io::Error),
    #[error("An error occurred: {0}")]
    Generic(#[from] Box<dyn std::error::Error + Send>),
    #[error("{message}")]
    Context {
        message: String,
        #[source]
        source: Box<LlmErrorType>,
    },
    #[error("Value was empty. {value}, expected one of: {expected:?}")]
    Empty {
        value: String,
        expected: &'static [&'static str],
    },
}


🧹 Nitpick (assertive)

Robust error variant definitions.
The choice to wrap external errors (reqwest::Error, ParseError, etc.) and introduce a context variant is good for layering context. Consider adding doc comments for each variant to improve maintainability.

Comment on lines +11 to +13
[lib]
test = true
doctest = true


🧹 Nitpick (assertive)

Consider removing redundant library settings.

The test and doctest settings are enabled by default for libraries. These explicit settings are unnecessary unless you have a specific reason to include them.


[dev-dependencies]
tosic-llm = { path = ".", features = ["doc-utils"] }
tokio = { version = "1.43.0", features = ["full"] }


🧹 Nitpick (assertive)

Consider limiting tokio features in dev-dependencies.

Using features = ["full"] for tokio includes all features, which might be unnecessary for testing. Consider specifying only the features you need (e.g., "rt", "macros", "rt-multi-thread").

-tokio = { version = "1.43.0", features = ["full"] }
+tokio = { version = "1.43.0", features = ["rt", "macros", "rt-multi-thread"] }

Labels
enhancement New feature or request
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Error Handling
1 participant