deprecate LLMs with redirect logic #639


Merged: 7 commits into master on May 12, 2025
Conversation

@nikochiko (Member) commented Apr 17, 2025

Q/A checklist

  • If you add new dependencies, did you update the lock file?
poetry lock --no-update
  • Run tests
ulimit -n unlimited && ./scripts/run-tests.sh
  • Do a self code review of the changes: read the diff at least twice.
  • Carefully think about what might break because of this change. This sounds obvious, but it's easy to forget to run "Go to references" on each function you're changing and check whether it's used somewhere you didn't expect.
  • The relevant pages still run when you press submit
  • The API for those pages still work (API tab)
  • The public API interface doesn't change if you didn't want it to (check API tab > docs page)
  • Do your UI changes (if applicable) look acceptable on mobile?
  • Ensure you have not regressed the import time unless you have a good reason to do so.
    You can visualize this using tuna:
python3 -X importtime -c 'import server' 2> out.log && tuna out.log

To measure import time for a specific library:

$ time python -c 'import pandas'

________________________________________________________
Executed in    1.15 secs    fish           external
   usr time    2.22 secs   86.00 micros    2.22 secs
   sys time    0.72 secs  613.00 micros    0.72 secs

To reduce import times, import libraries that take a long time inside the functions that use them instead of at the top of the file:

def my_function():
    import pandas as pd
    ...

Legal Boilerplate

Look, I get it. The entity doing business as “Gooey.AI” and/or “Dara.network” was incorporated in the State of Delaware in 2020 as Dara Network Inc. and is gonna need some rights from me in order to utilize my contributions in this PR. So here's the deal: I retain all rights, title and interest in and to my contributions, and by keeping this boilerplate intact I confirm that Dara Network Inc can use, modify, copy, and redistribute my contributions, under its choice of terms.


LGTM 👍

@nikochiko nikochiko requested a review from Copilot April 17, 2025 07:12
@Copilot AI (Contributor) left a comment


Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (1)

daras_ai_v2/language_model.py:878

  • Recursive redirection in run_language_model may lead to infinite recursion if the redirect target is itself deprecated. Consider adding a safeguard to detect and break potential redirection cycles, e.g. by tracking or limiting redirection depth.
if model.is_deprecated:

Comment on lines 881 to 883
else:
raise UserError(f"Model {model} is deprecated.")
Member Author


this is new behavior for deprecated models without a redirect_to option. raising UserError here.

earlier, the behavior was to attempt to run the model regardless.


devxpy commented Apr 18, 2025

I wonder if a more robust way to do this would be to define the fallback on the LLMApis level instead of the model level. because the fallbacks themselves will get deprecated eventually!

Comment on lines 878 to 879
if model.is_deprecated:
if model.redirect_to:
Member


why are these 2 nested conditions

Member Author


the other way to write would be:

if model.is_deprecated and model.redirect_to:
    ...
elif model.is_deprecated:
    ...

Member


sounds good, but I don't see an elif condition required in your code

Member Author


It is this at the moment:

if model.is_deprecated:
    if model.redirect_to:
        return run_language_model(...)
    else:
        raise UserError("model is deprecated")

Yes, moving to a non-nested if/elif would be more obvious.

Member


[screenshot]

Member Author


ah. that doesn't look right. will fix

Member

@devxpy commented Apr 29, 2025


fyi: I would have done it like this, but what you have works

if model.is_deprecated:
    if not model.redirect_to:
        raise UserError(f"Model {model} is deprecated.")
    return run_language_model(...)

@nikochiko (Member Author)

I wonder if a more robust way to do this would be to define the fallback on the LLMApis level instead of the model level. because the fallbacks themselves will get deprecated eventually!

we can add redirect_to on those models and that chain will work?

@nikochiko (Member Author)

run_language_model() -> run_language_model() -> run_language_model()

@nikochiko nikochiko force-pushed the redirect-deprecated-llms branch from 9458d79 to 98b1fc3 Compare April 22, 2025 12:57

devxpy commented Apr 29, 2025

@coderabbitai review


coderabbitai bot commented Apr 29, 2025

✅ Actions performed

Review triggered.



coderabbitai bot commented Apr 29, 2025

Walkthrough

The changes introduce a redirect_to attribute to the LLMSpec named tuple and the LargeLanguageModels enum, enabling deprecated models to specify a recommended replacement model. The is_deprecated attribute in LLMSpec is repositioned below is_audio_model. Several deprecated models in the enum are updated to include the new redirect_to field and have their labels cleaned by removing explicit "[Deprecated]" tags. The LargeLanguageModels enum constructor assigns the redirect_to attribute from the specification. The value property appends redirection information to the label if applicable. The run_language_model function raises a UserError if a deprecated model without a redirect is selected or recursively calls itself with the redirected model if a redirect target exists. Additionally, a new test verifies that deprecated models have valid redirect targets that are not themselves deprecated.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2


📥 Commits

Reviewing files that changed from the base of the PR and between 3b93039 and f303191.

📒 Files selected for processing (1)
  • daras_ai_v2/language_model.py (11 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
daras_ai_v2/language_model.py (1)
daras_ai_v2/exceptions.py (1)
  • UserError (58-65)

Comment on lines 881 to 898
if model.is_deprecated and model.redirect_to:
    return run_language_model(
        prompt=prompt,
        messages=messages,
        model=model.redirect_to,
        max_tokens=max_tokens,
        num_outputs=num_outputs,
        temperature=temperature,
        stop=stop,
        avoid_repetition=avoid_repetition,
        tools=tools,
        response_format_type=response_format_type,
        stream=stream,
        audio_url=audio_url,
        audio_session_extra=audio_session_extra,
    )
elif model.is_deprecated:
    raise UserError(f"Model {model} is deprecated.")


🛠️ Refactor suggestion

⚠️ Potential issue

Recursive redirect omits quality & risks infinite loops

  1. quality is accepted by run_language_model() but not forwarded, so calls that relied on a non-default quality silently regress to 1.0.
  2. A mis-configured redirect_to that points back to the same (or another already-visited) model will cause infinite recursion.
-        return run_language_model(
+        # keep original call semantics and break potential cycles
+        if model.redirect_to == model.name:
+            raise UserError(f"Redirect cycle detected for model {model}")
+        return run_language_model(
             prompt=prompt,
             messages=messages,
             model=model.redirect_to,
             max_tokens=max_tokens,
+            quality=quality,
             num_outputs=num_outputs,
             temperature=temperature,
             stop=stop,
             avoid_repetition=avoid_repetition,
             tools=tools,
             response_format_type=response_format_type,
             stream=stream,
             audio_url=audio_url,
             audio_session_extra=audio_session_extra,
         )

Please forward all user-supplied params (or use locals() with explicit filtering) and add a small visited-set/guard to prevent recursion loops.
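The locals()-with-filtering idea can be sketched like this; the signature, the MODELS mapping, and the return value are simplified stand-ins for illustration, not the real run_language_model:

```python
from types import SimpleNamespace

# Toy registry; the real code reads these off the LargeLanguageModels enum.
MODELS = {
    "old": SimpleNamespace(is_deprecated=True, redirect_to="new"),
    "new": SimpleNamespace(is_deprecated=False, redirect_to=None),
}

def run_language_model(*, model, prompt, quality=1.0, max_tokens=512, _depth=0):
    spec = MODELS[model]
    if spec.is_deprecated and spec.redirect_to:
        if _depth > 5:
            raise ValueError(f"too many redirects from {model}")
        # Forward every caller-supplied argument except the ones we replace.
        # locals() here is snapshotted before the recursive call, so it holds
        # exactly the function's parameters plus `spec`.
        kwargs = {k: v for k, v in locals().items()
                  if k not in ("model", "spec", "_depth")}
        return run_language_model(model=spec.redirect_to,
                                  _depth=_depth + 1, **kwargs)
    return (model, quality)  # stand-in for the real LLM call
```

In practice, explicitly forwarding named parameters (as the diff above does) is easier to audit than locals() filtering, at the cost of having to remember each new parameter.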



devxpy commented Apr 29, 2025

Also:

  • the "[Redirects to GPT-4o-mini]" bit should be auto generated
  • redirect_to can be a direct reference to the instance variable gpt_4_o instead of the string "gpt_4_o"

@devxpy devxpy assigned devxpy and nikochiko and unassigned devxpy Apr 29, 2025

nikochiko commented Apr 30, 2025

redirect_to can be a direct reference to the instance variable gpt_4_o instead of the string "gpt_4_o"

this is what I did initially, but the way the Enum is written doesn't let me obtain the name that needs to be passed down to the next run_language_model call.
The .name property is not available because, within the class body, a model such as gpt_4_o refers to an instance of LLMSpec and not LargeLanguageModels.
__name__ also did not work.
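A minimal reproduction of the issue described above (the member names and LLMSpec fields are illustrative, not the full definitions from language_model.py): inside the Enum class body, gpt_4_o is still the raw LLMSpec value, so there is no .name to reference, which is why redirect_to ends up as a string that gets resolved only after class creation:

```python
from enum import Enum
from typing import NamedTuple, Optional

class LLMSpec(NamedTuple):
    label: str
    redirect_to: Optional[str] = None

class LargeLanguageModels(Enum):
    gpt_4_o = LLMSpec("GPT-4o")
    # Here `gpt_4_o` above is the LLMSpec tuple, not yet an enum member,
    # so it has no `.name`; the redirect must be stored as a string.
    gpt_4 = LLMSpec("GPT-4", redirect_to="gpt_4_o")

# After class creation, the string resolves to the member via name lookup:
target = LargeLanguageModels[LargeLanguageModels.gpt_4.value.redirect_to]
```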

@nikochiko nikochiko assigned devxpy and unassigned nikochiko May 5, 2025
@nikochiko nikochiko requested a review from devxpy May 5, 2025 08:05
… suite to verify that all deprecated models have a valid redirect and that the redirected models are not deprecated.
@devxpy devxpy force-pushed the redirect-deprecated-llms branch from ea43a8f to 087a79a Compare May 8, 2025 07:28

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/test_llm.py (2)

8-9: Consider validating the redirect_to attribute type

The test correctly verifies that redirect_to is not empty, but it doesn't validate that it's a string type. While this may be implicitly tested in assertion #2, an explicit type check would make the test more robust.

- assert model.redirect_to, f"{model.name} is deprecated but has no redirect_to"
+ assert model.redirect_to and isinstance(model.redirect_to, str), f"{model.name} is deprecated but has no redirect_to or redirect_to is not a string"

16-20: Consider adding a test for multi-level redirects

The current implementation verifies that immediate redirects don't point to deprecated models, but it doesn't check for potential chained/cascading redirects in the future (if model A redirects to B, and later B becomes deprecated and redirects to C).

While not critical for the current implementation, a utility function that follows redirects to their final destination could be useful both for testing and potentially for the main codebase.


📥 Commits

Reviewing files that changed from the base of the PR and between f6b3179 and 087a79a.

📒 Files selected for processing (2)
  • daras_ai_v2/language_model.py (32 hunks)
  • tests/test_llm.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • daras_ai_v2/language_model.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/test_llm.py (1)
daras_ai_v2/language_model.py (1)
  • LargeLanguageModels (94-807)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test (3.10.12, 1.8.3)
🔇 Additional comments (1)
tests/test_llm.py (1)

4-20: Good test implementation for model redirection!

This test function thoroughly checks the three key requirements for the deprecation redirect logic:

  1. All deprecated models have a redirect target
  2. Redirect targets exist in the enum
  3. Redirect targets are not themselves deprecated

This will prevent circular redirects and ensure all deprecated models have valid replacements.

@nikochiko nikochiko merged commit 0aa1764 into master May 12, 2025
8 checks passed
@nikochiko nikochiko deleted the redirect-deprecated-llms branch May 12, 2025 12:33