Tokens used by a response are now logged to new `input_tokens` and `output_tokens` integer columns and a `token_details` JSON string column, for the default OpenAI models and for models from other plugins that implement this feature. #610
`llm prompt` now takes a `-u/--usage` flag to display token usage at the end of the response.
`llm logs -u/--usage` shows token usage information for logged responses.
`llm prompt ... --async` responses are now logged to the database. #641
`llm.get_models()` and `llm.get_async_models()` functions, documented here. #640
`response.usage()` and (for async responses) `await response.usage()` methods, returning a `Usage(input=2, output=1, details=None)` dataclass. #644
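The shape of that dataclass can be sketched as a minimal self-contained mirror of the documented fields; this is an illustration of the `Usage(input=..., output=..., details=...)` shape shown above, not `llm`'s actual implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    # Token counts for the prompt and the completion.
    input: Optional[int] = None
    output: Optional[int] = None
    # Provider-specific extras (e.g. cached-token counts), when available.
    details: Optional[dict] = None


# Matches the example shape from the changelog entry above.
usage = Usage(input=2, output=1, details=None)
total = usage.input + usage.output
print(total)  # → 3
```

With the real library you would obtain this object from a logged response via `response.usage()` (or `await response.usage()` for async responses).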
`response.on_done(callback)` and `await response.on_done(callback)` methods for specifying a callback to be executed when a response has completed, documented here. #653
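The callback receives the completed response object. A hedged sketch of the pattern using a stand-in response class (the `FakeResponse` name and its internals are illustrative only; with the real library you would call `response.on_done(...)` on the object returned by `model.prompt(...)`):

```python
# Stand-in response class illustrating the on_done callback pattern.
class FakeResponse:
    def __init__(self):
        self._callbacks = []
        self._done = False

    def on_done(self, callback):
        # If the response has already completed, fire immediately;
        # otherwise queue the callback to run on completion.
        if self._done:
            callback(self)
        else:
            self._callbacks.append(callback)

    def _complete(self):
        # Mark the response finished and notify every registered callback.
        self._done = True
        for callback in self._callbacks:
            callback(self)


completed = []
response = FakeResponse()
response.on_done(lambda r: completed.append(r))
response._complete()
print(len(completed))  # → 1
```

A hook like this is useful for side effects such as logging token usage once the full response text is available.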