substudy v0.6.5: Run AI requests concurrently, and cache
emk committed Mar 30, 2024
1 parent c862449 commit 3ee84df
Showing 3 changed files with 10 additions and 2 deletions.
2 changes: 1 addition & 1 deletion Cargo.lock

Some generated files are not rendered by default.

8 changes: 8 additions & 0 deletions substudy/CHANGELOG.md
@@ -6,6 +6,14 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),

## [Unreleased]

## [0.6.5] - 2024-03-30

### Added

- Run more transcription and translation requests in parallel. This greatly reduces the time needed to work with large media files.
- Cache AI API requests. Calling an AI model is slow and costs money, and making the same calls over and over again wastes both, especially when we successfully process 99.5% of a large media file before something fails. So now we cache recent successful requests, and re-running an incomplete translation should be much faster and cheaper. (Cache files are stored wherever your OS thinks they should be stored. On Linux, this is `~/.cache/substudy`.)

## [0.6.4] - 2024-03-24

### Added
2 changes: 1 addition & 1 deletion substudy/Cargo.toml
@@ -1,7 +1,7 @@
[package]

name = "substudy"
-version = "0.6.4"
+version = "0.6.5"
authors = ["Eric Kidd <[email protected]>"]
license = "Apache-2.0"
edition = "2021"
