Releases · ngxson/wllama
1.9.0
1.8.1
What's Changed
HeapFS lets us save more memory while loading a model. It also avoids an extra memcpy, so model loading is a bit faster.
- Make the `config` parameter of the `loadModelFromUrl` function optional by @felladrin in #32 (see the sketch after this list)
- Remove prebuilt esm by @ngxson in #33
- Improve error handling on abort() by @ngxson in #34
- add tool for debugging memory by @ngxson in #37
- sync to upstream llama.cpp source code by @ngxson in #46
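With the `config` parameter now optional, the minimal load path becomes a one-liner. A hedged sketch of what that looks like, assuming the `@wllama/wllama` package, the wasm path mapping shown (exact file names vary by version and bundler), and an example model URL:

```ts
import { Wllama } from '@wllama/wllama';

// Map of wasm assets served alongside the app; these exact paths
// are an assumption and depend on the wllama version and bundler.
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': './esm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': './esm/multi-thread/wllama.wasm',
};

const wllama = new Wllama(CONFIG_PATHS);

// With config now optional, the URL alone is enough; per the note
// above, the downloaded data is staged through HeapFS instead of
// being copied into the heap with an extra memcpy.
await wllama.loadModelFromUrl(
  'https://huggingface.co/ggml-org/models/resolve/main/tinyllamas/stories260K.gguf'
);
```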
Full Changelog: 1.8.0...1.8.1
1.8.0
What's Changed
- Docs & demo address changed from `ngxson.github.io` to `github.ngxson.com`. This allows adding COOP/COEP headers (required to run the multi-threaded examples)
- Add download progress callback by @ngxson in #13 (see the first sketch after this list)
- Free buffer after upload to worker by @ngxson in #14
- Correct pthread pool size by @ngxson in #21
- Build docs on CI by @ngxson in #24
- fix OOM on iOS by @ngxson in #23
- Add `abortSignal` for `createCompletion` by @ngxson in #26 (see the second sketch after this list)
- Sync upstream llama.cpp source code by @ngxson in #27
- Better exception handling by @ngxson in #29
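The download progress callback from #13 can drive a simple progress indicator during the model fetch. A minimal sketch, assuming the option is named `progressCallback` and reports `loaded`/`total` byte counts:

```ts
import { Wllama } from '@wllama/wllama';

// CONFIG_PATHS: the same wasm path mapping as in the earlier sketch.
const wllama = new Wllama(CONFIG_PATHS);

await wllama.loadModelFromUrl(
  'https://huggingface.co/ggml-org/models/resolve/main/tinyllamas/stories260K.gguf',
  {
    // Assumed callback shape: cumulative loaded bytes and total size.
    progressCallback: ({ loaded, total }) => {
      const pct = Math.round((loaded / total) * 100);
      console.log(`Downloading model: ${pct}%`);
    },
  }
);
```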
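Likewise, the `abortSignal` option from #26 hooks into a standard `AbortController`, so a long generation can be cancelled from a timeout or a UI stop button. A sketch assuming the signal is passed in the `createCompletion` options and a model is already loaded as above:

```ts
const controller = new AbortController();

// Cancel generation after 10 seconds; a UI "stop" button could
// call controller.abort() instead.
const timer = setTimeout(() => controller.abort(), 10_000);

try {
  const output = await wllama.createCompletion('Once upon a time,', {
    nPredict: 128, // assumed option name for the token budget
    abortSignal: controller.signal,
  });
  console.log(output);
} catch (err) {
  // Aborting rejects the pending completion; 1.8.1's #34 improves
  // the error raised here.
  console.error('Completion aborted or failed:', err);
} finally {
  clearTimeout(timer);
}
```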
New Contributors
- @felladrin made their first contribution in #15
Full Changelog: https://github.com/ngxson/wllama/commits/1.8.0