Releases: parkervg/blendsql
v0.0.19
What's Changed
- `mypy` fixes, using `singledispatch` to route model generation behavior by @parkervg in #24
- Feature/infer options arg by @parkervg in #25
- `TransformersVisionModel`, `ImageCaption` Ingredient, `default_model` behavior by @parkervg and @zvs08 in #26
Example demonstrating new features:
```python
ingredients = {ImageCaption.from_args(model=vision_model), LLMMap}
res = blend(
    query="""
    SELECT "Name",
    {{ImageCaption('parks::Image')}} as "Image Description",
    {{
        LLMMap(
            question='Size in km2?',
            context='parks::Area'
        )
    }} as "Size in km" FROM parks
    WHERE "Location" = 'Alaska'
    ORDER BY "Size in km" DESC LIMIT 1
    """,
    db=db,
    default_model=text_model,
    ingredients=ingredients,
)
```
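The `singledispatch` routing mentioned in #24 can be sketched with the standard library alone. This is an illustrative sketch, not blendsql's actual API: `LocalModel`, `RemoteModel`, and `generate` are hypothetical names standing in for the real model classes.

```python
# Illustrative sketch: routing generation behavior by model type with
# functools.singledispatch. Class names here are hypothetical, not blendsql's.
from dataclasses import dataclass
from functools import singledispatch


@dataclass
class LocalModel:
    name: str


@dataclass
class RemoteModel:
    endpoint: str


@singledispatch
def generate(model, prompt: str) -> str:
    # Fallback for unregistered model types
    raise NotImplementedError(f"No generation routine for {type(model).__name__}")


@generate.register
def _(model: LocalModel, prompt: str) -> str:
    # Run generation in-process for local models
    return f"[{model.name} local] {prompt}"


@generate.register
def _(model: RemoteModel, prompt: str) -> str:
    # Dispatch to an API call for remote models
    return f"[{model.endpoint} remote] {prompt}"
```

Dispatching on the model's type this way keeps each backend's logic in its own function, which is also easier for `mypy` to check than a chain of `isinstance` branches.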
Full Changelog: v0.0.18...v0.0.19
v0.0.18
What's Changed
- Improve Logging by @parkervg in #17
- Joiner heuristic integration by @zvs08 in #18
- Adding `LazyTables`, Only Materializing CTEs if we use them by @parkervg in #19
- Adding `benchmark/` directory by @parkervg in #20
- Adding DuckDB + Pandas Integrations by @parkervg in #23
Full Changelog: v0.0.17...v0.0.18
v0.0.17
What's Changed
- Swapped from Guidance to Outlines for constrained decoding functionality (#15)
- Support for Ollama models! 🎉 (16d0c68)
- Improved documentation (#15)
- Simplified the introductory example in the README (1ead608)
- Fixed a weird pattern in `Program` where `model` was being referenced via a class attribute; made this an explicit argument (ccf0ecf)
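A minimal before/after sketch of that refactor, with hypothetical names rather than the actual `Program` internals:

```python
# Hypothetical sketch of the ccf0ecf refactor: pass the model explicitly
# instead of reading it from a class attribute. Names are illustrative,
# not blendsql's actual Program API.

class ProgramBefore:
    model = None  # implicit class-level attribute, mutated elsewhere

    def run(self, prompt: str) -> str:
        return f"{self.model}: {prompt}"


class ProgramAfter:
    def __init__(self, model: str):
        self.model = model  # explicit, visible dependency

    def run(self, prompt: str) -> str:
        return f"{self.model}: {prompt}"
```

Making the model an explicit constructor argument keeps the dependency visible at the call site instead of hiding shared mutable state on the class.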
Full Changelog: v0.0.15...v0.0.17
v0.0.15
v0.0.14
v0.0.13
What's Changed
This release only updates the model caching capability.
- Close #8, solving some `diskcache` model caching bugs
- Add `caching` boolean argument to `Model` class (d7deeea)
  - Allows user to toggle caching behavior on/off
- Test cases for caching (9d15c57)
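A minimal sketch of what such a toggle can look like. This is illustrative only; `SimpleModel` and its internals are hypothetical, not blendsql's actual `Model` class.

```python
# Illustrative sketch of a `caching` boolean toggle on a model class.
# `SimpleModel` is hypothetical; blendsql's real Model class differs.

class SimpleModel:
    def __init__(self, caching: bool = True):
        self.caching = caching
        self._cache: dict[str, str] = {}
        self.calls = 0  # counts invocations of the underlying "model"

    def predict(self, prompt: str) -> str:
        # With caching on, repeated identical prompts are served from memory
        if self.caching and prompt in self._cache:
            return self._cache[prompt]
        self.calls += 1
        result = prompt.upper()  # stand-in for real generation
        if self.caching:
            self._cache[prompt] = result
        return result
```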
Full Changelog: v0.0.12...v0.0.13
v0.0.12
What's Changed
- Rename `LLM` to `Model` (259a32e)
  - Now, import is `from blendsql.models import Model`
- Adding `diskcache` cache for Model predictions (3c757a7)
  - If the model class and the arguments passed to `predict()` are identical to something in cache, just return, don't process it again
- Better documentation, examples showing new ingredient integration (7a65153)
Full Changelog: v0.0.11...v0.0.12