Releases: explodinggradients/ragas
v0.0.8
Main
- Critique metrics by @shahules786 in #70
What's Changed
- fix: created new class for MetricWithLLM by @jjmachan in #71
- chore: support unit testing on python 3.11 by @jjmachan in #69
- Critique metrics by @shahules786 in #70
- feat: add validation step by @jjmachan in #72
- fix: n_swapped check for generate by @jjmachan in #73
Full Changelog: 0.0.7...v0.0.8
0.0.7
0.0.6
Main
- Context Relevancy v2 - measures how relevant the retrieved context is to the prompt, using a combination of OpenAI models and cross-encoder models. To improve the score, try optimizing the amount of information present in the retrieved context (see the usage sketch below).
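As a rough illustration, here is a minimal sketch of scoring context relevancy with ragas. It assumes the `evaluate` entry point, a metric object exposed as `context_relevancy`, and a HuggingFace `Dataset` with `question`, `contexts`, and `answer` columns; the exact names may differ in this release, and an OpenAI API key is expected in the environment.

```python
# Minimal sketch (assumptions: evaluate() accepts a HuggingFace Dataset with
# "question", "contexts", and "answer" columns; the metric is exposed as
# `context_relevancy`; OPENAI_API_KEY is set in the environment).
from datasets import Dataset

from ragas import evaluate
from ragas.metrics import context_relevancy

dataset = Dataset.from_dict(
    {
        "question": ["What does ragas measure?"],
        "contexts": [["ragas is an evaluation framework for RAG pipelines."]],
        "answer": ["ragas measures the quality of RAG pipelines."],
    }
)

# Returns a result object with one score per metric passed in.
result = evaluate(dataset, metrics=[context_relevancy])
print(result)
```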
What's Changed
- added analytics by @jjmachan in #58
- Context Relevancy v2 by @shahules786 in #59
- doc: added numpy style documentation to context_relavency by @jjmachan in #62
- updated docs by @shahules786 in #64
- fix: error in handling device for tensors by @jjmachan in #61
- chore: renamed files and added tqdm by @jjmachan in #65
Full Changelog: 0.0.5...0.0.6
0.0.5
0.0.4
Important feats
- Rename metrics by @shahules786 in #48
- feat: open usage tracking by @jjmachan in #52
What's Changed
- Update README.md by @jjmachan in #42
- Hotfix: Update Readme [ SpellingError ] by @MANISH007700 in #43
- Update metrics.md by @shahules786 in #45
- added discord server by @jjmachan in #47
- Rename metrics by @shahules786 in #48
- feat: open usage tracking by @jjmachan in #52
- docs: moved quickstart by @jjmachan in #54
- docs: fix quickstart and readme by @jjmachan in #55
New Contributors
- @MANISH007700 made their first contribution in #43
Full Changelog: v0.0.3...0.0.4
v0.0.3
v0.0.3 is a major design change
We have added 3 new metrics that help you answer how factually correct your generated answers are, how relevant the answers are to the question, and how relevant the contexts returned from the retriever are to the question. This gives you a sense of the performance of both your generation and retrieval steps. We also have a "ragas_score", a unified score that gives a single metric for your pipeline.
Check out the quickstart to see how it works: https://github.com/explodinggradients/ragas/blob/main/examples/quickstart.ipynb
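For orientation, the snippet below is a minimal sketch of how such an evaluation might be run; it assumes the `evaluate` entry point and a HuggingFace `Dataset` as used in the quickstart, with the default metric set producing the unified `ragas_score`. Metric names at this version may differ (they were renamed in a later release).

```python
# Minimal sketch (assumption: evaluate() with its default metric set scores
# both generation and retrieval and reports the unified "ragas_score").
from datasets import Dataset

from ragas import evaluate

dataset = Dataset.from_dict(
    {
        "question": ["What does ragas evaluate?"],
        "contexts": [["ragas scores both the retrieval and generation steps of a RAG pipeline."]],
        "answer": ["ragas evaluates retrieval and generation quality."],
    }
)

result = evaluate(dataset)  # default metrics, including the unified ragas_score
print(result)
```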
v0.0.3rc1
0.0.2
What's Changed
- Update README by @shahules786 in #27
- Added evaluation benchmark by @shahules786 in #29
- Fix bert score by @shahules786 in #30
- Update Readme by @shahules786 in #31
- fix: lazyloading of model used in metrics to speed up import by @jjmachan in #32
Full Changelog: 0.0.1...0.0.2
0.0.1
MVP release!!!
What's Changed
- fix: added a spacy check before installation by @jjmachan in #22
- fix: remove incorrect version by @jjmachan in #23
- Update Metrics info and minor bug fixes by @shahules786 in #24
- Update README by @shahules786 in #25
- fix: some fixes and readme by @jjmachan in #26
Full Changelog: 0.0.1a6...0.0.1
0.0.1a8
What's Changed
- fix: added a spacy check before installation by @jjmachan in #22
- fix: remove incorrect version by @jjmachan in #23
- Update Metrics info and minor bug fixes by @shahules786 in #24
- Update README by @shahules786 in #25
Full Changelog: 0.0.1a6...0.0.1a8