outlines -> guidance
parkervg committed Sep 1, 2024
1 parent 5b12e8a commit 82cfbc1
Showing 9 changed files with 17 additions and 25 deletions.
README.md (5 changes: 2 additions & 3 deletions)
@@ -143,7 +143,7 @@ For in-depth descriptions of the above queries, check out our [documentation](ht
 - Easily extendable to [multi-modal usecases](./examples/vqa-ingredient.ipynb) 🖼️
 - Smart parsing optimizes what is passed to external functions 🧠
 - Traverses abstract syntax tree with [sqlglot](https://github.com/tobymao/sqlglot) to minimize LLM function calls 🌳
-- Constrained decoding with [outlines](https://github.com/outlines-dev/outlines) 🚀
+- Constrained decoding with [guidance](https://github.com/guidance-ai/guidance) 🚀
 - LLM function caching, built on [diskcache](https://grantjenks.com/docs/diskcache/) 🔑
 
 ## Quickstart
@@ -246,5 +246,4 @@ Special thanks to those below for inspiring this project. Definitely recommend c
 - As far as I can tell, the first publication to propose unifying model calls within SQL
 - Served as the inspiration for the [vqa-ingredient.ipynb](./examples/vqa-ingredient.ipynb) example
 - The authors of [Grammar Prompting for Domain-Specific Language Generation with Large Language Models](https://arxiv.org/abs/2305.19234)
-- The maintainers of the [Outlines](https://github.com/outlines-dev/outlines) library for powering the constrained decoding capabilities of BlendSQL
-  - Paper at https://arxiv.org/abs/2307.09702
+- The maintainers of the [Guidance](https://github.com/guidance-ai/guidance) library for powering the constrained decoding capabilities of BlendSQL
benchmark/run.py (3 changes: 0 additions & 3 deletions)
@@ -7,9 +7,6 @@
 
 from blendsql import blend
 from blendsql.models import TransformersLLM
-import outlines.caching
-
-outlines.caching.clear_cache()
 
 MODEL = TransformersLLM("HuggingFaceTB/SmolLM-135M", caching=False)
 NUM_ITER_PER_QUERY = 5
blendsql/blend.py (2 changes: 1 addition & 1 deletion)
@@ -915,7 +915,7 @@ def blend(
             For example, in `{{LLMMap('convert to date', 'w::listing date')}} <= '1960-12-31'`
             We can infer the output format should look like '1960-12-31' and both:
                 1) Put this string in the `example_outputs` kwarg
-                2) If we have a LocalModel, pass the '\d{4}-\d{2}-\d{2}' pattern to outlines.generate.regex
+                2) If we have a LocalModel, pass the '\d{4}-\d{2}-\d{2}' pattern to guidance
         table_to_title: Optional mapping from table name to title of table.
             Useful for datasets like WikiTableQuestions, where relevant info is stored in table title.
         schema_qualify: Optional bool, determines if we run qualify_columns() from sqlglot
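For context on the new constrained path: a minimal sketch of regex-constrained decoding with guidance, assuming its `gen(regex=...)` API; the model name and prompt are illustrative, not BlendSQL's actual internals.

```python
# Sketch: constrain a local model's output to the inferred date pattern.
from guidance import models, gen

lm = models.Transformers("HuggingFaceTB/SmolLM-135M")  # any local HF model
lm += "Convert 'March 5, 1959' to YYYY-MM-DD: "
lm += gen(name="date", regex=r"\d{4}-\d{2}-\d{2}", max_tokens=12)
print(lm["date"])  # decoding guarantees a match for \d{4}-\d{2}-\d{2}
```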
blendsql/ingredients/generate.py (5 changes: 4 additions & 1 deletion)
@@ -17,6 +17,9 @@ def generate(model: Model, *args, **kwargs) -> str:
 def generate_openai(
     model: OpenaiLLM, prompt, max_tokens: Optional[int], stop_at: List[str], **kwargs
 ) -> str:
+    """This function only exists because of a bug in guidance
+    https://github.com/guidance-ai/guidance/issues/881
+    """
     client = model.model_obj.engine.client
     return (
         client.chat.completions.create(
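The remainder of the call above is collapsed in the diff. A hedged guess at its shape, using only standard openai-python arguments; the repo's actual kwargs may differ:

```python
# Hypothetical completion of the collapsed call above; standard
# openai-python chat API, not necessarily BlendSQL's exact kwargs.
response = client.chat.completions.create(
    model=model.model_name_or_path,  # attribute name is an assumption
    messages=[{"role": "user", "content": prompt}],
    max_tokens=max_tokens,
    stop=stop_at,  # the chat API accepts a list of stop strings
)
text = response.choices[0].message.content
```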
@@ -36,7 +39,7 @@ def generate_ollama(
     model: OllamaLLM, prompt, options: Optional[Collection[str]] = None, **kwargs
 ) -> str:
     """Helper function to work with Ollama models,
-    since they're not recognized in the Outlines ecosystem.
+    since they're not recognized natively in the guidance ecosystem.
     """
     if options:
         raise NotImplementedError(
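For reference, a minimal sketch of the unconstrained path this helper wraps, assuming the `ollama` Python client; the model name is a placeholder:

```python
# Sketch: plain (unconstrained) generation through the ollama client.
# No logits are exposed, so guidance-style constraints cannot apply here.
import ollama

response = ollama.chat(
    model="phi3",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize BlendSQL in one line."}],
)
print(response["message"]["content"])
```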
blendsql/models/_model.py (2 changes: 1 addition & 1 deletion)
@@ -187,7 +187,7 @@ def _setup(self, *args, **kwargs) -> None:
     @abstractmethod
     def _load_model(self, *args, **kwargs) -> ModelObj:
         """Logic for instantiating the model class goes here.
-        Will most likely be an outlines model object,
+        Will most likely be a guidance model object,
         but in some cases (like OllamaLLM) we make an exception.
         """
         ...
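An illustrative subclass satisfying the `_load_model` contract, assuming guidance's `models.Transformers` constructor; the class name, import path, and attribute are assumptions, not the repo's actual code:

```python
# Hypothetical subclass: _load_model returns a guidance model object.
from guidance import models
from blendsql.models._model import Model  # import path assumed from this diff

class MyLocalModel(Model):
    def _load_model(self) -> models.Model:
        # self.model_name_or_path is an assumed attribute name
        return models.Transformers(self.model_name_or_path)
```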
blendsql/models/remote/_openai.py (16 changes: 5 additions & 11 deletions)
@@ -52,7 +52,7 @@ class AzureOpenaiLLM(RemoteModel):
         env: Path to directory of .env file, or to the file itself to load as a dotfile.
             Should either contain the variable `OPENAI_API_KEY`,
             or all of `TENANT_ID`, `CLIENT_ID`, `CLIENT_SECRET`
-        config: Optional outlines.models.openai.OpenAIConfig to use in loading model
+        config: Optional dict to use in loading model
         caching: Bool determining whether we access the model's cache
 
     Examples:
@@ -64,14 +64,11 @@
         ```
         ```python
         from blendsql.models import AzureOpenaiLLM
-        from outlines.models.openai import OpenAIConfig
 
         model = AzureOpenaiLLM(
             "gpt-3.5-turbo",
             env="..",
-            config=OpenAIConfig(
-                temperature=0.7
-            )
+            config={"temperature": 0.7}
         )
         ```
         """
@@ -124,7 +121,7 @@ class OpenaiLLM(RemoteModel):
         model_name_or_path: Name of the OpenAI model to use
         env: Path to directory of .env file, or to the file itself to load as a dotfile.
             Should contain the variable `OPENAI_API_KEY`
-        config: Optional outlines.models.openai.OpenAIConfig to use in loading model
+        config: Optional argument mapping to use in loading model
         caching: Bool determining whether we access the model's cache
 
     Examples:
@@ -134,14 +131,11 @@
         ```
         ```python
         from blendsql.models import OpenaiLLM
-        from outlines.models.openai import OpenAIConfig
 
-        model = AzureOpenaiLLM(
+        model = OpenaiLLM(
             "gpt-3.5-turbo",
             env="..",
-            config=OpenAIConfig(
-                temperature=0.7
-            )
+            config={"temperature": 0.7}
         )
         ```
         """
docs/index.md (5 changes: 2 additions & 3 deletions)
@@ -134,7 +134,7 @@ For in-depth descriptions of the above queries, check out our [documentation](ht
 - Easily extendable to [multi-modal usecases](./examples/vqa-ingredient.ipynb) 🖼️
 - Smart parsing optimizes what is passed to external functions 🧠
 - Traverses abstract syntax tree with [sqlglot](https://github.com/tobymao/sqlglot) to minimize LLM function calls 🌳
-- Constrained decoding with [outlines](https://github.com/outlines-dev/outlines) 🚀
+- Constrained decoding with [guidance](https://github.com/guidance-ai/guidance) 🚀
 - LLM function caching, built on [diskcache](https://grantjenks.com/docs/diskcache/) 🔑
 
 <hr>
@@ -161,5 +161,4 @@ Special thanks to those below for inspiring this project. Definitely recommend c
 - As far as I can tell, the first publication to propose unifying model calls within SQL
 - Served as the inspiration for the [vqa-ingredient.ipynb](./examples/vqa-ingredient.ipynb) example
 - The authors of [Grammar Prompting for Domain-Specific Language Generation with Large Language Models](https://arxiv.org/abs/2305.19234)
-- The maintainers of the [Outlines](https://github.com/outlines-dev/outlines) library for powering the constrained decoding capabilities of BlendSQL
-  - Paper at https://arxiv.org/abs/2307.09702
+- The maintainers of the [Guidance](https://github.com/guidance-ai/guidance) library for powering the constrained decoding capabilities of BlendSQL
docs/reference/blenders/blenders.md (2 changes: 1 addition & 1 deletion)
@@ -6,7 +6,7 @@ hide:
 
 We use the term "blender" to describe the model which receives the prompts used to perform each ingredient function within a BlendSQL script.
 
-We enable integration with many existing LLMs by building on top of [`outlines` models](https://outlines-dev.github.io/outlines/reference/).
+We enable integration with many existing LLMs by building on top of [`guidance` models](https://github.com/guidance-ai/guidance).
 
 Certain models may be better geared towards some BlendSQL tasks than others, so choose carefully!
 
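As a hedged illustration of that integration, guidance exposes wrappers for both remote and local backends; the model names below are placeholders:

```python
# Illustrative only: the kinds of guidance model objects a blender wraps.
from guidance import models

remote_lm = models.OpenAI("gpt-3.5-turbo")  # API-backed blender
local_lm = models.Transformers("HuggingFaceTB/SmolLM-135M")  # local blender
```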
docs/reference/blenders/ollama.md (2 changes: 1 addition & 1 deletion)
@@ -10,7 +10,7 @@ hide:
 
 !!! Note
 
-    We consider Ollama models 'remote', since we're unable to access the underlying logits via outlines. As a result, we can only use Ollama for traditional generation, and not constrained generation (such as via the `options` arg in [LLMQA](../ingredients/LLMQA.md))
+    We consider Ollama models 'remote', since we're unable to access the underlying logits. As a result, we can only use Ollama for traditional generation, and not constrained generation (such as via the `options` arg in [LLMQA](../ingredients/LLMQA.md))
 
 ## OllamaLLM
 
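A sketch of the practical consequence, with an assumed constructor signature and illustrative BlendSQL snippets:

```python
# Hypothetical usage; constructor args are assumptions.
from blendsql.models import OllamaLLM

model = OllamaLLM("phi3")  # placeholder model name

# Fine: free-form generation, e.g.
#   {{LLMMap('convert to date', 'w::listing date')}}
# Not supported: constrained generation via `options`, e.g.
#   {{LLMQA('Which team won?', (SELECT * FROM w), options='w::team')}}
# since Ollama exposes no logits to constrain against.
```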
