Merge pull request #3 from allegro/open-source-documentation
README.md updated and examples/introduction.ipynb added
riccardo-alle authored Feb 19, 2024
2 parents a0a66ba + 7406d9f commit 056082a
Showing 8 changed files with 494 additions and 54 deletions.
60 changes: 51 additions & 9 deletions README.md
___
## About

`llm-wrapper` is a versatile and powerful library designed to streamline the process of querying Large Language Models
(LLMs) 🤖💬

Developed by Allegro engineers, `llm-wrapper` is based on popular libraries like `transformers`, `pydantic`, and `langchain`. It takes care
of the boilerplate code you would otherwise write around your LLM applications, quickly enabling you to prototype ideas and eventually helping you scale up
for production use cases!

Among `llm-wrapper`'s most notable features, you will find:

* **😊 Simple and User-Friendly Interface**: The module offers an intuitive and easy-to-use interface, making it straightforward to work with the model.

* **🔀 Asynchronous Querying**: Requests to the model are processed asynchronously by default, ensuring efficient and non-blocking interactions.

* **🔄 Automatic Retrying Mechanism**: The module includes an automatic retrying mechanism, which helps handle transient errors and ensures that queries to the model are robust.

* **🛠️ Error Handling and Management**: Errors that may occur during interactions with the model are handled and managed gracefully, providing informative error messages and potential recovery options.

* **⚙️ Output Parsing**: The module simplifies the process of defining the model's output format as well as parsing and working with it, allowing you to easily extract the information you need.
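
Since the library builds on `pydantic`, output parsing boils down to mapping a raw model response onto a typed schema. The following is a minimal stdlib-only sketch of that idea, not `llm-wrapper`'s actual API; the `TripPlan` schema and `parse_response` helper are hypothetical names, and we assume the model was asked to answer in JSON:

```python
import json
from dataclasses import dataclass

@dataclass
class TripPlan:
    """Hypothetical schema for the structured answer we expect from the model."""
    destination: str
    days: int
    highlights: list

def parse_response(raw: str) -> TripPlan:
    """Turn a raw JSON model response into a typed object, validating key fields."""
    data = json.loads(raw)
    if not isinstance(data.get("days"), int):
        raise ValueError("'days' must be an integer")
    return TripPlan(
        destination=data["destination"],
        days=data["days"],
        highlights=list(data["highlights"]),
    )

# A raw string as an LLM might return it when prompted for JSON output:
raw_response = '{"destination": "Italy", "days": 3, "highlights": ["Rome", "Venice"]}'
plan = parse_response(raw_response)
print(plan.destination, plan.days)
```

Working with `plan.days` instead of string slicing is the whole point: downstream code gets typed fields rather than free-form text.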

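The automatic retrying mechanism can be pictured as exponential backoff around each request. This is a generic sketch of the pattern, not `llm-wrapper`'s internal implementation; `query_with_retries` and `flaky_send` are illustrative names:

```python
import random
import time

def query_with_retries(send, prompt, max_attempts=3, base_delay=1.0):
    """Call `send(prompt)`, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return send(prompt)
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Sleep base_delay * 1, 2, 4, ... plus jitter to avoid synchronized retries.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# A flaky stand-in for a real model endpoint: fails twice, then succeeds.
calls = {"n": 0}

def flaky_send(prompt):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient network error")
    return f"response to: {prompt}"

print(query_with_retries(flaky_send, "hello", base_delay=0.01))
```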
___

## Documentation

Full documentation is available at **[llm-wrapper.allegro.tech](https://llm-wrapper.allegro.tech/)**.

Get familiar with `llm-wrapper` 🚀: [introductory Jupyter notebook](https://github.com/allegro/llm-wrapper/blob/main/examples/introduction.ipynb)

___

## Quickstart

Install the package via pip:

```shell
pip install llm-wrapper
```

Configure endpoint credentials and start querying the model!

```python
from llm_wrapper.models import AzureOpenAIModel
from llm_wrapper.domain.configuration import AzureOpenAIConfiguration

configuration = AzureOpenAIConfiguration(
api_key="<OPENAI_API_KEY>",
base_url="<OPENAI_API_BASE>",
api_version="<OPENAI_API_VERSION>",
deployment="<OPENAI_API_DEPLOYMENT_NAME>",
model_name="<OPENAI_API_MODEL_NAME>"
)

gpt_model = AzureOpenAIModel(config=configuration)
gpt_response = gpt_model.generate("Plan me a 3-day holiday trip to Italy")
```
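
Because requests are processed asynchronously by default, querying many prompts costs roughly one round trip instead of one per prompt. The generic pattern looks roughly like the sketch below (not `llm-wrapper`'s actual internals; `query_model` is a hypothetical stand-in for a real endpoint call):

```python
import asyncio

async def query_model(prompt: str) -> str:
    # Stand-in for a real network call; a real client would await an HTTP request.
    await asyncio.sleep(0.01)
    return f"answer to: {prompt}"

async def query_many(prompts):
    """Fire all requests concurrently instead of awaiting them one after another."""
    return await asyncio.gather(*(query_model(p) for p in prompts))

prompts = ["Plan a trip to Italy", "Plan a trip to Japan", "Plan a trip to Peru"]
responses = asyncio.run(query_many(prompts))
print(responses[0])
```

`asyncio.gather` preserves input order, so each response lines up with its prompt.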
___

## Local Development

### Installation from source

We assume that you have Python `3.10.*` installed on your machine.
You can set it up using [pyenv](https://github.com/pyenv/pyenv#installationbrew).
In order to execute tests, run:
make tests
```

### Updating the documentation

Run `mkdocs serve` to serve a local instance of the documentation.

Modify the content of the `docs` directory to update the documentation. The updated content will be deployed
via the GitHub Action defined in `.github/workflows/docs.yml`.

### Make a new release

When a new version of `llm-wrapper` is ready to be released, perform the following steps:
45 changes: 0 additions & 45 deletions docs/tutorial/index.md

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
