ChainLite combines LangChain and LiteLLM to provide an easy-to-use and customizable interface for large language model applications.
* Logo is generated using DALL·E 3.
ChainLite has been tested with Python 3.10. To install, do the following:

- Install ChainLite via pip:

  ```bash
  pip install chainlite
  ```

  or

  ```bash
  pip install git+https://github.com/stanford-oval/chainlite.git
  ```

- Copy `llm_config.yaml` to your project and follow the instructions there to update it with your own configuration.
Before using ChainLite, you should call the following function to load the configuration file. If you don't, ChainLite will look for `llm_config.yaml` in the current directory (the directory you run your script from) by default.

```python
from chainlite import load_config_from_file

load_config_from_file("./llm_config.yaml")  # The path should be relative to the directory you run the script from, usually the root directory of your project
```

Make sure the corresponding API keys are set in environment variables with the names you specified in the configuration file, e.g. `OPENAI_API_KEY`.
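For example, on Linux or macOS you can export the key in your shell before running your script. The variable name below is the common OpenAI one; use whichever names your configuration file specifies, and substitute your real key for the placeholder:

```shell
# Hypothetical placeholder value; replace with your actual API key.
export OPENAI_API_KEY="sk-your-key-here"
```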
Then you can use the following functions in your code:
```python
llm_generation_chain(
    template_file: str,
    engine: str,
    max_tokens: int,
    temperature: float = 0.0,
    stop_tokens: Optional[List[str]] = None,
    top_p: float = 0.9,
    output_json: bool = False,
    pydantic_class: Any = None,
    engine_for_structured_output: Optional[str] = None,
    template_blocks: Optional[list[tuple[str, str]]] = None,
    keep_indentation: bool = False,
    progress_bar_desc: Optional[str] = None,
    additional_postprocessing_runnable: Optional[Runnable] = None,
    tools: Optional[list[Callable]] = None,
    force_tool_calling: bool = False,
    return_top_logprobs: int = 0,
    bind_prompt_values: Optional[dict] = None,
    force_skip_cache: bool = False,
)  # returns a LangChain chain that accepts inputs and returns a string as output

load_config_from_file(config_file: str)  # loads the specified configuration file

pprint_chain()  # can be used to print the inputs or outputs of a LangChain chain

write_prompt_logs_to_file(log_file: Optional[str])  # writes all instructions, inputs, and outputs of your LLM API calls to a JSON Lines file; useful for debugging or for collecting data using LLMs

get_total_cost()  # returns the total cost of all LLM API calls you have made; resets each time you run your code
```
Here is `joke.prompt` with a 1-shot example:

```
# instruction
Tell a joke about the input topic. The format of the joke should be a question and response, separated by a line break.
{# This is a comment, and will be ignored anywhere in a .prompt file. Other than block definitions and comments, '#' is allowed and is treated as a normal character. #}

# distillation instruction
Tell a joke.

# input
Physics

# output
Why don't scientists trust atoms?
Because they make up everything!

# input
{{ topic }}
```
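The `# instruction`, `# input`, and `# output` headers above delimit the named blocks of a prompt file. As a rough illustration only (this is not ChainLite's actual parser, just a sketch of the block structure), splitting such a file into `(block_name, content)` pairs could look like:

```python
# Illustrative sketch of how "# block" headers delimit sections of a .prompt file.
def split_prompt_blocks(text: str) -> list[tuple[str, str]]:
    blocks: list[tuple[str, str]] = []
    name, lines = None, []
    for line in text.splitlines():
        if line.startswith("# "):  # a new block header, e.g. "# instruction"
            if name is not None:
                blocks.append((name, "\n".join(lines).strip()))
            name, lines = line[2:].strip(), []
        elif name is not None:
            lines.append(line)
    if name is not None:
        blocks.append((name, "\n".join(lines).strip()))
    return blocks


example = """# instruction
Tell a joke about the input topic.
# input
Physics
# output
Why don't scientists trust atoms?
Because they make up everything!
# input
{{ topic }}"""

for block_name, content in split_prompt_blocks(example):
    print(block_name, "->", content.splitlines()[0])
```

Note how the final `# input` block contains the Jinja placeholder `{{ topic }}`, which is filled in at invocation time.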
And `main.py`:

```python
import asyncio

from chainlite import (
    llm_generation_chain,
    load_config_from_file,
    write_prompt_logs_to_file,
)

load_config_from_file("./llm_config.yaml")


async def tell_joke(topic: str):
    response = await llm_generation_chain(
        template_file="joke.prompt",
        engine="gpt-35-turbo",
        max_tokens=100,
    ).ainvoke({"topic": topic})
    print(response)


asyncio.run(tell_joke("Life as a PhD student"))
# prints "Why did the PhD student bring a ladder to the library?\nTo take their research to the next level!"

write_prompt_logs_to_file("llm_input_outputs.jsonl")
```
Then `llm_input_outputs.jsonl` will contain:

```json
{"template_name": "joke.prompt", "instruction": "Tell a joke.", "input": "Life as a PhD student", "output": "Why did the PhD student bring a ladder to the library?\nTo take their research to the next level!"}
```
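Since the log is in JSON Lines format (one JSON object per line), it is easy to load back for analysis. A minimal reader sketch using only the standard library (`read_prompt_logs` is a hypothetical helper, not part of ChainLite):

```python
import json


def read_prompt_logs(path: str) -> list[dict]:
    # Parse each non-empty line of the JSON Lines log into a dict.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# Parsing the record shown above:
record = json.loads(
    '{"template_name": "joke.prompt", "instruction": "Tell a joke.", '
    '"input": "Life as a PhD student", "output": "Why did the PhD student '
    'bring a ladder to the library?\\nTo take their research to the next level!"}'
)
print(record["template_name"])  # joke.prompt
```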
For more examples, see `tests/test_llm_generate.py`.
The `llm_config.yaml` file allows you to customize the behavior of ChainLite. Modify it to set your preferences for the LangChain and LiteLLM integrations.
If you are using VSCode, you can install this extension and switch `.prompt` files to the "Jinja Markdown" syntax highlighting.
We welcome contributions! Please follow these steps to contribute:
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Commit your changes.
- Push the branch to your forked repository.
- Create a pull request with a detailed description of your changes.
ChainLite is licensed under the Apache-2.0 License. See the LICENSE file for more information.
For any questions or inquiries, please open an issue on the GitHub Issues page.
Thank you for using ChainLite!