The Blog Assistant CLI helps you streamline your blogging workflow.
It can generate blog summaries,
apply semantic line breaks (SEMBR),
and create social media posts for LinkedIn, Patreon, and Twitter.
This tutorial will guide you through using the tool.
+
Summarize Command
+
The summarize command is used to generate a blog summary, title, and tags.
+Here's how to use it:
+
+
Run the command summarize in your terminal: llamabot blog summarize
+
You will be prompted to paste your blog post.
+
The tool will then generate a blog title,
+apply SEMBR to your summary,
+and provide you with relevant tags.
+
+
The output will look something like this:
+
Here is your blog title:
+[Generated Blog Title]
+
+Applying SEMBR to your summary...
+
+Here is your blog summary:
+[Generated Blog Summary with SEMBR]
+
+Here are your blog tags:
+[Generated Blog Tags]
+
+
Social Media Command
+
The social-media command is used to generate social media posts.
+Here's how to use it:
+
+
Run the command social-media [platform] in your terminal,
where [platform] is one of linkedin, patreon, or twitter: llamabot blog social-media linkedin.
+
You will be prompted to paste your blog post.
+
The tool will then generate a social media post for the specified platform.
+
+
For LinkedIn and Twitter,
+the generated post will be copied to your clipboard.
+For Patreon,
+the tool will display the post in the terminal.
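The platform-dependent delivery described above can be sketched as follows. This is an illustrative snippet, not LlamaBot's actual implementation; the clipboard callable is a stand-in for a real clipboard library such as pyperclip.

```python
# Illustrative sketch of the delivery behavior described above --
# not LlamaBot's actual implementation.

def deliver_post(platform: str, post: str, copy_to_clipboard=None) -> str:
    """Copy the post to the clipboard for LinkedIn/Twitter,
    or print it to the terminal for Patreon."""
    if copy_to_clipboard is None:
        # Stand-in for a real clipboard call, e.g. pyperclip.copy.
        copy_to_clipboard = lambda text: None
    if platform in ("linkedin", "twitter"):
        copy_to_clipboard(post)
        return "clipboard"
    if platform == "patreon":
        print(post)
        return "terminal"
    raise ValueError(f"Unknown platform: {platform}")
```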
+
SEMBR Command
+
The sembr command is used to apply semantic line breaks to a blog post.
+Here's how to use it:
+
+
Run the command sembr in your terminal: llamabot blog sembr
+
You will be prompted to paste your blog post.
+
The tool will then apply semantic line breaks to your post
+and copy the result to your clipboard.
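To give a feel for what SEMBR does: the real sembr command uses a language model to find clause boundaries, but a toy regex version that only breaks after sentence-ending punctuation conveys the idea.

```python
import re

# Toy illustration of semantic line breaks (SEMBR): put each sentence
# on its own line. The actual sembr command is model-driven and also
# breaks at clause boundaries; this regex only approximates that.

def naive_sembr(text: str) -> str:
    # Split after ., !, or ? followed by whitespace.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return "\n".join(parts)
```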
+
+
With these commands,
+you can streamline your blogging workflow
+and ensure your content is optimized for readability and engagement.
+Happy blogging!
In this tutorial, we will explore the Git subcommand for the LlamaBot CLI.
+This command-line interface (CLI) provides a set of tools
+to automate and enhance your Git workflow,
+in particular, the ability to automatically generate commit messages.
+
Setup
+
The llamabot prepare-msg-hook requires llamabot>=0.0.77.
+You will also need an OpenAI API key
+(until we have enabled and tested locally-hosted language models).
Be sure to set up and configure LlamaBot
by executing the following two configuration commands
and following the instructions there.
+
llamabot configure api-key
+
+
and
+
llamabot configure default-model
+
+
For the default model, we suggest using a GPT-4 variant.
+It is generally of higher quality than GPT-3.5.
+If you are concerned with cost,
+the GPT-3.5-turbo variant with 16K context window
+has anecdotally worked well.
+
Install the Commit Message Hook
+
Once you have configured llamabot, the next thing to do is install the prepare-msg-hook in your Git repository.
This is a Git hook that lets you run commands after the pre-commit hooks have run,
but before the editor for the commit message is opened.
To install the hook, simply run:
+
llamabot git install-commit-message-hook
+
+
This command will check if the current directory is a Git repository root.
+If it is not, it raises a RuntimeError.
+If it is, it writes a script to the prepare-commit-msg file in the .git/hooks directory
+and changes the file's permissions to make it executable.
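The install step described above can be sketched in a few lines of Python. This is an illustrative helper, not the actual LlamaBot code, and the hook script content is an assumption based on the description below.

```python
from pathlib import Path
import stat

# Sketch of installing a prepare-commit-msg hook: verify we are at a
# Git repository root, write the hook script, and mark it executable.
# (Illustrative only; the real command writes its own script.)

def install_hook(repo_root, script="#!/bin/sh\nllamabot git compose-commit\n"):
    repo_root = Path(repo_root)
    if not (repo_root / ".git").is_dir():
        raise RuntimeError(f"{repo_root} is not a Git repository root.")
    hook_path = repo_root / ".git" / "hooks" / "prepare-commit-msg"
    hook_path.parent.mkdir(parents=True, exist_ok=True)
    hook_path.write_text(script)
    # Add executable bits for user, group, and other.
    hook_path.chmod(hook_path.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return hook_path
```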
+
Auto-Compose a Commit Message
+
The llamabot git compose-commit command automatically writes a commit message based on the diff.
+It first gets the diff using the get_git_diff function.
+It then generates a commit message using the commitbot, which is a LlamaBot SimpleBot.
+If any error occurs during this process,
+it prints the error message and prompts the user to write their own commit message,
+allowing for a graceful fallback to default behaviour.
+This can be useful, for example, if you don't have an internet connection
+and cannot connect to the OpenAI API,
+but still need to commit code.
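The graceful-fallback logic described above can be sketched like this. The helper names mirror the text (get_git_diff, commitbot) but the code is illustrative, not the actual implementation.

```python
# Sketch of the fallback behavior: try to auto-generate a commit
# message; on any error (e.g. no network), report it and return an
# empty message so the user writes their own.

def compose_commit(get_diff, generate_message):
    try:
        diff = get_diff()
        return generate_message(diff)
    except Exception as e:
        print(f"Error: {e}. Please write your own commit message.")
        return ""
```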
+
This command never needs to be explicitly called.
+Rather, it is called behind-the-scenes within the prepare-msg-hook.
+
Conclusion
+
The llamabot git CLI provides a set of tools to automate and enhance your Git workflow.
+It provides an automatic commit message writer based on your repo's git diff.
+By using llamabot git, you can streamline your Git workflow and focus on writing code.
In this tutorial, we will walk through the configuration process for LlamaBot, a Python-based bot that uses the OpenAI API. The configuration process involves setting up the API key and selecting the default model for the bot.
+
Setting up the API Key
+
The first step in configuring LlamaBot is to set up the API key. This is done by invoking:
+
llamabot configure api-key
+
+
The user will be prompted to enter their OpenAI API key. The key will be hidden as you type it, and you will be asked to confirm it. Once confirmed, the key will be stored as an environment variable, OPENAI_API_KEY.
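Storing the key as an environment variable in a .env-style file can be sketched as below. This is a hypothetical helper for illustration; the real CLI handles prompting, hiding, and confirming the key for you.

```python
from pathlib import Path

# Sketch: write NAME="value" into a .env file, replacing any existing
# entry for the same variable. (Illustrative, not the LlamaBot code.)

def store_env_var(env_file, name, value):
    env_file = Path(env_file)
    lines = []
    if env_file.exists():
        lines = [l for l in env_file.read_text().splitlines()
                 if not l.startswith(f"{name}=")]
    lines.append(f'{name}="{value}"')
    env_file.write_text("\n".join(lines) + "\n")
```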
+
Configuring the Default Model
+
The next step in the configuration process is to select the default model for LlamaBot. This is done by invoking:
+
llamabot configure default-model
+
+
LlamaBot will first load the environment variables from the .env file located at llamabotrc_path. It then retrieves a list of available models from the OpenAI API, filtering for those that include 'gpt' in their ID. For this reason, it is important to set your OpenAI API key before configuring the default model.
+
The function then displays the list of available models and prompts you to select one. As you type, the function will suggest completions based on the available models. The last model in the list is provided as the default option.
+
Once you have entered a valid model ID, the function stores it as an environment variable, DEFAULT_LANGUAGE_MODEL.
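The filtering and default-selection behavior described above amounts to something like the following sketch (illustrative; the real CLI fetches the model list from the OpenAI API).

```python
# Sketch: keep only model IDs containing 'gpt'; the last one in the
# list is offered as the default option.

def gpt_models(model_ids):
    candidates = [m for m in model_ids if "gpt" in m]
    default = candidates[-1] if candidates else None
    return candidates, default
```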
+
Conclusion
+
By following these steps, you can easily configure LlamaBot to use your OpenAI API key and your chosen default model. Remember to keep your API key secure, and to choose a model that best suits your needs. Happy coding!
Welcome to the Llamabot Python CLI tutorial! In this tutorial, we will explore the various commands available in the Llamabot Python CLI and learn how to use them effectively. The Llamabot Python CLI is a powerful tool for generating module-level and function docstrings, as well as generating code based on a given description.
+
Prerequisites
+
Before we begin, make sure you have the Llamabot Python CLI installed on your system. You can install it using pip:
+
pip install -U llamabot
+
+
Once installed, you can access the CLI using the llamabot python command.
+
Commands
+
The Llamabot Python CLI provides the following commands:
+
+
module-docstrings: Generate module-level docstrings for a given module file.
+
generate-docstrings: Generate function docstrings for a specific function in a module file.
+
code-generator: Generate code based on a given description.
+
test-writer: Write tests for a given object.
+
+
Let's dive into each command and see how they can be used.
+
1. module-docstrings
+
The module-docstrings command generates module-level docstrings for a given module file. It takes the following arguments:
+
+
module_fpath: Path to the module to generate docstrings for.
+
dirtree_context_path: (Optional) Path to the directory to use as the context for the directory tree. Defaults to the parent directory of the module file.
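The default described for dirtree_context_path can be sketched as a small resolver. This is a hypothetical helper illustrating the documented behavior, not the CLI's actual code.

```python
from pathlib import Path

# Sketch: if no directory-tree context path is given, default to the
# parent directory of the module file.

def resolve_context_path(module_fpath, dirtree_context_path=None):
    if dirtree_context_path is not None:
        return Path(dirtree_context_path)
    return Path(module_fpath).parent
```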
In this tutorial, we have covered the various commands available in the Llamabot Python CLI and learned how to use them effectively. With these commands, you can easily generate module-level and function docstrings, generate code based on a given description, and write tests for your code. Happy coding!
Chatting with a Code Repository: Llamabot CLI Guide
+
Welcome to the guide on using the Llamabot CLI for interacting with code repositories. This tool facilitates engaging conversations with your codebase, leveraging the power of AI to understand and read documentation within a repo. Let’s get started on how to utilize this tool.
+
Getting Started
+
Before diving into the commands, ensure you have Llamabot CLI installed. Install it via pip:
+
pip install -U llamabot
+
+
After installation, access the CLI with the llamabot repo command.
+
Key Commands
+
Llamabot CLI offers several commands:
+
+
chat: Engage in a conversation with your code repository.
+
+
Chat with Your Repository
+
The chat command allows you to interact with your code repository in a conversational manner.
It accepts the following options:

--checkout: Branch or tag to use (default: "main").

--source-file-extensions: File types to include in the conversation.

--model-name: AI model to use for generating responses.
+
+
Once you have executed this command, LlamaBot will automatically clone the repository to a temporary directory, embed the files as specified by the source-file extensions, and fire up LlamaBot's usual command line-based chat interface.
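The "embed the files as specified by the source-file extensions" step amounts to collecting matching files from the cloned working tree. Here is an illustrative sketch of just that file-collection step (the cloning itself, done by the real command into a temporary directory, is omitted).

```python
from pathlib import Path

# Sketch: recursively gather files under a root directory whose
# extension is in the given set. (Illustrative, not LlamaBot's code.)

def collect_source_files(root, extensions):
    root = Path(root)
    wanted = set(extensions)
    return sorted(p for p in root.rglob("*")
                  if p.is_file() and p.suffix.lstrip(".") in wanted)
```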
+
Conclusion
+
This guide covers the essential aspects of the Llamabot CLI, a tool designed to enhance your coding experience through AI-powered conversations about a code repository. Embrace these capabilities to make your coding more efficient and insightful. Happy coding!
In this tutorial, we will walk through the Llamabot Zotero CLI, a command-line interface for interacting with your Zotero library. This tool allows you to chat with a paper, retrieve keys, and download papers from your Zotero library.
+
Prerequisites
+
Before we start, make sure you have llamabot installed in your environment:
+
pip install -U llamabot
+
+
Getting Started
+
First, we need to configure the Llamabot Zotero CLI environment variables. This is done using the configure command. You will be prompted to enter your Zotero library ID, API key, and library type.
+
llamabot zotero configure
+
+
Chatting with a Paper
+
To chat with a paper, use the chat command. You can specify the paper you want to chat about as an argument. If you don't provide a paper, you will be prompted to enter one.
+
llamabot zotero chat "The title of the paper"
+
+
If you want to specify a model, such as an Ollama model, you can do so directly at the command line too:
+
llamabot zotero chat "The title of the paper" --model vicuna:7b-16k
+
+
If you want to synchronize your Zotero library before chatting, you can use the --sync option.
+
llamabot zotero chat "The title of the paper" --sync
+
+
Retrieving Keys
+
When you chat with a paper, the Llamabot Zotero CLI will retrieve the keys for the paper. These keys are unique identifiers for each paper in your Zotero library. The keys are displayed in the console.
+
Downloading Papers
+
After retrieving the keys, you can choose a paper to download. You will be prompted to choose a paper from the list of keys. The paper will be downloaded to a temporary directory.
+
Please choose an option: The title of the paper
+
+
Asking Questions
+
Once the paper is downloaded, you can start asking questions about the paper. The Llamabot Zotero CLI uses a QueryBot to answer your questions. Simply type your question at the prompt.
+
Ask me a question: What is the main argument of the paper?
+
+
To exit the chat, type exit.
+
Ask me a question: exit
+
+
And that's it! You now know how to use the Llamabot Zotero CLI to chat with a paper, retrieve keys, download papers, and ask questions about a paper. Happy chatting!
Let's see how to use the ChatBot class to enable you to chat with Mistral inside a Jupyter notebook.
+
from llamabot import ChatBot

code_tester = ChatBot(
    """
You are a Python quality assurance developer who delivers high quality unit tests for code.
You write tests using PyTest and not the built-in unittest library.
Write the tests using test functions and not using classes and class methods.
Here is the code to write tests against:
""",
    session_name="code-tested",
    model_name="mistral/mistral-medium",
    stream_target="stdout",
)
+
code_tester(
+'''
+class ChatBot:
+ """Chat Bot that is primed with a system prompt, accepts a human message.
+
+ Automatic chat memory management happens.
+
+ h/t Andrew Giessel/GPT4 for the idea.
+ """
+
+ def __init__(self, system_prompt, temperature=0.0, model_name="gpt-4"):
+ """Initialize the ChatBot.
+
+ :param system_prompt: The system prompt to use.
+ :param temperature: The model temperature to use.
+ See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
+ for more information.
+ :param model_name: The name of the OpenAI model to use.
+ """
+ self.model = ChatOpenAI(
+ model_name=model_name,
+ temperature=temperature,
+ streaming=True,
+ verbose=True,
+ callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
+ )
+ self.chat_history = [
+ SystemMessage(content="Always return Markdown-compatible text."),
+ SystemMessage(content=system_prompt),
+ ]
+
+ def __call__(self, human_message) -> Response:
+ """Call the ChatBot.
+
+ :param human_message: The human message to use.
+ :return: The response to the human message, primed by the system prompt.
+ """
+ self.chat_history.append(HumanMessage(content=human_message))
+ response = self.model(self.chat_history)
+ self.chat_history.append(response)
+ return response
+'''
+)
+
+<litellm.utils.CustomStreamWrapper object at 0x1140c2cd0>
+Here are the tests for the ChatBot class using PyTest and test functions:
+
import pytest
from chatbot import ChatBot, SystemMessage, HumanMessage
from openai_callback import ChatOpenAI

def test_chatbot_init():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    assert len(chatbot.chat_history) == 2
    assert isinstance(chatbot.chat_history[0], SystemMessage)
    assert isinstance(chatbot.chat_history[1], SystemMessage)
    assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."
    assert chatbot.chat_history[1].content == system_prompt

def test_chatbot_call():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What is the weather like today?"
    response = chatbot(human_message)
    assert len(chatbot.chat_history) == 4
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], ChatOpenAI.Response)
    assert chatbot.chat_history[2].content == human_message
    assert response == chatbot.chat_history[3]

def test_chatbot_call_multiple_times():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message1 = "What is the weather like today?"
    human_message2 = "What is the temperature outside?"
    chatbot(human_message1)
    chatbot(human_message2)
    assert len(chatbot.chat_history) == 6
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], ChatOpenAI.Response)
    assert isinstance(chatbot.chat_history[4], HumanMessage)
    assert isinstance(chatbot.chat_history[5], ChatOpenAI.Response)
    assert chatbot.chat_history[2].content == human_message1
    assert chatbot.chat_history[4].content == human_message2

def test_chatbot_temperature():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt, temperature=0.5)
    assert chatbot.model.temperature == 0.5

def test_chatbot_model_name():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt, model_name="gpt-3.5-turbo")
    assert chatbot.model.model_name == "gpt-3.5-turbo"
+
+Note that these tests assume that the `ChatOpenAI` class and its `Response` class are defined elsewhere in the codebase. Also, the tests do not actually call the OpenAI API, but rather assume that the `ChatOpenAI` class is a mock or stub that returns a canned response. If you want to test the actual API calls, you will need to set up a test environment with a valid API key and handle any rate limiting or other issues that may arise.

AIMessage(content='', role='assistant')

As you can see, ChatBot keeps track of conversation memory/history automatically.
+We can even access any item in the conversation by looking at the conversation history.
+
The __repr__ of a chatbot will simply print out the entire history:
+
code_tester
+
+[Human]
+
+class ChatBot:
+ """Chat Bot that is primed with a system prompt, accepts a human message.
+
+ Automatic chat memory management happens.
+
+ h/t Andrew Giessel/GPT4 for the idea.
+ """
+
+ def __init__(self, system_prompt, temperature=0.0, model_name="gpt-4"):
+ """Initialize the ChatBot.
+
+ :param system_prompt: The system prompt to use.
+ :param temperature: The model temperature to use.
+ See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
+ for more information.
+ :param model_name: The name of the OpenAI model to use.
+ """
+ self.model = ChatOpenAI(
+ model_name=model_name,
+ temperature=temperature,
+ streaming=True,
+ verbose=True,
+ callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
+ )
+ self.chat_history = [
+ SystemMessage(content="Always return Markdown-compatible text."),
+ SystemMessage(content=system_prompt),
+ ]
+
+ def __call__(self, human_message) -> Response:
+ """Call the ChatBot.
+
+ :param human_message: The human message to use.
+ :return: The response to the human message, primed by the system prompt.
+ """
+ self.chat_history.append(HumanMessage(content=human_message))
+ response = self.model(self.chat_history)
+ self.chat_history.append(response)
+ return response
+
+ def __repr__(self):
+ """Return a string representation of the ChatBot.
+
+ :return: A string representation of the ChatBot.
+ """
+ representation = ""
+
+ for message in self.chat_history:
+ if isinstance(message, SystemMessage):
+ prefix = "[System]
+"
+ elif isinstance(message, HumanMessage):
+ prefix = "[Human]
+"
+ elif isinstance(message, AIMessage):
+ prefix = "[AI]
+"
+
+ representation += f"{prefix}{message.content}" + "
+
+"
+ return representation
+
+ def panel(self, show: bool = True):
+ """Create a Panel app that wraps a LlamaBot.
+
+ :param show: Whether to show the app.
+ If False, we return the Panel app directly.
+ If True, we call `.show()` on the app.
+ :return: The Panel app, either showed or directly.
+ """
+
+ text_input = pn.widgets.TextAreaInput(placeholder="Start chatting...")
+ chat_history = pn.Column(*[])
+ send_button = pn.widgets.Button(name="Send", button_type="primary")
+
+ def b(event):
+ """Button click handler.
+
+ :param event: The button click event.
+ """
+ chat_messages = []
+ for message in self.chat_history:
+ if isinstance(message, SystemMessage):
+ pass
+ elif isinstance(message, HumanMessage):
+ chat_markdown = pn.pane.Markdown(f"Human: {message.content}")
+ chat_messages.append(chat_markdown)
+ elif isinstance(message, AIMessage):
+ chat_markdown = pn.pane.Markdown(f"Bot: {message.content}")
+ chat_messages.append(chat_markdown)
+
+ chat_messages.append(pn.pane.Markdown(f"Human: {text_input.value}"))
+ bot_reply = pn.pane.Markdown("Bot: ")
+ chat_messages.append(bot_reply)
+ chat_history.objects = chat_messages
+ markdown_handler = PanelMarkdownCallbackHandler(bot_reply)
+ self.model.callback_manager.set_handler(markdown_handler)
+ self(text_input.value)
+ text_input.value = ""
+
+ send_button.on_click(b)
+ input_pane = pn.Row(text_input, send_button)
+ output_pane = pn.Column(chat_history, scroll=True, height=500)
+
+ main = pn.Row(input_pane, output_pane)
+ app = pn.template.FastListTemplate(
+ site="ChatBot",
+ title="ChatBot",
+ main=main,
+ main_max_width="768px",
+ )
+ if show:
+ return app.show()
+ return app
+
+
+
+[AI]
+Here are some tests for the ChatBot class using PyTest:
+
import pytest
from your_module import ChatBot, SystemMessage, HumanMessage

def test_init():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    assert len(chatbot.chat_history) == 2
    assert isinstance(chatbot.chat_history[0], SystemMessage)
    assert isinstance(chatbot.chat_history[1], SystemMessage)
    assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."
    assert chatbot.chat_history[1].content == system_prompt

def test_call():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    response = chatbot(human_message)
    assert len(chatbot.chat_history) == 4
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], response.__class__)
    assert chatbot.chat_history[2].content == human_message

def test_repr():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    chatbot(human_message)
    expected_repr = (
        "[System]\n"
        "Always return Markdown-compatible text.\n\n"
        "[System]\n"
        "You are a helpful assistant.\n\n"
        "[Human]\n"
        "What's the weather like today?\n\n"
        "[AI]\n"
    )
    assert repr(chatbot) == expected_repr

def test_panel():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    app = chatbot.panel()
    assert isinstance(app, type(pn.template.FastListTemplate()))
+
+Note that the `test_panel` function assumes that the `pn` module is available in the test environment. If it is not, you may need to install it or mock it out for testing purposes.
+
+Also note that the `test_call` function assumes that the `response` object has a `__class__` attribute that can be used to check its type. If this is not the case, you may need to modify the test to use a different method of checking the type of the response object.
+
+Finally, note that these tests are not exhaustive and may not cover all possible edge cases or error conditions. You may want to add additional tests to ensure that the `ChatBot` class is working correctly in all scenarios.
+
On the other hand, accessing the .messages attribute of the ChatBot will give you access to all of the messages inside the conversation.
+
+
+
+
+
+
+
code_tester.messages
+
+
+
+
+
+
+
+
+[HumanMessage(content='\nclass ChatBot:\n """Chat Bot that is primed with a system prompt, accepts a human message.\n\n Automatic chat memory management happens.\n\n h/t Andrew Giessel/GPT4 for the idea.\n """\n\n def __init__(self, system_prompt, temperature=0.0, model_name="gpt-4"):\n """Initialize the ChatBot.\n\n :param system_prompt: The system prompt to use.\n :param temperature: The model temperature to use.\n See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature\n for more information.\n :param model_name: The name of the OpenAI model to use.\n """\n self.model = ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=True,\n verbose=True,\n callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n )\n self.chat_history = [\n SystemMessage(content="Always return Markdown-compatible text."),\n SystemMessage(content=system_prompt),\n ]\n\n def __call__(self, human_message) -> Response:\n """Call the ChatBot.\n\n :param human_message: The human message to use.\n :return: The response to the human message, primed by the system prompt.\n """\n self.chat_history.append(HumanMessage(content=human_message))\n response = self.model(self.chat_history)\n self.chat_history.append(response)\n return response\n\n def __repr__(self):\n """Return a string representation of the ChatBot.\n\n :return: A string representation of the ChatBot.\n """\n representation = ""\n\n for message in self.chat_history:\n if isinstance(message, SystemMessage):\n prefix = "[System]\n"\n elif isinstance(message, HumanMessage):\n prefix = "[Human]\n"\n elif isinstance(message, AIMessage):\n prefix = "[AI]\n"\n\n representation += f"{prefix}{message.content}" + "\n\n"\n return representation\n\n def panel(self, show: bool = True):\n """Create a Panel app that wraps a LlamaBot.\n\n :param show: Whether to show the app.\n If False, we return the Panel app directly.\n If True, we call `.show()` on the app.\n :return: 
The Panel app, either showed or directly.\n """\n\n text_input = pn.widgets.TextAreaInput(placeholder="Start chatting...")\n chat_history = pn.Column(*[])\n send_button = pn.widgets.Button(name="Send", button_type="primary")\n\n def b(event):\n """Button click handler.\n\n :param event: The button click event.\n """\n chat_messages = []\n for message in self.chat_history:\n if isinstance(message, SystemMessage):\n pass\n elif isinstance(message, HumanMessage):\n chat_markdown = pn.pane.Markdown(f"Human: {message.content}")\n chat_messages.append(chat_markdown)\n elif isinstance(message, AIMessage):\n chat_markdown = pn.pane.Markdown(f"Bot: {message.content}")\n chat_messages.append(chat_markdown)\n\n chat_messages.append(pn.pane.Markdown(f"Human: {text_input.value}"))\n bot_reply = pn.pane.Markdown("Bot: ")\n chat_messages.append(bot_reply)\n chat_history.objects = chat_messages\n markdown_handler = PanelMarkdownCallbackHandler(bot_reply)\n self.model.callback_manager.set_handler(markdown_handler)\n self(text_input.value)\n text_input.value = ""\n\n send_button.on_click(b)\n input_pane = pn.Row(text_input, send_button)\n output_pane = pn.Column(chat_history, scroll=True, height=500)\n\n main = pn.Row(input_pane, output_pane)\n app = pn.template.FastListTemplate(\n site="ChatBot",\n title="ChatBot",\n main=main,\n main_max_width="768px",\n )\n if show:\n return app.show()\n return app\n\n', role='user'),
+ AIMessage(content='Here are some tests for the ChatBot class using PyTest:\n```python\nimport pytest\nfrom your_module import ChatBot, SystemMessage, HumanMessage\n\ndef test_init():\n system_prompt = "You are a helpful assistant."\n chatbot = ChatBot(system_prompt)\n assert len(chatbot.chat_history) == 2\n assert isinstance(chatbot.chat_history[0], SystemMessage)\n assert isinstance(chatbot.chat_history[1], SystemMessage)\n assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."\n assert chatbot.chat_history[1].content == system_prompt\n\ndef test_call():\n system_prompt = "You are a helpful assistant."\n chatbot = ChatBot(system_prompt)\n human_message = "What\'s the weather like today?"\n response = chatbot(human_message)\n assert len(chatbot.chat_history) == 4\n assert isinstance(chatbot.chat_history[2], HumanMessage)\n assert isinstance(chatbot.chat_history[3], response.__class__)\n assert chatbot.chat_history[2].content == human_message\n\ndef test_repr():\n system_prompt = "You are a helpful assistant."\n chatbot = ChatBot(system_prompt)\n human_message = "What\'s the weather like today?"\n chatbot(human_message)\n expected_repr = (\n "[System]\\n"\n "Always return Markdown-compatible text.\\n\\n"\n "[System]\\n"\n "You are a helpful assistant.\\n\\n"\n "[Human]\\n"\n "What\'s the weather like today?\\n\\n"\n "[AI]\\n"\n )\n assert repr(chatbot) == expected_repr\n\ndef test_panel():\n system_prompt = "You are a helpful assistant."\n chatbot = ChatBot(system_prompt)\n app = chatbot.panel()\n assert isinstance(app, type(pn.template.FastListTemplate()))\n```\nNote that the `test_panel` function assumes that the `pn` module is available in the test environment. If it is not, you may need to install it or mock it out for testing purposes.\n\nAlso note that the `test_call` function assumes that the `response` object has a `__class__` attribute that can be used to check its type. 
If this is not the case, you may need to modify the test to use a different method of checking the type of the response object.\n\nFinally, note that these tests are not exhaustive and may not cover all possible edge cases or error conditions. You may want to add additional tests to ensure that the `ChatBot` class is working correctly in all scenarios.', role='assistant')]
+
You can even access any arbitrary message.
+
+
+
+
+
+
+
print(code_tester.messages[-1].content)
+
+
+
+
+
+
+
+
+Here are some tests for the ChatBot class using PyTest:
+
import pytest
from your_module import ChatBot, SystemMessage, HumanMessage

def test_init():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    assert len(chatbot.chat_history) == 2
    assert isinstance(chatbot.chat_history[0], SystemMessage)
    assert isinstance(chatbot.chat_history[1], SystemMessage)
    assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."
    assert chatbot.chat_history[1].content == system_prompt

def test_call():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    response = chatbot(human_message)
    assert len(chatbot.chat_history) == 4
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], response.__class__)
    assert chatbot.chat_history[2].content == human_message

def test_repr():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    chatbot(human_message)
    expected_repr = (
        "[System]\n"
        "Always return Markdown-compatible text.\n\n"
        "[System]\n"
        "You are a helpful assistant.\n\n"
        "[Human]\n"
        "What's the weather like today?\n\n"
        "[AI]\n"
    )
    assert repr(chatbot) == expected_repr

def test_panel():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    app = chatbot.panel()
    assert isinstance(app, type(pn.template.FastListTemplate()))
+
+Note that the `test_panel` function assumes that the `pn` module is available in the test environment. If it is not, you may need to install it or mock it out for testing purposes.
+
+Also note that the `test_call` function assumes that the `response` object has a `__class__` attribute that can be used to check its type. If this is not the case, you may need to modify the test to use a different method of checking the type of the response object.
+
+Finally, note that these tests are not exhaustive and may not cover all possible edge cases or error conditions. You may want to add additional tests to ensure that the `ChatBot` class is working correctly in all scenarios.
+
+
This notebook shows how to use the ImageBot API to generate images from text.
+Underneath the hood, it uses the OpenAI API.
+This bot can be combined with other bots (e.g. SimpleBot) to create rich content.
from llamabot import QueryBot
from pyprojroot import here

# If you're prototyping with your own PDF, uncomment the following code and use it instead of the saved index path:
# bot = QueryBot(
#     "You are a bot that reads a PDF book and responds to questions about that book.",
#     document_paths=[pdf_fname],
#     collection_name="machine-directed-evolution-paper",
#     model_name="mistral/mistral-medium",
# )

bot = QueryBot(
    "You are a bot that reads a PDF book and responds to questions about that book.",
    collection_name="machine-directed-evolution-paper",
    model_name="mistral/mistral-medium",
)
+
+
+
+
+
+
+
+
prompt="I'd like to use the workflow of this paper to educate colleagues. What are the main talking points I should use?"
+bot(prompt)
+
+
+
+
+
+
+
+
prompt="My colleagues are interested in evolving another enzyme. However, they may be unaware of how machine learning approaches will help them there. Based on this paper, what can I highlight that might overcome their lack of knowledge?"
+bot(prompt)
+
+
+
+
+
+
+
+
prompt="What data from the paper helped show this point, 'Machine-directed evolution is an efficient strategy for enzyme engineering, as it can help navigate enzyme sequence space more effectively and reduce the number of enzyme variants to be measured en route to a desirable enzyme under realistic process conditions.'?"
+bot(prompt)
+
+
+
+
+
+
+
+
prompt="How can I succinctly present the SGM vs. EPPCR results to my colleagues? Or in other words, how would Richard Feynman present these results?"
+bot(prompt)
+
+
+
+
+
+
+
+
Using SimpleBot below should prove that we are indeed querying a book
+and not just relying on the LLM's training set.
+
+
+
+
+
+
+
from llamabot import SimpleBot

sbot = SimpleBot("You are a bot that responds to human questions.")
sbot(prompt)
+
This shows how to build a blog Q&A bot using the text contents of Eric Ma's blog.
+
+
+
+
+
+
+
Setup: Download blog data
+
+
+
+
+
+
+
import tempfile
from pathlib import Path

import git  # GitPython, needed for the clone below

# Create a temporary directory
temp_dir = tempfile.TemporaryDirectory(dir="/tmp")

repo_url = "https://github.com/duckdb/duckdb-web"
# Clone the repository into the temporary directory
repo = git.Repo.clone_from(repo_url, temp_dir.name)

# Set the root directory to the cloned repository
root_dir = Path(temp_dir.name)
+
from slugify import slugify

# `source_files` is assumed to be a list of document paths collected from root_dir
# beforehand, e.g. with recursive_find(root_dir=root_dir, file_extension=".md").
bot = QueryBot(
    system_prompt="You are an expert in the code repository given to you.",
    collection_name=slugify(repo_url),
    document_paths=source_files,
)
+
+
+
+
+
+
+
+
bot("Give me an example of lambda functions in DuckDB.")
+
+
+
+
+
+
+
+
bot("What is your view on building a digital portfolio?")
+
+
+
+
+
+
+
+
bot("What were your experiences with the SciPy conference?")
+
+
+
+
+
+
+
+
bot("What tutorials did you attend at the SciPy conference in 2023?")
+
+
+
+
+
+
+
+
LlamaBot Code Query
+
+
+
+
+
+
+
from llamabot.file_finder import recursive_find
from pyprojroot import here

source_python_files = recursive_find(root_dir=here() / "llamabot", file_extension=".py")

codebot = QueryBot(
    "You are an expert in code Q&A.",
    collection_name="llamabot",
    document_paths=source_python_files,
    model_name="gpt-4-1106-preview",
)
+
+
+
+
+
+
+
+
codebot("How do I find all the files in a directory?")
+
+
+
+
+
+
+
+
codebot("Which Bot do I use to chat with my documents?")
+
+
+
+
+
+
+
+
codebot("Explain to me the architecture of SimpleBot.")
+
+
+
+
+
+
+
+
codebot("What are the CLI functions available in LlamaBot?")
+
One challenge I've found when working with prompts is recording what I get back when I try out different prompts.
+Copying and pasting is clearly not what I'd like to do.
+So I decided to write some functionality into Llamabot that lets us do recording of prompts
+and the responses returned by GPT.
+
Here's how to use it.
+
+
+
+
+
+
+
from llamabot import SimpleBot, PromptRecorder
+
+
+
+
+
+
+
+
bot = SimpleBot("You are a bot.")
+
+
+
+
+
+
+
+
# Try three different prompts.

prompt1 = (
    "You are a fitness coach who responds in 25 words or less. How do I gain muscle?"
)
prompt2 = "You are an expert fitness coach who responds in 100 words or less. How do I gain muscle?"
prompt3 = "You are an expert fitness coach who responds in 25 words or less and will not give lethal advice. How do I gain muscle?"

recorder = PromptRecorder()
+
prompt4 = "You are an expert fitness coach who responds in 25 words or less, and you help people who only have access to body weight exercises. How do I gain muscle?"

with recorder:
    bot(prompt4)
+
+"# Automatically write awesome commit messages\n\nAs a data scientist, I work with Git.\n\nIf you're anyt..."
+
+
+
+
+
+
+
+
+
+
And we'd like to create a function that takes in the text and gives us a draft LinkedIn post,
+complete with emojis,
+that is designed to entice others to read the blog post.
LlamaBot's SimpleBot lets us build that function easily.
+
+
+
+
+
+
+
from llamabot import SimpleBot

system_prompt = """You are a LinkedIn post generator bot.
A human will give you the text of a blog post that they've authored,
and you will compose a LinkedIn post that advertises it.
The post is intended to hook a reader into reading the blog post.
The LinkedIn post should be written with one line per sentence.
Each sentence should begin with an emoji appropriate to that sentence.
The post should be written in professional English and in first-person tone for the human.
"""

linkedin = SimpleBot(
    system_prompt=system_prompt,
    stream_target="stdout",  # this is the default!
)
+
+
Note that SimpleBot by default will always stream.
+All that you need to configure is where you want to stream to.
+
+
+
+
+
+
+
With linkedin, we can now pass in the blog text and, voila, get back a draft LinkedIn post.
+
+
+
+
+
+
+
linkedin_post = linkedin(blog_text)
+
+
+
+
+
+
+
+
+🚀 Excited to share my latest blog post on crafting meaningful commit messages!
+👀 Are your commit messages lacking detail and clarity?
+🤔 Ever wished for a tool that could automatically generate informative commit messages for you?
+🌟 Introducing my CLI tool within `llamabot` that crafts commit messages according to the Conventional Commits specification.
+🔍 With an OpenAI API key, GPT-4-32k will write a commit message that provides detailed insights into the changes made.
+🎉 The benefits of using meaningful commit messages are manifold - from improving collaboration to aiding in debugging and issue resolution.
+🔗 Check out the full blog post to learn more about the impact of meaningful commit messages and how to install `llamabot`! #Git #DataScience #Productivity
+
+
+
+
+
+
+
+
+
+
Now, you can edit it to your heart's content! :-)
+
+
+
+
+
+
+
Next up, we have streaming that is compatible with Panel's Chat interface,
+which expects the text to be returned in its entirety as it is being built up.
If you have an Ollama server running, you can hit the API using SimpleBot.
The prerequisite is that you have already run ollama pull <modelname>
+to download the model to the Ollama server.
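
For example, to make the Mistral model used later in this document available locally (a minimal setup sketch; the model name is assumed to match what you pass to SimpleBot):

```shell
# Download the Mistral model to the local Ollama server (run once).
ollama pull mistral
```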
+
+
+
+
+
+
+
+
print(system_prompt)
+
+
+
+
+
+
+
+
import os

from dotenv import load_dotenv

load_dotenv()

linkedin_ollama = SimpleBot(
    model_name="ollama/mistral",  # Specifying Ollama via the model_name argument is necessary!
    system_prompt=system_prompt,
    stream_target="stdout",  # this is the default!
    api_base=f"http://{os.getenv('OLLAMA_SERVER')}:11434",
)
linkedin_post = linkedin_ollama(blog_text)
+
LlamaBot implements a Pythonic interface to LLMs,
+making it much easier to experiment with LLMs in a Jupyter notebook
+and build Python apps that utilize LLMs.
+All models supported by LiteLLM are supported by LlamaBot.
+
Install LlamaBot
+
To install LlamaBot:
+
pip install llamabot
+
+
Get access to LLMs
+
Option 1: Using local models with Ollama
+
LlamaBot supports using local models through Ollama.
+To do so, head over to the Ollama website and install Ollama.
+Then follow the instructions below.
+
Option 2: Use an API provider
+
OpenAI
+
If you have an OpenAI API key, then configure LlamaBot to use the API key by running:

export OPENAI_API_KEY="your-api-key-goes-here"

Mistral

If you have a Mistral API key, then configure LlamaBot to use the API key by running:
+
export MISTRAL_API_KEY="your-api-key-goes-here"
+
+
Other API providers
+
Other API providers will usually specify an environment variable to set.
+If you have an API key, then set the environment variable accordingly.
+
How to use
+
SimpleBot
+
The simplest use case of LlamaBot
+is to create a SimpleBot that keeps no record of chat history.
+This is effectively the same as a stateless function
+that you program with natural language instructions rather than code.
+This is useful for prompt experimentation,
+or for creating simple bots that are preconditioned on an instruction to handle texts
+and are then called upon repeatedly with different texts.
+For example, to create a Bot that explains a given chunk of text
+like Richard Feynman would:
+
from llamabot import SimpleBot

feynman = SimpleBot(
    "You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back.",
    model_name="gpt-3.5-turbo",
)
+
+
Now, feynman is callable on any arbitrary chunk of text and will return a rephrasing of that text in Richard Feynman's style (or more accurately, according to the style prescribed by the prompt).
+For example:
+
feynman("Enzyme function annotation is a fundamental challenge, and numerous computational tools have been developed. However, most of these tools cannot accurately predict functional annotations, such as enzyme commission (EC) number, for less-studied proteins or those with previously uncharacterized functions or multiple activities. We present a machine learning algorithm named CLEAN (contrastive learning–enabled enzyme annotation) to assign EC numbers to enzymes with better accuracy, reliability, and sensitivity compared with the state-of-the-art tool BLASTp. The contrastive learning framework empowers CLEAN to confidently (i) annotate understudied enzymes, (ii) correct mislabeled enzymes, and (iii) identify promiscuous enzymes with two or more EC numbers—functions that we demonstrate by systematic in silico and in vitro experiments. We anticipate that this tool will be widely used for predicting the functions of uncharacterized enzymes, thereby advancing many fields, such as genomics, synthetic biology, and biocatalysis.")
+
+
This will return something that looks like:
+
Alright, let's break this down.
+
+Enzymes are like little biological machines that help speed up chemical reactions in our
+bodies. Each enzyme has a specific job, or function, and we use something called an
+Enzyme Commission (EC) number to categorize these functions.
+
+Now, the problem is that we don't always know what function an enzyme has, especially if
+it's a less-studied or new enzyme. This is where computational tools come in. They try
+to predict the function of these enzymes, but they often struggle to do so accurately.
+
+So, the folks here have developed a new tool called CLEAN, which stands for contrastive
+learning–enabled enzyme annotation. This tool uses a machine learning algorithm, which
+is a type of artificial intelligence that learns from data to make predictions or
+decisions.
+
+CLEAN uses a method called contrastive learning. Imagine you have a bunch of pictures of
+cats and dogs, and you want to teach a machine to tell the difference. You'd show it
+pairs of pictures, some of the same animal (two cats or two dogs) and some of different
+animals (a cat and a dog). The machine would learn to tell the difference by contrasting
+the features of the two pictures. That's the basic idea behind contrastive learning.
+
+CLEAN uses this method to predict the EC numbers of enzymes more accurately than
+previous tools. It can confidently annotate understudied enzymes, correct mislabeled
+enzymes, and even identify enzymes that have more than one function.
+
+The creators of CLEAN have tested it with both computer simulations and lab experiments,
+and they believe it will be a valuable tool for predicting the functions of unknown
+enzymes. This could have big implications for fields like genomics, synthetic biology,
+and biocatalysis, which all rely on understanding how enzymes work.
+
+
If you want to use an Ollama model hosted locally,
+then you would use the following syntax:
+
from llamabot import SimpleBot

bot = SimpleBot(
    "You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back.",
    model_name="ollama/llama2:13b",
)
+
+
Simply specify the model_name keyword argument
+and provide a model name from the Ollama library of models
+prefixed by ollama/.
+All you need to do is make sure Ollama is running locally;
+see the Ollama documentation for more details.
+(The same can be done for the ChatBot and QueryBot classes below!)
+
Chat Bot
+
To experiment with a Chat Bot in the Jupyter Notebook,
+we also provide the ChatBot interface.
+This interface automagically keeps track of chat history
+for as long as your Jupyter session is alive.
+Doing so allows you to use your own local Jupyter Notebook as a chat interface.
+
For example:
+
from llamabot import ChatBot

feynman = ChatBot(
    "You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back.",
    session_name="feynman_chat",
)
feynman("Enzyme function annotation is a fundamental challenge, and numerous computational tools have been developed. However, most of these tools cannot accurately predict functional annotations, such as enzyme commission (EC) number, for less-studied proteins or those with previously uncharacterized functions or multiple activities. We present a machine learning algorithm named CLEAN (contrastive learning–enabled enzyme annotation) to assign EC numbers to enzymes with better accuracy, reliability, and sensitivity compared with the state-of-the-art tool BLASTp. The contrastive learning framework empowers CLEAN to confidently (i) annotate understudied enzymes, (ii) correct mislabeled enzymes, and (iii) identify promiscuous enzymes with two or more EC numbers—functions that we demonstrate by systematic in silico and in vitro experiments. We anticipate that this tool will be widely used for predicting the functions of uncharacterized enzymes, thereby advancing many fields, such as genomics, synthetic biology, and biocatalysis.")
+
+
With the chat history available, you can ask a follow-up question:
+
feynman("Is there a simpler way to rephrase the text such that a high schooler would understand it?")
+
+
And your bot will work with the chat history to respond.
+
QueryBot
+
The final bot provided is a QueryBot.
+This bot lets you query a collection of documents.
+To use it, you have two options:
+
+
Pass in a list of paths to text files, or
+
Pass in the session name of a previously instantiated QueryBot. (This will load the previously computed text index into memory.)
+
+
As an illustrative example:
+
from IPython.display import Markdown, display

from llamabot import QueryBot

# This loads my previously-embedded blog text:
bot = QueryBot(system_message="You are an expert on Eric Ma's blog.", session_name="eric_ma_blog")
# alternatively, pass document paths directly:
# bot = QueryBot(
#     system_message="You are an expert on Eric Ma's blog.",
#     session_name="eric_ma_blog",
#     document_paths=[Path("/path/to/blog/post1.txt"), Path("/path/to/blog/post2.txt"), ...],
# )
result = bot("Do you have any advice for me on career development?")
display(Markdown(result.response))
+
+
ImageBot
+
With the release of the OpenAI API updates,
+as long as you have an OpenAI API key,
+you can generate images with LlamaBot:
+
from llamabot import ImageBot

bot = ImageBot()
# Within a Jupyter notebook:
url = bot("A painting of a dog.")

# Or within a Python script:
filepath = bot("A painting of a dog.")

# Now, you can do whatever you need with the url or file path.
+
+
If you're in a Jupyter Notebook,
+you'll see the image show up magically as part of the output cell as well.
+
CLI Demos
+
Llamabot comes with CLI demos of what can be built with it and a bit of supporting code.
+
Here is one where I expose a chatbot directly at the command line using llamabot chat:
+
+
+
And here is another one where llamabot is used as part of the backend of a CLI app
+to chat with one's Zotero library using llamabot zotero chat:
+
+
+
And finally, here is one where I use llamabot's SimpleBot to create a bot
+that automatically writes commit messages for me.
+
+
+
Contributing
+
New features
+
New features are welcome!
+These are early and exciting days for users of large language models.
Our development goal is to keep the project as simple as possible.
Feature requests that come with a pull request will be prioritized;
+the simpler the implementation of a feature (in terms of maintenance burden),
+the more likely it will be approved.
+
Bug reports
+
Please submit a bug report using the issue tracker.
This tutorial was written by GPT4 and edited by a human.
+
+
The doc processor is a Python script designed to preprocess documents
+by loading them from various file formats
+and splitting them into smaller sub-documents.
+It works in two main steps:
+
(1) Loading documents:
+The magic_load_doc function is used to load a document from a file.
+It automatically detects the file format based on the file extension
+and uses the appropriate loader to read the content.
+Supported file formats include PDF, DOCX, PPTX, XLSX, Markdown, and IPYNB.
+If the file format is not recognized, it is treated as a plain text file.
+
(2) Splitting documents:
+The split_document function is used to split a document
+into smaller sub-documents using a token text splitter.
+You can specify the maximum length of each sub-document (chunk_size)
+and the number of tokens to overlap between each sub-document (chunk_overlap).
+The function returns a list of sub-documents.
+
To use the doc processor,
+simply import the required functions and call them with the appropriate parameters.
+For example:
+
from llamabot.doc_processor import magic_load_doc, split_document

# Load a document from a file
file_path = "path/to/your/document.pdf"
documents = magic_load_doc(file_path)

# Split the document into sub-documents
chunk_size = 2000
chunk_overlap = 0
sub_documents = [split_document(doc, chunk_size, chunk_overlap) for doc in documents]
+
+
This will give you a list of sub-documents that can be further processed inside QueryBot.
In this tutorial, we will explore a module that provides functions for file handling in Python. The module contains three main functions:
+
+
recursive_find(root_dir: Path, file_extension: str) -> List[Path]: Find all files in a given path with a given extension.
+
check_in_git_repo(path) -> bool: Check if a given path is in a git repository.
+
read_file(path: Path) -> str: Read a file.
+
+
Let's dive into each function and see how they can be used.
+
1. Finding Files Recursively
+
The recursive_find function allows you to find all files with a specific extension within a given directory and its subdirectories. This can be useful when you want to process all files of a certain type in a project.
+
Usage
+
To use the recursive_find function, you need to provide two arguments:
+
+
root_dir: The directory in which to search for files.
+
file_extension: The file extension to search for. For example, use ".py" for Python files, not "py".
+
+
Here's an example of how to use the recursive_find function:
This will output a list of Path objects representing all the Python files found in the specified directory and its subdirectories.
+
2. Checking if a Path is in a Git Repository
+
The check_in_git_repo function allows you to check if a given path is part of a git repository. This can be useful when you want to ensure that your code is only executed within a version-controlled environment.
+
Usage
+
To use the check_in_git_repo function, you need to provide one argument:
+
+
path: The path to check.
+
+
Here's an example of how to use the check_in_git_repo function:
This will output True if the specified path is part of a git repository, and False otherwise.
+
3. Reading a File
+
The read_file function allows you to read the contents of a file. This can be useful when you want to process the contents of a file, such as analyzing code or parsing data.
+
Usage
+
To use the read_file function, you need to provide one argument:
+
+
path: The path to the file to be read.
+
+
Here's an example of how to use the read_file function:
This will output the contents of the specified file.
+
Conclusion
+
In this tutorial, we have explored a module that provides functions for file handling in Python. By using these functions, you can easily find files with specific extensions, check if a path is part of a git repository, and read the contents of a file. These functions can be combined to create powerful file processing pipelines in your Python projects.
This tutorial was written by GPT4 and edited by a human.
+
+
The prompt recorder is a class named PromptRecorder that helps in recording prompts and responses. It works as a context manager, allowing you to record prompts and responses within a specific context. Here's a brief overview of how it works:
+
+
+
The PromptRecorder class is initialized with an empty list called prompts_and_responses to store the prompts and responses.
+
+
+
When entering the context manager using the with statement, the __enter__() method is called, which sets the current instance of the PromptRecorder as the active recorder in the prompt_recorder_var context variable.
+
+
+
To log a prompt and response, the log() method is called with the prompt and response as arguments. This method appends the prompt and response as a dictionary to the prompts_and_responses list.
+
+
+
The autorecord() function is provided to be called within every bot. It checks if there is an active PromptRecorder instance in the context and logs the prompt and response using the log() method.
+
+
+
When exiting the context manager, the __exit__() method is called, which resets the prompt_recorder_var context variable to None and prints a message indicating that the recording is complete.
+
+
+
The PromptRecorder class also provides methods to represent the recorded data in different formats, such as a string representation (__repr__()), an HTML representation (_repr_html_()), a pandas DataFrame representation (dataframe()), and a panel representation (panel()).
+
+
+
By using the PromptRecorder class as a context manager, you can easily record prompts and responses within a specific context and then analyze or display the recorded data in various formats.
This new version introduces experiments with Llamahub document loaders, updates workspace settings, and makes Panel an official dependency. It also includes a new Panel example and a correction in the docstring for accuracy.
+
New Features
+
+
Experiments with Llamahub document loaders have been added to enhance functionality (8faea4) (Eric Ma)
+
Workspace settings have been updated to improve user experience (3b5522) (Eric Ma)
+
Panel has been made an official dependency to streamline the software's requirements (781906) (Eric Ma)
+
A new Panel example has been added to provide users with more comprehensive usage scenarios (2209cd) (Eric Ma)
+
+
Bug Fixes
+
+
The docstring has been edited for correctness to ensure accurate documentation (0ee1ca) (Eric Ma)
This version introduces automatic recording of prompts, improves the recording process, and verifies its functionality. It also includes a cleanup of notebooks and adds loguru as a dependency.
+
New Features
+
+
Added automatic recording of prompts (2c956f) (Eric Ma)
+
Improved automatic recording of prompts (50779c) (Eric Ma)
+
Verified that the recorder works (aa428a) (Eric Ma)
+
Added loguru as a dependency (23ee02) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed return type (3986b7) (Eric Ma)
+
+
Other Changes
+
+
Cleaned up notebooks (3c5a74) (Eric Ma)
+
Reorganized notebook structure (4bee78) (Eric Ma)
+
Enabled context manager for recording prompt-response pairs (e6a8b4) (Eric Ma)
+
Settled on a stuff-the-text-into-prompt pattern rather than synthesizing and refining response. This makes things faster (7020d0) (Eric Ma)
+
Enabled arbitrary loading of documents, not just text files (e4223c) (Eric Ma)
+
Switched to using servable for feynman example (e50e45) (Eric Ma)
+
Disabled test mode. A different way to make mock API calls work will be found (7a7beb) (Eric Ma)
+
More experiments with llamahub loaders (4b2871) (Eric Ma)
This new version includes an update to the example about querying PDFs and a bug fix related to query nodes. The version has been bumped from 0.0.12 to 0.0.13.
+
New Features
+
+
The example about querying PDFs has been updated to provide more clarity and better understanding (8169ce) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed a bug where query nodes were hard-coded to 3, which limited the flexibility of the system. Now, the number of query nodes is dynamic (6802ac) (Eric Ma)
This new version includes several enhancements and bug fixes to improve the overall performance and user experience of the Llamabot. The Python environment and llama_index versions have been pinned for stability. The chatbot panel app now uses a width of 600 pixels for better UI. The Querybot system message now applies SEMBR for improved readability.
+
New Features
+
+
The chatbot panel app now uses a width of 600 pixels for a more user-friendly interface (7e2f05) (Eric Ma)
+
Faux chat history of length 6000 tokens is now used as context for further responses in chatbot, enhancing the chatbot's response accuracy (02ef9d) (Eric Ma)
+
SEMBR has been applied on the Querybot system message for improved readability (b3c53c) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed UI for a smoother user experience (733759) (Eric Ma)
+
+
Deprecations
+
+
The version of the Python environment has been pinned to 3.9 to ensure compatibility and stability (5db1ae) (Eric Ma)
+
The version of llama_index has been pinned for stability (930cbb) (Eric Ma)
+
Settled on an older version of langchain for the time being (af0938) (Eric Ma)
+
+
Refactors
+
+
Refactored Querybot to allow loading of documents later, enhancing the flexibility of the system (3103d9) (Eric Ma)
This new version includes updates to the versions of bokeh used in the project. The bokeh version has been pinned to ensure compatibility and stability of the project.
+
New Features
+
+
No new features were added in this version.
+
+
Bug Fixes
+
+
The versions of bokeh used in the project have been updated to ensure compatibility and stability. (96a89e) (Eric Ma)
+
+
Deprecations
+
+
The bokeh version has been pinned to less than or equal to 3.1.0 to prevent potential issues with future versions. (2a93d9) (Eric Ma)
This new version includes a variety of enhancements and new features, including the addition of a coding bot panel app, a refactor of the CLI, and updates to the notebooks and tutorial materials. The version also includes the integration of pyzotero to dependencies and the beginning of a prototype for coding and diffbots in a library of bots.
+
New Features
+
+
Added a coding bot panel app (a139420) (Eric Ma)
+
Refactored the CLI for better performance and usability (b067193) (Eric Ma)
+
Updated the scratch notebook (c0a4e2b) (Eric Ma)
+
Added a code tutorial notebook to help users understand how to use the application (ae42c64) (Eric Ma)
+
Added instructions in the codingbot source file to provide more context for the code tutorial bot (79236f) (Eric Ma)
+
Added more notebooks to save work and enhance the user experience (891e570) (Eric Ma)
+
Added pyzotero to dependencies to enhance the functionality of the application (1d55eb2) (Eric Ma)
+
Began the prototype of coding and diffbots in a library of bots to expand the application's capabilities (f26c789) (Eric Ma)
+
Updated the blog text demo for better user understanding (ae9c539) (Eric Ma)
+
Updated base bots to use the updated llamaindex for improved performance (baad16c) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed the placement of noqa DAR101 for better code quality (d73b14f) (Eric Ma)
This new version includes improvements to the basecallbackmanager and the pinned version of langchain.
+
New Features
+
+
The basecallbackmanager now requires an explicit handlers argument, enhancing clarity and reducing potential for errors (169f72) (Eric Ma)
+
+
Bug Fixes
+
+
No bug fixes in this release.
+
+
Deprecations
+
+
The version of langchain used in the project has been pinned, ensuring stability and compatibility with this version of the software (631c29) (Eric Ma)
This new version includes several important updates to improve the functionality and security of the Llamabot application.
+
New Features
+
+
All websocket origins are now allowed, enhancing the connectivity and compatibility of the application (a685f0) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed the issue with the Querybot index retriever by updating it with the refactored Llamaindex code. This should improve the accuracy and efficiency of the index retriever (12a87b) (Eric Ma)
+
+
Deprecations
+
+
The opening of the app has been disabled. This is a temporary measure for security reasons and will be addressed in future updates (e6e1a4) (Eric Ma)
This version includes several enhancements and new features, including the addition of new modules and functions, improvements to documentation, and minor updates to notebooks.
+
New Features
+
+
Added pyperclip to environment spec and requirements (e57f7f) (Eric Ma)
+
Added additional config path for llamabot's OpenAI API key (713579) (Eric Ma)
+
Added function for getting function source from a .py file (07e402) (Eric Ma)
+
Added get_valid_input for user prompts (b001d2) (Eric Ma)
+
Initial commit of llamabot's python bot (e5c67d) (Eric Ma)
+
Added read_file to the file_finder.py source file (5b44d5) (Eric Ma)
This version includes a number of new features, improvements, and bug fixes. The main focus of this release was to enhance the testing environment, improve code generation, and add new functionalities.
+
New Features
+
+
Added float_to_top = true for isort in pyproject.toml config (0c58f8) (Eric Ma)
+
Added tests for doc_processor (d679a0) (Eric Ma)
+
Modified prompt to ensure that docstring indentation is done correctly (1b1ade) (Eric Ma)
+
Added functions replace_object_in_file and insert_docstring (adf44a) (Eric Ma)
+
Added dummy module for experimentation purposes (f27a71) (Eric Ma)
+
Added validation of chunk_overlap value (9d2070) (Eric Ma)
+
Added tests for get_valid_input (0a984b) (Eric Ma)
This new version includes a significant change in the underlying parsing library, moving from astunparse to astor. This change is expected to improve the performance and reliability of the Llamabot.
+
New Features
+
+
The version of Llamabot has been updated from 0.0.32 to 0.0.33 (2e4c61) (Eric Ma)
+
+
Bug Fixes
+
+
No bug fixes in this release.
+
+
Deprecations
+
+
The use of astunparse has been deprecated and replaced with astor for better performance and reliability (326c59) (Eric Ma)
This version introduces several new features and improvements to the LlamaBot CLI, including the addition of git diff display and commit message generation, a repr method for the Dummy class, and handling for no staged changes in commit_message. It also includes several refactors and a documentation update.
+
New Features
+
+
Added git diff display and commit message generation functionality to the LlamaBot CLI. This feature imports the get_git_diff function from llamabot.code_manipulation, creates a SimpleBot instance for commit message generation, defines a commit_message function with a text.prompt decorator, and calls commitbot with the generated commit message. (1a6104) (Eric Ma)
+
Added a repr method to the Dummy class in dummy.py. This provides a string representation of the object, making it easier to inspect and debug instances of the Dummy class. (ae3e7c) (Eric Ma)
+
Updated commit_message function in cli/git.py to check for staged changes before generating a commit message. If no staged changes are found, a message is printed and the function returns. The get_git_diff function in code_manipulation.py was also updated to return an empty string if there are no staged changes. (ed7a3d) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed typos in the llamabot CLI git module. Changes include renaming git.app to git.gitapp in llamabot/cli/init.py, adding missing parentheses to decorators in llamabot/cli/git.py, and replacing "docstring" with "commit message" in the user prompt. (860930) (Eric Ma)
+
Refactored Typer app and command decorators in git.py. The app was renamed to gitapp for better context, and decorators were updated to use the new gitapp variable. (f7af8b) (Eric Ma)
+
+
Deprecations
+
+
Removed the unnecessary hello command from the git.py file in the llamabot CLI. This simplifies the codebase and focuses on the core functionality. (8f0b9d) (Eric Ma)
+
+
Documentation
+
+
Added a detailed explanation of the Conventional Commits specification to the git.py file. This outlines the various commit types, scopes, and footers, as well as their correlation with Semantic Versioning. This information will help users understand the importance of following the Conventional Commits specification when crafting their commit messages. (ca9b1c) (Eric Ma)
This new version introduces an autocommit option to the commit_message function in llamabot/cli/git.py. This feature allows for automatic committing of changes using the generated commit message when the autocommit parameter is set to True.
+
New Features
+
+
Added an autocommit option to the commit_message function in llamabot/cli/git.py. When set to True, changes are automatically committed using the generated commit message. (5c202a) (Eric Ma)
This new version includes a refactor of the CLI to improve code readability and understanding.
+
New Features
+
No new features were added in this version.
+
Bug Fixes
+
No bug fixes were made in this version.
+
Refactors
+
+
The commit_message function in llamabot/cli/git.py has been renamed to commit to better reflect its purpose of committing staged changes (ae7b10) (Eric Ma)
This new version introduces automatic push to origin after commit and adds pytest-mock to the pr-tests workflow.
+
New Features
+
+
Automatic push to origin after commit has been added. This feature simplifies the workflow and ensures that changes are synced with the remote repository. (7a0151) (Eric Ma)
+
Pytest-mock has been added to the pr-tests workflow. This enables mocking in test cases. (ab1cb0) (Eric Ma)
This new version includes several updates to the documentation and build process, as well as the addition of an all-contributors section. The version also includes a fix to the build command.
+
New Features
+
+
Added an all-contributors section to the documentation (9a65f2) (Eric Ma)
+
Added an all-contributors badge to the project (71be05) (Eric Ma)
+
Added all-contributors configuration to the project (7ae0d2) (Eric Ma)
+
Updated the all-contributors specification to correct the project name and owner (3e841c) (Eric Ma)
+
Switched to using docs/index.md for all-contributors (0aa0fe) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed the build command for the project (6e8e65) (Eric Ma)
+
+
Deprecations
+
+
Temporarily removed the badge from the project (4bef48) (Eric Ma)
+
+
Other Changes
+
+
Removed unnecessary whitespace from the project (943ba4) (Eric Ma)
+
Reformatted the allcontributors.rc file (8ae972) (Eric Ma)
+
Updated the badge for the project (6c0850) (Eric Ma)
+
Updated the table in the documentation (047481) (Eric Ma)
+
Bumped the project version from 0.0.3 to 0.0.4 (943e5a) (Eric Ma)
This new version introduces a handy version command to the llamabot CLI and updates the bumpversion configuration.
+
New Features
+
+
A new version command has been added to the llamabot CLI. This command prints the current version of the application. (a88028) (Eric Ma)
+
The .bumpversion.cfg file has been updated to include llamabot/version.py for version updates. This ensures that the version number is updated consistently across the application. (a88028) (Eric Ma)
This new version introduces an enhancement to the get_valid_input function and a new feature that allows users to manually edit the generated commit message using their system's default text editor.
+
New Features
+
+
Manual commit message editing option has been added. Users can now manually edit the generated commit message using their system's default text editor. This is done by creating a temporary file with the generated message, opening it in the editor, and reading the edited message back into the script. The 'm' option is added to the user input prompt to trigger manual editing. (37baea) (Eric Ma)
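The temp-file-and-editor round trip described above can be sketched like this; the helper name is hypothetical, and `nano` as the fallback editor is an assumption:

```python
import os
import subprocess
import tempfile


def edit_message_in_editor(message: str) -> str:
    """Open `message` in the user's default editor and return the edited text."""
    editor = os.environ.get("EDITOR", "nano")  # fallback editor is an assumption
    with tempfile.NamedTemporaryFile("w+", suffix=".txt", delete=False) as tmp:
        tmp.write(message)
        path = tmp.name
    try:
        # Block until the editor exits, then read the (possibly edited) file back.
        subprocess.run([editor, path], check=True)
        with open(path) as f:
            return f.read()
    finally:
        os.unlink(path)
```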
+
+
Enhancements
+
+
The get_valid_input function in cli/utils has been refactored for better input validation. A valid_inputs parameter has been added to the function, the input prompt has been updated to include valid_inputs, and the input validation now checks against valid_inputs. The error message has also been updated to display valid_inputs options. (b32986) (Eric Ma)
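A minimal sketch of the refactored validation loop, assuming the shape described above (the exact prompt wording and defaults are guesses):

```python
def get_valid_input(prompt: str, valid_inputs: tuple = ("y", "n")) -> str:
    """Prompt repeatedly until the response is one of `valid_inputs`."""
    while True:
        answer = input(f"{prompt} ({'/'.join(valid_inputs)}) ").strip().lower()
        if answer in valid_inputs:
            return answer
        # Error message echoes the valid options back to the user.
        print(f"Please enter one of: {', '.join(valid_inputs)}.")
```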
This new version includes several enhancements to the code workflow and code manipulation features, as well as an update to the default model_name in various bot classes.
+
New Features
+
+
Added new code cells and autoreload in code_workflow.ipynb. This includes new empty code cells for future implementation, a placeholder in one of the cells, autoreload magic commands for a better development experience, and an import and demonstration of the get_dependencies function (5f6880) (Eric Ma)
+
Introduced the get_dependencies function to retrieve a list of dependencies for a specified object in a source file. Also fixed the return type annotation for the get_git_diff function and added a test case for the get_dependencies function in test_code_manipulation.py (2d816f) (Eric Ma)
+
Updated the default model_name parameter value from "gpt-4" to "gpt-4-32k" in the constructors of ChatBot, QueryBot, and SimpleBot classes (c93ba3) (Eric Ma)
+
+
Bug Fixes
+
+
No bug fixes in this release.
+
+
Deprecations
+
+
No deprecations in this release.
+
+
Refactors
+
+
Reorganized imports and improved test generation. This includes moving the get_valid_input import to the top of llamabot/cli/git.py, adding the get_dependencies import to llamabot/cli/python.py, and updating the tests function in llamabot/prompt_library/coding.py to include dependent source files for better test generation (f75202) (Eric Ma)
This new version includes a variety of enhancements and new features, including the addition of new notebooks, improvements to the Zotero and QueryBot functionalities, and the integration of Google Calendar API.
+
New Features
+
+
Added blogging assistant and gcal notebooks for blog tagger, summarizer, and Google Calendar related tasks. Also, updated existing notebooks for cache and Zotero with new features and improvements (2378760) (Eric Ma)
+
Implemented updates to all attendees on event creation and update in Google Calendar (57f80de) (Eric Ma)
This new version includes a refactor of the get_git_diff function in the code_manipulation module. The default value for the repo_path parameter has been changed to improve the function's usability.
+
New Features
+
+
No new features were added in this version.
+
+
Bug Fixes
+
+
No bug fixes were made in this version.
+
+
Refactors
+
+
The default value of the repo_path parameter in the get_git_diff function has been changed from here() to None. Additionally, a conditional check has been added to set repo_path to here() if it is None. This change makes the function more flexible and easier to use. (96e69b) (Eric Ma)
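The pattern here is resolving the default at call time rather than at import time. A sketch, using `Path.cwd()` as a stand-in for `here()`:

```python
from pathlib import Path
from typing import Optional


def resolve_repo_path(repo_path: Optional[Path] = None) -> Path:
    """Resolve the repository path at call time.

    A None default keeps the function signature cheap to evaluate at import
    time; the fallback (here() in the real code, Path.cwd() in this sketch)
    only runs when the caller did not supply a path.
    """
    if repo_path is None:
        repo_path = Path.cwd()
    return Path(repo_path)
```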
This new version includes a significant refactor of the tutorialbot, improving its flexibility and maintainability.
+
New Features
+
+
The tutorialbot has been refactored from a SimpleBot instance to a function that returns a SimpleBot instance. This change enhances the flexibility of the bot, allowing for more diverse use cases. (d85426) (Eric Ma)
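The instance-to-factory refactor looks roughly like this; the `SimpleBot` stand-in and the `audience` parameter are illustrative, not llamabot's actual API:

```python
from dataclasses import dataclass


@dataclass
class SimpleBot:
    """Minimal stand-in for llamabot's SimpleBot."""
    system_prompt: str


def tutorialbot(audience: str = "beginners") -> SimpleBot:
    # A factory returns a fresh, configurable bot per call, instead of a
    # single shared module-level instance baked in at import time.
    return SimpleBot(system_prompt=f"You write tutorials for {audience}.")
```

Callers that need different configurations now get independent bots instead of mutating shared state.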
This new version introduces improvements to the progress reporting in the chat_paper function of the Zotero feature.
+
New Features
+
+
Improved progress reporting in the chat_paper function of the Zotero feature. The changes include moving the retrieverbot response and paper_key retrieval outside of the progress context, adding progress tasks for embedding Zotero library, downloading paper, and initializing docbot, and wrapping relevant sections of code with progress context (da0fc0) (Eric Ma)
This new version introduces a tutorial for the Zotero CLI feature of Llamabot and refactors the tutorial generation process for improved code readability and maintainability.
+
New Features
+
+
A comprehensive tutorial for using the Llamabot Zotero CLI has been added. This tutorial includes sections on prerequisites, configuration, syncing Zotero items, and chatting with a paper, with examples and explanations provided for each step. (711011) (Eric Ma)
+
+
Bug Fixes
+
+
No bug fixes in this release.
+
+
Deprecations
+
+
No deprecations in this release.
+
+
Refactors
+
+
The tutorial generation process has been updated. Now, the tutorialbot is instantiated before calling the module_tutorial_writer, which improves code readability and maintainability. (99f487) (Eric Ma)
This new version introduces the QueryBot prototype and its corresponding tests. It also includes improvements in documentation and example notebooks. The version also includes some housekeeping changes like ignoring certain files and directories.
+
New Features
+
+
QueryBot prototype added to the project. This is a new feature that allows users to interact with the bot using queries. (c190e03) (Eric Ma)
+
Tests for QueryBot have been added to ensure its proper functioning. (78a791d) (Eric Ma)
+
A new example on how to build a simple panel app has been added. This will help users understand how to create their own apps. (7e928b7) (Eric Ma)
+
A notebook chatbot example has been added to provide a practical example of how to use the chatbot in a notebook environment. (7e96304) (Eric Ma)
+
A simplebot notebook has been added to the project. This notebook provides a simple example of a bot. (0121db5) (Eric Ma)
+
+
Bug Fixes
+
+
The chat notebook example is now properly executed. This fix ensures that the example runs as expected. (60803dd) (Eric Ma)
+
+
Deprecations
+
+
Notebook execution has been disabled. This change is made to prevent automatic execution of notebooks. (89c39c1) (Eric Ma)
+
+
Other Changes
+
+
The project version has been bumped from 0.0.4 to 0.0.5. (e94a28) (Eric Ma)
+
Docstrings have been added to the project for better code understanding and readability. (2eb8c62) (Eric Ma)
+
The directory 'data/' is now ignored by Git. This prevents unnecessary tracking of changes in this directory. (4252cd4) (Eric Ma)
+
The 'mknotebooks' has been moved to the pip section. (e5f0e9d) (Eric Ma)
+
Temporary markdown files created by 'mknotebooks' are now ignored by Git. This prevents unnecessary tracking of these temporary files. (1e4821d) (Eric Ma)
+
The README file has been updated twice to provide the latest information about the project. (b3e02e2, 32f32db) (Eric Ma)
This new version introduces enhanced functionality to the chat_paper function and the get_key prompt in zotero.py, adds a streaming option to the QueryBot class in querybot.py, and removes a debugging print statement in doc_processor.py.
+
New Features
+
+
The chat_paper function in zotero.py now supports multiple paper keys, provides a list of paper titles for the user to choose from, and displays a summary of the selected paper (1c47a8) (Eric Ma)
+
The get_key prompt in zotero.py has been updated to return a list of keys instead of a single key, improving the user experience (1c47a8) (Eric Ma)
+
A new 'stream' parameter has been added to the QueryBot class in querybot.py, allowing users to choose whether to stream the chatbot or not. By default, 'stream' is set to True (01ada0) (Eric Ma)
+
+
Bug Fixes
+
+
A print statement used for debugging purposes has been removed from the doc_processor.py file (796ac2) (Eric Ma)
This new version introduces several enhancements to the Zotero integration in the llamabot project, improving performance, user interaction, and error handling. It also includes important bug fixes and documentation updates.
+
New Features
+
+
Added a sync option to the ZoteroLibrary class, improving performance by reducing unnecessary queries to Zotero when the library can be loaded from a local file (a3ea1b) (Eric Ma)
+
Integrated the standalone sync command from zotero.py into the chat command and refactored ZoteroLibrary and ZoteroItem classes to handle synchronization and downloading of Zotero items (a75308) (Eric Ma)
+
Updated the guidelines for writing commit messages in the git.py file (a98ba93) (Eric Ma)
+
Added support for accessing nested keys in the ZoteroItem class (216abc) (Eric Ma)
+
Improved task progress visibility and command help in the Zotero integration (895079) (Eric Ma)
+
Enhanced the chat function in zotero.py with an interactive prompt and an exit command (bf043b) (Eric Ma)
+
Updated file handling in ZoteroItem class, including a fallback to write an abstract.txt file when no PDF is available (8b9fa4) (Eric Ma)
+
Simplified progress task handling and improved output formatting in the Zotero integration (26dc67) (Eric Ma)
+
Improved user interaction and error handling in Zotero integration, including persistent progress display, better progress tracking, real-time streaming, and continuous interaction (347a08) (Eric Ma)
+
Ensured that the get_key function in zotero.py strictly returns JSON format (34b82d) (Eric Ma)
+
Enhanced Zotero library and item classes, including faster lookup, better PDF handling, and improved functionality and usability (a813c5) (Eric Ma)
+
+
Bug Fixes
+
+
Corrected file writing in ZoteroItem class, ensuring that the abstractNote data is correctly written to the file (42e6a5) (Eric Ma)
+
Fixed a typo in the file open method in the ZoteroItem class that was causing a runtime error (0a20e9) (Eric Ma)
This new version includes an important bug fix and updates to the tutorial content for the Llamabot Zotero CLI.
+
New Features
+
+
The tutorial content for the Llamabot Zotero CLI has been updated to provide a more accurate and user-friendly guide. Changes include rewording the introduction, updating the prerequisites section, removing the section on syncing Zotero items, and adding sections on various topics such as chatting with a paper, retrieving keys, downloading papers, and asking questions (fab7d3) (Eric Ma)
+
+
Bug Fixes
+
+
The field declaration for 'zot' in ZoteroLibrary class has been changed to use default_factory instead of default. This ensures that the load_zotero function is called when a new instance of ZoteroLibrary is created, rather than at import time (c65618) (Eric Ma)
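The `default` vs. `default_factory` distinction can be shown with plain dataclasses (the stand-in `load_zotero` below is hypothetical; the real one builds a Zotero client):

```python
from dataclasses import dataclass, field


def load_zotero() -> dict:
    """Stand-in for the real client constructor, which reads credentials."""
    return {"connected": True}


@dataclass
class ZoteroLibrary:
    # default_factory defers load_zotero() until an instance is created;
    # writing `zot: dict = load_zotero()` would execute it once, at import time.
    zot: dict = field(default_factory=load_zotero)
```

With a factory, importing the module never triggers the side-effecting call, and each instance gets its own fresh value.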
This new version introduces significant improvements to the chat recording and saving mechanism of the Llamabot. It also includes a minor refactor in the Zotero module.
+
New Features
+
+
Added chat recording and saving functionality. This adds case-converter to the project dependencies, imports date and snakecase from datetime and caseconverter respectively, introduces a PromptRecorder to record the chat, modifies the chat function to save the recorded chat under a snakecase filename prefixed with the current date, and adds a save method to PromptRecorder that writes the recorded chat to a specified path (22738e) (Eric Ma)
+
Improved the chat recording and saving mechanism. Creation of the save path was moved to the beginning of the chat function; the save path now includes the date and the snakecased user choice, and is printed to the console when the user exits the chat. The save function now coerces its path argument to a pathlib.Path object and is called with a path rather than a string, for flexibility and ease of use (c44562) (Eric Ma)
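The naming scheme can be sketched as follows. A small regex stands in for caseconverter's snakecase, and the helper name is illustrative:

```python
import re
from datetime import date
from pathlib import Path


def chat_save_path(title: str, base_dir: str = ".") -> Path:
    """Build a save path like 2023-07-04_my_paper_title.md."""
    # Collapse any run of non-alphanumeric characters into a single underscore.
    snake = re.sub(r"[^a-z0-9]+", "_", title.lower()).strip("_")
    return Path(base_dir) / f"{date.today().isoformat()}_{snake}.md"
```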
+
+
Bug Fixes
+
+
No bug fixes in this release.
+
+
Deprecations
+
+
Removed the temperature parameter from the QueryBot instantiation in the chat function of the Zotero module. This simplifies the QueryBot configuration and does not affect the functionality of the bot (663594) (Eric Ma)
This new version introduces a chat command to LlamaBot CLI, adds a logging option to the ChatBot class, and updates the documentation with new usage examples and a CLI demos section.
+
New Features
+
+
Added a chat command to LlamaBot CLI. This new command allows users to interact with the ChatBot and includes an option to save the chat to a markdown file. The filename for the saved chat is generated based on the current date and time. The chat command will exit if the user types "exit" or "quit". (baa4d64) (Eric Ma)
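The timestamped-filename and exit handling described above might look like this; the exact timestamp format is a guess:

```python
from datetime import datetime
from typing import Optional


def chat_filename(now: Optional[datetime] = None) -> str:
    """Timestamped markdown filename for a saved chat."""
    now = now or datetime.now()
    return now.strftime("%Y-%m-%d_%H-%M-%S") + "_chat.md"


def should_exit(user_input: str) -> bool:
    """The chat loop ends when the user types 'exit' or 'quit'."""
    return user_input.strip().lower() in {"exit", "quit"}
```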
+
Added a logging option to the ChatBot class. This new parameter is a boolean that determines whether to log the chat history and token budget. This feature provides more flexibility for users who want to monitor the chat history and token budget during the bot operation. (6550cf3) (Eric Ma)
+
Updated the documentation's index file with new usage examples. These include a new example of exposing a chatbot directly at the command line using llamabot chat, an updated description and command for using llamabot as part of the backend of a CLI app to chat with Zotero library, and a new example of using llamabot's SimpleBot to create a bot that automatically writes commit messages. (274a779) (Eric Ma)
+
Introduced a new section in the documentation, specifically in the index.md file. The section is titled "CLI Demos" and provides examples of what can be built with Llamabot and some supporting code. It also includes an embedded asciicast for a more interactive demonstration. (ce7e734) (Eric Ma)
+
Added an asciicast script to the documentation index file. This will provide users with a visual guide or tutorial. (e332f0a) (Eric Ma)
This new version brings a number of improvements to the user interface, streamlines the handling of user prompts and Zotero library, and introduces new features such as a document chat bot functionality. It also includes several bug fixes and refactoring of the code for better performance and readability.
+
New Features
+
+
Document chat bot functionality has been added. This feature allows users to chat with a document by providing a path to the document (005a10) (Eric Ma)
+
The 'textual' package has been added to the dependencies, enhancing the functionality of the codebase (9b53aa) (Eric Ma)
+
A new Jupyter notebook, patreon_ghostwriter.ipynb, has been introduced in the scratch_notebooks directory. The notebook includes code for a bot that can generate Patreon posts based on provided talking points (849497) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed a bug in ZoteroLibrary where items were not being loaded from JSONL file (7e9ea4) (Eric Ma)
+
+
Refactors
+
+
User prompts have been streamlined for consistency across modules, and Zotero library handling has been improved (7e9ea4) (Eric Ma)
+
CLI prompts and exit handling have been streamlined (3c4cc3) (Eric Ma)
+
Instructions for writing commit messages in git.py have been improved for clarity and user-friendliness (942005) (Eric Ma)
+
A function has been renamed to ensure_work_email_on_calendly_events to make it more generic (841c78) (Eric Ma)
+
+
Environment and Dependencies
+
+
Python version has been updated from 3.9 to 3.11, and pre-commit has been removed from dependencies (8f880f) (Eric Ma)
+
Python version has been downgraded from 3.11 to 3.9 to ensure compatibility with existing libraries, and version constraint on bokeh has been removed to use the latest version (0e8bff) (Eric Ma)
This new version introduces several enhancements to the QueryBot class, adds a language inference function to the embed_repo.ipynb notebook, and provides a command line interface for interacting with a code repository. It also includes progress bars for file hashing and document splitting processes, an option to ignore directories when displaying the directory tree, and support for multiple documents for indexing. Lastly, a comprehensive tutorial on how to install, configure, and use LlamaBot is added.
+
New Features
+
+
Added caching option and improved document handling in QueryBot. This includes changes to the make_or_load_index function, exit_if_asked function, ZOTERO_JSON_DIR, ZoteroLibrary class, and magic_load_doc function. Also, updates were made to the zotero.ipynb notebook to reflect these changes (579f162) (Eric Ma)
+
Added language inference function and updated execution counts in embed_repo.ipynb notebook. This enhances the functionality of the notebook by allowing it to infer the programming languages used in a repository and providing a more detailed view of the repository's structure (b795e72) (Eric Ma)
+
Added CLI for interacting with code repository. This is part of ongoing efforts to improve the usability of the LlamaBot project (042ae26) (Eric Ma)
+
Added progress bars to file hashing and document splitting in the QueryBot module. This provides a visual indication of progress when processing large numbers of documents, improving user experience (4634185) (Eric Ma)
+
Added directory ignore option to show_directory_tree. This allows specifying a list of directory names to ignore when displaying the directory tree (271ccde) (Eric Ma)
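An ignore option for a directory-tree walk can be sketched like this; the function collects lines rather than printing, and its signature is illustrative, not the actual show_directory_tree API:

```python
from pathlib import Path


def show_directory_tree(root, ignore=("__pycache__", ".git"), indent=0, lines=None):
    """Collect an indented tree listing, skipping ignored directory names."""
    lines = [] if lines is None else lines
    for entry in sorted(Path(root).iterdir()):
        if entry.is_dir():
            if entry.name in ignore:
                continue  # prune the whole subtree
            lines.append("  " * indent + entry.name + "/")
            show_directory_tree(entry, ignore, indent + 1, lines)
        else:
            lines.append("  " * indent + entry.name)
    return lines
```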
+
Added support for multiple documents for indexing in QueryBot. This includes changes to the doc_paths parameter and the make_or_load_index function (c813522) (Eric Ma)
+
Added LlamaBot tutorial documentation. This provides a comprehensive tutorial on how to install, configure, and use LlamaBot (9e25fb5) (Eric Ma)
+
+
Bug Fixes
+
+
No bug fixes in this release.
+
+
Deprecations
+
+
The change in how Zotero library data is stored and handled may break existing code that relies on the old JSONL format (579f162) (Eric Ma)
This new version includes an important bug fix that improves the compatibility of the ZoteroLibrary with other components. The output format of the ZoteroLibrary has been changed from JSONL to JSON.
+
New Features
+
+
None in this release
+
+
Bug Fixes
+
+
Changed the output format of the ZoteroLibrary from JSONL to JSON for better compatibility with other components (9b7757) (Eric Ma)
This new version introduces a significant refactor of the retriever initialization and cache handling in the Llamabot application. It also includes minor changes in the Zotero chat function and the zotero notebook.
+
New Features
+
+
Refactored the retriever initialization and cache handling in the Llamabot application. The direct import and usage of VectorIndexRetriever was removed from querybot.py, a method was added to get the retriever from the index, and CACHE_DIR was defined as a constant in querybot.py and __init__.py. The get_persist_dir function was refactored to use the CACHE_DIR constant, and a clear_cache command was added in __init__.py to clear the Llamabot cache. The default value of the sync option in the zotero.py chat function was changed, and the doc_paths argument in the retrieverbot initialization in zotero.py was updated. Directory creation in zotero.ipynb was commented out, and code was added to list JSON files in the ZOTERO_JSON_DIR. (49645b) (Eric Ma)
This new version introduces caching to improve performance and enhances the paper selection process for a more interactive experience.
+
New Features
+
+
Enabled use of cache in the chat function to improve performance (013dae) (Eric Ma)
+
Enhanced the paper selection process to handle single paper scenario and provide a more interactive selection process for multiple papers (013dae) (Eric Ma)
This new version includes a significant refactor of the querybot's faux chat history construction for improved clarity and functionality.
+
New Features
+
+
The faux chat history construction in querybot has been updated for better clarity and functionality. The VectorIndexRetriever has been replaced with the index.as_retriever method, a system message has been added to the faux chat history, the last four responses from the chat history are now included in the faux chat history, and the order of faux chat history construction has been adjusted for better clarity (47a35d) (Eric Ma)
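The assembly order described above (system message, last four prior responses, retrieved context, then the query) can be sketched as follows; the message shape and role names are assumptions:

```python
def build_faux_history(system_message, chat_history, context, query):
    """Assemble a prompt: system message, last four history items,
    retrieved context, then the new query."""
    messages = [{"role": "system", "content": system_message}]
    messages += chat_history[-4:]  # keep only the most recent exchanges
    messages.append({"role": "system", "content": f"Relevant context:\n{context}"})
    messages.append({"role": "user", "content": query})
    return messages
```

Capping the history keeps the prompt within the token budget while preserving recent conversational state.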
This new version introduces a blog assistant functionality to llamabot and specifies a minimum Python version in the pyproject.toml file.
+
New Features
+
+
Blog assistant functionality has been added to llamabot. This new feature can summarize and tag a blog post. It includes the addition of a new 'blog' module, a new 'blog' command to the CLI, and the creation of several new files in the CLI and prompt_library directories. This enhancement provides users with a tool to automatically summarize and tag their blog posts. (265962) (Eric Ma)
+
The pyproject.toml file now requires a minimum Python version of 3.10. This change ensures compatibility with the latest features and security updates. (a664df) (Eric Ma)
This new version focuses on improving the configuration process of LlamaBot. It introduces a new feature that fetches the default language model from the configuration file. The LlamaBot tutorial has been updated to provide detailed instructions on how to set up the OpenAI API key and select the default model. Additionally, the configuration command has been moved to a separate module for better code organization.
+
New Features
+
+
The LlamaBot tutorial now focuses on the configuration process, providing detailed instructions on how to set up the OpenAI API key and select the default model. The sections on installation, version checking, and chatting with LlamaBot have been removed. (87dfef) (Eric Ma)
+
Introduced a new feature where the default language model is now fetched from the configuration file. This change affects the ChatBot, QueryBot, and SimpleBot classes where the model_name parameter in their constructors now defaults to the value returned by the default_language_model function from the config module. (d531cb) (Eric Ma)
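A config-driven default might be read like this; the file layout, key name, and fallback model are all assumptions, not llamabot's actual configuration format:

```python
import json
from pathlib import Path


def default_language_model(config_path) -> str:
    """Read the default model from a JSON config, with a hard-coded fallback."""
    try:
        return json.loads(Path(config_path).read_text())["default_model"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return "gpt-3.5-turbo"  # fallback model name is an assumption
```

Bot constructors can then call this function in their `model_name` default instead of hard-coding a model string.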
+
The configuration command has been moved from the main init.py file to a new configure.py module. This change improves the organization of the code and makes it easier to maintain. A new command for setting the default model has been added to the configure module. (2bffdaf) (Eric Ma)
This new version introduces several enhancements to the blog assistant CLI and blogging prompts, adds token budgeting for different models in the chatbot, and updates blogging and Patreon notebooks. A new notebook for semantic line breaks has also been added.
+
New Features
+
+
Blogging and Patreon notebooks have been updated with new code cells and existing ones have been improved. A new notebook, sembr.ipynb, has been added with code for semantic line breaks. These changes improve the functionality and expand the capabilities of the notebooks (a34a02) (Eric Ma)
+
Token budgeting for different models has been added to the chatbot. This feature allows for more flexible token budgeting depending on the model used (cc7ab8) (Eric Ma)
+
Several enhancements have been made to the blog assistant CLI and blogging prompts. The summarize_and_tag function has been renamed to summarize and now also returns the blog title. A new social_media function has been added to generate social media posts for LinkedIn, Patreon, and Twitter. The blog_tagger_and_summarizer prompt has been renamed to blog_title_tags_summary and now also returns the blog title. New prompts compose_linkedin_post, compose_patreon_post, and compose_twitter_post have been added to generate social media posts. A new BlogInformation model has been added to represent blog information (453e5d) (Eric Ma)
This new version introduces the Semantic Line Breaks (SEMBR) functionality to the blog summary and a new command. It enhances the readability and maintainability of the blog posts by applying a consistent line break strategy.
+
New Features
+
+
Added SEMBR functionality to blog summary in the summarize function (faa08e) (Eric Ma)
+
Introduced a new command sembr that allows users to apply SEMBR to their blog posts (faa08e) (Eric Ma)
+
Implemented SEMBR functionality using a new sembr_bot in the prompt_library (faa08e) (Eric Ma)
+
Created a new file prompt_library/sembr.py to handle the SEMBR prompts and bot creation (faa08e) (Eric Ma)
This new version introduces enhancements to the social media post generation, updates to the testing matrix for Python versions, and a new GitHub workflow for daily testing of PyPI packages.
+
New Features
+
+
Enhanced social media post generation. The update refactors the social media content generation to handle different platforms more effectively, adds JSON schema to standardize the return format, improves the handling of Patreon posts, and copies the post text to the clipboard for platforms other than Patreon. (07f90e) (Eric Ma)
+
Introduced a new GitHub workflow for daily testing of PyPI packages. The workflow runs on the main branch and uses a matrix strategy to test on Python versions 3.9, 3.10, and 3.11. (fce17c) (Eric Ma)
+
+
Bug Fixes
+
+
Updated the python versions used in the test-pypi-package workflow. The versions have been updated from 3.10 to 3.10.12 and from 3.11 to 3.11.4. This ensures that the package is tested against the latest patch versions of Python. (e9ec8d) (Eric Ma)
+
+
Deprecations
+
+
Removed Python version 3.12 from the testing matrix in the GitHub Actions workflow for testing the PyPI package. This change is made to focus on the more stable and widely used versions of Python. (b90b8c) (Eric Ma)
+
Updated the python versions used in the testing matrix of the test-pypi-package workflow. The version 3.9 has been removed and version 3.12 has been added. This ensures our package remains compatible with the latest python versions. (70e4dc) (Eric Ma)
This new version introduces several enhancements to the LLaMaBot project, including the addition of a 'prompts' section to the pyproject.toml file, improved error handling for missing packages, a new Jupyter notebook for LLaMaBot demo, and updates to the Google Calendar integration. The version also includes several code refactoring and documentation updates for better readability and maintainability.
+
New Features
+
+
Added a 'prompts' section to the pyproject.toml file (82d9e8) (Eric Ma)
+
Introduced error handling for the import of the outlines package in various modules of the llamabot prompt library (a569ca) (Eric Ma)
+
Added a new Jupyter notebook demonstrating the usage of LLaMaBot (fdd17f) (Eric Ma)
+
Updated Google Calendar integration with new features and improvements (170271) (Eric Ma)
+
Added a tutorial for the Blog Assistant CLI in the documentation (620da7) (Eric Ma)
+
+
Bug Fixes
+
+
Pinned the version of mkdocs to 1.4.3 in the environment.yml file to ensure consistent documentation builds across different environments (ee7e7e) (Eric Ma)
+
+
Deprecations
+
+
Removed the outlines package from the project dependencies in pyproject.toml file (25bccb) (Eric Ma)
+
Removed all the files related to Google API, which are superseded by the gcsa package (eb0c50) (Eric Ma)
+
+
Other Improvements
+
+
Improved type annotations and code organization in the llamabot module (a1b391) (Eric Ma)
+
Updated cron schedule in test-pypi-package workflow for better server load distribution (efd390) (Eric Ma)
+
Added explanation for stateless function in the documentation (9a6b4e) (Eric Ma)
+
Improved readability of the documentation by applying semantic line breaks and changing code block to text block (604277, 3583ba) (Eric Ma)
This new version introduces extended installation options for the llamabot package and adds two new Jupyter notebooks to the project. The installation now includes all optional dependencies, ensuring full feature availability during testing. The new notebooks provide code for language model configuration and OpenAI API setup.
+
New Features
+
+
Extended llamabot installation to include all optional dependencies, improving the thoroughness of the testing process and ensuring all package features are working as expected (e6e1e3) (Eric Ma)
+
Added two new Jupyter notebooks: multiscale_embeddings.ipynb and outlines_backend_prototype.ipynb. The first notebook provides code for loading and configuring language models, creating and loading indices, retrieving and scoring nodes, and building queries. The second notebook provides code for setting up the OpenAI API and generating completions (044439) (Eric Ma)
This new version includes several enhancements and updates to improve the functionality and consistency of the LlamaBot.
+
New Features
+
+
Added 'llama-index' to the list of dependencies to enhance the functionality of the bot (196cdc) (Eric Ma)
+
Updated the __call__ method of QueryBot for better performance and efficiency (b1840c) (Eric Ma)
+
+
Bug Fixes
+
+
Replaced a complex version with a simplified one to fix performance issues (f17655) (Eric Ma)
+
Ensured the return of strings is consistent across all functions to fix inconsistency issues (88d62a) (Eric Ma)
+
+
Deprecations
+
+
Changed the default argument of return_sources to True. This might affect the behavior of functions that rely on the previous default value (a03db6) (Eric Ma)
This new version introduces a more streamlined and reliable process for releasing Python packages, with several enhancements to the GitHub Actions workflows. It also includes a new feature for similarity search in the QueryBot class and some minor bug fixes.
+
New Features
+
+
Added a project description and linked the README.md file to the project configuration (92002ba) (Eric Ma)
+
Updated the pypi-publish action used in the GitHub Actions workflow for releasing the Python package to ensure stability and reliability of the release process (b8ecf9f) (Eric Ma)
+
Separated the installation of the 'build' and 'wheel' packages in the GitHub Actions workflow for releasing a Python package to make the installation steps more explicit and easier to understand (005280e) (Eric Ma)
+
Added the 'build' package to the python setup step in the GitHub Actions workflow for releasing a python package (62af643) (Eric Ma)
+
Simplified the python package build process in the GitHub workflow to use the build module instead of setup.py (321e282) (Eric Ma)
+
Set the default release type to 'patch' in the release-python-package workflow to prevent accidental major or minor releases (b339f88) (Eric Ma)
+
Added a new step in the GitHub Actions workflow for releasing the Python package that configures the Git user name and email (f8f6ab4) (Eric Ma)
+
Changed the GitHub workflow from running tests on different Python versions to publishing the Python package to PyPI (628b91f) (Eric Ma)
+
Introduced a new GitHub workflow for releasing Python packages that includes steps for running tests, bumping version numbers, building and publishing the package, and creating a release in the GitHub repository (2f28ab7) (Eric Ma)
+
Added a new method 'retrieve' in the QueryBot class for retrieving source nodes associated with a query using similarity search (a08d0f0) (Eric Ma)
+
Added the ability to manually trigger the test-pypi-package workflow from the GitHub Actions UI (7611052) (Eric Ma)
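The `retrieve` method above scores stored nodes against a query by embedding similarity. As a rough illustration of the idea (a toy sketch with plain cosine scoring, not QueryBot's actual implementation; the function names and data shapes are assumptions), the core loop looks like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, node_vecs, top_k=2):
    """Return the names of the top_k nodes most similar to the query."""
    scored = sorted(
        node_vecs.items(),
        key=lambda kv: cosine_similarity(query_vec, kv[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

print(retrieve([1.0, 0.0], {"intro": [0.9, 0.1], "appendix": [0.1, 0.9]}, top_k=1))
# → ['intro']
```

In the real bot the vectors come from an embedding model and the nodes carry document text, but the ranking step has this shape.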
+
+
Bug Fixes
+
+
Disabled the deadline for the ghostwriter test in the Python prompt library to prevent Hypothesis from failing the test due to it taking too long to run (b960ced) (Eric Ma)
This new version includes several updates to the GitHub Actions workflow for releasing the Python package. The git configuration has been updated for better readability and specific use by the GitHub Actions user. The secret used for the user password in the release workflow has been changed for correct deployment. The git configuration now includes the credential helper and GitHub token for authentication when pushing changes. The versions of actions/checkout and actions/setup-python have been upgraded for better performance and security.
+
New Features
+
+
Added credential helper and GitHub token to git configuration for authentication when pushing changes (5ed538) (Eric Ma)
+
Upgraded actions/checkout from v2 to v3 and actions/setup-python from v2 to v3 for better performance and security (8af512) (Eric Ma)
+
+
Bug Fixes
+
+
Changed the secret used for the user password in the GitHub Actions release workflow for correct deployment (f96f6d) (Eric Ma)
+
+
Chores
+
+
Updated git configuration and push command in the GitHub Actions workflow for better readability (250e87) (Eric Ma)
+
+
Please note that the publishing of the package was temporarily commented out in this version (ec6cb5) (Eric Ma).
This new version includes several enhancements to the Zotero module, improvements to the QueryBot, and updates to the pre-commit hooks. It also introduces a new Jupyter notebook for outlines models and enables package publishing to PyPI.
+
New Features
+
+
Added code to retrieve the title of a specific article from the Zotero library using the article's unique identifier (5921df) (Eric Ma)
+
Added support for default similarity top ks in QueryBot based on the OPENAI_DEFAULT_MODEL environment variable (ae392f) (Eric Ma)
+
Enhanced the ZoteroLibrary class by adding an articles_only filter and a key_title_map function (85a223) (Eric Ma)
+
Improved the get_key function documentation in the Zotero module (89b6bc) (Eric Ma)
+
Streamlined the paper selection process in the Zotero CLI by introducing a new PaperTitleCompleter for more efficient paper selection (1122e6) (Eric Ma)
+
Improved handling of similarity_top_k in QueryBot and refactored index creation (acc6e8) (Eric Ma)
+
Added 'sh' dependency to environment.yml and pyproject.toml files (5e23f9) (Eric Ma)
+
Added execution of pre-commit hooks before committing changes (82979d) (Eric Ma)
+
Added a new class, PaperTitleCompleter, to provide completion suggestions for paper titles in the Zotero module (3fac26) (Eric Ma)
+
Updated pre-commit config and notebooks (b077aa) (Eric Ma)
+
Extended the ruff pre-commit hook to also check python and jupyter files (4ae772) (Eric Ma)
+
Added nltk as a transitive dependency via llama_index in the environment.yml file (2bd392) (Eric Ma)
+
Introduced a new pre-commit hook, ruff, to the .pre-commit-config.yaml file (c7c5bc) (Eric Ma)
+
Enabled package publishing to PyPI (baca5c) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed assertion in test_magic_load_doc_txt function (ef4b3e) (Eric Ma)
+
+
Refactors
+
+
Simplified the docstring in the doc_processor module and modified the document loading (fab218) (Eric Ma)
+
Replaced 'index' with 'vector_index' in QueryBot class and refactored related methods (cfb284) (Eric Ma)
This new version includes an update to the commitbot feature, which now uses a more cost-effective model for generating commit messages.
+
New Features
+
+
The commitbot has been updated to use the gpt-3.5-turbo-16k-0613 model. This model provides the same quality of commit messages as the previous model but at a fraction of the cost (ce91d6b) (Eric Ma)
This new version includes an update to the pip installation in the test workflow and the addition of a new dependency, beartype==0.15.0.
+
New Features
+
+
Added beartype==0.15.0 to the list of dependencies in pyproject.toml (8c4db1) (Eric Ma)
+
+
Bug Fixes
+
+
Updated pip installation in the test-pypi-package.yaml workflow to use the python -m pip install command instead of pipx to ensure the correct version of pip is used for installing the llamabot[all] package (2e860a) (Eric Ma)
This new version includes several enhancements to the CLI module of LlamaBot. The improvements focus on automating the process of writing commit messages and ensuring consistency. The version also includes codebase improvements such as the removal of unnecessary comments.
+
New Features
+
+
A new command autowrite_commit_message has been added to the git.py file in the llamabot/cli directory. This command automatically generates a commit message based on the diff and writes it to the .git/COMMIT_EDITMSG file. Error handling has also been included in case any exceptions occur during the process. (185613) (Eric Ma)
+
A new command install_commit_message_hook has been added to the Git subcommand for LlamaBot CLI. This command installs a commit message hook that runs the commit message through the bot, automating the process of writing commit messages and ensuring consistency. (d1254e) (Eric Ma)
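A commit-message hook of this kind is just an executable script in `.git/hooks`. The following is a hedged sketch of what an installer might do, with a shell stub standing in for the hook body (the real hook pipes the diff through the bot; the function name, hook body, and layout here are illustrative, not llamabot's actual code):

```python
import stat
from pathlib import Path

HOOK_BODY = """#!/bin/sh
# Illustrative stub: the real hook runs the commit message through the bot.
echo "(draft message generated by llamabot)" >> "$1"
"""

def install_commit_message_hook(repo_root):
    """Write an executable commit-msg hook into .git/hooks (sketch only)."""
    hook = Path(repo_root) / ".git" / "hooks" / "commit-msg"
    hook.parent.mkdir(parents=True, exist_ok=True)
    hook.write_text(HOOK_BODY)
    # Make the hook executable so git will actually run it on commit.
    hook.chmod(hook.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return hook
```

Git passes the path of the message file (`.git/COMMIT_EDITMSG`) as `$1` to a `commit-msg` hook, which is the file the `autowrite_commit_message` command writes to.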
+
+
Bug Fixes
+
+
No bug fixes in this release.
+
+
Deprecations
+
+
Unnecessary comments in git.py have been removed to improve the codebase. (ecf9c0) (Eric Ma)
This new version includes several enhancements to the CLI module and the Llamabot model. It also includes a bug fix for the autowrite_commit_message function.
+
New Features
+
+
Help messages for subcommands have been added to the CLI module. This will provide users with more information on how to use each command. (f4de87) (Eric Ma)
+
The model_chat_token_budgets in Llamabot have been updated. New models have been added to the dictionary and token budgets for existing models have been updated. (52522b) (Eric Ma)
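A per-model token-budget table like `model_chat_token_budgets` is essentially a dictionary keyed by model name. A minimal sketch of that shape (the model names and budget values here are illustrative assumptions, not the actual entries):

```python
# Illustrative per-model token budgets; names and values are assumptions.
model_chat_token_budgets = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
    "mistral": 8192,
}

def token_budget(model_name, default=4096):
    """Look up a model's chat token budget, falling back to a default."""
    return model_chat_token_budgets.get(model_name, default)

print(token_budget("mistral"))        # → 8192
print(token_budget("unknown-model"))  # → 4096
```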
+
+
Bug Fixes
+
+
The autowrite_commit_message function in the CLI module has been fixed. Print statements have been replaced with echo for consistent output and error messages are now written to stderr instead of stdout. (a66ead) (Eric Ma)
+
+
Deprecations
+
+
The unused 'apps' subcommand has been removed from the CLI module. This subcommand was not being used and has been safely removed. (0ea7b3) (Eric Ma)
This new version introduces several enhancements to the release workflow, including the addition of release notes generation and the configuration of the OPENAI_API_KEY. It also includes improvements to the llamabot CLI and the documentation.
+
New Features
+
+
Added fetch-depth parameter to the checkout action in the release-python-package workflow. This allows the action to fetch the entire history of the repository. (c25fe84) (Eric Ma)
+
Upgraded the GitHub Actions checkout step to use version 4 and enabled the fetch-tags option. This ensures that all tags are fetched during the checkout process. (dadcf60) (Eric Ma)
+
Added a new step in the release-python-package workflow to configure the OPENAI_API_KEY using llamabot. This is necessary for the successful generation of release notes. (6c17c10) (Eric Ma)
+
Added OPENAI_API_KEY to environment variables in configure.py. This allows the application to access the OpenAI API key from the environment variables, improving security. (8df3cda) (Eric Ma)
+
Updated the GitHub Actions workflow for releasing a new version of the Python package to include the release notes in the body of the GitHub release. (07150dc) (Eric Ma)
+
Introduced a bot for converting git remote URL to HTTPS URL. This enhances the functionality of the release notes notebook. (85009ad) (Eric Ma)
+
Added release notes generation to the GitHub workflow for releasing the Python package. (3d28e12) (Eric Ma)
+
Introduced a new feature to the llamabot CLI, a command for generating release notes. This automates the process of generating release notes. (df181dd) (Eric Ma)
+
Allowed setting default model by name in the configure.py file of the llamabot CLI. This provides more flexibility in setting the default model. (d223c43) (Eric Ma)
+
Added a new Jupyter notebook 'release-notes.ipynb' in the 'scratch_notebooks' directory. The notebook contains code for generating release notes from git commit logs. (9ab58a5) (Eric Ma)
+
Added the ability to specify the model name via an environment variable. This allows for more flexibility when deploying the bot in different environments. (127b6c9) (Eric Ma)
This new version includes several improvements to the release workflow and bug fixes. The release notes handling has been updated and simplified, and several bugs in the GitHub Actions workflow have been fixed.
+
New Features
+
+
Release notes handling in the GitHub workflow has been updated. The workflow now copies the release notes to a temporary location before creating a release in the GitHub repository. This ensures that the release notes are correctly included in the release (d9ab5b) (Eric Ma)
+
The source of the release notes in the GitHub Actions workflow for releasing a Python package has been changed. Instead of using an environment variable, it now reads from a markdown file in the docs/releases directory. The filename is based on the version number (3958ff) (Eric Ma)
+
+
Bug Fixes
+
+
A bug in the GitHub Actions workflow for releasing a Python package has been fixed. The copy command used to copy the release notes was incorrect and has been fixed (7cda28) (Eric Ma)
+
The file path for the release notes in the release-python-package GitHub workflow has been corrected. The version number now correctly includes a 'v' prefix when reading the markdown file (e03626) (Eric Ma)
+
The path for the release notes in the GitHub Actions workflow has been corrected. The previous path was causing issues in the workflow execution. The path has been updated to correctly point to the release notes file (75978b) (Eric Ma)
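The two path fixes above amount to building the notes filename with a `v` prefix under `docs/releases`. A minimal sketch of that logic (the helper name is illustrative, not a function in the workflow):

```python
def release_notes_path(version):
    """Return the docs/releases markdown path for a version, ensuring a 'v' prefix."""
    tag = version if version.startswith("v") else f"v{version}"
    return f"docs/releases/{tag}.md"

print(release_notes_path("0.0.79"))   # → docs/releases/v0.0.79.md
print(release_notes_path("v0.0.79"))  # → docs/releases/v0.0.79.md
```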
+
+
Deprecations
+
+
The step of copying release notes to a temporary location has been removed and the original file is directly referenced in the release action. This simplifies the workflow and reduces unnecessary operations (eb2aef) (Eric Ma)
This version introduces a new prompt decorator and tests, improves the release workflow, fixes bugs in the GitHub Actions workflow, and removes the dependency on the 'outlines' package.
+
New Features
+
+
A new prompt decorator has been added in the scratch_notebooks directory, enhancing the functionality of functions by adding a prompt feature. Tests have been included to ensure the decorator works as expected with different types of function arguments (d023f22) (Eric Ma).
+
Tests for blogging prompts in the prompt_library directory have been added. These tests validate the output of different blogging prompt functions (d023f22) (Eric Ma).
+
The release notes handling in the GitHub workflow has been updated. The workflow now copies the release notes to a temporary location before creating a release in the GitHub repository (3884962) (Eric Ma).
+
The source of the release notes in the GitHub Actions workflow for releasing a Python package has been changed. It now reads from a markdown file in the docs/releases directory (3884962) (Eric Ma).
+
The file path for the release notes in the release-python-package GitHub workflow has been corrected. The version number now correctly includes a 'v' prefix when reading the markdown file (3884962) (Eric Ma).
+
The path for the release notes in the GitHub Actions workflow has been corrected. The previous path was causing issues in the workflow execution. The path has been updated to correctly point to the release notes file (3884962) (Eric Ma).
+
+
Bug Fixes
+
+
The file path for the release notes in the release-python-package GitHub workflow was incorrect and has been fixed (3884962) (Eric Ma).
+
+
Deprecations
+
+
The step of copying release notes to a temporary location has been removed and the original file is directly referenced in the release action. This simplifies the workflow and reduces unnecessary operations (3884962) (Eric Ma).
+
The 'outlines' package was removed from the dependencies in the environment.yml and pyproject.toml files (af23aae) (Eric Ma).
+
+
Refactors
+
+
The use of the outlines package has been replaced with a custom prompt_manager module across multiple files in the llamabot project. The prompt_manager provides a prompt decorator that turns Python functions into Jinja2-templated prompts, similar to the functionality provided by outlines. This refactor removes the dependency on the outlines package, simplifying the project's dependencies and potentially improving maintainability (dbe78e4) (Eric Ma).
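The idea behind such a `prompt` decorator is to treat the function's docstring as a template and fill it with the call's arguments. A minimal stdlib sketch of the pattern (the real prompt_manager uses Jinja2 templating; this sketch substitutes `str.format` placeholders instead, and the example function is hypothetical):

```python
import inspect
from functools import wraps

def prompt(func):
    """Turn a function into a templated prompt: calling it fills the
    docstring template with the bound call arguments (sketch only)."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = inspect.signature(func).bind(*args, **kwargs)
        bound.apply_defaults()
        return inspect.cleandoc(func.__doc__).format(**bound.arguments)
    return wrapper

@prompt
def blog_title(topic):
    """Suggest a catchy blog title about {topic}."""

print(blog_title("semantic line breaks"))
# → Suggest a catchy blog title about semantic line breaks.
```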
This version includes several improvements to the ChatBot, QueryBot, and SimpleBot classes, including new parameters for additional configuration options and improved code readability. It also simplifies the pip install command used in the release-python-package GitHub workflow and removes unnecessary clutter from the codebase.
+
New Features
+
+
Added streaming and verbose parameters to the ChatBot class initialization method, providing more flexibility in controlling the chat history streaming and verbosity during the bot initialization (a69c0f) (Eric Ma)
+
+
Bug Fixes
+
+
Simplified the pip install command used in the release-python-package GitHub workflow. The previous command attempted to install all optional dependencies, which is not necessary for writing release notes. The new command only installs the package itself (2dffac) (Eric Ma)
+
+
Refactors
+
+
Updated parameter names and descriptions in ChatBot, QueryBot, and SimpleBot for consistency and clarity. Added 'streaming' and 'verbose' parameters to SimpleBot for additional configuration options. Improved code readability by breaking up long lines and comments (6c0b37) (Eric Ma)
+
Removed a large block of commented out code from the prompt_manager.py file, improving readability and reducing clutter in the codebase (7f4b0a) (Eric Ma)
+
+
Other
+
+
Bumped version from 0.0.79 to 0.0.80 (385221) (github-actions)
This new version primarily focuses on improving code readability and maintainability. It also introduces a new feature to handle different numbers of tags in the git log when writing release notes.
+
New Features
+
+
Added conditions to handle different numbers of tags in git log (645a36) (Eric Ma)
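The tag-count conditions boil down to choosing which commit range feeds the notes generator. A hedged sketch of that branching (the function name and range choices are illustrative, not the workflow's exact logic):

```python
def commit_range(tags):
    """Pick a git revision range for release notes based on available tags."""
    if not tags:
        return "HEAD"                      # no releases yet: whole history
    if len(tags) == 1:
        return f"{tags[0]}..HEAD"          # everything since the only tag
    return f"{tags[-2]}..{tags[-1]}"       # commits between the last two tags

print(commit_range([]))                    # → HEAD
print(commit_range(["v0.0.1", "v0.0.2"]))  # → v0.0.1..v0.0.2
```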
+
+
Improvements
+
+
Reformatted code in multiple files for better readability (871316) (Eric Ma)
+
Added newline at the end of the release notes file (871316) (Eric Ma)
+
Improved handling of cases with no tags or only one tag in the git repository (871316) (Eric Ma)
+
Removed unnecessary comments from llamabot/panel_utils.py and tests/cli/test_cli_utils.py (871316) (Eric Ma)
+
Reformatted docstrings for better readability in multiple test files (871316) (Eric Ma)
+
Updated docstrings for test functions to be more descriptive in tests/test_file_finder.py and tests/test_recorder.py (871316) (Eric Ma)
This new version introduces more flexibility and control over the token budget and chunk sizes used in the chatbot. It also includes a new attribute to store the model name used by the bot and a bug fix to ensure multiple document paths are handled correctly.
+
New Features
+
+
Added support for response_tokens and history_tokens parameters in the QueryBot class. These parameters allow the user to specify the number of tokens to use for the response and history in the chatbot. Also, a chunk_sizes parameter has been added to the make_or_load_vector_index function to specify a list of chunk sizes to use for the LlamaIndex TokenTextSplitter (a1de812) (Eric Ma)
+
Introduced a new attribute 'model_name' to both QueryBot and SimpleBot classes. This attribute will be used to store the name of the model used by the bot (d5d684) (Eric Ma)
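Conceptually, the `response_tokens` and `history_tokens` parameters carve a fixed context window into budgets for the reply, the chat history, and the retrieved chunks. A hypothetical sketch of such a split (the names and arithmetic are illustrative, not QueryBot's actual internals):

```python
def split_token_budget(context_window, response_tokens, history_tokens):
    """Partition a context window into response/history/retrieval budgets."""
    retrieval = context_window - response_tokens - history_tokens
    if retrieval <= 0:
        raise ValueError("response + history budgets exceed the context window")
    return {
        "response": response_tokens,
        "history": history_tokens,
        "retrieval": retrieval,
    }

print(split_token_budget(8192, response_tokens=2000, history_tokens=2000))
# → {'response': 2000, 'history': 2000, 'retrieval': 4192}
```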
+
+
Bug Fixes
+
+
Modified the doc_paths parameter in the chat function of the llamabot/cli/doc.py file to receive a list of doc_paths, ensuring that the function can handle multiple document paths correctly (c763327) (Eric Ma)
+
Changed the variable name in the chat function from doc_path to doc_paths for better clarity and consistency (11111e) (Eric Ma)
This new version introduces enhancements to the QueryBot class, adds a notebook for evaluating multiscale embeddings, and updates the funding configuration.
+
New Features
+
+
A new notebook for evaluating multiscale embeddings has been added. This notebook, "zotero_multiscale.ipynb", provides an in-depth look at the effectiveness of multiscale embeddings compared to single-scale embeddings in LlamaBot's QueryBot class. It includes an explanation of multiscale embeddings, the motivation behind using them, and the implementation details. It also includes code to load a document from a Zotero library, create instances of QueryBot with different chunk sizes, and test their performance on different prompts. (24f9b6) (Eric Ma)
+
The default chunk_sizes parameter in the QueryBot class has been updated to [2000]. This change ensures that the LlamaIndex TokenTextSplitter uses a chunk size of 2000 tokens by default. (f9d7f6) (Eric Ma)
+
The GitHub funding platform in FUNDING.yml has been updated to use an array instead of a single string to support multiple contributors. (da221f) (Eric Ma)
+
A new funding configuration file has been added to the project. This file includes supported funding model platforms such as GitHub and Patreon. (68c974) (Eric Ma)
This version introduces several enhancements and refactors to the Llamabot project. The changes include improvements to the codebase's flexibility and maintainability, updates to the documentation, and the addition of new features.
+
New Features
+
+
Added a new parameter model_name to the chat function in zotero.py, allowing users to specify the language model to use. (c03a13f) (Eric Ma)
+
Introduced a new Jupyter notebook 'ollama.ipynb' demonstrating the implementation of a simple chatbot named 'ollama' using the 'llamabot' library. (c4919b2) (Eric Ma)
+
Added a new .vscode/extensions.json file with a list of recommended extensions for Visual Studio Code. (964bafa) (Eric Ma)
+
Added a new file model_dispatcher.py in the llamabot/bot directory, which contains a function create_model that dispatches and creates the right model based on the model name. (3dee9ea) (Eric Ma)
+
Updated simplebot.py to use the create_model function from model_dispatcher.py instead of directly creating the model. (3dee9ea) (Eric Ma)
+
Added a prompt to the default_model function in configure.py that informs the user to run llamabot configure default-model to set the default model. (b7a50e5) (Eric Ma)
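The dispatch in `create_model` is, in essence, a name-based switch between backends. A toy sketch of that shape (the backend classes and keyword list are stand-ins, not llamabot's real ones):

```python
class OpenAIModel:
    """Stand-in for an OpenAI-backed model wrapper."""
    def __init__(self, name):
        self.name = name

class OllamaModel:
    """Stand-in for an Ollama-backed model wrapper."""
    def __init__(self, name):
        self.name = name

OLLAMA_KEYWORDS = {"llama2", "mistral", "codellama"}  # illustrative subset

def create_model(model_name):
    """Dispatch to a backend based on the model name (sketch only)."""
    base = model_name.split(":")[0]  # e.g. "llama2:13b" -> "llama2"
    if base in OLLAMA_KEYWORDS:
        return OllamaModel(model_name)
    return OpenAIModel(model_name)
```

Centralizing this choice in one function is what lets SimpleBot, ChatBot, and QueryBot all share the same model-selection logic.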
+
+
Refactors
+
+
Replaced the hardcoded model name "codellama" with the default language model from the config file in simplebot.py. (bfb47a2) (Eric Ma)
+
Moved model token constants to a new file model_tokens.py for better organization and maintainability. (f2a1f46) (Eric Ma)
+
Refactored QueryBot class in querybot.py to use create_model function from model_dispatcher.py for model creation. (f2a1f46) (Eric Ma)
+
Simplified model creation and token budget calculation in chatbot.py. (491ab6f) (Eric Ma)
+
Removed an unnecessary echo message that was instructing the user to set the default model in the default_model function of configure.py. (d3c3751) (Eric Ma)
+
+
Documentation
+
+
Added instructions on how to specify a model when using the chat command in zotero.md. (9b07f17) (Eric Ma)
+
Introduced a new tutorial file ollama.md providing a comprehensive guide on how to run a chatbot using llamabot and Ollama. (9b07f17) (Eric Ma)
+
Added a newline at the end of the release notes for versions v0.0.82, v0.0.83, and v0.0.84. (0001e76) (Eric Ma)
This version includes several enhancements and updates to the codebase, including the addition of new tutorials, refactoring of the code, and updates to the Python version used in the GitHub Actions workflow.
+
New Features
+
+
Added a tutorial for building a QueryBot chat interface with file upload functionality. This tutorial guides users on how to build a chat interface using the QueryBot and Panel libraries. (4b5799a) (Eric Ma)
+
Introduced a new tutorial in the documentation that guides users on how to create a simple chat interface using the SimpleBot class from the llamabot library and the Panel library. (efaef316) (Eric Ma)
+
Introduced a new Jupyter notebook 'panel-chat.ipynb' in the 'scratch_notebooks' directory. The notebook includes code for setting up a chat interface using the Panel library, and integrating it with a chatbot for interactive responses. (ba5d8009) (Eric Ma)
+
Introduced a new Jupyter notebook 'zotero-panel.ipynb' in the 'scratch_notebooks' directory. The notebook contains code for creating a Zotero panel with interactive widgets for configuring Zotero API key, library ID, and library type. (8f477ec6) (Eric Ma)
+
Introduced a new instance of SimpleBot named 'feynman' to the ollama notebook. The bot is tasked with explaining complex concepts, specifically in this case, the challenge of enzyme function annotation and the introduction of a machine learning algorithm named CLEAN. (7f844dca) (Eric Ma)
+
Added ".html": "UnstructuredReader" to EXTENSION_LOADER_MAPPING in doc_processor.py to enable processing of .html files. (45d6485c) (Eric Ma)
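`EXTENSION_LOADER_MAPPING` is a plain extension-to-loader lookup. A sketch of how such a mapping gets used (entries other than the `.html` one named above are illustrative):

```python
from pathlib import Path

EXTENSION_LOADER_MAPPING = {
    ".pdf": "PDFReader",            # illustrative entry
    ".html": "UnstructuredReader",  # the entry added in this release
}

def loader_for(path, default="PlainTextReader"):
    """Pick a document loader name based on the file extension (sketch)."""
    return EXTENSION_LOADER_MAPPING.get(Path(path).suffix.lower(), default)

print(loader_for("notes.html"))  # → UnstructuredReader
```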
+
+
Bug Fixes
+
+
Updated the python version used in the GitHub workflow for code style checks to 3.11. (d10e7e18) (Eric Ma)
+
+
Refactors
+
+
Removed unused imports from querybot.py and updated make_or_load_vector_index function to take service_context as a parameter instead of creating it within the function. (935e3dad) (Eric Ma)
+
Removed the unused @validate_call decorator from the call method in querybot.py. (3f7e8c0b) (Eric Ma)
+
+
Documentation
+
+
Added instructions to the documentation on how to use local Ollama models with LlamaBot. It includes a Python code snippet demonstrating how to specify the model_name keyword argument when creating a SimpleBot instance. (57f12809) (Eric Ma)
+
Updated the documentation for LlamaBot. It introduces two options for getting access to language models: using local models with Ollama or using the OpenAI API. (fc42049c) (Eric Ma)
+
+
Chores
+
+
Updated the versions of pre-commit hooks for pre-commit-hooks, black, and ruff-pre-commit. It also replaces the darglint hook with pydoclint for better documentation linting. (9cc49022) (Eric Ma)
This new version introduces several enhancements and features to improve the flexibility and maintainability of the code. The major highlight of this release is the dynamic scraping of Ollama model names, which allows the code to adapt to changes in the Ollama model library. Additionally, the codebase has been updated to Python 3.10, and new models have been added to the llama_model_keywords list.
+
New Features
+
+
Dynamically scrape Ollama model names from the Ollama website. If the website cannot be reached, a static list of model names is used as a fallback. The function is cached using lru_cache to improve performance. (1f7e27) (Eric Ma)
+
Added a function to automatically update the list of Ollama models. A new Python script has been added to the hooks in the pre-commit configuration file. This script scrapes the Ollama AI library webpage to get the latest model names and writes them to a text file. (f22007) (Eric Ma)
+
Added the content.code.copy feature to the theme configuration in mkdocs.yaml. This feature allows users to easily copy code snippets from the documentation. (594d16) (Eric Ma)
+
Added beautifulsoup4, lxml, and requests to the environment.yml file. These packages are necessary for the automatic scraping of ollama models. (2737a9) (Eric Ma)
+
+
Bug Fixes
+
+
The method ollama_model_keywords() in model_dispatcher.py has been refactored. The dynamic scraping of model names from the Ollama website has been removed. Instead, the model names are now read from a static text file distributed with the package. This change simplifies the code and removes the dependency on the BeautifulSoup and requests libraries. (73d25) (Eric Ma)
+
+
Deprecations
+
+
The 'Commit release notes' step has been separated from the 'Write release notes' step in the release-python-package workflow. The 'pre-commit' package installation has been moved to the 'Commit release notes' step. (4613a) (Eric Ma)
+
+
Other Changes
+
+
The target Python version in the Black configuration has been updated from Python 3.9 to Python 3.10. (cfadb3) (Eric Ma)
+
Some of the existing models have been reordered and new ones have been added to the llama_model_keywords list in the model_dispatcher module. (22ade) (Eric Ma)
+
A newline has been added at the end of the v0.0.86 release notes file. This change is in line with the standard file formatting conventions. (c22810) (Eric Ma)
This new version brings updates to the ollama model names and sorting method, updates to dependencies, and a temporary fix to the openai version. It also includes enhancements to the model name handling in llamabot.
+
New Features
+
+
Updated ollama model names and implemented a new sorting method. The models are now sorted by newest. (a19004) (Eric Ma)
+
Enhanced model name handling in llamabot. The model names in ollama_model_names.txt have been reordered for better organization, and additional code cells have been added to ollama.ipynb for testing and demonstrating the use of PromptRecorder and SimpleBot. (57389f) (Eric Ma)
+
+
Bug Fixes
+
+
Temporarily limited the version of openai dependency to <=0.28.1 in pyproject.toml. This is due to an issue with OpenAI's update breaking a lot of LangChain. (1d881a) (Eric Ma)
+
+
Dependency Updates
+
+
Updated langchain and llama_index dependencies in pyproject.toml. The langchain version has been set to 0.0.330 and llama_index version set to 0.8.62. This ensures three-way compatibility with openai, langchain, and llama-index until langchain is upgraded to work with the openai Python API without error. (e3cf0d) (Eric Ma)
This version includes several refactoring changes, new features, and documentation updates. The main focus of this release was to improve the code organization and efficiency, and to update the usage of the OpenAI API.
+
New Features
+
+
Added a new test for the ImageBot class in the llamabot library. The test checks the behavior of the call method when it is invoked outside of a Jupyter notebook and no save path is provided. (0e23857) (Eric Ma)
+
Introduced a new Jupyter notebook under the docs/examples directory. The notebook demonstrates how to use the ImageBot API to generate images from text using the OpenAI API. (8779040) (Eric Ma)
+
Added ImageBot class to bot module for generating images based on prompts. (7174058) (Eric Ma)
+
Increased the default token budget from 2048 to 4096 and added token budget for the new "mistral" model. (7f13698) (Eric Ma)
+
+
Bug Fixes
+
+
Fixed the cache-downloads-key in the pr-tests.yaml workflow file. The key now includes a hash of the 'environment.yml' file to ensure cache is updated when the environment changes. (1c12ff5) (Eric Ma)
+
+
Refactors
+
+
Moved the initialization of the OpenAI client into the default_model function. (bd50b90) (Eric Ma)
+
Removed the direct access to the environment variable for the OpenAI API key in the client initialization. (7cb3d09) (Eric Ma)
+
Changed the way model list attributes are accessed in the configure.py file of the llamabot CLI. (4deb93f) (Eric Ma)
+
Extracted the filename generation logic, which was previously inside the ImageBot class, to a separate function named filename_bot. (aec4f3c) (Eric Ma)
+
Removed direct assignment of OpenAI API key in init.py and replaced direct model list retrieval from OpenAI with client's model list method. (66fbcec) (Eric Ma)
+
+
Documentation
+
+
Updated the docstring for the filename_bot function in the imagebot.py file. The updated docstring now includes parameter and return value descriptions. (c5dd51d) (Eric Ma)
+
+
Dependencies
+
+
Updated the micromamba version from '1.4.5-0' to '1.5.1-2' in the pr-tests.yaml workflow. (6341f35) (Eric Ma)
+
Updated dependencies versions including llama_index and langchain in environment.yml and pyproject.toml. (e9229cc) (Eric Ma)
+
+
Tests
+
+
Removed the deadline for the test_codebot_instance function in the python_prompt_library test suite to prevent potential timeout issues. (4a30e96) (Eric Ma)
+
Removed the deadline for the simple bot initialization test to prevent false negatives due to time constraints. (16ee108) (Eric Ma)
This new version includes several enhancements and new features, including the addition of a chatbot test, the integration of pytest-cov into the conda environment, and the successful implementation of streaming with SimpleBot. The chatbot UI prototype is now operational, and the code has been refactored for better organization and efficiency.
+
New Features
+
+
Added a test for the chatbot functionality (0cc812) (Eric Ma)
+
Integrated pytest-cov into the conda environment for better code coverage during testing (592297) (Eric Ma)
+
Confirmed that streaming works with SimpleBot, enhancing real-time communication capabilities (049c23) (Eric Ma)
+
Refactored panel markdown callback handler into panel_utils for better code organization (400bd0) (Eric Ma)
+
Developed a rudimentary prototype of the chatbot UI, paving the way for user interaction (0e0bd5) (Eric Ma)
+
Updated the simplebot panel example, providing a more comprehensive demonstration of the bot's capabilities (8515cf) (Eric Ma)
+
Refactored bot.py into individual .py files for better code management and readability (5e97ed) (Eric Ma)
+
Switched to Python version 3.10, taking advantage of the latest features and improvements in the language (f4c28f) (Eric Ma)
+
Ensured the presence of typer-cli, enhancing command line interface functionality (856fbc) (Eric Ma)
+
Added typer to optional dependencies, providing more flexibility in package installation (2e853e) (Eric Ma)
This release includes several new features, bug fixes, and improvements to the codebase.
+
New Features
+
+
Update default language model to Mistral and remove OpenAI API key warning (e74954b) (Eric Ma): The default language model used by the SimpleBot class has been updated to Mistral, which is a more cost-effective option compared to the previously used gpt-3.5-turbo-16k-0613 model. The OpenAI API key warning has also been removed, as the Mistral model does not require an API key.
+
Add API key support for QABot and SimpleBot (b5f8253) (Eric Ma): This commit adds support for providing API keys to the QABot and SimpleBot classes, allowing for secure access to external services. This enhancement improves the security and flexibility of the bot's functionality.
+
Update default language model environment variable (4bfd362) (Eric Ma): The default language model environment variable has been updated from OPENAI_DEFAULT_MODEL to DEFAULT_LANGUAGE_MODEL to align with the changes in the codebase.
+
Update default language model to gpt-3.5-turbo-1106 (c8f0893) (Eric Ma): The default language model used by the commitbot has been updated to "gpt-3.5-turbo-1106" for improved performance and cost efficiency.
+
Add logging for API key usage (3be39ad) (Eric Ma): Logging has been added to SimpleBot to log the usage of the API key for debugging and monitoring purposes.
+
Add model_name parameter to SimpleBot instance (6a78332) (Eric Ma): A new parameter, model_name, has been added to the SimpleBot instance in the llamabot/cli/git.py file. The model_name is set to "mistral/mistral-medium". This change allows for more flexibility and customization when using the SimpleBot.
+
Add new model name to ollama_model_names.txt (3110dc9) (Eric Ma): 'megadolphin' has been added to the list of model names in ollama_model_names.txt.
+
Add new model name and refactor test_docstore (17352b8) (Eric Ma): 'llama-pro' has been added to ollama_model_names.txt and the test_docstore function has been refactored to remove unused imports and the make_fake_document function.
+
Add Knowledge Graph bot (963cd63) (Eric Ma): A new feature has been added to the codebase, the Knowledge Graph bot (KGBot). The KGBot takes in a chunk of text and returns a JSON of triplets. It is tested with mistral-medium and uses the default language model. The bot is called with a query and returns a JSON of triplets.
+
Add QABot class to llamabot (21197c1) (Eric Ma): A new class, DocQABot, has been added to the qabot.py file. This bot is designed to use pre-computed questions and answers to generate a response. It includes methods for adding documents for the bot to query and for calling the QABot. This enhancement will improve the bot's ability to generate responses based on the provided documents.
+
Add DocumentStore class for LlamaBot (117baf7) (Eric Ma): A new feature has been added to the codebase, a DocumentStore class for LlamaBot. This class wraps around ChromaDB and provides methods to append and retrieve documents from the store. The DocumentStore class is defined in the newly created file llamabot/components/docstore.py.
+
Add top-level API for llamabot's components (b2cf9f0) (Eric Ma): A new file, __init__.py, has been added which serves as the top-level API for llamabot's components.
+
+
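Several of the entries above change how SimpleBot is constructed: a Mistral default model, a `DEFAULT_LANGUAGE_MODEL` environment variable, and new `model_name` and `api_key` arguments. The following is an illustrative stand-in only, not llamabot's actual source, sketching the shape of that constructor:

```python
# Illustrative stand-in only -- this is NOT llamabot's implementation.
# It sketches the constructor changes listed above: a configurable
# default model (via the DEFAULT_LANGUAGE_MODEL environment variable),
# plus optional model_name and api_key arguments.
import os


class SketchBot:
    def __init__(self, system_prompt, model_name=None, api_key=None):
        # DEFAULT_LANGUAGE_MODEL replaces the older OPENAI_DEFAULT_MODEL name.
        self.model_name = model_name or os.environ.get(
            "DEFAULT_LANGUAGE_MODEL", "mistral/mistral-medium"
        )
        self.api_key = api_key  # forwarded to the external service when set
        self.system_prompt = system_prompt


bot = SketchBot("You are a helpful assistant.", model_name="mistral/mistral-medium")
```

The real SimpleBot does considerably more (message handling, streaming, LLM calls); this sketch only shows where the new parameters slot in.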
Bug Fixes
+
+
Fix logging of API key (932beec) (Eric Ma): The commit fixes the logging of the API key in the SimpleBot class to display the complete key instead of just the first 5 characters. This change improves the clarity of the logged information.
+
Fix environment variable retrieval in write_release_notes function (c627b18) (Eric Ma): This commit fixes an issue where the environment variable was not being retrieved correctly in the write_release_notes function.
+
Fix stream parameter not being passed to bot function (185f2bc) (Eric Ma): This commit fixes an issue where the stream parameter was not being passed to the bot function in the cli/git module.
This release includes several improvements and new features for the LlamaBot project.
+
New Features
+
+
API Server: Merged pull request #28, which introduces an API server for the LlamaBot project. (4ea160a, Eric Ma)
+
Mock Response and API Key Support: Added api_key and mock_response parameters to the SimpleBot constructor for OpenAI API key usage and testing with predefined responses. (2f6d1d9, Eric Ma)
+
Streaming Response Test: Implemented a new test case to verify that SimpleBot can stream responses correctly. (5ddb804, Eric Ma)
+
Delta Content Printing: The SimpleBot class now prints the delta content to the console after processing each message for better readability. (d657b4a, Eric Ma)
+
ChatBot UI Jupyter Notebook: Created a new Jupyter notebook for ChatBot UI development, including the setup of necessary classes and functions. (bb4397a, Eric Ma)
+
ChatUIMixin: Introduced a new ChatUIMixin class for easier integration of chat functionalities in LlamaBot components. (4209b18, Eric Ma)
+
Streamlined Message Handling and Typing: Simplified the message construction and typing in the SimpleBot class for improved readability and maintainability. (65e026c, Eric Ma)
+
Streaming Response for Chat Messages: Implemented streaming response functionality in the ChatBot class for better real-time interactivity. (1ebc356, Eric Ma)
+
Improved Response Streaming: Extracted streaming logic into a separate method and ensured consistent yielding of AIMessage instances in the SimpleBot class. (08636a7, Eric Ma)
+
Toggleable Streaming Responses: Added a stream parameter to the generate_response method in the SimpleBot class to control streaming behavior. (565aed7, Eric Ma)
+
Streaming Response Capability: Implemented a new stream_response method in the SimpleBot class for streaming responses incrementally. (2a8254c, Eric Ma)
+
Response Generation Extraction: Simplified the generate_response method in the SimpleBot class by extracting the response generation logic into a new _make_response function. (0ad9a1e, Eric Ma)
+
API Key Instructions: Added instructions for setting API keys for other providers in the documentation. (55ec13e, Eric Ma)
+
Standardized LlamaBot Naming Convention: Corrected the casing of 'LLaMaBot' to 'LlamaBot' throughout the index.md documentation and separated API provider configuration instructions into subsections for OpenAI and Mistral. (7fd2e13, Eric Ma)
+
New Model Names and CLI Options Refactoring: Added 'stablelm2' and 'duckdb-nsql' to the list of available models and refactored command-line interface arguments in serve.py to use Typer options instead of arguments. (e6a2122, Eric Ma)
+
FastAPI Endpoint for QueryBot: Implemented APIMixin to allow QueryBot to serve FastAPI endpoints and added a serve command to the CLI for starting a FastAPI server with QueryBot. (5edd84b, Eric Ma)
+
Improved System Prompt for QueryBot: Modified the system prompt in QueryBot to be more specific about the source of knowledge and clarified the response behavior when the repository does not contain the answer. (5f7ce51, Eric Ma)
+
LlamaBot CLI Usage Guide: Added a comprehensive guide for the LlamaBot CLI in the documentation, including installation instructions, key commands, and usage examples. (9f0b1c8, Eric Ma)
+
+
Bug Fixes
+
+
ImageBot Import Path Update: Changed the import path of AIMessage from langchain.schema to llamabot.components.messages to reflect the new module structure. (27904d0, Eric Ma)
+
Error Handling for Image URL Retrieval: Added an exception raise in the ImageBot.generate_image method to handle cases where no image URL is found in the response. (27904d0, Eric Ma)
+
Disabled Streaming in SimpleBot Tests: Passed stream=False when creating a SimpleBot instance in tests to ensure consistent behavior without relying on streaming features. (e559114, Eric Ma)
+
Ensured Non-Empty Strings in Bot Tests: Modified tests to generate non-empty strings for system_prompt and human_message using hypothesis strategies. (e8fed0a, Eric Ma)
+
+
Deprecations
+
+
Removed Unused Panel App Creation Code: Removed the create_panel_app function and its related imports from python.py as they are no longer used. (4469b35, Eric Ma)
+
Removed PanelMarkdownCallbackHandler Class: Removed the PanelMarkdownCallbackHandler class as it is no longer required in the llamabot project. (b7ef263, Eric Ma)
+
Removed Unused pytest Import and Obsolete Test: Removed an unused import of pytest in test_simplebot.py and deleted the test_simple_bot_stream_response function, which is no longer needed due to changes in the SimpleBot streaming response logic. (ed0756b, Eric Ma)
+
Removed model_dispatcher Module: The model_dispatcher.py module has been removed as part of a refactoring effort. This change simplifies the llamabot architecture by delegating model dispatch responsibilities to a new system or removing the need for such functionality entirely. (0887618, Eric Ma)
+
Removed api_key Command from configure.py: The api_key command was deprecated and has been removed to simplify configuration. Users should now set API keys directly via environment variables. (2752d7e, Eric Ma)
Bug Fixes
+
+
Fix ChatBot response mocking in unit test (7c02d18) (Eric Ma)
+
Correct dictionary access and message concatenation in SimpleBot (79d2929) (Eric Ma)
+
Replace pdfminer with pdfminer.six for better Python 3 support (79908b1) (Eric Ma)
+
Replace pdfreader with pdfminer for improved PDF processing (7910e3e) (Eric Ma)
+
+
Deprecations
+
+
Remove 'api' stream_target from ChatBot and change the expected return type for consumers of the ChatBot class (c11aace) (Eric Ma)
+
Replace 'stream' boolean parameter with 'stream_target' in ChatBot and SimpleBot constructors (1211115) (Eric Ma)
+
+
Please note that some breaking changes have been introduced in this release. Make sure to update your code accordingly. For more details, refer to the individual commit messages.
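As a toy illustration of the `stream_target` change (a stand-in, not llamabot's code): the old boolean `stream` flag becomes a string naming where streamed chunks should go.

```python
# Toy stand-in, not llamabot's code: the boolean `stream` flag is replaced
# by a `stream_target` string naming where streamed chunks should go.
def respond(chunks, stream_target="stdout"):
    text = "".join(chunks)
    if stream_target == "stdout":
        for chunk in chunks:
            print(chunk, end="", flush=True)  # stream incrementally
        print()
    # other targets (e.g. a panel widget or an API consumer) would go here
    return text


reply = respond(["Hello, ", "world!"], stream_target="stdout")
```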
This release includes improvements to the autorecord function, enhanced chat command, and updates to Python kernel versions.
+
New Features
+
+
Autorecord function has been streamlined to record only the last message content, reducing data processing and potential performance issues (268590, Eric Ma)
+
The chat command in the CLI now includes a timestamped session name for better traceability and organization of chat sessions (268590, Eric Ma)
+
+
Bug Fixes
+
+
The Python kernel version in sembr notebook has been updated to 3.11.7 to ensure compatibility with the latest libraries and features (0ad4701, Eric Ma)
+
+
Deprecations
+
+
No deprecations in this release
+
+
Note: The commit 9153c5d is a refactoring commit that improves the readability and maintenance of the notebook code, but it does not introduce any new features or bug fixes. Commits b120061 and 31b1056 are related to version bumping and release notes, respectively. The merge commit ae66c86 is not associated with any new features, bug fixes, or deprecations.
+
Version 0.2.5
+
This release includes a small fix to the plaintext_loader function in the doc_processor module. The file open mode was changed from "r" to "r+" to allow for additional operations on the file if needed in the future.
+
New Features
+
There are no new features in this release.
+
Bug Fixes
+
+
The file open mode in plaintext_loader function was changed from "r" (read-only) to "r+" (read and write). This allows for additional operations on the file if needed in the future. (8251fdc) (Eric Ma)
+
+
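The difference between the two modes can be demonstrated with a throwaway temp file (not llamabot's code): "r+" permits writing to the already-open file, which "r" would reject.

```python
# Throwaway demonstration of "r" vs "r+" open modes; not llamabot's code.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("hello")

with open(path, "r+") as f:
    text = f.read()    # reading works, just as with mode "r"
    f.write(" world")  # writing is also allowed with "r+"

with open(path) as f:
    final = f.read()
os.remove(path)
```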
Deprecations
+
There are no deprecations in this release.
+
Note: The commit 48bb8c4 is related to a version bump and does not introduce any new features or bug fixes. The commit faa971d is related to adding release notes and does not introduce any new features or bug fixes. Therefore, they are not included in the release notes.
diff --git a/search/search_index.json b/search/search_index.json
new file mode 100644
index 000000000..e4044b095
--- /dev/null
+++ b/search/search_index.json
@@ -0,0 +1 @@
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"LlamaBot: A Pythonic bot interface to LLMs","text":"
LlamaBot implements a Pythonic interface to LLMs, making it much easier to experiment with LLMs in a Jupyter notebook and build Python apps that utilize LLMs. All models supported by LiteLLM are supported by LlamaBot.
"},{"location":"#get-access-to-llms","title":"Get access to LLMs","text":""},{"location":"#option-1-using-local-models-with-ollama","title":"Option 1: Using local models with Ollama","text":"
LlamaBot supports using local models through Ollama. To do so, head over to the Ollama website and install Ollama. Then follow the instructions below.
"},{"location":"#option-2-use-an-api-provider","title":"Option 2: Use an API provider","text":""},{"location":"#openai","title":"OpenAI","text":"
If you have an OpenAI API key, then configure LlamaBot to use the API key by running:
"},{"location":"#other-api-providers","title":"Other API providers","text":"
Other API providers will usually specify an environment variable to set. If you have an API key, then set the environment variable accordingly.
"},{"location":"#how-to-use","title":"How to use","text":""},{"location":"#simplebot","title":"SimpleBot","text":"
The simplest use case of LlamaBot is to create a SimpleBot that keeps no record of chat history. This is effectively the same as a stateless function that you program with natural language instructions rather than code. This is useful for prompt experimentation, or for creating simple bots that are preconditioned on an instruction to handle texts and are then called upon repeatedly with different texts. For example, to create a Bot that explains a given chunk of text like Richard Feynman would:
from llamabot import SimpleBot\n\nfeynman = SimpleBot(\"You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back.\", model_name=\"gpt-3.5-turbo\")\n
Now, feynman is callable on any arbitrary chunk of text and will return a rephrasing of that text in Richard Feynman's style (or more accurately, according to the style prescribed by the prompt). For example:
feynman(\"Enzyme function annotation is a fundamental challenge, and numerous computational tools have been developed. However, most of these tools cannot accurately predict functional annotations, such as enzyme commission (EC) number, for less-studied proteins or those with previously uncharacterized functions or multiple activities. We present a machine learning algorithm named CLEAN (contrastive learning\u2013enabled enzyme annotation) to assign EC numbers to enzymes with better accuracy, reliability, and sensitivity compared with the state-of-the-art tool BLASTp. The contrastive learning framework empowers CLEAN to confidently (i) annotate understudied enzymes, (ii) correct mislabeled enzymes, and (iii) identify promiscuous enzymes with two or more EC numbers\u2014functions that we demonstrate by systematic in silico and in vitro experiments. We anticipate that this tool will be widely used for predicting the functions of uncharacterized enzymes, thereby advancing many fields, such as genomics, synthetic biology, and biocatalysis.\")\n
This will return something that looks like:
Alright, let's break this down.\n\nEnzymes are like little biological machines that help speed up chemical reactions in our\nbodies. Each enzyme has a specific job, or function, and we use something called an\nEnzyme Commission (EC) number to categorize these functions.\n\nNow, the problem is that we don't always know what function an enzyme has, especially if\nit's a less-studied or new enzyme. This is where computational tools come in. They try\nto predict the function of these enzymes, but they often struggle to do so accurately.\n\nSo, the folks here have developed a new tool called CLEAN, which stands for contrastive\nlearning\u2013enabled enzyme annotation. This tool uses a machine learning algorithm, which\nis a type of artificial intelligence that learns from data to make predictions or\ndecisions.\n\nCLEAN uses a method called contrastive learning. Imagine you have a bunch of pictures of\ncats and dogs, and you want to teach a machine to tell the difference. You'd show it\npairs of pictures, some of the same animal (two cats or two dogs) and some of different\nanimals (a cat and a dog). The machine would learn to tell the difference by contrasting\nthe features of the two pictures. That's the basic idea behind contrastive learning.\n\nCLEAN uses this method to predict the EC numbers of enzymes more accurately than\nprevious tools. It can confidently annotate understudied enzymes, correct mislabeled\nenzymes, and even identify enzymes that have more than one function.\n\nThe creators of CLEAN have tested it with both computer simulations and lab experiments,\nand they believe it will be a valuable tool for predicting the functions of unknown\nenzymes. This could have big implications for fields like genomics, synthetic biology,\nand biocatalysis, which all rely on understanding how enzymes work.\n
If you want to use an Ollama model hosted locally, then you would use the following syntax:
from llamabot import SimpleBot\nbot = SimpleBot(\n \"You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back.\",\n model_name=\"ollama/llama2:13b\"\n)\n
Simply specify the model_name keyword argument and provide a model name from the Ollama library of models prefixed by ollama/. All you need to do is make sure Ollama is running locally; see the Ollama documentation for more details. (The same can be done for the ChatBot and QueryBot classes below!)
To experiment with a Chat Bot in the Jupyter Notebook, we also provide the ChatBot interface. This interface automagically keeps track of chat history for as long as your Jupyter session is alive. Doing so allows you to use your own local Jupyter Notebook as a chat interface.
For example:
from llamabot import ChatBot\n\nfeynman = ChatBot(\"You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back.\", session_name=\"feynman_chat\")\nfeynman(\"Enzyme function annotation is a fundamental challenge, and numerous computational tools have been developed. However, most of these tools cannot accurately predict functional annotations, such as enzyme commission (EC) number, for less-studied proteins or those with previously uncharacterized functions or multiple activities. We present a machine learning algorithm named CLEAN (contrastive learning\u2013enabled enzyme annotation) to assign EC numbers to enzymes with better accuracy, reliability, and sensitivity compared with the state-of-the-art tool BLASTp. The contrastive learning framework empowers CLEAN to confidently (i) annotate understudied enzymes, (ii) correct mislabeled enzymes, and (iii) identify promiscuous enzymes with two or more EC numbers\u2014functions that we demonstrate by systematic in silico and in vitro experiments. We anticipate that this tool will be widely used for predicting the functions of uncharacterized enzymes, thereby advancing many fields, such as genomics, synthetic biology, and biocatalysis.\")\n
With the chat history available, you can ask a follow-up question:
feynman(\"Is there a simpler way to rephrase the text such that a high schooler would understand it?\")\n
And your bot will work with the chat history to respond.
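The history bookkeeping can be pictured with a toy stand-in (the echo "model" below is fake; this is not llamabot's implementation, where a real LLM call produces the response):

```python
# Toy stand-in for the automatic chat-memory pattern described above.
# The echo "model" is fake; llamabot's real ChatBot calls an LLM instead.
class ToyChatBot:
    def __init__(self, system_prompt):
        self.chat_history = [("system", system_prompt)]

    def __call__(self, human_message):
        self.chat_history.append(("human", human_message))
        response = f"echo: {human_message}"  # an LLM call would go here
        self.chat_history.append(("ai", response))
        return response


feynman = ToyChatBot("You are Richard Feynman.")
feynman("Explain enzyme annotation.")
feynman("Can you simplify that?")  # this follow-up sees the full history
```

Because every human message and AI response is appended to `chat_history`, each new call carries the entire conversation so far, which is what lets follow-up questions work.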
The final bot provided is a QueryBot. This bot lets you query a collection of documents. To use it, you have two options:
Pass in a list of paths to text files, or
Pass in the session name of a previously instantiated QueryBot. (This will load the previously-computed text index into memory.)
As an illustrative example:
from llamabot import QueryBot\nfrom pathlib import Path\n\nblog_index = Path(\"/path/to/index.json\")\nbot = QueryBot(system_message=\"You are an expert on Eric Ma's blog.\", session_name=\"eric_ma_blog\") # this loads my previously-embedded blog text.\n# alternatively:\n# bot = QueryBot(system_message=\"You are an expert on Eric Ma's blog.\", session_name=\"eric_ma_blog\", document_paths=[Path(\"/path/to/blog/post1.txt\"), Path(\"/path/to/blog/post2.txt\"), ...])\nresult = bot(\"Do you have any advice\u00a0for me on career development?\")\ndisplay(Markdown(result.response))\n
With the release of the OpenAI API updates, as long as you have an OpenAI API key, you can generate images with LlamaBot:
from llamabot import ImageBot\n\nbot = ImageBot()\n# Within a Jupyter notebook:\nurl = bot(\"A painting of a dog.\")\n\n# Or within a Python script\nfilepath = bot(\"A painting of a dog.\")\n\n# Now, you can do whatever you need with the url or file path.\n
If you're in a Jupyter Notebook, you'll see the image show up magically as part of the output cell as well.
New features are welcome! These are early and exciting days for users of large language models. Our development goals are to keep the project as simple as possible. Features requests that come with a pull request will be prioritized; the simpler the implementation of a feature (in terms of maintenance burden), the more likely it will be approved.
"},{"location":"#contributors","title":"Contributors","text":"Rena Lu\ud83d\udcbb andrew giessel\ud83e\udd14 \ud83c\udfa8 \ud83d\udcbb Aidan Brewis\ud83d\udcbb Eric Ma\ud83e\udd14 \ud83c\udfa8 \ud83d\udcbb Mark Harrison\ud83e\udd14"},{"location":"cli/blog/","title":"Blog Assistant CLI Tutorial","text":"
The Blog Assistant CLI is a powerful tool that helps you streamline your blogging workflow. It can generate blog summaries, apply semantic line breaks (SEMBR), and even create social media posts for LinkedIn, Patreon, and Twitter. This tutorial will guide you through the usage of this tool.
The summarize command is used to generate a blog summary, title, and tags. Here's how to use it:
Run the command summarize in your terminal: llamabot blog summarize
You will be prompted to paste your blog post.
The tool will then generate a blog title, apply SEMBR to your summary, and provide you with relevant tags.
The output will look something like this:
Here is your blog title:\n[Generated Blog Title]\n\nApplying SEMBR to your summary...\n\nHere is your blog summary:\n[Generated Blog Summary with SEMBR]\n\nHere are your blog tags:\n[Generated Blog Tags]\n
"},{"location":"cli/blog/#social-media-command","title":"Social Media Command","text":"
The social_media command is used to generate social media posts. Here's how to use it:
Run the command social_media [platform] in your terminal, where [platform] is either linkedin, patreon, or twitter: llamabot blog social-media linkedin.
You will be prompted to paste your blog post.
The tool will then generate a social media post for the specified platform.
For LinkedIn and Twitter, the generated post will be copied to your clipboard. For Patreon, the tool will display the post in the terminal.
In this tutorial, we will explore the Git subcommand for the LlamaBot CLI. This command-line interface (CLI) provides a set of tools to automate and enhance your Git workflow, in particular, the ability to automatically generate commit messages.
The llamabot prepare message hook requires that you have llamabot >=0.0.77. You will also need an OpenAI API key (until we have enabled and tested locally-hosted language models). Be sure to set up and configure LlamaBot by executing the following two configuration commands and following the instructions there.
llamabot configure api-key\n
and
llamabot configure default-model\n
For the default model, we suggest using a GPT-4 variant. It is generally of higher quality than GPT-3.5. If you are concerned with cost, the GPT-3.5-turbo variant with 16K context window has anecdotally worked well.
"},{"location":"cli/git/#install-the-commit-message-hook","title":"Install the Commit Message Hook","text":"
Once you have configured llamabot, the next thing you need to do is install the prepare-msg-hook within your git repository. This is a git hook that lets you run commands after the pre-commit hooks have run but before the commit-message editor is opened. To install the hook, simply run:
llamabot git install-commit-message-hook\n
This command will check if the current directory is a Git repository root. If it is not, it raises a RuntimeError. If it is, it writes a script to the prepare-commit-msg file in the .git/hooks directory and changes the file's permissions to make it executable.
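That behavior can be sketched roughly as follows. This is a simplified, hypothetical reimplementation, and the hook script content here is an assumption, not llamabot's actual script:

```python
# Simplified, hypothetical sketch of the install step; not llamabot's source.
import os
import stat
from pathlib import Path

HOOK_SCRIPT = "#!/bin/sh\nllamabot git compose-commit\n"  # assumed content


def install_commit_message_hook(repo_root: Path) -> Path:
    """Write a prepare-commit-msg hook and mark it executable."""
    if not (repo_root / ".git").is_dir():
        raise RuntimeError(f"{repo_root} is not a Git repository root.")
    hook = repo_root / ".git" / "hooks" / "prepare-commit-msg"
    hook.parent.mkdir(parents=True, exist_ok=True)
    hook.write_text(HOOK_SCRIPT)
    # make the hook executable, as the real command does
    hook.chmod(hook.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return hook
```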
"},{"location":"cli/git/#auto-compose-a-commit-message","title":"Auto-Compose a Commit Message","text":"
The llamabot git compose-commit command automatically writes a commit message based on the diff. It first gets the diff using the get_git_diff function. It then generates a commit message using the commitbot, which is a LlamaBot SimpleBot. If any error occurs during this process, it prints the error message and prompts the user to write their own commit message, allowing for a graceful fallback to default behavior. This can be useful, for example, if you don't have an internet connection and cannot connect to the OpenAI API but still need to commit code.
This command never needs to be called explicitly. Rather, it is called behind the scenes by the prepare-msg-hook.
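The graceful-fallback flow can be sketched like this. The helper names mirror the description above but the implementation is hypothetical, not llamabot's actual source:

```python
# Sketch of the graceful-fallback flow described above. The implementation
# is hypothetical; it is not llamabot's actual source.
def compose_commit_message(get_git_diff, commitbot):
    """Return a generated commit message, or None so the user writes one."""
    try:
        return commitbot(get_git_diff())
    except Exception as exc:  # e.g. no internet connection to the API
        print(f"Could not generate a commit message: {exc}")
        return None


# With a working "bot", a message comes back:
msg = compose_commit_message(lambda: "some diff", lambda diff: "feat: add thing")
```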
The llamabot git CLI provides a set of tools to automate and enhance your Git workflow. It provides an automatic commit message writer based on your repo's git diff. By using llamabot git, you can streamline your Git workflow and focus on writing code.
In this tutorial, we will walk through the configuration process for LlamaBot, a Python-based bot that uses the OpenAI API. The configuration process involves setting up the API key and selecting the default model for the bot.
"},{"location":"cli/llamabot/#setting-up-the-api-key","title":"Setting up the API Key","text":"
The first step in configuring LlamaBot is to set up the API key. This is done by invoking:
llamabot configure api-key\n
The user will be prompted to enter their OpenAI API key. The key will be hidden as you type it, and you will be asked to confirm it. Once confirmed, the key will be stored as an environment variable, OPENAI_API_KEY.
"},{"location":"cli/llamabot/#configuring-the-default-model","title":"Configuring the Default Model","text":"
The next step in the configuration process is to select the default model for LlamaBot. This is done by invoking:
llamabot configure default-model\n
LlamaBot will first load the environment variables from the .env file located at llamabotrc_path. It then retrieves a list of available models from the OpenAI API, filtering for those that include 'gpt' in their ID. For this reason, it is important to set your OpenAI API key before configuring the default model.
The function then displays the list of available models and prompts you to select one. As you type, the function will suggest completions based on the available models. The last model in the list is provided as the default option.
Once you have entered a valid model ID, the function stores it as an environment variable, DEFAULT_LANGUAGE_MODEL.
By following these steps, you can easily configure LlamaBot to use your OpenAI API key and your chosen default model. Remember to keep your API key secure, and to choose a model that best suits your needs. Happy coding!
Welcome to the Llamabot Python CLI tutorial! In this tutorial, we will explore the various commands available in the Llamabot Python CLI and learn how to use them effectively. The Llamabot Python CLI is a powerful tool for generating module-level and function docstrings, as well as generating code based on a given description.
The module-docstrings command generates module-level docstrings for a given module file. It takes the following arguments:
module_fpath: Path to the module to generate docstrings for.
dirtree_context_path: (Optional) Path to the directory to use as the context for the directory tree. Defaults to the parent directory of the module file.
In this tutorial, we have covered the various commands available in the Llamabot Python CLI and learned how to use them effectively. With these commands, you can easily generate module-level and function docstrings, generate code based on a given description, and write tests for your code. Happy coding!
"},{"location":"cli/repo/","title":"Chatting with a Code Repository: Llamabot CLI Guide","text":"
Welcome to the guide on using the Llamabot CLI for interacting with code repositories. This tool facilitates engaging conversations with your codebase, leveraging AI to read and understand the documentation within a repo. Let\u2019s get started on how to utilize this tool.
--checkout: Branch or tag to use (default: \"main\").
--source-file-extensions: File types to include in the conversation.
--model-name: AI model to use for generating responses.
Once you have executed this command, LlamaBot will automatically clone the repository to a temporary directory, embed the files as specified by the source-file extensions, and fire up LlamaBot's usual command line-based chat interface.
This guide covers the essential aspects of the Llamabot CLI, a tool designed to enhance your coding experience through AI-powered conversations about a code repository. Embrace these capabilities to make your coding more efficient and insightful. Happy coding!
In this tutorial, we will walk through the Llamabot Zotero CLI, a command-line interface for interacting with your Zotero library. This tool allows you to chat with a paper, retrieve keys, and download papers from your Zotero library.
First, we need to configure the Llamabot Zotero CLI environment variables. This is done using the configure command. You will be prompted to enter your Zotero library ID, API key, and library type.
llamabot zotero configure\n
"},{"location":"cli/zotero/#chatting-with-a-paper","title":"Chatting with a Paper","text":"
To chat with a paper, use the chat command. You can specify the paper you want to chat about as an argument. If you don't provide a paper, you will be prompted to enter one.
llamabot zotero chat \"The title of the paper\"\n
If you want to specify a model, such as an Ollama model, you can do so directly at the command line too:
llamabot zotero chat \"The title of the paper\" --model vicuna:7b-16k\n
If you want to synchronize your Zotero library before chatting, you can use the --sync option.
llamabot zotero chat \"The title of the paper\" --sync\n
When you chat with a paper, the Llamabot Zotero CLI will retrieve the keys for the paper. These keys are unique identifiers for each paper in your Zotero library. The keys are displayed in the console.
After retrieving the keys, you can choose a paper to download. You will be prompted to choose a paper from the list of keys. The paper will be downloaded to a temporary directory.
Once the paper is downloaded, you can start asking questions about the paper. The Llamabot Zotero CLI uses a QueryBot to answer your questions. Simply type your question at the prompt.
Ask me a question: What is the main argument of the paper?\n
To exit the chat, type exit.
Ask me a question: exit\n
And that's it! You now know how to use the Llamabot Zotero CLI to chat with a paper, retrieve keys, download papers, and ask questions about a paper. Happy chatting!
Let's see how to use the ChatBot class to enable you to chat with Mistral inside a Jupyter notebook.
from llamabot import ChatBot\n\ncode_tester = ChatBot(\n\"\"\"\nYou are a Python quality assurance developer who delivers high quality unit tests for code.\nYou write tests using PyTest and not the built-in unittest library.\nWrite the tests using test functions and not using classes and class methods\nHere is the code to write tests against:\n\"\"\",\n session_name=\"code-tested\",\n model_name=\"mistral/mistral-medium\",\n stream_target=\"stdout\",\n)\n
code_tester(\n'''\nclass ChatBot:\n \"\"\"Chat Bot that is primed with a system prompt, accepts a human message.\n\n Automatic chat memory management happens.\n\n h/t Andrew Giessel/GPT4 for the idea.\n \"\"\"\n\n def __init__(self, system_prompt, temperature=0.0, model_name=\"gpt-4\"):\n \"\"\"Initialize the ChatBot.\n\n :param system_prompt: The system prompt to use.\n :param temperature: The model temperature to use.\n See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature\n for more information.\n :param model_name: The name of the OpenAI model to use.\n \"\"\"\n self.model = ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=True,\n verbose=True,\n callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n )\n self.chat_history = [\n SystemMessage(content=\"Always return Markdown-compatible text.\"),\n SystemMessage(content=system_prompt),\n ]\n\n def __call__(self, human_message) -> Response:\n \"\"\"Call the ChatBot.\n\n :param human_message: The human message to use.\n :return: The response to the human message, primed by the system prompt.\n \"\"\"\n self.chat_history.append(HumanMessage(content=human_message))\n response = self.model(self.chat_history)\n self.chat_history.append(response)\n return response\n'''\n)\n
<litellm.utils.CustomStreamWrapper object at 0x1140c2cd0>
Here are the tests for the ChatBot class using PyTest and test functions:
import pytest
from chatbot import ChatBot, SystemMessage, HumanMessage
from openai_callback import ChatOpenAI

def test_chatbot_init():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    assert len(chatbot.chat_history) == 2
    assert isinstance(chatbot.chat_history[0], SystemMessage)
    assert isinstance(chatbot.chat_history[1], SystemMessage)
    assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."
    assert chatbot.chat_history[1].content == system_prompt

def test_chatbot_call():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What is the weather like today?"
    response = chatbot(human_message)
    assert len(chatbot.chat_history) == 4
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], ChatOpenAI.Response)
    assert chatbot.chat_history[2].content == human_message
    assert response == chatbot.chat_history[3]

def test_chatbot_call_multiple_times():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message1 = "What is the weather like today?"
    human_message2 = "What is the temperature outside?"
    chatbot(human_message1)
    chatbot(human_message2)
    assert len(chatbot.chat_history) == 6
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], ChatOpenAI.Response)
    assert isinstance(chatbot.chat_history[4], HumanMessage)
    assert isinstance(chatbot.chat_history[5], ChatOpenAI.Response)
    assert chatbot.chat_history[2].content == human_message1
    assert chatbot.chat_history[4].content == human_message2

def test_chatbot_temperature():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt, temperature=0.5)
    assert chatbot.model.temperature == 0.5

def test_chatbot_model_name():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt, model_name="gpt-3.5-turbo")
    assert chatbot.model.model_name == "gpt-3.5-turbo"
Note that these tests assume that the `ChatOpenAI` class and its `Response` class are defined elsewhere in the codebase. Also, the tests do not actually call the OpenAI API, but rather assume that the `ChatOpenAI` class is a mock or stub that returns a canned response. If you want to test the actual API calls, you will need to set up a test environment with a valid API key and handle any rate limiting or other issues that may arise.
AIMessage(content='', role='assistant')
As you can see, ChatBot keeps track of conversation memory/history automatically. We can even access any item in the conversation by looking at the conversation history.
The __repr__ of a chatbot will simply print out the entire history:
code_tester
[Human]

class ChatBot:
    """Chat Bot that is primed with a system prompt, accepts a human message.

    Automatic chat memory management happens.

    h/t Andrew Giessel/GPT4 for the idea.
    """

    def __init__(self, system_prompt, temperature=0.0, model_name="gpt-4"):
        """Initialize the ChatBot.

        :param system_prompt: The system prompt to use.
        :param temperature: The model temperature to use.
            See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
            for more information.
        :param model_name: The name of the OpenAI model to use.
        """
        self.model = ChatOpenAI(
            model_name=model_name,
            temperature=temperature,
            streaming=True,
            verbose=True,
            callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
        )
        self.chat_history = [
            SystemMessage(content="Always return Markdown-compatible text."),
            SystemMessage(content=system_prompt),
        ]

    def __call__(self, human_message) -> Response:
        """Call the ChatBot.

        :param human_message: The human message to use.
        :return: The response to the human message, primed by the system prompt.
        """
        self.chat_history.append(HumanMessage(content=human_message))
        response = self.model(self.chat_history)
        self.chat_history.append(response)
        return response

    def __repr__(self):
        """Return a string representation of the ChatBot.

        :return: A string representation of the ChatBot.
        """
        representation = ""

        for message in self.chat_history:
            if isinstance(message, SystemMessage):
                prefix = "[System]\n"
            elif isinstance(message, HumanMessage):
                prefix = "[Human]\n"
            elif isinstance(message, AIMessage):
                prefix = "[AI]\n"

            representation += f"{prefix}{message.content}" + "\n\n"
        return representation

    def panel(self, show: bool = True):
        """Create a Panel app that wraps a LlamaBot.

        :param show: Whether to show the app.
            If False, we return the Panel app directly.
            If True, we call `.show()` on the app.
        :return: The Panel app, either showed or directly.
        """

        text_input = pn.widgets.TextAreaInput(placeholder="Start chatting...")
        chat_history = pn.Column(*[])
        send_button = pn.widgets.Button(name="Send", button_type="primary")

        def b(event):
            """Button click handler.

            :param event: The button click event.
            """
            chat_messages = []
            for message in self.chat_history:
                if isinstance(message, SystemMessage):
                    pass
                elif isinstance(message, HumanMessage):
                    chat_markdown = pn.pane.Markdown(f"Human: {message.content}")
                    chat_messages.append(chat_markdown)
                elif isinstance(message, AIMessage):
                    chat_markdown = pn.pane.Markdown(f"Bot: {message.content}")
                    chat_messages.append(chat_markdown)

            chat_messages.append(pn.pane.Markdown(f"Human: {text_input.value}"))
            bot_reply = pn.pane.Markdown("Bot: ")
            chat_messages.append(bot_reply)
            chat_history.objects = chat_messages
            markdown_handler = PanelMarkdownCallbackHandler(bot_reply)
            self.model.callback_manager.set_handler(markdown_handler)
            self(text_input.value)
            text_input.value = ""

        send_button.on_click(b)
        input_pane = pn.Row(text_input, send_button)
        output_pane = pn.Column(chat_history, scroll=True, height=500)

        main = pn.Row(input_pane, output_pane)
        app = pn.template.FastListTemplate(
            site="ChatBot",
            title="ChatBot",
            main=main,
            main_max_width="768px",
        )
        if show:
            return app.show()
        return app


[AI]
Here are some tests for the ChatBot class using PyTest:
import pytest
from your_module import ChatBot, SystemMessage, HumanMessage

def test_init():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    assert len(chatbot.chat_history) == 2
    assert isinstance(chatbot.chat_history[0], SystemMessage)
    assert isinstance(chatbot.chat_history[1], SystemMessage)
    assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."
    assert chatbot.chat_history[1].content == system_prompt

def test_call():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    response = chatbot(human_message)
    assert len(chatbot.chat_history) == 4
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], response.__class__)
    assert chatbot.chat_history[2].content == human_message

def test_repr():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    chatbot(human_message)
    expected_repr = (
        "[System]\n"
        "Always return Markdown-compatible text.\n\n"
        "[System]\n"
        "You are a helpful assistant.\n\n"
        "[Human]\n"
        "What's the weather like today?\n\n"
        "[AI]\n"
    )
    assert repr(chatbot) == expected_repr

def test_panel():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    app = chatbot.panel()
    assert isinstance(app, type(pn.template.FastListTemplate()))
Note that the `test_panel` function assumes that the `pn` module is available in the test environment. If it is not, you may need to install it or mock it out for testing purposes.

Also note that the `test_call` function assumes that the `response` object has a `__class__` attribute that can be used to check its type. If this is not the case, you may need to modify the test to use a different method of checking the type of the response object.

Finally, note that these tests are not exhaustive and may not cover all possible edge cases or error conditions. You may want to add additional tests to ensure that the `ChatBot` class is working correctly in all scenarios.
On the other hand, accessing the .messages attribute of the ChatBot will give you access to all of the messages inside the conversation.
code_tester.messages
[HumanMessage(content='\nclass ChatBot:\n    """Chat Bot that is primed with a system prompt, accepts a human message.\n\n    Automatic chat memory management happens.\n\n    h/t Andrew Giessel/GPT4 for the idea.\n    """\n\n    def __init__(self, system_prompt, temperature=0.0, model_name="gpt-4"):\n        """Initialize the ChatBot.\n\n        :param system_prompt: The system prompt to use.\n        :param temperature: The model temperature to use.\n            See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature\n            for more information.\n        :param model_name: The name of the OpenAI model to use.\n        """\n        self.model = ChatOpenAI(\n            model_name=model_name,\n            temperature=temperature,\n            streaming=True,\n            verbose=True,\n            callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n        )\n        self.chat_history = [\n            SystemMessage(content="Always return Markdown-compatible text."),\n            SystemMessage(content=system_prompt),\n        ]\n\n    def __call__(self, human_message) -> Response:\n        """Call the ChatBot.\n\n        :param human_message: The human message to use.\n        :return: The response to the human message, primed by the system prompt.\n        """\n        self.chat_history.append(HumanMessage(content=human_message))\n        response = self.model(self.chat_history)\n        self.chat_history.append(response)\n        return response\n\n    def __repr__(self):\n        """Return a string representation of the ChatBot.\n\n        :return: A string representation of the ChatBot.\n        """\n        representation = ""\n\n        for message in self.chat_history:\n            if isinstance(message, SystemMessage):\n                prefix = "[System]\n"\n            elif isinstance(message, HumanMessage):\n                prefix = "[Human]\n"\n            elif isinstance(message, AIMessage):\n                prefix = "[AI]\n"\n\n            representation += f"{prefix}{message.content}" + "\n\n"\n        return representation\n\n    def panel(self, show: bool = True):\n        """Create a Panel app that wraps a LlamaBot.\n\n        :param show: Whether to show the app.\n            If False, we return the Panel app directly.\n            If True, we call `.show()` on the app.\n        :return: The Panel app, either showed or directly.\n        """\n\n        text_input = pn.widgets.TextAreaInput(placeholder="Start chatting...")\n        chat_history = pn.Column(*[])\n        send_button = pn.widgets.Button(name="Send", button_type="primary")\n\n        def b(event):\n            """Button click handler.\n\n            :param event: The button click event.\n            """\n            chat_messages = []\n            for message in self.chat_history:\n                if isinstance(message, SystemMessage):\n                    pass\n                elif isinstance(message, HumanMessage):\n                    chat_markdown = pn.pane.Markdown(f"Human: {message.content}")\n                    chat_messages.append(chat_markdown)\n                elif isinstance(message, AIMessage):\n                    chat_markdown = pn.pane.Markdown(f"Bot: {message.content}")\n                    chat_messages.append(chat_markdown)\n\n            chat_messages.append(pn.pane.Markdown(f"Human: {text_input.value}"))\n            bot_reply = pn.pane.Markdown("Bot: ")\n            chat_messages.append(bot_reply)\n            chat_history.objects = chat_messages\n            markdown_handler = PanelMarkdownCallbackHandler(bot_reply)\n            self.model.callback_manager.set_handler(markdown_handler)\n            self(text_input.value)\n            text_input.value = ""\n\n        send_button.on_click(b)\n        input_pane = pn.Row(text_input, send_button)\n        output_pane = pn.Column(chat_history, scroll=True, height=500)\n\n        main = pn.Row(input_pane, output_pane)\n        app = pn.template.FastListTemplate(\n            site="ChatBot",\n            title="ChatBot",\n            main=main,\n            main_max_width="768px",\n        )\n        if show:\n            return app.show()\n        return app\n\n', role='user'),
 AIMessage(content='Here are some tests for the ChatBot class using PyTest:\n```python\nimport pytest\nfrom your_module import ChatBot, SystemMessage, HumanMessage\n\ndef test_init():\n    system_prompt = "You are a helpful assistant."\n    chatbot = ChatBot(system_prompt)\n    assert len(chatbot.chat_history) == 2\n    assert isinstance(chatbot.chat_history[0], SystemMessage)\n    assert isinstance(chatbot.chat_history[1], SystemMessage)\n    assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."\n    assert chatbot.chat_history[1].content == system_prompt\n\ndef test_call():\n    system_prompt = "You are a helpful assistant."\n    chatbot = ChatBot(system_prompt)\n    human_message = "What\'s the weather like today?"\n    response = chatbot(human_message)\n    assert len(chatbot.chat_history) == 4\n    assert isinstance(chatbot.chat_history[2], HumanMessage)\n    assert isinstance(chatbot.chat_history[3], response.__class__)\n    assert chatbot.chat_history[2].content == human_message\n\ndef test_repr():\n    system_prompt = "You are a helpful assistant."\n    chatbot = ChatBot(system_prompt)\n    human_message = "What\'s the weather like today?"\n    chatbot(human_message)\n    expected_repr = (\n        "[System]\\n"\n        "Always return Markdown-compatible text.\\n\\n"\n        "[System]\\n"\n        "You are a helpful assistant.\\n\\n"\n        "[Human]\\n"\n        "What\'s the weather like today?\\n\\n"\n        "[AI]\\n"\n    )\n    assert repr(chatbot) == expected_repr\n\ndef test_panel():\n    system_prompt = "You are a helpful assistant."\n    chatbot = ChatBot(system_prompt)\n    app = chatbot.panel()\n    assert isinstance(app, type(pn.template.FastListTemplate()))\n```\nNote that the `test_panel` function assumes that the `pn` module is available in the test environment. If it is not, you may need to install it or mock it out for testing purposes.\n\nAlso note that the `test_call` function assumes that the `response` object has a `__class__` attribute that can be used to check its type. If this is not the case, you may need to modify the test to use a different method of checking the type of the response object.\n\nFinally, note that these tests are not exhaustive and may not cover all possible edge cases or error conditions. You may want to add additional tests to ensure that the `ChatBot` class is working correctly in all scenarios.', role='assistant')]
You can even access any arbitrary message.
print(code_tester.messages[-1].content)
Here are some tests for the ChatBot class using PyTest:
import pytest
from your_module import ChatBot, SystemMessage, HumanMessage

def test_init():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    assert len(chatbot.chat_history) == 2
    assert isinstance(chatbot.chat_history[0], SystemMessage)
    assert isinstance(chatbot.chat_history[1], SystemMessage)
    assert chatbot.chat_history[0].content == "Always return Markdown-compatible text."
    assert chatbot.chat_history[1].content == system_prompt

def test_call():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    response = chatbot(human_message)
    assert len(chatbot.chat_history) == 4
    assert isinstance(chatbot.chat_history[2], HumanMessage)
    assert isinstance(chatbot.chat_history[3], response.__class__)
    assert chatbot.chat_history[2].content == human_message

def test_repr():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    human_message = "What's the weather like today?"
    chatbot(human_message)
    expected_repr = (
        "[System]\n"
        "Always return Markdown-compatible text.\n\n"
        "[System]\n"
        "You are a helpful assistant.\n\n"
        "[Human]\n"
        "What's the weather like today?\n\n"
        "[AI]\n"
    )
    assert repr(chatbot) == expected_repr

def test_panel():
    system_prompt = "You are a helpful assistant."
    chatbot = ChatBot(system_prompt)
    app = chatbot.panel()
    assert isinstance(app, type(pn.template.FastListTemplate()))
Note that the `test_panel` function assumes that the `pn` module is available in the test environment. If it is not, you may need to install it or mock it out for testing purposes.

Also note that the `test_call` function assumes that the `response` object has a `__class__` attribute that can be used to check its type. If this is not the case, you may need to modify the test to use a different method of checking the type of the response object.

Finally, note that these tests are not exhaustive and may not cover all possible edge cases or error conditions. You may want to add additional tests to ensure that the `ChatBot` class is working correctly in all scenarios.
ChatBots in a Jupyter Notebook

Imagebot
This notebook shows how to use the ImageBot API to generate images from text. Underneath the hood, it uses the OpenAI API. This bot can be combined with other bots (e.g. SimpleBot) to create rich content.
# PDF Chatbot
%load_ext autoreload
%autoreload 2
# Download the paper PDF from Dropbox
import requests

headers = {"user-agent": "Wget/1.16 (linux-gnu)"}  # <-- the key is here!
r = requests.get(
    "https://www.dropbox.com/s/wrixlu7e3noi43q/Ma%20et%20al.%20-%202021%20-%20Machine-Directed%20Evolution%20of%20an%20Imine%20Reductase%20f.pdf?dl=0",
    stream=True,
    headers=headers,
)
pdf_fname = "/tmp/machine-directed-evolution.pdf"
with open(pdf_fname, "wb") as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)
from llamabot import QueryBot
from pyprojroot import here

# If you're prototyping with your own PDF, uncomment the following code and use it instead of the saved index path:
# bot = QueryBot(
#     "You are a bot that reads a PDF book and responds to questions about that book.",
#     document_paths=[pdf_fname],
#     collection_name="machine-directed-evolution-paper",
#     model_name="mistral/mistral-medium",
# )

bot = QueryBot(
    "You are a bot that reads a PDF book and responds to questions about that book.",
    collection_name="machine-directed-evolution-paper",
    model_name="mistral/mistral-medium",
)
prompt = "I'd like to use the workflow of this paper to educate colleagues. What are the main talking points I should use?"
bot(prompt)

prompt = "My colleagues are interested in evolving another enzyme. However, they may be unaware of how machine learning approaches will help them there. Based on this paper, what can I highlight that might overcome their lack of knowledge?"
bot(prompt)

prompt = "What data from the paper helped show this point, 'Machine-directed evolution is an efficient strategy for enzyme engineering, as it can help navigate enzyme sequence space more effectively and reduce the number of enzyme variants to be measured en route to a desirable enzyme under realistic process conditions.'?"
bot(prompt)

prompt = "How can I succinctly present the SGM vs. EPPCR results to my colleagues? Or in other words, how would Richard Feynman present these results?"
bot(prompt)
Using SimpleBot below should prove that we are indeed querying a book and not just relying on the LLM's training set.
from llamabot import SimpleBot


sbot = SimpleBot("You are a bot that responds to human questions.")
sbot(prompt)
from llamabot import QueryBot
import git
from IPython.display import display, Markdown
import tempfile
from pathlib import Path

# Create a temporary directory
temp_dir = tempfile.TemporaryDirectory(dir="/tmp")


repo_url = "https://github.com/duckdb/duckdb-web"
# Clone the repository into the temporary directory
repo = git.Repo.clone_from(repo_url, temp_dir.name)

# Set the root directory to the cloned repository
root_dir = Path(temp_dir.name)
from slugify import slugify

# Collect the documentation source files from the cloned repository.
# (`source_files` was not defined in the original notebook as published;
# gathering the repository's Markdown files is one reasonable choice.)
source_files = list(root_dir.rglob("*.md"))

bot = QueryBot(
    system_prompt="You are an expert in the code repository given to you.",
    collection_name=slugify(repo_url),
    document_paths=source_files,
)
bot("Give me an example of lambda functions in DuckDB.")

bot("What is your view on building a digital portfolio?")

bot("What were your experiences with the SciPy conference?")

bot("What tutorials did you attend at the SciPy conference in 2023?")
from llamabot.file_finder import recursive_find
from pyprojroot import here

source_python_files = recursive_find(root_dir=here() / "llamabot", file_extension=".py")

codebot = QueryBot(
    "You are an expert in code Q&A.",
    collection_name="llamabot",
    document_paths=source_python_files,
    model_name="gpt-4-1106-preview",
)
codebot("How do I find all the files in a directory?")

codebot("Which Bot do I use to chat with my documents?")

codebot("Explain to me the architecture of SimpleBot.")

codebot("What are the CLI functions available in LlamaBot?")
from llamabot.bot.qabot import DocQABot

codebot = DocQABot(
    collection_name="llamabot",
)
codebot.add_documents(document_paths=source_python_files)
codebot(
    "Does LlamaBot provide a function to find all files recursively in a directory?"
)
Eric Ma Q&A
This shows how to build a blog Q&A bot using the text contents of Eric Ma's blog.
Setup: Download blog data

LlamaBot Code Query

Recorder
from llamabot import SimpleBot, PromptRecorder
bot = SimpleBot("You are a bot.")
# Try three different prompts.

prompt1 = (
    "You are a fitness coach who responds in 25 words or less. How do I gain muscle?"
)
prompt2 = "You are an expert fitness coach who responds in 100 words or less. How do I gain muscle?"
prompt3 = "You are an expert fitness coach who responds in 25 words or less and will not give lethal advice. How do I gain muscle?"

recorder = PromptRecorder()
with recorder:
    bot(prompt1)
    bot(prompt2)
    bot(prompt3)
recorder.prompts_and_responses
import pandas as pd

pd.DataFrame(recorder.prompts_and_responses)
prompt4 = "You are an expert fitness coach who responds in 25 words or less, and you help people who only have access to body weight exercises. How do I gain muscle?"

with recorder:
    bot(prompt4)
One challenge I've found when working with prompts is keeping track of what I get back when I try out different prompts. Copying and pasting is clearly not what I'd like to do. So I decided to build functionality into Llamabot that records prompts and the responses returned by GPT.
To run this, execute the following command from the repo root:
panel run docs/simple_panel.ipynb
SimpleBot Apps

Build the UI

Simplebot
%load_ext autoreload
%autoreload 2
Let's say we have the text of a blog...
with open("../../data/blog_text.txt", "r") as f:
    blog_text = f.read()
blog_text[0:100] + "..."
"# Automatically write awesome commit messages\n\nAs a data scientist, I work with Git.\n\nIf you're anyt..."
And we'd like to create a function that takes in the text and gives us a draft LinkedIn post, complete with emojis, that is designed to entice others to read the blog post. LLaMaBot's SimpleBot lets us build that function easily.
from llamabot import SimpleBot

system_prompt = """You are a LinkedIn post generator bot.
A human will give you the text of a blog post that they've authored,
and you will compose a LinkedIn post that advertises it.
The post is intended to hook a reader into reading the blog post.
The LinkedIn post should be written with one line per sentence.
Each sentence should begin with an emoji appropriate to that sentence.
The post should be written in professional English and in first-person tone for the human.
"""

linkedin = SimpleBot(
    system_prompt=system_prompt,
    stream_target="stdout",  # this is the default!
)
Note that SimpleBot by default will always stream. All that you need to configure is where you want to stream to.
With linkedin, we can now pass in the blog text and - voila! - get back a draft LinkedIn post.
linkedin_post = linkedin(blog_text)
🚀 Excited to share my latest blog post on crafting meaningful commit messages!
👀 Are your commit messages lacking detail and clarity?
🤔 Ever wished for a tool that could automatically generate informative commit messages for you?
🌟 Introducing my CLI tool within `llamabot` that crafts commit messages according to the Conventional Commits specification.
🔍 With an OpenAI API key, GPT-4-32k will write a commit message that provides detailed insights into the changes made.
🎉 The benefits of using meaningful commit messages are manifold - from improving collaboration to aiding in debugging and issue resolution.
🔗 Check out the full blog post to learn more about the impact of meaningful commit messages and how to install `llamabot`! #Git #DataScience #Productivity
Now, you can edit it to your heart's content! :-)
Next up, we have streaming that is compatible with Panel's Chat interface, which expects the text to be returned in its entirety as it is being built up.
And finally, we have streaming via the API. We return a generator that yields individual parts of text as they are being generated.
linkedin_api = SimpleBot(
    system_prompt=system_prompt,
    stream_target="api",
)

linkedin_post = linkedin_api(blog_text)
for post in linkedin_post:
    print(post, end="")
If you have an Ollama server running, you can hit its API using SimpleBot. The prerequisite is that you have already run ollama pull <modelname> to download the model to the Ollama server.
print(system_prompt)
import os
from dotenv import load_dotenv

load_dotenv()

linkedin_ollama = SimpleBot(
    model_name="ollama/mistral",  # Specifying Ollama via the model_name argument is necessary!
    system_prompt=system_prompt,
    stream_target="stdout",  # this is the default!
    api_base=f"http://{os.getenv('OLLAMA_SERVER')}:11434",
)
linkedin_post = linkedin_ollama(blog_text)
LLaMaBot's SimpleBot in under 5 minutes

Doc Processor
Note
This tutorial was written by GPT4 and edited by a human.
The doc processor is a Python script designed to preprocess documents by loading them from various file formats and splitting them into smaller sub-documents. It works in two main steps:
(1) Loading documents: The magic_load_doc function is used to load a document from a file. It automatically detects the file format based on the file extension and uses the appropriate loader to read the content. Supported file formats include PDF, DOCX, PPTX, XLSX, Markdown, and IPYNB. If the file format is not recognized, it is treated as a plain text file.
(2) Splitting documents: The split_document function is used to split a document into smaller sub-documents using a token text splitter. You can specify the maximum length of each sub-document (chunk_size) and the number of tokens to overlap between each sub-document (chunk_overlap). The function returns a list of sub-documents.
To use the doc processor, simply import the required functions and call them with the appropriate parameters. For example:
from llamabot.doc_processor import magic_load_doc, split_document

# Load a document from a file
file_path = "path/to/your/document.pdf"
documents = magic_load_doc(file_path)

# Split the document into sub-documents
chunk_size = 2000
chunk_overlap = 0
sub_documents = [split_document(doc, chunk_size, chunk_overlap) for doc in documents]
This will give you a list of sub-documents that can be further processed inside QueryBot.
File Handling in Python: A Tutorial
In this tutorial, we will explore a module that provides functions for file handling in Python. The module contains three main functions:
recursive_find(root_dir: Path, file_extension: str) -> List[Path]: Find all files in a given path with a given extension.
check_in_git_repo(path) -> bool: Check if a given path is in a git repository.
read_file(path: Path) -> str: Read a file.
Let's dive into each function and see how they can be used.
The recursive_find function allows you to find all files with a specific extension within a given directory and its subdirectories. This can be useful when you want to process all files of a certain type in a project.
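The code example that originally accompanied this explanation did not survive; here is a minimal sketch of what such a call looks like, built on `pathlib` alone so it mirrors `recursive_find`'s signature (the `my_project` directory name is made up for illustration):

```python
from pathlib import Path
from typing import List


def recursive_find_sketch(root_dir: Path, file_extension: str) -> List[Path]:
    """Find all files under root_dir whose name ends with file_extension."""
    # rglob searches the directory and all of its subdirectories.
    return sorted(root_dir.rglob(f"*{file_extension}"))


# Hypothetical usage: list all Python files under "my_project/".
# python_files = recursive_find_sketch(Path("my_project"), ".py")
```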
This will output a list of Path objects representing all the Python files found in the specified directory and its subdirectories.
2. Checking if a Path is in a Git Repository
The check_in_git_repo function allows you to check if a given path is part of a git repository. This can be useful when you want to ensure that your code is only executed within a version-controlled environment.
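The accompanying code example was lost; the idea can be sketched as walking up the directory tree looking for a `.git` entry (this is an illustrative reimplementation, not necessarily how llamabot's actual `check_in_git_repo` is written):

```python
from pathlib import Path


def check_in_git_repo_sketch(path) -> bool:
    """Return True if `path` sits inside a git repository.

    Walk from the path up through its parents, looking for a `.git`
    directory or file (worktrees use a `.git` file).
    """
    current = Path(path).resolve()
    for candidate in [current, *current.parents]:
        if (candidate / ".git").exists():
            return True
    return False
```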
This will output True if the specified path is part of a git repository, and False otherwise.
3. Reading a File
The read_file function allows you to read the contents of a file. This can be useful when you want to process the contents of a file, such as analyzing code or parsing data.
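The original example did not survive either; a helper like this is essentially a thin wrapper over `pathlib` (a sketch, assuming UTF-8 text files):

```python
from pathlib import Path


def read_file_sketch(path: Path) -> str:
    """Read a file's contents into a single string (UTF-8 assumed)."""
    return Path(path).read_text(encoding="utf-8")
```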
In this tutorial, we have explored a module that provides functions for file handling in Python. By using these functions, you can easily find files with specific extensions, check if a path is part of a git repository, and read the contents of a file. These functions can be combined to create powerful file processing pipelines in your Python projects.
This tutorial was written by GPT4 and edited by a human.
The prompt recorder is a class named PromptRecorder that helps in recording prompts and responses. It works as a context manager, allowing you to record prompts and responses within a specific context. Here's a brief overview of how it works:
The PromptRecorder class is initialized with an empty list called prompts_and_responses to store the prompts and responses.
When entering the context manager using the with statement, the __enter__() method is called, which sets the current instance of the PromptRecorder as the active recorder in the prompt_recorder_var context variable.
To log a prompt and response, the log() method is called with the prompt and response as arguments. This method appends the prompt and response as a dictionary to the prompts_and_responses list.
The autorecord() function is provided to be called within every bot. It checks if there is an active PromptRecorder instance in the context and logs the prompt and response using the log() method.
When exiting the context manager, the __exit__() method is called, which resets the prompt_recorder_var context variable to None and prints a message indicating that the recording is complete.
The PromptRecorder class also provides methods to represent the recorded data in different formats, such as a string representation (__repr__()), an HTML representation (_repr_html_()), a pandas DataFrame representation (dataframe()), and a panel representation (panel()).
By using the PromptRecorder class as a context manager, you can easily record prompts and responses within a specific context and then analyze or display the recorded data in various formats.
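The recording pattern described above can be sketched as follows. This is an illustrative reconstruction of the context-manager and `autorecord()` mechanics, not the exact llamabot source:

```python
from contextvars import ContextVar

# Context variable holding the currently active recorder, if any.
prompt_recorder_var: ContextVar = ContextVar("prompt_recorder", default=None)


class PromptRecorder:
    """Record prompt/response pairs within a `with` block."""

    def __init__(self):
        self.prompts_and_responses = []

    def __enter__(self):
        # Register this instance as the active recorder.
        prompt_recorder_var.set(self)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Deactivate recording and signal completion.
        prompt_recorder_var.set(None)
        print("Recording complete.")

    def log(self, prompt: str, response: str):
        self.prompts_and_responses.append(
            {"prompt": prompt, "response": response}
        )


def autorecord(prompt: str, response: str):
    """Called within every bot: log only if a recorder is active."""
    recorder = prompt_recorder_var.get()
    if recorder is not None:
        recorder.log(prompt, response)
```

Usage mirrors the description above: enter the recorder as a context manager, and any bot call that invokes `autorecord()` inside the block gets captured.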
This new version introduces experiments with Llamahub document loaders, updates workspace settings, and makes Panel an official dependency. It also includes a new Panel example and a correction in the docstring for accuracy.
This version introduces automatic recording of prompts, improves the recording process, and verifies its functionality. It also includes a cleanup of notebooks and adds loguru as a dependency.
This new version includes an update to the example about querying PDFs and a bug fix related to query nodes. The version has been bumped from 0.0.12 to 0.0.13.
Fixed a bug where query nodes were hard-coded to 3, which limited the flexibility of the system. Now, the number of query nodes is dynamic (6802ac) (Eric Ma)
This new version includes several enhancements and bug fixes to improve the overall performance and user experience of the Llamabot. The Python environment and llama_index versions have been pinned for stability. The chatbot panel app now uses a width of 600 pixels for better UI. The QueryBot system message now applies SEMBR for improved readability.
The chatbot panel app now uses a width of 600 pixels for a more user-friendly interface (7e2f05) (Eric Ma)
Faux chat history of length 6000 tokens is now used as context for further responses in chatbot, enhancing the chatbot's response accuracy (02ef9d) (Eric Ma)
SEMBR has been applied to the QueryBot system message for improved readability (b3c53c) (Eric Ma)
This new version includes updates to the versions of bokeh used in the project. The bokeh version has been pinned to ensure compatibility and stability of the project.
This new version includes a variety of enhancements and new features, including the addition of a coding bot panel app, a refactor of the CLI, and updates to the notebooks and tutorial materials. The version also includes the integration of pyzotero to dependencies and the beginning of a prototype for coding and diffbots in a library of bots.
The version of langchain used in the project has been pinned, ensuring stability and compatibility with this version of the software (631c29) (Eric Ma)
Fixed the issue with the Querybot index retriever by updating it with the refactored Llamaindex code. This should improve the accuracy and efficiency of the index retriever (12a87b) (Eric Ma)
This version includes several enhancements and new features, including the addition of new modules and functions, improvements to documentation, and minor updates to notebooks.
This version includes a number of new features, improvements, and bug fixes. The main focus of this release was to enhance the testing environment, improve code generation, and add new functionalities.
This new version includes a significant change in the underlying parsing library, moving from astunparse to astor. This change is expected to improve the performance and reliability of the Llamabot.
This version introduces several new features and improvements to the LlamaBot CLI, including the addition of git diff display and commit message generation, a repr method for the Dummy class, and handling for no staged changes in commit_message. It also includes several refactors and a documentation update.
Added git diff display and commit message generation functionality to the LlamaBot CLI. This feature imports the get_git_diff function from llamabot.code_manipulation, creates a SimpleBot instance for commit message generation, defines a commit_message function with a text.prompt decorator, and calls commitbot with the generated commit message. (1a6104) (Eric Ma)
Added a repr method to the Dummy class in dummy.py. This provides a string representation of the object, making it easier to inspect and debug instances of the Dummy class. (ae3e7c) (Eric Ma)
Updated commit_message function in cli/git.py to check for staged changes before generating a commit message. If no staged changes are found, a message is printed and the function returns. The get_git_diff function in code_manipulation.py was also updated to return an empty string if there are no staged changes. (ed7a3d) (Eric Ma)
Fixed typos in the llamabot CLI git module. Changes include renaming git.app to git.gitapp in llamabot/cli/init.py, adding missing parentheses to decorators in llamabot/cli/git.py, and replacing "docstring" with "commit message" in the user prompt. (860930) (Eric Ma)
Refactored Typer app and command decorators in git.py. The app was renamed to gitapp for better context, and decorators were updated to use the new gitapp variable. (f7af8b) (Eric Ma)
Removed the unnecessary hello command from the git.py file in the llamabot CLI. This simplifies the codebase and focuses on the core functionality. (8f0b9d) (Eric Ma)
Added a detailed explanation of the Conventional Commits specification to the git.py file. This outlines the various commit types, scopes, and footers, as well as their correlation with Semantic Versioning. This information will help users understand the importance of following the Conventional Commits specification when crafting their commit messages. (ca9b1c) (Eric Ma)
This new version introduces an autocommit option to the commit_message function in llamabot/cli/git.py. This feature allows for automatic committing of changes using the generated commit message when the autocommit parameter is set to True.
Added an autocommit option to the commit_message function in llamabot/cli/git.py. When set to True, changes are automatically committed using the generated commit message. (5c202a) (Eric Ma)
The commit_message function in llamabot/cli/git.py has been renamed to commit to better reflect its purpose of committing staged changes (ae7b10) (Eric Ma)
Automatic push to origin after commit has been added. This feature simplifies the workflow and ensures that changes are synced with the remote repository. (7a0151) (Eric Ma)
Pytest-mock has been added to the pr-tests workflow. This enables mocking in test cases. (ab1cb0) (Eric Ma)
This new version includes several updates to the documentation and build process, as well as the addition of an all-contributors section. The version also includes a fix to the build command.
A new version command has been added to the llamabot CLI. This command prints the current version of the application. (a88028) (Eric Ma)
The .bumpversion.cfg file has been updated to include llamabot/version.py for version updates. This ensures that the version number is updated consistently across the application. (a88028) (Eric Ma)
This new version introduces an enhancement to the get_valid_input function and a new feature that allows users to manually edit the generated commit message using their system's default text editor.
Manual commit message editing option has been added. Users can now manually edit the generated commit message using their system's default text editor. This is done by creating a temporary file with the generated message, opening it in the editor, and reading the edited message back into the script. The 'm' option is added to the user input prompt to trigger manual editing. (37baea) (Eric Ma)
The get_valid_input function in cli/utils has been refactored for better input validation. A valid_inputs parameter has been added to the function, the input prompt has been updated to include valid_inputs, and the input validation now checks against valid_inputs. The error message has also been updated to display valid_inputs options. (b32986) (Eric Ma)
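The temp-file-and-editor flow described in the manual-editing entry can be sketched as follows. The helper name `edit_message_in_editor` is hypothetical; the sketch writes the generated message to a temporary file, opens it in the user's default editor, and reads the edited result back.

```python
import os
import subprocess
import tempfile


def edit_message_in_editor(message: str) -> str:
    """Open `message` in $EDITOR and return the edited text.

    Hypothetical helper illustrating the flow described above.
    """
    editor = os.environ.get("EDITOR", "nano")
    # Write the generated message to a temporary file.
    with tempfile.NamedTemporaryFile(
        mode="w+", suffix=".txt", delete=False
    ) as tmp:
        tmp.write(message)
        tmp_path = tmp.name
    # Block until the user closes their editor.
    subprocess.run([editor, tmp_path])
    # Read the (possibly edited) message back.
    with open(tmp_path) as f:
        edited = f.read()
    os.unlink(tmp_path)
    return edited
```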
This new version includes several enhancements to the code workflow and code manipulation features, as well as an update to the default model_name in various bot classes.
Added new code cells and autoreload in code_workflow.ipynb. This includes the addition of new empty code cells for future implementation, a placeholder in one of the cells, autoreload magic commands for a better development experience, and the importation and demonstration of the get_dependencies function usage (5f6880) (Eric Ma)
Introduced the get_dependencies function to retrieve a list of dependencies for a specified object in a source file. Also fixed the return type annotation for the get_git_diff function and added a test case for the get_dependencies function in test_code_manipulation.py (2d816f) (Eric Ma)
Updated the default model_name parameter value from "gpt-4" to "gpt-4-32k" in the constructors of ChatBot, QueryBot, and SimpleBot classes (c93ba3) (Eric Ma)
Reorganized imports and improved test generation. This includes moving the get_valid_input import to the top of llamabot/cli/git.py, adding the get_dependencies import to llamabot/cli/python.py, and updating the tests function in llamabot/prompt_library/coding.py to include dependent source files for better test generation (f75202) (Eric Ma)
This new version includes a variety of enhancements and new features, including the addition of new notebooks, improvements to the Zotero and QueryBot functionalities, and the integration of Google Calendar API.
Added blogging assistant and gcal notebooks for blog tagger, summarizer, and Google Calendar related tasks. Also, updated existing notebooks for cache and Zotero with new features and improvements (2378760) (Eric Ma)
Implemented updates to all attendees on event creation and update in Google Calendar (57f80de) (Eric Ma)
This new version includes a refactor of the get_git_diff function in the code_manipulation module. The default value for the repo_path parameter has been changed to improve the function's usability.
The default value of the repo_path parameter in the get_git_diff function has been changed from here() to None. Additionally, a conditional check has been added to set repo_path to here() if it is None. This change makes the function more flexible and easier to use. (96e69b) (Eric Ma)
The tutorialbot has been refactored from a SimpleBot instance to a function that returns a SimpleBot instance. This change enhances the flexibility of the bot, allowing for more diverse use cases. (d85426) (Eric Ma)
Improved progress reporting in the chat_paper function of the Zotero feature. The changes include moving the retrieverbot response and paper_key retrieval outside of the progress context, adding progress tasks for embedding Zotero library, downloading paper, and initializing docbot, and wrapping relevant sections of code with progress context (da0fc0) (Eric Ma)
This new version introduces a tutorial for the Zotero CLI feature of Llamabot and refactors the tutorial generation process for improved code readability and maintainability.
A comprehensive tutorial for using the Llamabot Zotero CLI has been added. This tutorial includes sections on prerequisites, configuration, syncing Zotero items, and chatting with a paper, with examples and explanations provided for each step. (711011) (Eric Ma)
The tutorial generation process has been updated. Now, the tutorialbot is instantiated before calling the module_tutorial_writer, which improves code readability and maintainability. (99f487) (Eric Ma)
This new version introduces the QueryBot prototype and its corresponding tests. It also includes improvements in documentation and example notebooks. The version also includes some housekeeping changes like ignoring certain files and directories.
The project version has been bumped from 0.0.4 to 0.0.5. (e94a28) (Eric Ma)
Docstrings have been added to the project for better code understanding and readability. (2eb8c62) (Eric Ma)
The directory 'data/' is now ignored by Git. This prevents unnecessary tracking of changes in this directory. (4252cd4) (Eric Ma)
The 'mknotebooks' dependency has been moved to the pip section. (e5f0e9d) (Eric Ma)
Temporary markdown files created by 'mknotebooks' are now ignored by Git. This prevents unnecessary tracking of these temporary files. (1e4821d) (Eric Ma)
The README file has been updated twice to provide the latest information about the project. (b3e02e2, 32f32db) (Eric Ma)
This new version introduces enhanced functionality to the chat_paper function and the get_key prompt in zotero.py, adds a streaming option to the QueryBot class in querybot.py, and removes a debugging print statement in doc_processor.py.
The chat_paper function in zotero.py now supports multiple paper keys, provides a list of paper titles for the user to choose from, and displays a summary of the selected paper (1c47a8) (Eric Ma)
The get_key prompt in zotero.py has been updated to return a list of keys instead of a single key, improving the user experience (1c47a8) (Eric Ma)
A new 'stream' parameter has been added to the QueryBot class in querybot.py, allowing users to choose whether to stream the chatbot or not. By default, 'stream' is set to True (01ada0) (Eric Ma)
This new version introduces several enhancements to the Zotero integration in the llamabot project, improving performance, user interaction, and error handling. It also includes important bug fixes and documentation updates.
Added a sync option to the ZoteroLibrary class, improving performance by reducing unnecessary queries to Zotero when the library can be loaded from a local file (a3ea1b) (Eric Ma)
Integrated the standalone sync command from zotero.py into the chat command and refactored ZoteroLibrary and ZoteroItem classes to handle synchronization and downloading of Zotero items (a75308) (Eric Ma)
Updated the guidelines for writing commit messages in the git.py file (a98ba93) (Eric Ma)
Added support for accessing nested keys in the ZoteroItem class (216abc) (Eric Ma)
Improved task progress visibility and command help in the Zotero integration (895079) (Eric Ma)
Enhanced the chat function in zotero.py with an interactive prompt and an exit command (bf043b) (Eric Ma)
Updated file handling in ZoteroItem class, including a fallback to write an abstract.txt file when no PDF is available (8b9fa4) (Eric Ma)
Simplified progress task handling and improved output formatting in the Zotero integration (26dc67) (Eric Ma)
Improved user interaction and error handling in Zotero integration, including persistent progress display, better progress tracking, real-time streaming, and continuous interaction (347a08) (Eric Ma)
Ensured that the get_key function in zotero.py strictly returns JSON format (34b82d) (Eric Ma)
Enhanced Zotero library and item classes, including faster lookup, better PDF handling, and improved functionality and usability (a813c5) (Eric Ma)
The tutorial content for the Llamabot Zotero CLI has been updated to provide a more accurate and user-friendly guide. Changes include rewording the introduction, updating the prerequisites section, removing the section on syncing Zotero items, and adding sections on various topics such as chatting with a paper, retrieving keys, downloading papers, and asking questions (fab7d3) (Eric Ma)
The field declaration for 'zot' in ZoteroLibrary class has been changed to use default_factory instead of default. This ensures that the load_zotero function is called when a new instance of ZoteroLibrary is created, rather than at import time (c65618) (Eric Ma)
This new version introduces significant improvements to the chat recording and saving mechanism of the Llamabot. It also includes a minor refactor in the Zotero module.
Added chat recording and saving functionality. This includes adding case-converter to the project dependencies, importing date and snakecase from datetime and caseconverter respectively, adding PromptRecorder to record the chat, modifying the chat function to record and save the chat with a filename in snakecase format prefixed with the current date, and adding a save method to PromptRecorder that writes the recorded chat to a specified path (22738e) (Eric Ma)
Improved the chat recording and saving mechanism. The creation of the save path was moved to the beginning of the chat function, the save path now includes the date and the snakecased user choice, the save path is printed to the console when the user exits the chat, the save function now coerces the path argument to a pathlib.Path object for compatibility, and the save function is now called with the save path instead of a string for flexibility and ease of use (c44562) (Eric Ma)
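The filename scheme described above (current date plus the snakecased user choice) can be sketched as follows. The `snake_case` helper here is a minimal stand-in for caseconverter's `snakecase`, and the function name is illustrative:

```python
from datetime import date
from pathlib import Path


def snake_case(text: str) -> str:
    """Minimal stand-in for caseconverter's snakecase (assumption)."""
    return "_".join(text.lower().split())


def chat_save_path(user_choice: str, base_dir: str = ".") -> Path:
    """Build a save path: current date, then the snakecased choice."""
    filename = f"{date.today().isoformat()}_{snake_case(user_choice)}.md"
    return Path(base_dir) / filename
```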
Removed the temperature parameter from the QueryBot instantiation in the chat function of the Zotero module. This simplifies the QueryBot configuration and does not affect the functionality of the bot (663594) (Eric Ma)
This new version introduces a chat command to LlamaBot CLI, adds a logging option to the ChatBot class, and updates the documentation with new usage examples and a CLI demos section.
Added a chat command to LlamaBot CLI. This new command allows users to interact with the ChatBot and includes an option to save the chat to a markdown file. The filename for the saved chat is generated based on the current date and time. The chat command will exit if the user types "exit" or "quit". (baa4d64) (Eric Ma)
Added a logging option to the ChatBot class. This new parameter is a boolean that determines whether to log the chat history and token budget. This feature provides more flexibility for users who want to monitor the chat history and token budget during the bot operation. (6550cf3) (Eric Ma)
Updated the documentation's index file with new usage examples. These include a new example of exposing a chatbot directly at the command line using llamabot chat, an updated description and command for using llamabot as part of the backend of a CLI app to chat with Zotero library, and a new example of using llamabot's SimpleBot to create a bot that automatically writes commit messages. (274a779) (Eric Ma)
Introduced a new section in the documentation, specifically in the index.md file. The section is titled "CLI Demos" and provides examples of what can be built with Llamabot and some supporting code. It also includes an embedded asciicast for a more interactive demonstration. (ce7e734) (Eric Ma)
Added an asciicast script to the documentation index file. This will provide users with a visual guide or tutorial. (e332f0a) (Eric Ma)
This new version brings a number of improvements to the user interface, streamlines the handling of user prompts and Zotero library, and introduces new features such as a document chat bot functionality. It also includes several bug fixes and refactoring of the code for better performance and readability.
Document chat bot functionality has been added. This feature allows users to chat with a document by providing a path to the document (005a10) (Eric Ma)
The 'textual' package has been added to the dependencies, enhancing the functionality of the codebase (9b53aa) (Eric Ma)
A new Jupyter notebook, patreon_ghostwriter.ipynb, has been introduced in the scratch_notebooks directory. The notebook includes code for a bot that can generate Patreon posts based on provided talking points (849497) (Eric Ma)
User prompts have been streamlined for consistency across modules, and Zotero library handling has been improved (7e9ea4) (Eric Ma)
CLI prompts and exit handling have been streamlined (3c4cc3) (Eric Ma)
Instructions for writing commit messages in git.py have been improved for clarity and user-friendliness (942005) (Eric Ma)
A function has been renamed to ensure_work_email_on_calendly_events to make it more generic (841c78) (Eric Ma)
Environment and Dependencies
Python version has been updated from 3.9 to 3.11, and pre-commit has been removed from dependencies (8f880f) (Eric Ma)
Python version has been downgraded from 3.11 to 3.9 to ensure compatibility with existing libraries, and version constraint on bokeh has been removed to use the latest version (0e8bff) (Eric Ma)
This new version introduces several enhancements to the QueryBot class, adds a language inference function to the embed_repo.ipynb notebook, and provides a command line interface for interacting with a code repository. It also includes progress bars for file hashing and document splitting processes, an option to ignore directories when displaying the directory tree, and support for multiple documents for indexing. Lastly, a comprehensive tutorial on how to install, configure, and use LlamaBot has been added.
Added caching option and improved document handling in QueryBot. This includes changes to the make_or_load_index function, exit_if_asked function, ZOTERO_JSON_DIR, ZoteroLibrary class, and magic_load_doc function. Also, updates were made to the zotero.ipynb notebook to reflect these changes (579f162) (Eric Ma)
Added language inference function and updated execution counts in embed_repo.ipynb notebook. This enhances the functionality of the notebook by allowing it to infer the programming languages used in a repository and providing a more detailed view of the repository's structure (b795e72) (Eric Ma)
Added CLI for interacting with code repository. This is part of ongoing efforts to improve the usability of the LlamaBot project (042ae26) (Eric Ma)
Added progress bars to file hashing and document splitting in the QueryBot module. This provides a visual indication of progress when processing large numbers of documents, improving user experience (4634185) (Eric Ma)
Added directory ignore option to show_directory_tree. This allows specifying a list of directory names to ignore when displaying the directory tree (271ccde) (Eric Ma)
Added support for multiple documents for indexing in QueryBot. This includes changes to the doc_paths parameter and the make_or_load_index function (c813522) (Eric Ma)
Added LlamaBot tutorial documentation. This provides a comprehensive tutorial on how to install, configure, and use LlamaBot (9e25fb5) (Eric Ma)
This new version includes an important bug fix that improves the compatibility of the ZoteroLibrary with other components. The output format of the ZoteroLibrary has been changed from JSONL to JSON.
This new version introduces a significant refactor of the retriever initialization and cache handling in the Llamabot application. It also includes minor changes in the Zotero chat function and the zotero notebook.
Refactored the retriever initialization and cache handling in the Llamabot application. This includes the removal of direct import and usage of VectorIndexRetriever in querybot.py, the addition of a method to get the retriever from the index, and the definition of CACHE_DIR as a constant in querybot.py and init.py. The get_persist_dir has been refactored to use the CACHE_DIR constant, and a clear_cache command has been added in init.py to clear the Llamabot cache. The default value of the sync option in the zotero.py chat function has been changed, and the doc_paths argument in the retrieverbot initialization in zotero.py has been updated. Directory creation in zotero.ipynb has been commented out, and code has been added to list json files in the ZOTERO_JSON_DIR in zotero.ipynb. (49645b) (Eric Ma)
Enabled use of cache in the chat function to improve performance (013dae) (Eric Ma)
Enhanced the paper selection process to handle single paper scenario and provide a more interactive selection process for multiple papers (013dae) (Eric Ma)
The faux chat history construction in querybot has been updated for better clarity and functionality. The VectorIndexRetriever has been replaced with the index.as_retriever method, a system message has been added to the faux chat history, the last four responses from the chat history are now included in the faux chat history, and the order of faux chat history construction has been adjusted for better clarity (47a35d) (Eric Ma)
Blog assistant functionality has been added to llamabot. This new feature can summarize and tag a blog post. It includes the addition of a new 'blog' module, a new 'blog' command to the CLI, and the creation of several new files in the CLI and prompt_library directories. This enhancement provides users with a tool to automatically summarize and tag their blog posts. (265962) (Eric Ma)
The pyproject.toml file now requires a minimum Python version of 3.10. This change ensures compatibility with the latest features and security updates. (a664df) (Eric Ma)
This new version focuses on improving the configuration process of LlamaBot. It introduces a new feature that fetches the default language model from the configuration file. The LlamaBot tutorial has been updated to provide detailed instructions on how to set up the OpenAI API key and select the default model. Additionally, the configuration command has been moved to a separate module for better code organization.
The LlamaBot tutorial now focuses on the configuration process, providing detailed instructions on how to set up the OpenAI API key and select the default model. The sections on installation, version checking, and chatting with LlamaBot have been removed. (87dfef) (Eric Ma)
Introduced a new feature where the default language model is now fetched from the configuration file. This change affects the ChatBot, QueryBot, and SimpleBot classes where the model_name parameter in their constructors now defaults to the value returned by the default_language_model function from the config module. (d531cb) (Eric Ma)
The configuration command has been moved from the main init.py file to a new configure.py module. This change improves the organization of the code and makes it easier to maintain. A new command for setting the default model has been added to the configure module. (2bffdaf) (Eric Ma)
This new version introduces several enhancements to the blog assistant CLI and blogging prompts, adds token budgeting for different models in the chatbot, and updates blogging and Patreon notebooks. A new notebook for semantic line breaks has also been added.
Blogging and Patreon notebooks have been updated with new code cells and existing ones have been improved. A new notebook, sembr.ipynb, has been added with code for semantic line breaks. These changes improve the functionality and expand the capabilities of the notebooks (a34a02) (Eric Ma)
Token budgeting for different models has been added to the chatbot. This feature allows for more flexible token budgeting depending on the model used (cc7ab8) (Eric Ma)
Several enhancements have been made to the blog assistant CLI and blogging prompts. The summarize_and_tag function has been renamed to summarize and now also returns the blog title. A new social_media function has been added to generate social media posts for LinkedIn, Patreon, and Twitter. The blog_tagger_and_summarizer prompt has been renamed to blog_title_tags_summary and now also returns the blog title. New prompts compose_linkedin_post, compose_patreon_post, and compose_twitter_post have been added to generate social media posts. A new BlogInformation model has been added to represent blog information (453e5d) (Eric Ma)
This new version introduces the Semantic Line Breaks (SEMBR) functionality to the blog summary and a new command. It enhances the readability and maintainability of the blog posts by applying a consistent line break strategy.
This new version introduces enhancements to the social media post generation, updates to the testing matrix for Python versions, and a new GitHub workflow for daily testing of PyPI packages.
Enhanced social media post generation. The update refactors the social media content generation to handle different platforms more effectively, adds JSON schema to standardize the return format, improves the handling of Patreon posts, and copies the post text to the clipboard for platforms other than Patreon. (07f90e) (Eric Ma)
Introduced a new GitHub workflow for daily testing of PyPI packages. The workflow runs on the main branch and uses a matrix strategy to test on Python versions 3.9, 3.10, and 3.11. (fce17c) (Eric Ma)
Updated the python versions used in the test-pypi-package workflow. The versions have been updated from 3.10 to 3.10.12 and from 3.11 to 3.11.4. This ensures that the package is tested against the latest patch versions of Python. (e9ec8d) (Eric Ma)
Removed Python version 3.12 from the testing matrix in the GitHub Actions workflow for testing the PyPI package. This change is made to focus on the more stable and widely used versions of Python. (b90b8c) (Eric Ma)
Updated the python versions used in the testing matrix of the test-pypi-package workflow. The version 3.9 has been removed and version 3.12 has been added. This ensures our package remains compatible with the latest python versions. (70e4dc) (Eric Ma)
This new version introduces several enhancements to the LLaMaBot project, including the addition of a 'prompts' section to the pyproject.toml file, improved error handling for missing packages, a new Jupyter notebook for LLaMaBot demo, and updates to the Google Calendar integration. The version also includes several code refactoring and documentation updates for better readability and maintainability.
Pinned the version of mkdocs to 1.4.3 in the environment.yml file to ensure consistent documentation builds across different environments (ee7e7e) (Eric Ma)
This new version introduces extended installation options for the llamabot package and adds two new Jupyter notebooks to the project. The installation now includes all optional dependencies, ensuring full feature availability during testing. The new notebooks provide code for language model configuration and OpenAI API setup.
Extended llamabot installation to include all optional dependencies, improving the thoroughness of the testing process and ensuring all package features are working as expected (e6e1e3) (Eric Ma)
Added two new Jupyter notebooks: multiscale_embeddings.ipynb and outlines_backend_prototype.ipynb. The first notebook provides code for loading and configuring language models, creating and loading indices, retrieving and scoring nodes, and building queries. The second notebook provides code for setting up the OpenAI API and generating completions (044439) (Eric Ma)
Changed the default argument of return_sources to True. This might affect the behavior of functions that rely on the previous default value (a03db6) (Eric Ma)
This new version introduces a more streamlined and reliable process for releasing Python packages, with several enhancements to the GitHub Actions workflows. It also includes a new feature for similarity search in the QueryBot class and some minor bug fixes.
Added a project description and linked the README.md file to the project configuration (92002ba) (Eric Ma)
Updated the pypi-publish action used in the GitHub Actions workflow for releasing the Python package to ensure stability and reliability of the release process (b8ecf9f) (Eric Ma)
Separated the installation of the 'build' and 'wheel' packages in the GitHub Actions workflow for releasing a Python package to make the installation steps more explicit and easier to understand (005280e) (Eric Ma)
Added the 'build' package to the python setup step in the GitHub Actions workflow for releasing a python package (62af643) (Eric Ma)
Simplified the python package build process in the GitHub workflow to use the build module instead of setup.py (321e282) (Eric Ma)
Set the default release type to 'patch' in the release-python-package workflow to prevent accidental major or minor releases (b339f88) (Eric Ma)
Added a new step in the GitHub Actions workflow for releasing the Python package that configures the Git user name and email (f8f6ab4) (Eric Ma)
Changed the GitHub workflow from running tests on different Python versions to publishing the Python package to PyPI (628b91f) (Eric Ma)
Introduced a new GitHub workflow for releasing Python packages that includes steps for running tests, bumping version numbers, building and publishing the package, and creating a release in the GitHub repository (2f28ab7) (Eric Ma)
Added a new method 'retrieve' in the QueryBot class for retrieving source nodes associated with a query using similarity search (a08d0f0) (Eric Ma)
Added the ability to manually trigger the test-pypi-package workflow from the GitHub Actions UI (7611052) (Eric Ma)
Disabled the deadline for the ghostwriter test in the Python prompt library to prevent Hypothesis from failing the test due to it taking too long to run (b960ced) (Eric Ma)
This new version includes several updates to the GitHub Actions workflow for releasing the Python package. The git configuration has been updated for better readability and specific use by the GitHub Actions user. The secret used for the user password in the release workflow has been changed for correct deployment. The git configuration now includes the credential helper and GitHub token for authentication when pushing changes. The versions of actions/checkout and actions/setup-python have been upgraded for better performance and security.
This new version includes several enhancements to the Zotero module, improvements to the QueryBot, and updates to the pre-commit hooks. It also introduces a new Jupyter notebook for outlines models and enables package publishing to PyPI.
Added code to retrieve the title of a specific article from the Zotero library using the article's unique identifier (5921df) (Eric Ma)
Added support for default similarity top ks in QueryBot based on the OPENAI_DEFAULT_MODEL environment variable (ae392f) (Eric Ma)
Enhanced the ZoteroLibrary class by adding an articles_only filter and a key_title_map function (85a223) (Eric Ma)
Improved the get_key function documentation in the Zotero module (89b6bc) (Eric Ma)
Streamlined the paper selection process in the Zotero CLI by introducing a new PaperTitleCompleter for more efficient paper selection (1122e6) (Eric Ma)
Improved handling of similarity_top_k in QueryBot and refactored index creation (acc6e8) (Eric Ma)
Added 'sh' dependency to environment.yml and pyproject.toml files (5e23f9) (Eric Ma)
Added execution of pre-commit hooks before committing changes (82979d) (Eric Ma)
Added a new class, PaperTitleCompleter, to provide completion suggestions for paper titles in the Zotero module (3fac26) (Eric Ma)
Updated pre-commit config and notebooks (b077aa) (Eric Ma)
Extended the ruff pre-commit hook to also check python and jupyter files (4ae772) (Eric Ma)
Added nltk as a transitive dependency via llama_index in the environment.yml file (2bd392) (Eric Ma)
Introduced a new pre-commit hook, ruff, to the .pre-commit-config.yaml file (c7c5bc) (Eric Ma)
Enabled package publishing to PyPI (baca5c) (Eric Ma)
The commitbot has been updated to use the gpt-3.5-turbo-16k-0613 model. This model provides the same quality of commit messages as the previous model but at a fraction of the cost (ce91d6b) (Eric Ma)
Updated pip installation in the test-pypi-package.yaml workflow to use the python -m pip install command instead of pipx to ensure the correct version of pip is used for installing the llamabot[all] package (2e860a) (Eric Ma)
This new version includes several enhancements to the CLI module of LlamaBot. The improvements focus on automating the process of writing commit messages and ensuring consistency. The version also includes codebase improvements such as the removal of unnecessary comments.
A new command autowrite_commit_message has been added to the git.py file in the llamabot/cli directory. This command automatically generates a commit message based on the diff and writes it to the .git/COMMIT_EDITMSG file. Error handling has also been included in case any exceptions occur during the process. (185613) (Eric Ma)
A new command install_commit_message_hook has been added to the Git subcommand for LlamaBot CLI. This command installs a commit message hook that runs the commit message through the bot, automating the process of writing commit messages and ensuring consistency. (d1254e) (Eric Ma)
This new version includes several enhancements to the CLI module and the Llamabot model. It also includes a bug fix for the autowrite_commit_message function.
Help messages for subcommands have been added to the CLI module. This will provide users with more information on how to use each command. (f4de87) (Eric Ma)
The model_chat_token_budgets in Llamabot have been updated. New models have been added to the dictionary and token budgets for existing models have been updated. (52522b) (Eric Ma)
The autowrite_commit_message function in the CLI module has been fixed. Print statements have been replaced with echo for consistent output and error messages are now written to stderr instead of stdout. (a66ead) (Eric Ma)
This new version introduces several enhancements to the release workflow, including the addition of release notes generation and the configuration of the OPENAI_API_KEY. It also includes improvements to the llamabot CLI and the documentation.
Added fetch-depth parameter to the checkout action in the release-python-package workflow. This allows the action to fetch the entire history of the repository. (c25fe84) (Eric Ma)
Upgraded the GitHub Actions checkout step to use version 4 and enabled the fetch-tags option. This ensures that all tags are fetched during the checkout process. (dadcf60) (Eric Ma)
Added a new step in the release-python-package workflow to configure the OPENAI_API_KEY using llamabot. This is necessary for the successful generation of release notes. (6c17c10) (Eric Ma)
Added OPENAI_API_KEY to environment variables in configure.py. This allows the application to access the OpenAI API key from the environment variables, improving security. (8df3cda) (Eric Ma)
Updated the GitHub Actions workflow for releasing a new version of the Python package to include the release notes in the body of the GitHub release. (07150dc) (Eric Ma)
Introduced a bot for converting git remote URL to HTTPS URL. This enhances the functionality of the release notes notebook. (85009ad) (Eric Ma)
Added release notes generation to the GitHub workflow for releasing the Python package. (3d28e12) (Eric Ma)
Introduced a new feature to the llamabot CLI, a command for generating release notes. This automates the process of generating release notes. (df181dd) (Eric Ma)
Allowed setting default model by name in the configure.py file of the llamabot CLI. This provides more flexibility in setting the default model. (d223c43) (Eric Ma)
Added a new Jupyter notebook 'release-notes.ipynb' in the 'scratch_notebooks' directory. The notebook contains code for generating release notes from git commit logs. (9ab58a5) (Eric Ma)
Added the ability to specify the model name via an environment variable. This allows for more flexibility when deploying the bot in different environments. (127b6c9) (Eric Ma)
This new version includes several improvements to the release workflow and bug fixes. The release notes handling has been updated and simplified, and several bugs in the GitHub Actions workflow have been fixed.
Release notes handling in the GitHub workflow has been updated. The workflow now copies the release notes to a temporary location before creating a release in the GitHub repository. This ensures that the release notes are correctly included in the release (d9ab5b) (Eric Ma)
The source of the release notes in the GitHub Actions workflow for releasing a Python package has been changed. Instead of using an environment variable, it now reads from a markdown file in the docs/releases directory. The filename is based on the version number (3958ff) (Eric Ma)
A bug in the GitHub Actions workflow for releasing a Python package has been fixed. The copy command used to copy the release notes was incorrect and has been fixed (7cda28) (Eric Ma)
The file path for the release notes in the release-python-package GitHub workflow has been corrected. The version number now correctly includes a 'v' prefix when reading the markdown file (e03626) (Eric Ma)
The path for the release notes in the GitHub Actions workflow has been corrected. The previous path was causing issues in the workflow execution. The path has been updated to correctly point to the release notes file (75978b) (Eric Ma)
The step of copying release notes to a temporary location has been removed and the original file is directly referenced in the release action. This simplifies the workflow and reduces unnecessary operations (eb2aef) (Eric Ma)
This version introduces a new prompt decorator and tests, improves the release workflow, fixes bugs in the GitHub Actions workflow, and removes the dependency on the 'outlines' package.
A new prompt decorator has been added in the scratch_notebooks directory, enhancing the functionality of functions by adding a prompt feature. Tests have been included to ensure the decorator works as expected with different types of function arguments (d023f22) (Eric Ma).
Tests for blogging prompts in the prompt_library directory have been added. These tests validate the output of different blogging prompt functions (d023f22) (Eric Ma).
The release notes handling in the GitHub workflow has been updated. The workflow now copies the release notes to a temporary location before creating a release in the GitHub repository (3884962) (Eric Ma).
The source of the release notes in the GitHub Actions workflow for releasing a Python package has been changed. It now reads from a markdown file in the docs/releases directory (3884962) (Eric Ma).
The file path for the release notes in the release-python-package GitHub workflow has been corrected. The version number now correctly includes a 'v' prefix when reading the markdown file (3884962) (Eric Ma).
The path for the release notes in the GitHub Actions workflow has been corrected. The previous path was causing issues in the workflow execution. The path has been updated to correctly point to the release notes file (3884962) (Eric Ma).
The step of copying release notes to a temporary location has been removed and the original file is directly referenced in the release action. This simplifies the workflow and reduces unnecessary operations (3884962) (Eric Ma).
The 'outlines' package was removed from the dependencies in the environment.yml and pyproject.toml files (af23aae) (Eric Ma).
The use of the outlines package has been replaced with a custom prompt_manager module across multiple files in the llamabot project. The prompt_manager provides a prompt decorator that turns Python functions into Jinja2-templated prompts, similar to the functionality provided by outlines. This refactor removes the dependency on the outlines package, simplifying the project's dependencies and potentially improving maintainability (dbe78e4) (Eric Ma).
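The prompt_manager refactor described above turns Python functions into Jinja2-templated prompts. A minimal, dependency-free sketch of the idea (the real implementation uses Jinja2; the regex-based substitution and the `summarize` example below are illustrative stand-ins):

```python
import inspect
import re


def prompt(func):
    """Sketch of a prompt decorator: render a function's docstring as a
    template, filling placeholders from the call's arguments.

    The real prompt_manager renders with Jinja2; this stand-in only
    handles simple {{ name }} placeholders to stay dependency-free.
    """

    def wrapper(*args, **kwargs):
        bound = inspect.signature(func).bind(*args, **kwargs)
        bound.apply_defaults()
        template = inspect.getdoc(func) or ""
        # Substitute each {{ placeholder }} with the bound argument value.
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(bound.arguments[m.group(1)]),
            template,
        )

    return wrapper


@prompt
def summarize(text):
    """Summarize the following blog post:

    {{ text }}"""


print(summarize("My first post about llamabot."))
```

Calling the decorated function returns the rendered prompt string rather than executing a function body, which is what lets a prompt library version-control its prompts as ordinary Python functions.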
This version includes several improvements to the ChatBot, QueryBot, and SimpleBot classes, including new parameters for additional configuration options and improved code readability. It also simplifies the pip install command used in the release-python-package GitHub workflow and removes unnecessary clutter from the codebase.
Added streaming and verbose parameters to the ChatBot class initialization method, providing more flexibility in controlling the chat history streaming and verbosity during the bot initialization (a69c0f) (Eric Ma)
Simplified the pip install command used in the release-python-package GitHub workflow. The previous command attempted to install all optional dependencies, which is not necessary for writing release notes. The new command only installs the package itself (2dffac) (Eric Ma)
Updated parameter names and descriptions in ChatBot, QueryBot, and SimpleBot for consistency and clarity. Added 'streaming' and 'verbose' parameters to SimpleBot for additional configuration options. Improved code readability by breaking up long lines and comments (6c0b37) (Eric Ma)
Removed a large block of commented out code from the prompt_manager.py file, improving readability and reducing clutter in the codebase (7f4b0a) (Eric Ma)
This new version primarily focuses on improving code readability and maintainability. It also introduces a new feature to handle different numbers of tags in the git log when writing release notes.
This new version introduces more flexibility and control over the token budget and chunk sizes used in the chatbot. It also includes a new attribute to store the model name used by the bot and a bug fix to ensure multiple document paths are handled correctly.
Added support for response_tokens and history_tokens parameters in the QueryBot class. These parameters allow the user to specify the number of tokens to use for the response and history in the chatbot. Also, a chunk_sizes parameter has been added to the make_or_load_vector_index function to specify a list of chunk sizes to use for the LlamaIndex TokenTextSplitter (a1de812) (Eric Ma)
Introduced a new attribute 'model_name' to both QueryBot and SimpleBot classes. This attribute will be used to store the name of the model used by the bot (d5d684) (Eric Ma)
Modified the doc_paths parameter in the chat function of the llamabot/cli/doc.py file to receive a list of doc_paths, ensuring that the function can handle multiple document paths correctly (c763327) (Eric Ma)
Changed the variable name in the chat function from doc_path to doc_paths for better clarity and consistency (11111e) (Eric Ma)
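The response_tokens and history_tokens parameters above partition the model's context window. The budgeting arithmetic inside QueryBot is not shown in these notes, so the following is an assumed, illustrative sketch of the trade-off:

```python
# Illustrative only: tokens available for retrieved document chunks are
# whatever remains of the context window after reserving budgets for
# the response and the chat history.
def retrieval_budget(context_window: int, response_tokens: int, history_tokens: int) -> int:
    """Tokens left over for retrieved document chunks."""
    return context_window - response_tokens - history_tokens


# e.g. an 8192-token model with 2000 tokens reserved for the response
# and 2000 for history leaves 4192 tokens for retrieved context.
print(retrieval_budget(8192, 2000, 2000))
```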
This new version introduces enhancements to the QueryBot class, adds a notebook for evaluating multiscale embeddings, and updates the funding configuration.
A new notebook for evaluating multiscale embeddings has been added. This notebook, "zotero_multiscale.ipynb", provides an in-depth look at the effectiveness of multiscale embeddings compared to single-scale embeddings in LlamaBot's QueryBot class. It includes an explanation of multiscale embeddings, the motivation behind using them, and the implementation details. It also includes code to load a document from a Zotero library, create instances of QueryBot with different chunk sizes, and test their performance on different prompts. (24f9b6) (Eric Ma)
The default chunk_sizes parameter in the QueryBot class has been updated to [2000]. This change ensures that the LlamaIndex TokenTextSplitter uses a chunk size of 2000 tokens by default. (f9d7f6) (Eric Ma)
The GitHub funding platform in FUNDING.yml has been updated to use an array instead of a single string to support multiple contributors. (da221f) (Eric Ma)
A new funding configuration file has been added to the project. This file includes supported funding model platforms such as GitHub and Patreon. (68c974) (Eric Ma)
This version introduces several enhancements and refactors to the Llamabot project. The changes include improvements to the codebase's flexibility and maintainability, updates to the documentation, and the addition of new features.
Added a new parameter model_name to the chat function in zotero.py, allowing users to specify the language model to use. (c03a13f) (Eric Ma)
Introduced a new Jupyter notebook 'ollama.ipynb' demonstrating the implementation of a simple chatbot named 'ollama' using the 'llamabot' library. (c4919b2) (Eric Ma)
Added a new .vscode/extensions.json file with a list of recommended extensions for Visual Studio Code. (964bafa) (Eric Ma)
Added a new file model_dispatcher.py in the llamabot/bot directory, which contains a function create_model that dispatches and creates the right model based on the model name. (3dee9ea) (Eric Ma)
Updated simplebot.py to use the create_model function from model_dispatcher.py instead of directly creating the model. (3dee9ea) (Eric Ma)
Added a prompt to the default_model function in configure.py that informs the user to run llamabot configure default-model to set the default model. (b7a50e5) (Eric Ma)
Replaced the hardcoded model name "codellama" with the default language model from the config file in simplebot.py. (bfb47a2) (Eric Ma)
Moved model token constants to a new file model_tokens.py for better organization and maintainability. (f2a1f46) (Eric Ma)
Refactored QueryBot class in querybot.py to use create_model function from model_dispatcher.py for model creation. (f2a1f46) (Eric Ma)
Simplified model creation and token budget calculation in chatbot.py. (491ab6f) (Eric Ma)
Removed an unnecessary echo message that was instructing the user to set the default model in the default_model function of configure.py. (d3c3751) (Eric Ma)
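The create_model function introduced in model_dispatcher.py routes a model name to the right backend. A hedged sketch of the dispatch idea (the keyword list and return values are illustrative; the real dispatcher constructed actual model objects):

```python
# Illustrative subset; the real list of Ollama model keywords is longer.
OLLAMA_MODEL_KEYWORDS = ("llama2", "mistral", "codellama")


def create_model(model_name: str) -> str:
    """Return the backend that should serve `model_name` (sketch)."""
    base = model_name.split(":")[0]  # strip a size tag such as "llama2:13b"
    if base in OLLAMA_MODEL_KEYWORDS:
        return "ollama"
    return "openai"


print(create_model("llama2:13b"))  # routed to the local Ollama backend
print(create_model("gpt-4"))       # falls through to the OpenAI backend
```

Centralizing this decision in one function is what lets SimpleBot, ChatBot, and QueryBot all share the same model-selection logic.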
This version includes several enhancements and updates to the codebase, including the addition of new tutorials, refactoring of the code, and updates to the Python version used in the GitHub Actions workflow.
Added a tutorial for building a QueryBot chat interface with file upload functionality. This tutorial guides users on how to build a chat interface using the QueryBot and Panel libraries. (4b5799a) (Eric Ma)
Introduced a new tutorial in the documentation that guides users on how to create a simple chat interface using the SimpleBot class from the llamabot library and the Panel library. (efaef316) (Eric Ma)
Introduced a new Jupyter notebook 'panel-chat.ipynb' in the 'scratch_notebooks' directory. The notebook includes code for setting up a chat interface using the Panel library, and integrating it with a chatbot for interactive responses. (ba5d8009) (Eric Ma)
Introduced a new Jupyter notebook 'zotero-panel.ipynb' in the 'scratch_notebooks' directory. The notebook contains code for creating a Zotero panel with interactive widgets for configuring Zotero API key, library ID, and library type. (8f477ec6) (Eric Ma)
Introduced a new instance of SimpleBot named 'feynman' to the ollama notebook. The bot is tasked with explaining complex concepts, specifically in this case, the challenge of enzyme function annotation and the introduction of a machine learning algorithm named CLEAN. (7f844dca) (Eric Ma)
Added ".html": "UnstructuredReader" to EXTENSION_LOADER_MAPPING in doc_processor.py to enable processing of .html files. (45d6485c) (Eric Ma)
Removed unused imports from querybot.py and updated make_or_load_vector_index function to take service_context as a parameter instead of creating it within the function. (935e3dad) (Eric Ma)
Removed the unused @validate_call decorator from the call method in querybot.py. (3f7e8c0b) (Eric Ma)
Added instructions to the documentation on how to use local Ollama models with LlamaBot. It includes a Python code snippet demonstrating how to specify the model_name keyword argument when creating a SimpleBot instance. (57f12809) (Eric Ma)
Updated the documentation for LlamaBot. It introduces two options for getting access to language models: using local models with Ollama or using the OpenAI API. (fc42049c) (Eric Ma)
Updated the versions of pre-commit hooks for pre-commit-hooks, black, and ruff-pre-commit. It also replaces the darglint hook with pydoclint for better documentation linting. (9cc49022) (Eric Ma)
This new version introduces several enhancements and features to improve the flexibility and maintainability of the code. The major highlight of this release is the dynamic scraping of Ollama model names, which allows the code to adapt to changes in the Ollama model library. Additionally, the codebase has been updated to Python 3.10, and new models have been added to the llama_model_keywords list.
Dynamically scrape Ollama model names from the Ollama website. If the website cannot be reached, a static list of model names is used as a fallback. The function is cached using lru_cache to improve performance. (1f7e27) (Eric Ma)
Added a function to automatically update the list of Ollama models. A new Python script has been added to the hooks in the pre-commit configuration file. This script scrapes the Ollama AI library webpage to get the latest model names and writes them to a text file. (f22007) (Eric Ma)
Added the content.code.copy feature to the theme configuration in mkdocs.yaml. This feature allows users to easily copy code snippets from the documentation. (594d16) (Eric Ma)
Added beautifulsoup4, lxml, and requests to the environment.yml file. These packages are necessary for the automatic scraping of ollama models. (2737a9) (Eric Ma)
The method ollama_model_keywords() in model_dispatcher.py has been refactored. The dynamic scraping of model names from the Ollama website has been removed. Instead, the model names are now read from a static text file distributed with the package. This change simplifies the code and removes the dependency on the BeautifulSoup and requests libraries. (73d25) (Eric Ma)
The 'Commit release notes' step has been separated from the 'Write release notes' step in the release-python-package workflow. The 'pre-commit' package installation has been moved to the 'Commit release notes' step. (4613a) (Eric Ma)
The target Python version in the Black configuration has been updated from Python 3.9 to Python 3.10. (cfadb3) (Eric Ma)
Some of the existing models have been reordered and new ones have been added to the llama_model_keywords list in the model_dispatcher module. (22ade) (Eric Ma)
A newline has been added at the end of the v0.0.86 release notes file. This change is in line with the standard file formatting conventions. (c22810) (Eric Ma)
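The scrape-with-fallback pattern described above (fetch model names from the Ollama site, fall back to a static list, cache the result with lru_cache) can be sketched as follows. The parsing is a crude string split standing in for the BeautifulSoup parsing the real hook used, and the fallback list is illustrative:

```python
from functools import lru_cache
import urllib.request

# Illustrative static fallback; the real list ships with the package.
FALLBACK_MODELS = ("llama2", "mistral", "codellama")


def scrape_model_names(fetch_html) -> tuple:
    """Scrape model names via `fetch_html`; fall back to the static list
    on any network or parsing failure."""
    try:
        html = fetch_html("https://ollama.ai/library")
        # Crude stand-in for BeautifulSoup: pull the slug after each
        # href="/library/..." link on the page.
        names = tuple(
            part.split('"')[0] for part in html.split('href="/library/')[1:]
        )
        return names or FALLBACK_MODELS
    except Exception:
        return FALLBACK_MODELS


@lru_cache(maxsize=1)
def ollama_model_names() -> tuple:
    """Cached wrapper: the website is contacted at most once per process."""

    def fetch(url):
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.read().decode()

    return scrape_model_names(fetch)
```

Injecting the fetcher keeps the scraping logic testable without a network connection, and the lru_cache layer mirrors the caching mentioned in the commit.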
This new version brings updates to the ollama model names and sorting method, updates to dependencies, and a temporary fix to the openai version. It also includes enhancements to the model name handling in llamabot.
Updated ollama model names and implemented a new sorting method. The models are now sorted by newest. (a19004) (Eric Ma)
Enhanced model name handling in llamabot. The model names in ollama_model_names.txt have been reordered for better organization, and additional code cells have been added to ollama.ipynb for testing and demonstrating the use of PromptRecorder and SimpleBot. (57389f) (Eric Ma)
Temporarily limited the version of the openai dependency to <=0.28.1 in pyproject.toml, because a recent OpenAI update broke compatibility with LangChain. (1d881a) (Eric Ma)
Updated langchain and llama_index dependencies in pyproject.toml. The langchain version has been set to 0.0.330 and llama_index version set to 0.8.62. This ensures three-way compatibility with openai, langchain, and llama-index until langchain is upgraded to work with the openai Python API without error. (e3cf0d) (Eric Ma)
This version includes several refactoring changes, new features, and documentation updates. The main focus of this release was to improve the code organization and efficiency, and to update the usage of the OpenAI API.
Added a new test for the ImageBot class in the llamabot library. The test checks the behavior of the call method when it is invoked outside of a Jupyter notebook and no save path is provided. (0e23857) (Eric Ma)
Introduced a new Jupyter notebook under the docs/examples directory. The notebook demonstrates how to use the ImageBot API to generate images from text using the OpenAI API. (8779040) (Eric Ma)
Added ImageBot class to bot module for generating images based on prompts. (7174058) (Eric Ma)
Increased the default token budget from 2048 to 4096 and added a token budget for the new "mistral" model. (7f13698) (Eric Ma)
Fixed the cache-downloads-key in the pr-tests.yaml workflow file. The key now includes a hash of the 'environment.yml' file to ensure cache is updated when the environment changes. (1c12ff5) (Eric Ma)
Moved the initialization of the OpenAI client into the default_model function. (bd50b90) (Eric Ma)
Removed the direct access to the environment variable for the OpenAI API key in the client initialization. (7cb3d09) (Eric Ma)
Changed the way model list attributes are accessed in the configure.py file of the llamabot CLI. (4deb93f) (Eric Ma)
Extracted the filename generation logic, which was previously inside the ImageBot class, to a separate function named filename_bot. (aec4f3c) (Eric Ma)
Removed direct assignment of OpenAI API key in init.py and replaced direct model list retrieval from OpenAI with client's model list method. (66fbcec) (Eric Ma)
Updated the docstring for the filename_bot function in the imagebot.py file. The updated docstring now includes parameter and return value descriptions. (c5dd51d) (Eric Ma)
Removed the deadline for the test_codebot_instance function in the python_prompt_library test suite to prevent potential timeout issues. (4a30e96) (Eric Ma)
Removed the deadline for the simple bot initialization test to prevent false negatives due to time constraints. (16ee108) (Eric Ma)
This new version includes several enhancements and new features, including the addition of a chatbot test, the integration of pytest-cov into the conda environment, and the successful implementation of streaming with SimpleBot. The chatbot UI prototype is now operational, and the code has been refactored for better organization and efficiency.
Update default language model to Mistral and remove OpenAI API key warning (e74954b) (Eric Ma): The default language model used by the SimpleBot class has been updated to Mistral, which is a more cost-effective option compared to the previously used gpt-3.5-turbo-16k-0613 model. The OpenAI API key warning has also been removed, as the Mistral model does not require an API key.
Add API key support for QABot and SimpleBot (b5f8253) (Eric Ma): This commit adds support for providing API keys to the QABot and SimpleBot classes, allowing for secure access to external services. This enhancement improves the security and flexibility of the bot's functionality.
Update default language model environment variable (4bfd362) (Eric Ma): The default language model environment variable has been updated from OPENAI_DEFAULT_MODEL to DEFAULT_LANGUAGE_MODEL to align with the changes in the codebase.
Update default language model to gpt-3.5-turbo-1106 (c8f0893) (Eric Ma): The default language model used by the commitbot has been updated to "gpt-3.5-turbo-1106" for improved performance and cost efficiency.
Add logging for API key usage (3be39ad) (Eric Ma): Logging has been added to SimpleBot to log the usage of the API key for debugging and monitoring purposes.
Add model_name parameter to SimpleBot instance (6a78332) (Eric Ma): A new parameter, model_name, has been added to the SimpleBot instance in the llamabot/cli/git.py file. The model_name is set to "mistral/mistral-medium". This change allows for more flexibility and customization when using the SimpleBot.
Add new model name to ollama_model_names.txt (3110dc9) (Eric Ma): 'megadolphin' has been added to the list of model names in ollama_model_names.txt.
Add new model name and refactor test_docstore (17352b8) (Eric Ma): 'llama-pro' has been added to ollama_model_names.txt and the test_docstore function has been refactored to remove unused imports and the make_fake_document function.
Add Knowledge Graph bot (963cd63) (Eric Ma): A new feature has been added to the codebase, the Knowledge Graph bot (KGBot). The KGBot takes in a chunk of text and returns a JSON of triplets. It is tested with mistral-medium and uses the default language model. The bot is called with a query and returns a JSON of triplets.
Add QABot class to llamabot (21197c1) (Eric Ma): A new class, DocQABot, has been added to the qabot.py file. This bot is designed to use pre-computed questions and answers to generate a response. It includes methods for adding documents for the bot to query and for calling the QABot. This enhancement will improve the bot's ability to generate responses based on the provided documents.
Add DocumentStore class for LlamaBot (117baf7) (Eric Ma): A new feature has been added to the codebase, a DocumentStore class for LlamaBot. This class wraps around ChromaDB and provides methods to append and retrieve documents from the store. The DocumentStore class is defined in the newly created file llamabot/components/docstore.py.
Add top-level API for llamabot's components (b2cf9f0) (Eric Ma): A new file, __init__.py, has been added which serves as the top-level API for llamabot's components.
Fix logging of API key (932beec) (Eric Ma): The commit fixes the logging of the API key in the SimpleBot class to display the complete key instead of just the first 5 characters. This change improves the clarity of the logged information.
Fix environment variable retrieval in write_release_notes function (c627b18) (Eric Ma): This commit fixes an issue where the environment variable was not being retrieved correctly in the write_release_notes function.
Fix stream parameter not being passed to bot function (185f2bc) (Eric Ma): This commit fixes an issue where the stream parameter was not being passed to the bot function in the cli/git module.
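The DocumentStore added in this release wraps ChromaDB with append and retrieve methods. A minimal in-memory sketch of that interface (the keyword-overlap scoring below is a dependency-free stand-in for ChromaDB's embedding similarity search):

```python
class DocumentStore:
    """In-memory sketch of a document store with append/retrieve."""

    def __init__(self):
        self.documents = []

    def append(self, document: str) -> None:
        """Add a document to the store."""
        self.documents.append(document)

    def retrieve(self, query: str, n_results: int = 3) -> list:
        """Return the documents that share the most words with the query."""
        query_words = set(query.lower().split())
        return sorted(
            self.documents,
            key=lambda doc: len(query_words & set(doc.lower().split())),
            reverse=True,
        )[:n_results]


store = DocumentStore()
store.append("QueryBot retrieves document chunks for a query.")
store.append("The release workflow publishes the package to PyPI.")
print(store.retrieve("how does QueryBot handle a query", n_results=1))
```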
API Server: Merged pull request #28, which introduces an API server for the LlamaBot project. (4ea160a, Eric Ma)
Mock Response and API Key Support: Added api_key and mock_response parameters to the SimpleBot constructor for OpenAI API key usage and testing with predefined responses. (2f6d1d9, Eric Ma)
Streaming Response Test: Implemented a new test case to verify that SimpleBot can stream responses correctly. (5ddb804, Eric Ma)
Delta Content Printing: The SimpleBot class now prints the delta content to the console after processing each message for better readability. (d657b4a, Eric Ma)
ChatBot UI Jupyter Notebook: Created a new Jupyter notebook for ChatBot UI development, including the setup of necessary classes and functions. (bb4397a, Eric Ma)
ChatUIMixin: Introduced a new ChatUIMixin class for easier integration of chat functionalities in LlamaBot components. (4209b18, Eric Ma)
Streamlined Message Handling and Typing: Simplified the message construction and typing in the SimpleBot class for improved readability and maintainability. (65e026c, Eric Ma)
Streaming Response for Chat Messages: Implemented streaming response functionality in the ChatBot class for better real-time interactivity. (1ebc356, Eric Ma)
Improved Response Streaming: Extracted streaming logic into a separate method and ensured consistent yielding of AIMessage instances in the SimpleBot class. (08636a7, Eric Ma)
Toggleable Streaming Responses: Added a stream parameter to the generate_response method in the SimpleBot class to control streaming behavior. (565aed7, Eric Ma)
Streaming Response Capability: Implemented a new stream_response method in the SimpleBot class for streaming responses incrementally. (2a8254c, Eric Ma)
Response Generation Extraction: Simplified the generate_response method in the SimpleBot class by extracting the response generation logic into a new _make_response function. (0ad9a1e, Eric Ma)
API Key Instructions: Added instructions for setting API keys for other providers in the documentation. (55ec13e, Eric Ma)
Standardized LlamaBot Naming Convention: Corrected the casing of 'LLaMaBot' to 'LlamaBot' throughout the index.md documentation and separated API provider configuration instructions into subsections for OpenAI and Mistral. (7fd2e13, Eric Ma)
New Model Names and CLI Options Refactoring: Added 'stablelm2' and 'duckdb-nsql' to the list of available models and refactored command-line interface arguments in serve.py to use Typer options instead of arguments. (e6a2122, Eric Ma)
FastAPI Endpoint for QueryBot: Implemented APIMixin to allow QueryBot to serve FastAPI endpoints and added a serve command to the CLI for starting a FastAPI server with QueryBot. (5edd84b, Eric Ma)
Improved System Prompt for QueryBot: Modified the system prompt in QueryBot to be more specific about the source of knowledge and clarified the response behavior when the repository does not contain the answer. (5f7ce51, Eric Ma)
LlamaBot CLI Usage Guide: Added a comprehensive guide for the LlamaBot CLI in the documentation, including installation instructions, key commands, and usage examples. (9f0b1c8, Eric Ma)
ImageBot Import Path Update: Changed the import path of AIMessage from langchain.schema to llamabot.components.messages to reflect the new module structure. (27904d0, Eric Ma)
Error Handling for Image URL Retrieval: Added an exception raise in the ImageBot.generate_image method to handle cases where no image URL is found in the response. (27904d0, Eric Ma)
Disabled Streaming in SimpleBot Tests: Passed stream=False when creating a SimpleBot instance in tests to ensure consistent behavior without relying on streaming features. (e559114, Eric Ma)
Ensured Non-Empty Strings in Bot Tests: Modified tests to generate non-empty strings for system_prompt and human_message using hypothesis strategies. (e8fed0a, Eric Ma)
Removed Unused Panel App Creation Code: Removed the create_panel_app function and its related imports from python.py as they are no longer used. (4469b35, Eric Ma)
Removed PanelMarkdownCallbackHandler Class: Removed the PanelMarkdownCallbackHandler class as it is no longer required in the llamabot project. (b7ef263, Eric Ma)
Removed Unused pytest Import and Obsolete Test: Removed an unused import of pytest in test_simplebot.py and deleted the test_simple_bot_stream_response function, which is no longer needed due to changes in the SimpleBot streaming response logic. (ed0756b, Eric Ma)
Removed model_dispatcher Module: The model_dispatcher.py module has been removed as part of a refactoring effort. This change simplifies the llamabot architecture by delegating model dispatch responsibilities to a new system or removing the need for such functionality entirely. (0887618, Eric Ma)
Removed api_key Command from configure.py: The api_key command was deprecated and has been removed to simplify configuration. Users should now set API keys directly via environment variables. (2752d7e, Eric Ma)
Remove 'api' stream_target from ChatBot and change the expected return type for consumers of the ChatBot class (c11aace) (Eric Ma)
Replace 'stream' boolean parameter with 'stream_target' in ChatBot and SimpleBot constructors (1211115) (Eric Ma)
Please note that some breaking changes have been introduced in this release. Make sure to update your code accordingly. For more details, refer to the individual commit messages.
Autorecord function has been streamlined to record only the last message content, reducing data processing and potential performance issues (268590, Eric Ma)
The chat command in the CLI now includes a timestamped session name for better traceability and organization of chat sessions (268590, Eric Ma)
The Python kernel version in sembr notebook has been updated to 3.11.7 to ensure compatibility with the latest libraries and features (0ad4701, Eric Ma)
Note: The commit 9153c5d is a refactoring commit that improves the readability and maintenance of the notebook code, but it does not introduce any new features or bug fixes. The commits b120061 and 31b1056 are related to version bumping and release notes, respectively. The merge commit ae66c86 is not associated with any new features, bug fixes, or deprecations.
This release includes a small fix to the plaintext_loader function in the doc_processor module. The file open mode was changed from "r" to "r+" to allow for additional operations on the file if needed in the future.
The file open mode in the plaintext_loader function was changed from "r" (read-only) to "r+" (read and write). This allows for additional operations on the file if needed in the future. (8251fdc) (Eric Ma)
Note: The commit 48bb8c4 is related to a version bump and does not introduce any new features or bug fixes. The commit faa971d adds release notes and does not introduce any new features or bug fixes. Therefore, they are not included in the release notes.
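To illustrate the "r" versus "r+" change described above, here is a small self-contained sketch (the file used here is a throwaway temporary file, not part of llamabot):

```python
import os
import tempfile

# Create a throwaway file to demonstrate with.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello")
    path = f.name

# "r+" opens the file for both reading and writing;
# plain "r" would raise an error on the write below.
with open(path, "r+") as fh:
    contents = fh.read()  # reading still works as before
    fh.write(" world")    # writing is now also permitted

with open(path) as fh:
    result = fh.read()

os.remove(path)
print(result)  # hello world
```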
This tutorial was written by GPT4 and edited by a human.
In this tutorial, we will learn how to use the ChatBot class to create a simple chatbot that can interact with users. The chatbot is built using the OpenAI GPT-4 model and can be used in a Panel app.
Getting Started

Now, let's create a new instance of the ChatBot class. We need to provide a system prompt, which will be used to prime the chatbot. Optionally, we can also set the temperature and model name:
from llamabot import ChatBot

system_prompt = "Hello, I am a chatbot. How can I help you today?"
chatbot = ChatBot(system_prompt, temperature=0.0, model_name="gpt-4")
Interacting with the ChatBot
To interact with the chatbot, we can simply call the chatbot instance with a human message:
human_message = "What is the capital of France?"
response = chatbot(human_message)
print(response.content)
The chatbot will return an AIMessage object containing the response to the human message, primed by the system prompt.
Chat History

The chatbot automatically manages the chat history. To view the chat history, we can use the __repr__ method:
print(chatbot)
This will return a string representation of the chat history, with each message prefixed by its type (System, Human, or AI).
Creating a Panel App
The ChatBot class also provides a panel method to create a Panel app that wraps the chatbot. This allows users to interact with the chatbot through a web interface.
To create a Panel app, simply call the panel method on the chatbot instance:
app = chatbot.panel(show=False)
By default, the app will be shown in a new browser window. If you want to return the app directly, set the show parameter to False.
You can customize the appearance of the app by providing additional parameters, such as site, title, and width:
To run the app, you can either call the show method on the app or use the panel serve function.

In this tutorial, we learned how to use the ChatBot class to create a simple chatbot that can interact with users. We also learned how to create a Panel app to provide a web interface for the chatbot. With this knowledge, you can now create your own chatbots and customize them to suit your needs. Happy chatting!
How to Run Llamabot with Ollama

Overview
In this guide, you'll learn how to run a chatbot using llamabot and Ollama. We'll cover how to install Ollama, start its server, and finally, run the chatbot within a Python session.
Open your terminal and start the Ollama server with your chosen model.
ollama run <model_name>
Example:
ollama run vicuna
For a list of available models, visit Ollama's Model Library.
Note: Ensure you have adequate RAM for the model you are running.
Running Llamabot in Python
Open a Python session and import the SimpleBot class from the llamabot library.
from llamabot import SimpleBot  # you can also use QueryBot or ChatBot

bot = SimpleBot("You are a conversation expert", model_name="vicuna:7b-16k")
Note: vicuna:7b-16k includes tags from the vicuna model page.
And there you have it! You're now ready to run your own chatbot with Ollama and Llamabot.
This tutorial was written by GPT4 and edited by a human.
In this tutorial, we will learn how to use the QueryBot class to create a chatbot that can query documents using GPT-4. The QueryBot class allows us to index documents and use GPT-4 to generate responses based on the indexed documents.
To create a new instance of QueryBot, we need to provide a system message, a list of document paths, or a saved index path. The system message is used to instruct the chatbot on how to behave. The document paths are used to index the documents, and the saved index path is used to load a pre-built index.
Here's an example of how to initialize a QueryBot:
from pathlib import Path
from llamabot import QueryBot

system_message = "You are a helpful assistant that can answer questions based on the provided documents."
doc_paths = [Path("document1.txt"), Path("document2.txt")]

query_bot = QueryBot(system_message=system_message, doc_paths=doc_paths)
Querying the Index
To query the index, we can call the QueryBot instance with a query string. The QueryBot will return the top similarity_top_k documents from the index and use them to generate a response using GPT-4.
Here's an example of how to query the index:
query = "What is the main idea of document1?"
response = query_bot(query)
print(response.content)
Saving and Loading the Index
We can save the index to disk using the save method and load it later using the __init__ method with the saved_index_path parameter.
Here's an example of how to save and load the index:
# Save the index
query_bot.save("index.json")

# Load the index
loaded_query_bot = QueryBot(system_message=system_message, saved_index_path="index.json")
Inserting Documents into the Index
We can insert new documents into the index using the insert method. This method takes a file path as an argument and inserts the document into the index.
Here's an example of how to insert a document into the index:
In this tutorial, we learned how to use the QueryBot class to create a chatbot that can query documents using GPT-4. We covered how to initialize a QueryBot, query the index, save and load the index, and insert new documents into the index. With this knowledge, you can now create your own chatbot that can answer questions based on a set of documents.
Automatically Record QueryBot Calls with PromptRecorder
Note
This tutorial was written by GPT4 and edited by a human.
In this tutorial, we will learn how to use the PromptRecorder class to automatically record calls made to the QueryBot. The PromptRecorder class is designed to record prompts and responses, making it a perfect fit for logging interactions with the QueryBot.
Before we begin, make sure you have the following Python libraries installed:
pandas
panel
You can install them using pip:
pip install pandas panel
Step 1: Import the necessary classes
First, we need to import the PromptRecorder and QueryBot classes from their respective source files. You can do this by adding the following lines at the beginning of your script:
from prompt_recorder import PromptRecorder, autorecord
from query_bot import QueryBot
Step 2: Initialize the QueryBot
Next, we need to create an instance of the QueryBot class. You can do this by providing the necessary parameters, such as the system message, model name, and document paths. For example:
system_message = "You are a helpful assistant that can answer questions based on the provided documents."
model_name = "gpt-4"
doc_paths = ["document1.txt", "document2.txt"]

query_bot = QueryBot(system_message, model_name=model_name, doc_paths=doc_paths)
Step 3: Use the PromptRecorder context manager
Now that we have an instance of the QueryBot, we can use the PromptRecorder context manager to automatically record the prompts and responses. To do this, simply wrap your interactions with the QueryBot inside a with statement, like this:
with PromptRecorder() as recorder:
    # Interact with the QueryBot here
    ...
Step 4: Interact with the QueryBot
Inside the with statement, you can now interact with the QueryBot by calling it with your queries. For example:
with PromptRecorder() as recorder:
    query = "What is the main idea of document1?"
    response = query_bot(query)
    print(response.content)

    query = "How does document2 support the main idea?"
    response = query_bot(query)
    print(response.content)
The PromptRecorder will automatically record the prompts and responses for each interaction with the QueryBot.
Step 5: Access the recorded data
After you have finished interacting with the QueryBot, you can access the recorded data using the PromptRecorder instance. For example, you can print the recorded data as a pandas DataFrame:
print(recorder.dataframe())
Or, you can display the recorded data as an interactive panel:

recorder.panel().show()
Here's the complete example that demonstrates how to use the PromptRecorder to automatically record QueryBot calls:
from prompt_recorder import PromptRecorder, autorecord
from query_bot import QueryBot

system_message = "You are a helpful assistant that can answer questions based on the provided documents."
model_name = "gpt-4"
doc_paths = ["document1.txt", "document2.txt"]

query_bot = QueryBot(system_message, model_name=model_name, doc_paths=doc_paths)

with PromptRecorder() as recorder:
    query = "What is the main idea of document1?"
    response = query_bot(query)
    print(response.content)

    query = "How does document2 support the main idea?"
    response = query_bot(query)
    print(response.content)

print(recorder.dataframe())
recorder.panel().show()
That's it! You now know how to use the PromptRecorder class to automatically record calls made to the QueryBot. This can be a useful tool for logging and analyzing interactions with your chatbot.
This tutorial was written by GPT4 and edited by a human.
In this tutorial, we will learn how to use the SimpleBot class, a Python implementation of a chatbot that interacts with OpenAI's GPT-4 model. The SimpleBot class is designed to be simple and easy to use, allowing you to create a chatbot that can respond to human messages based on a given system prompt.
Initializing the SimpleBot
To create a new instance of SimpleBot, you need to provide a system prompt. The system prompt is used to prime the GPT-4 model, giving it context for generating responses. You can also optionally set the temperature and model_name parameters.
system_prompt = "You are an AI assistant that helps users with their questions."
bot = SimpleBot(system_prompt)
Interacting with the SimpleBot
To interact with the SimpleBot, simply call the instance with a human message as a parameter. The bot will return an AIMessage object containing the generated response.
human_message = "What is the capital of France?"
response = bot(human_message)
print(response.content)
Using the Panel App
SimpleBot also comes with a built-in Panel app that provides a graphical user interface for interacting with the chatbot. To create the app, call the panel() method on your SimpleBot instance:
app = bot.panel()
You can customize the appearance of the app by providing optional parameters such as input_text_label, output_text_label, submit_button_label, site_name, and title.
To display the app in your browser, call the show() method on the app:

app.show()
Here's a complete example of how to create and interact with a SimpleBot:
from simple_bot import SimpleBot

# Initialize the SimpleBot
system_prompt = "You are an AI assistant that helps users with their questions."
bot = SimpleBot(system_prompt)

# Interact with the SimpleBot
human_message = "What is the capital of France?"
response = bot(human_message)
print(response.content)

# Create and display the Panel app
app = bot.panel()
app.show()
In this tutorial, we learned how to use the SimpleBot class to create a simple chatbot that interacts with OpenAI's GPT-4 model. We also learned how to create a Panel app for a more user-friendly interface. With this knowledge, you can now create your own chatbots and experiment with different system prompts and settings.
Building a QueryBot Chat Interface with File Upload in Panel
In this tutorial, we will walk through how to create a chat interface that allows users to upload a PDF file, which the QueryBot from the llamabot library will then summarize. This is all presented in a neat web app using the Panel library.
file_input: A widget that allows users to upload PDF files.
spinner: A loading spinner indicator to show when the bot is processing.
bot: A global variable to store the QueryBot instance.
Define the File Upload Function
def upload_file(event):
    spinner.value = True
    raw_contents = event.new

    with tempfile.NamedTemporaryFile(
        delete=False, suffix=".pdf", mode="wb"
    ) as temp_file:
        temp_file.write(raw_contents)
        global bot
        bot = QueryBot("You are Richard Feynman", doc_paths=[Path(temp_file.name)])
    ...
This function is triggered when a file is uploaded:
It sets the spinner to active.
Retrieves the raw contents of the uploaded file.
Creates a temporary PDF file and writes the uploaded content to it.
Initializes the QueryBot instance with the context \"You are Richard Feynman\" and the path to the temporary file.
Interact with the Bot and Update Chat Interface
chat_interface.send(
    "Please allow me to summarize the paper for you. One moment...",
    user="System",
    respond=False,
)
response = bot("Please summarize this paper for me.")
chat_interface.send(response.content, user="System", respond=False)
spinner.value = False
After initializing the bot:
A system message is sent to inform the user that the bot is working on summarizing.
The bot is then asked to summarize the uploaded paper.
The bot's response is sent to the chat interface.
The spinner is deactivated.
Monitor File Uploads

file_input.param.watch(upload_file, "value")

This line sets up an event listener on the file_input widget. When a file is uploaded (i.e., its value changes), the upload_file function is triggered.
Define the Callback Function for Chat Interface
This function is called whenever a user sends a message:
Activates the spinner.
Queries the bot with the user's message.
Deactivates the spinner.
Yields the bot's response.
Set Up the Chat Interface
chat_interface = pn.chat.ChatInterface(
    callback=callback,
    callback_user="QueryBot",
    show_clear=False,
)
chat_interface.send(
    "Send a message to get a reply from the bot!",
    user="System",
    respond=False,
)
This sets up the chat interface and sends an initial message prompting the user to interact.
Combine Widgets and Chat Interface into a Single App
The file upload widget, spinner, and chat interface are arranged in a layout. The app is made servable, marking it as the main component when the app runs.
All the code together

The script
# chat_interface.py
from llamabot import QueryBot
import tempfile
import panel as pn
from pathlib import Path

pn.extension()

file_input = pn.widgets.FileInput(mime_type=["application/pdf"])
spinner = pn.indicators.LoadingSpinner(value=False, width=30, height=30)
bot = None


def upload_file(event):
    spinner.value = True
    raw_contents = event.new

    with tempfile.NamedTemporaryFile(
        delete=False, suffix=".pdf", mode="wb"
    ) as temp_file:
        temp_file.write(raw_contents)
        global bot
        bot = QueryBot("You are Richard Feynman", doc_paths=[Path(temp_file.name)])

    chat_interface.send(
        "Please allow me to summarize the paper for you. One moment...",
        user="System",
        respond=False,
    )
    response = bot("Please summarize this paper for me.")
    chat_interface.send(response.content, user="System", respond=False)
    spinner.value = False


file_input.param.watch(upload_file, "value")


async def callback(contents: str, user: str, instance: pn.chat.ChatInterface):
    spinner.value = True
    response = bot(contents)
    spinner.value = False
    yield response.content


chat_interface = pn.chat.ChatInterface(
    callback=callback,
    callback_user="QueryBot",
    show_clear=False,
)
chat_interface.send(
    "Send a message to get a reply from the bot!",
    user="System",
    respond=False,
)

app = pn.Column(pn.Row(file_input, spinner), chat_interface)
app.show()
The terminal command

panel serve chat_interface.py

Conclusion

By integrating the QueryBot and Panel libraries, we've built a dynamic chat interface that can summarize uploaded PDF files. This tutorial serves as a foundation to develop more sophisticated chatbot applications with file processing capabilities. Happy coding!
Create a Chat Interface with SimpleBot and Panel
In this tutorial, we will explore how to set up a simple chat interface using the SimpleBot class from the llamabot library and the Panel library. By the end of this tutorial, you'll be able to integrate a bot into a chat interface and see how it interacts.
Import Necessary Libraries

SimpleBot and ChatBot: Classes from the llamabot library that allow you to create chatbot instances.
panel (aliased as pn): A Python library for creating web-based interactive apps and dashboards.

Initialize Panel Extension

pn.extension()

Before using Panel's functionality, you need to initialize its extension with pn.extension(). This prepares your Python environment to work with Panel components.
Create a SimpleBot Instance
bot = SimpleBot("You are Richard Feynman.")
Here, we're creating an instance of the SimpleBot class. The string argument, \"You are Richard Feynman.\", serves as a context or persona for the bot. Essentially, this bot will behave as if it's Richard Feynman.
Define the Callback Function
We're creating an instance of ChatInterface from Panel's chat module.
The callback parameter is set to the previously defined callback function. This tells the chat interface to use our function to handle messages.
callback_user is the name that will be displayed for the bot's messages.
show_clear=False means the chat interface won't have a clear button to erase the chat history.
Send an Initial Message
chat_interface.send(
    "Send a message to get a reply from the bot!",
    user="System",
    respond=False,
)
This sends an initial message to the chat interface to prompt users to interact with the bot.
The message is sent from the \"System\" user and does not expect a reply (respond=False).
Make the Chat Interface Servable
chat_interface.servable()
By calling .servable() on the chat interface, you're telling Panel to treat this interface as the main component when you run the app.
Serve up the Panel app
Now, you can serve up the app by typing
panel serve chat_interface.py
in your terminal. This will open up a new browser window with the chat interface.
All the code together

The python script
# chat_interface.py
from llamabot import SimpleBot, ChatBot
import panel as pn

pn.extension()

bot = SimpleBot("You are Richard Feynman.")  # can be ChatBot as well


async def callback(contents: str, user: str, instance: pn.chat.ChatInterface):
    response = bot(contents)
    yield response.content


chat_interface = pn.chat.ChatInterface(
    callback=callback, callback_user="Feynman Bot", show_clear=False
)
chat_interface.send(
    "Send a message to get a reply from the bot!",
    user="System",
    respond=False,
)
chat_interface.servable()
With just a few lines of code, you can create a chat interface and integrate it with a bot using the llamabot and Panel libraries. This setup provides a foundational step towards creating more interactive and dynamic chatbot applications. Happy coding!
This tutorial was written by GPT4 and edited by a human.
+
+
In this tutorial, we will learn how to use the ChatBot class to create a simple chatbot that can interact with users. The chatbot is built using the OpenAI GPT-4 model and can be used in a Panel app.
+
Getting Started
+
First, let's import the ChatBot class:
+
from llamabot import ChatBot
+
+
Now, let's create a new instance of the ChatBot class. We need to provide a system prompt, which will be used to prime the chatbot. Optionally, we can also set the temperature and model name:
+
system_prompt = "Hello, I am a chatbot. How can I help you today?"
+chatbot = ChatBot(system_prompt, temperature=0.0, model_name="gpt-4")
+
+
Interacting with the ChatBot
+
To interact with the chatbot, we can simply call the chatbot instance with a human message:
+
human_message = "What is the capital of France?"
+response = chatbot(human_message)
+print(response.content)
+
+
The chatbot will return an AIMessage object containing the response to the human message, primed by the system prompt.
+
Chat History
+
The chatbot automatically manages the chat history. To view it, simply print the chatbot instance, which invokes its __repr__ method:
+
print(chatbot)
+
+
This will return a string representation of the chat history, with each message prefixed by its type (System, Human, or AI).
+
Creating a Panel App
+
The ChatBot class also provides a panel method to create a Panel app that wraps the chatbot. This allows users to interact with the chatbot through a web interface.
+
To create a Panel app, simply call the panel method on the chatbot instance:
+
app = chatbot.panel(show=False)
+
+
By default, the app will be shown in a new browser window. If you want to return the app directly, set the show parameter to False.
+
+You can customize the appearance of the app by providing additional parameters, such as site, title, and width.
+
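+
As a sketch, a customized call might look like the following; the site, title, and width keyword names come from the description above, while the values (and their exact meanings) are illustrative assumptions:

```python
# Hypothetical customization of the chatbot's Panel app.
# The keyword values below are illustrative assumptions, not documented defaults.
app = chatbot.panel(
    show=False,           # return the app object instead of opening a browser
    site="My Site",       # assumed: site name shown in the app template
    title="ChatBot Demo", # assumed: page title
    width=600,            # assumed: width of the chat widget in pixels
)
```
+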
To run the app, you can either call the show method on the app or use the Panel serve function:
+
app.show()
+
+
or
+
import panel as pn
+pn.serve(app)
+
+
Now you have a fully functional chatbot that can interact with users through a web interface!
+
Conclusion
+
In this tutorial, we learned how to use the ChatBot class to create a simple chatbot that can interact with users. We also learned how to create a Panel app to provide a web interface for the chatbot. With this knowledge, you can now create your own chatbots and customize them to suit your needs. Happy chatting!
This tutorial was written by GPT4 and edited by a human.
+
+
In this tutorial, we will learn how to use the QueryBot class to create a chatbot that can query documents using GPT-4. The QueryBot class allows us to index documents and use GPT-4 to generate responses based on the indexed documents.
+
Initializing QueryBot
+
To create a new instance of QueryBot, we need to provide a system message, a list of document paths, or a saved index path. The system message is used to instruct the chatbot on how to behave. The document paths are used to index the documents, and the saved index path is used to load a pre-built index.
+
Here's an example of how to initialize a QueryBot:
+
from pathlib import Path
+from llamabot import QueryBot
+
+system_message = "You are a helpful assistant that can answer questions based on the provided documents."
+doc_paths = [Path("document1.txt"), Path("document2.txt")]
+
+query_bot = QueryBot(system_message=system_message, doc_paths=doc_paths)
+
+
Querying the Index
+
To query the index, we can call the QueryBot instance with a query string. The QueryBot will return the top similarity_top_k documents from the index and use them to generate a response using GPT-4.
+
Here's an example of how to query the index:
+
query = "What is the main idea of document1?"
+response = query_bot(query)
+print(response.content)
+
+
Saving and Loading the Index
+
We can save the index to disk using the save method and load it later by constructing a new QueryBot with the saved_index_path parameter.
+
Here's an example of how to save and load the index:
+
# Save the index
+query_bot.save("index.json")
+
+# Load the index
+loaded_query_bot = QueryBot(system_message=system_message, saved_index_path="index.json")
+
+
Inserting Documents into the Index
+
We can insert new documents into the index using the insert method. This method takes a file path as an argument and inserts the document into the index.
+
Here's an example of how to insert a document into the index:
+
query_bot.insert(Path("new_document.txt"))
+
+
Conclusion
+
In this tutorial, we learned how to use the QueryBot class to create a chatbot that can query documents using GPT-4. We covered how to initialize a QueryBot, query the index, save and load the index, and insert new documents into the index. With this knowledge, you can now create your own chatbot that can answer questions based on a set of documents.
Automatically Record QueryBot Calls with PromptRecorder
+
+
Note
+
This tutorial was written by GPT4 and edited by a human.
+
+
In this tutorial, we will learn how to use the PromptRecorder class to automatically record calls made to the QueryBot. The PromptRecorder class is designed to record prompts and responses, making it a perfect fit for logging interactions with the QueryBot.
+
Prerequisites
+
Before we begin, make sure you have the following Python libraries installed:
+
+
pandas
+
panel
+
+
You can install them using pip:
+
pip install pandas panel
+
+
Step 1: Import the necessary classes
+
First, we need to import the PromptRecorder and QueryBot classes from their respective source files. You can do this by adding the following lines at the beginning of your script:
+
from prompt_recorder import PromptRecorder
+from query_bot import QueryBot
+
+
Step 2: Create an instance of the QueryBot
+
Next, we need to create an instance of the QueryBot class. You can do this by providing the necessary parameters, such as the system message, model name, and document paths. For example:
+
system_message = "You are a helpful assistant that can answer questions based on the provided documents."
+model_name = "gpt-4"
+doc_paths = ["document1.txt", "document2.txt"]
+
+query_bot = QueryBot(system_message, model_name=model_name, doc_paths=doc_paths)
+
+
Step 3: Use the PromptRecorder context manager
+
Now that we have an instance of the QueryBot, we can use the PromptRecorder context manager to automatically record the prompts and responses. To do this, simply wrap your interactions with the QueryBot inside a with statement, like this:
+
with PromptRecorder() as recorder:
+    ...  # Interact with the QueryBot here
+
+
Step 4: Interact with the QueryBot
+
Inside the with statement, you can now interact with the QueryBot by calling it with your queries. For example:
+
with PromptRecorder() as recorder:
+    query = "What is the main idea of document1?"
+    response = query_bot(query)
+    print(response.content)
+
+    query = "How does document2 support the main idea?"
+    response = query_bot(query)
+    print(response.content)
+
+
The PromptRecorder will automatically record the prompts and responses for each interaction with the QueryBot.
+
Step 5: Access the recorded data
+
After you have finished interacting with the QueryBot, you can access the recorded data using the PromptRecorder instance. For example, you can print the recorded data as a pandas DataFrame:
+
print(recorder.dataframe())
+
+
Or, you can display the recorded data as an interactive panel:
+
recorder.panel().show()
+
+
Complete Example
+
Here's the complete example that demonstrates how to use the PromptRecorder to automatically record QueryBot calls:
+
from prompt_recorder import PromptRecorder, autorecord
+from query_bot import QueryBot
+
+system_message = "You are a helpful assistant that can answer questions based on the provided documents."
+model_name = "gpt-4"
+doc_paths = ["document1.txt", "document2.txt"]
+
+query_bot = QueryBot(system_message, model_name=model_name, doc_paths=doc_paths)
+
+with PromptRecorder() as recorder:
+    query = "What is the main idea of document1?"
+    response = query_bot(query)
+    print(response.content)
+
+    query = "How does document2 support the main idea?"
+    response = query_bot(query)
+    print(response.content)
+
+print(recorder.dataframe())
+recorder.panel().show()
+
+
That's it! You now know how to use the PromptRecorder class to automatically record calls made to the QueryBot. This can be a useful tool for logging and analyzing interactions with your chatbot.
This tutorial was written by GPT4 and edited by a human.
+
+
In this tutorial, we will learn how to use the SimpleBot class, a Python implementation of a chatbot that interacts with OpenAI's GPT-4 model. The SimpleBot class is designed to be simple and easy to use, allowing you to create a chatbot that can respond to human messages based on a given system prompt.
+
Getting Started
+
First, let's import the SimpleBot class:
+
from llamabot import SimpleBot
+
+
Initializing the SimpleBot
+
To create a new instance of SimpleBot, you need to provide a system prompt. The system prompt is used to prime the GPT-4 model, giving it context for generating responses. You can also optionally set the temperature and model_name parameters.
+
system_prompt = "You are an AI assistant that helps users with their questions."
+bot = SimpleBot(system_prompt)
+
+
Interacting with the SimpleBot
+
To interact with the SimpleBot, simply call the instance with a human message as a parameter. The bot will return an AIMessage object containing the generated response.
+
human_message = "What is the capital of France?"
+response = bot(human_message)
+print(response.content)
+
+
Using the Panel App
+
SimpleBot also comes with a built-in Panel app that provides a graphical user interface for interacting with the chatbot. To create the app, call the panel() method on your SimpleBot instance:
+
app = bot.panel()
+
+
You can customize the appearance of the app by providing optional parameters such as input_text_label, output_text_label, submit_button_label, site_name, and title.
+
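+
As a sketch, a fully customized call might look like this; the keyword names are the ones listed above, while the values are illustrative assumptions:

```python
# Hypothetical customization of the SimpleBot Panel app.
# Keyword names come from the tutorial text; values are illustrative assumptions.
app = bot.panel(
    input_text_label="Your question",
    output_text_label="Bot response",
    submit_button_label="Ask",
    site_name="My Site",
    title="AI Assistant",
)
```
+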
To display the app in your browser, call the show() method on the app:
+
app.show()
+
+
Example
+
Here's a complete example of how to create and interact with a SimpleBot:
+
from llamabot import SimpleBot
+
+# Initialize the SimpleBot
+system_prompt = "You are an AI assistant that helps users with their questions."
+bot = SimpleBot(system_prompt)
+
+# Interact with the SimpleBot
+human_message = "What is the capital of France?"
+response = bot(human_message)
+print(response.content)
+
+# Create and display the Panel app
+app = bot.panel()
+app.show()
+
+
Conclusion
+
In this tutorial, we learned how to use the SimpleBot class to create a simple chatbot that interacts with OpenAI's GPT-4 model. We also learned how to create a Panel app for a more user-friendly interface. With this knowledge, you can now create your own chatbots and experiment with different system prompts and settings.