wtprompt is a lightweight, no-nonsense library designed to help you manage your LLM prompts efficiently.
Tired of cluttering your code with blocks of text? wtprompt keeps your code clean by loading prompts from files or JSON. Say goodbye to length issues and linting headaches.
- ✅ Zero bloat – Need just prompts? No need for a full MLOps library.
- ✅ Jinja support – Use powerful templating syntax for dynamic prompts.
- ✅ Markdown-friendly – Ready for OpenAI-style Markdown prompting.
- ✅ Folder & JSON support – Load prompts from structured files or JSON.
- ✅ Prompt hashing – Track prompts with unique identifiers.
- ✅ Built-in preprocessing – Clean input text before inserting it into prompts.
- 🚀 Simplified Prompt Filling: You can now use `fill_prompt` directly with `FolderPrompts`: `base_prompts.fill_prompt('fill_test', {'day': 'Monday', 'this_month': 'August'})`
- ⚠️ Breaking Change: `fill_list` and `PromptGenerator` are no longer accessible at the top level (`wtprompt`). Update your imports as follows: `from wtprompt.fill import fill_list, PromptGenerator`, or drop them in favour of the simplified filling method.
Install with pip:

```bash
pip install wtprompt
```
Store your prompts as `.txt` or `.md` files in a folder:
```
prompts/
├── fill_test.txt   # "Today is {{ day }} in the month of {{ this_month }}."
├── hello.txt
└── subfolder/
    └── question.md
```
Then, load them:
```python
from wtprompt import FolderPrompts

base_prompts = FolderPrompts("prompts")

# Retrieve a prompt
prompt = base_prompts.hello               # Loads "hello.txt"
prompt = base_prompts.subfolder.question  # Loads "subfolder/question.md"
```
Now you can fill prompts directly using `fill_prompt`:
```python
filled_prompt = base_prompts.fill_prompt("fill_test", {"day": "Monday", "this_month": "August"})
print(filled_prompt)
```
➡️ Output: `Today is Monday in the month of August.`
For simpler cases, use `fill_list`:
```python
filled_prompt = base_prompts.fill_list("fill_test", ["Monday", "August"])
```
Tip: `fill_list` is faster for basic substitutions, but Jinja templating allows more flexibility.
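For instance, because prompts are rendered with Jinja, a template can use control structures as well as plain placeholders. A minimal sketch, assuming a hypothetical template file `greeting_test.txt` in your prompts folder:

```python
# prompts/greeting_test.txt (hypothetical template):
#   Hello{% if name %}, {{ name }}{% endif %}! Today is {{ day }}.

filled = base_prompts.fill_prompt("greeting_test", {"name": "Ada", "day": "Monday"})
# -> "Hello, Ada! Today is Monday."
```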
`FolderPrompts` helps organize and manage prompts with structured storage.
```python
my_prompts = FolderPrompts("prompts")

# Direct attribute access
prompt_text = my_prompts.prompt_name
prompt_text = my_prompts.subfolder.prompt_name

# Save and reload a prompt report
my_prompts.save_prompt_report("report.json")
my_prompts.load_from_prompt_report("report.json", strict=True)
```
Strict mode ensures that the prompt hashes recorded in the report match the prompts currently on disk.
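As a sketch of how the report can be used (the file names are placeholders): snapshot the prompt hashes once, then reload with `strict=True` later to verify that no prompt has silently changed, assuming a mismatch is reported as an error in strict mode.

```python
from wtprompt import FolderPrompts

# Snapshot the current prompts and their hashes
my_prompts = FolderPrompts("prompts")
my_prompts.save_prompt_report("prompt_report.json")

# Later (e.g., in CI), check the prompts against the recorded hashes
checked = FolderPrompts("prompts")
checked.load_from_prompt_report("prompt_report.json", strict=True)
```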
Store prompts in a JSON file:
```json
{
  "greeting": "Hello, {{ name }}!",
  "farewell": "Goodbye, {{ name }}."
}
```
Load and use them:
```python
from wtprompt import JsonPrompts

prompts = JsonPrompts("prompts.json")
prompt = prompts.greeting  # Or prompts("greeting")
```
Note: JSON loading is not lazy. Use `validate=True` to check for errors.
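For example, a minimal sketch (passing `validate` at construction is an assumption about where the flag is accepted; check the docs for the exact placement):

```python
from wtprompt import JsonPrompts

# Assumed keyword placement: validate the prompts file while loading
prompts = JsonPrompts("prompts.json", validate=True)
```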
For quick prototyping, define prompts manually:
```python
from wtprompt import PromptLoader

prompts = PromptLoader()
prompts.add_prompt("greeting", "Hello, {{ name }}!")

# Retrieve the prompt and its hash
text, hash_id = prompts.get_prompt("greeting", get_hash=True)
```
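The returned hash can serve as a stable identifier for the exact prompt text, for example in experiment tracking or logging. A small sketch (the logging setup is illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO)

text, hash_id = prompts.get_prompt("greeting", get_hash=True)
logging.info("Using prompt 'greeting' (hash=%s)", hash_id)
```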
Raw text can be messy. Use `TextPreprocessor` for basic cleaning:
```python
from wtprompt.utils.preprocessor import TextPreprocessor

preprocessor = TextPreprocessor()
is_valid, cleaned_text = preprocessor.preprocess(" Hello! ")
assert is_valid  # True
```
Customizable settings include:
- Whitespace handling: Trim spaces, limit consecutive spaces.
- Validation: Ensure text isn’t empty or gibberish.
- Truncation: Limit max length.
- Character filtering: Enforce ASCII, Unicode normalization.
Example:

```python
def build_prompt(prompts, preprocessor, context, question):
    # Clean the context before inserting it into the prompt
    is_ok, context = preprocessor.preprocess(context)
    assert is_ok, "ERROR: Invalid context"
    return prompts.fill_list("prompt_name", [context, question])
```
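A quick usage sketch of the helper above (the context and question strings are placeholders, and `prompt_name` must exist in your prompts folder with two placeholders):

```python
filled = build_prompt(
    base_prompts,
    preprocessor,
    context="  The Eiffel Tower is in Paris.  ",
    question="Where is the Eiffel Tower?",
)
```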
✔️ Store prompts in folders or JSON.
✔️ Use Jinja templating to insert dynamic values.
✔️ Apply preprocessing for cleaner input.
Licensed under MIT. Contributions are welcome!