Summon the power of Claude AI to transform your code with a wave of your terminal wand!
Wizard Prompt CLI is a magical command-line interface that conjures Claude AI to analyze, enhance, and transform your project files. Ask questions about your code in natural language, and watch as the AI wizard works its spells to provide insights and implement changes.
This tool requires an Anthropic API key to function. You must set the `ANTHROPIC_API_KEY` environment variable before using any commands; see the API Key Setup section below for details. Without this environment variable set, no commands will work!

```sh
# Set this before using any wiz commands
export ANTHROPIC_API_KEY=your_api_key_here
```
Note: This tool uses LiteLLM for compatibility with multiple AI providers. The default model is Anthropic's Claude, but you can switch to other providers using the `--model` flag.
```sh
OPENAI_API_KEY=${WEBUI_API_KEY} wiz prompt -f README.md \
  'write a poem about the readme' \
  --llm-base https://webui.dataturd.com/api \
  --model openai/qwen2.5:14b
```
- 🔮 Automatically scans your project's grimoire of files, carefully avoiding binary artifacts
- 📜 Respects ancient incantations like `.gitignore` rules
- 🌊 Streams Claude's mystical thinking process as it crafts responses
- 🪄 Magically applies code changes with a single command
- 🧪 Powerful file filtering to focus the AI's attention on specific scrolls of code
- 📚 Handles file output with proper directory creation spells
- 📥 Supports reading enchantments from stdin
- 🖼️ Includes image attachment capabilities for visual context
- 🚫 Exclude files via regex patterns to fine-tune your magical selection
- 📋 Preview files that would be included before spending API tokens
- 📝 Supports a `.wizrc` configuration file for persistent command-line arguments
Summon the Wizard to your environment:

```sh
pip install wizard-prompt-cli
```

Or conjure it from source:

```sh
git clone https://github.com/ddrscott/wizard-prompt-cli.git
cd wizard-prompt-cli
pip install -e .
```
Before casting spells, you need to obtain your magical key:
- Create an account at Anthropic if you don't have one
- Generate an API key from your Anthropic dashboard
- Set the environment variable with your secret key:
```
# For Linux/macOS
export ANTHROPIC_API_KEY=your_api_key_here

# For Windows (Command Prompt)
set ANTHROPIC_API_KEY=your_api_key_here

# For Windows (PowerShell)
$env:ANTHROPIC_API_KEY="your_api_key_here"
```
For permanent enchantment, add this to your shell profile (`.bashrc`, `.zshrc`, etc.).

🔴 Note: The `ANTHROPIC_API_KEY` environment variable is required; the tool will not function without it. If you encounter errors about unauthorized access or missing API keys, verify that this variable is set correctly.
You can store command configurations in a `.wizrc` YAML file in your project directory:

```yaml
# Example .wizrc file
files:
  exclude: ["*test*"]
  image: ["screenshot.png"]
  max_tokens: 80000
```
```yaml
# Example .wizrc configuration
files: &files
  exclude: ["*.log", "build", "dist"]

prompt:
  <<: *files
```
The tool will automatically load this configuration and apply it to the corresponding commands. The examples above show:

- Setting exclusion patterns to ignore specific files
- Using YAML anchors (`&files`) to define common configurations
- Inheriting settings between commands with YAML merge keys (`<<: *files`)
- Configuring command-specific options like `max_tokens`
Each top-level key in the YAML corresponds to a subcommand, with nested options matching the command's options. All configuration is completely optional, and command-line arguments will override settings in `.wizrc` when provided.

The `.wizrc` file is automatically excluded from git via `.gitignore`.
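Conceptually, the precedence rule works like a dictionary merge: `.wizrc` supplies defaults, and any flag actually passed on the command line wins. A simplified Python sketch of that rule (the `effective_options` helper is an illustration, not the tool's actual code):

```python
def effective_options(wizrc_options, cli_options):
    """Merge .wizrc defaults with command-line flags.

    Flags the user actually passed (non-None values) override the
    .wizrc settings; everything else falls back to the file.
    """
    merged = dict(wizrc_options)
    merged.update({k: v for k, v in cli_options.items() if v is not None})
    return merged

# .wizrc sets an exclusion list and a token budget...
wizrc = {"exclude": ["*.log", "build"], "max_tokens": 80000}
# ...but the user passed only -m 120000 on the command line.
cli = {"exclude": None, "max_tokens": 120000}

print(effective_options(wizrc, cli))
# {'exclude': ['*.log', 'build'], 'max_tokens': 120000}
```

The exclusion list survives from `.wizrc`, while the explicit `-m` flag replaces the configured token budget.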
Wizard Prompt CLI offers three main incantations and supports configuration via a `.wizrc` file:
Invoke Claude with your questions and project files:

```sh
wiz prompt "How can I improve this code?"
```

Include specific scrolls:

```sh
wiz prompt -f main.py -f utils.py "How can I make these files more efficient?"
```

Add images for visual context:

```sh
wiz prompt -i screenshot.png "What UI improvements would you suggest based on this screenshot?"
```

Exclude files matching a pattern:

```sh
wiz prompt -x ".*test.*\.py$" "Analyze all non-test Python files"
```
Options:

- `-f, --file`: Specify files to include (can be used multiple times)
- `-i, --image`: Include image files as context (can be used multiple times)
- `-o, --output`: Location to write the response (default: `.response.md`)
- `-m, --max-tokens`: Maximum tokens for the response (default: 60000)
- `-t, --thinking-tokens`: Maximum tokens for the model's thinking process (default: 16000)
- `-x, --exclude`: Regular expression pattern to exclude files
- `--tokens`: Show token count estimates alongside file sizes
- `--model`: Specify the LLM model and provider in LiteLLM format (default: `anthropic/claude-3-7-sonnet-20250219`)
- `--llm-base`: Custom API base URL for the LLM provider (for self-hosted models)
The wizard's response will be saved to `.response.md` by default, and a copy of the full messages, including the system prompt, will be saved to `.messages.md`.
Preview which files would be included in a prompt without calling the API:

```sh
wiz files
```

This command accepts the same filtering options as `prompt`:

```sh
wiz files -f main.py -f utils.py
wiz files -x ".*test.*\.py$"
```
Options:

- `-f, --file`: Specify files to include (can be used multiple times)
- `-i, --image`: Include image files as context (can be used multiple times)
- `-x, --exclude`: Regular expression pattern to exclude files
- `--tokens`: Show token count estimates alongside file sizes
Use this command to verify file selection before spending API tokens with a real prompt.
Cast the spell to implement the suggested changes:

```sh
wiz apply
```

Or specify a different spell book:

```sh
wiz apply custom_response.md
```

You can also channel content directly to the apply command:

```sh
cat response.md | wiz apply -
```
- The `prompt` command channels your question and file contents to Claude
- Claude weaves a spell over your files and provides a response with suggested changes
- Changes are formatted with `[FILE path]...[/FILE]` magical markers
- The `apply` command interprets these markers and transforms your files
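The marker format is simple enough to illustrate with a regular expression. The sketch below is a hypothetical rendering of the parsing idea; the pattern and `extract_file_blocks` helper are illustrative, not `wiz apply`'s actual code:

```python
import re

# Match [FILE path] ... [/FILE] blocks; DOTALL lets the body span lines.
FILE_BLOCK = re.compile(
    r"\[FILE (?P<path>[^\]]+)\]\n(?P<body>.*?)\n\[/FILE\]",
    re.DOTALL,
)

def extract_file_blocks(response_text):
    """Return a {path: contents} dict for every [FILE] block found."""
    return {
        m.group("path"): m.group("body")
        for m in FILE_BLOCK.finditer(response_text)
    }

response = """Here is the suggested change:

[FILE src/utils.py]
def parse_data(rows):
    return [row.strip() for row in rows]
[/FILE]
"""

blocks = extract_file_blocks(response)
print(list(blocks))  # ['src/utils.py']
```

Because each marker carries the full target path, the apply step can create any missing parent directories before writing each file.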
Ask the wizard to refactor a specific function:

```sh
wiz prompt -f src/utils.py "Refactor the parse_data function to be more efficient"
```

Conjure a new feature:

```sh
wiz prompt "Add a progress bar to the file processing function"
```

Get feedback on UI design with an image:

```sh
wiz prompt -i design.png -f styles.css "How can I improve this layout?"
```

Exclude test files when analyzing your codebase:

```sh
wiz prompt -x ".*test.*\.py$|__pycache__" "Review my Python code architecture"
```

Preview files that would be included in a complex filter:

```sh
wiz files -x "\.(json|md|txt)$"
```

View files with token count estimates:

```sh
wiz files --tokens
```

Focus on specific file types by excluding others:

```sh
wiz prompt -x "\.(json|md|txt)$" "Analyze only my code files, not documentation or data"
```

Adjust token limits for complex analysis:

```sh
wiz prompt -m 80000 -t 20000 "Perform a comprehensive security audit of this codebase"
```

See token usage when sending a prompt:

```sh
wiz prompt --tokens "How can I optimize these files?"
```
Use alternative AI providers via LiteLLM (requires setting the appropriate API keys):

```sh
# Use Groq's Qwen model
export GROQ_API_KEY=your_groq_key_here
wiz prompt --model groq/qwen-qwq-32b "What pattern is this code using?"

# Use OpenAI's GPT-4o
export OPENAI_API_KEY=your_openai_key_here
wiz prompt --model openai/gpt-4o --max-tokens 10000 "How can I improve this algorithm?"

# Use a self-hosted model or custom deployment
wiz prompt --model openai/gpt-4o --llm-base "http://localhost:8000/v1" "Explain this code to me"
```
Then cast the spell to apply the changes:

```sh
wiz apply
```

Or do it all in one step and watch the output at the same time:

```sh
wiz prompt "Fix bugs" | tee -a /dev/tty | wiz apply -
```
When using Wizard Prompt CLI to modify your code, it's essential to integrate with Git or your preferred version control system to keep track of changes and maintain safety.
- Commit your current changes to create a restore point:

  ```sh
  git add .
  git commit -m "Save state before wizard modifications"
  ```

- Alternatively, create a new branch for experimental wizard changes:

  ```sh
  git checkout -b wizard-experiment
  ```

- Review all modifications with Git before committing:

  ```sh
  git diff
  ```

- For a more detailed review, use a visual diff tool:

  ```sh
  git difftool
  ```

- For a file-by-file review:

  ```sh
  git add -p
  ```

- If you're satisfied with the changes:

  ```sh
  git add .
  git commit -m "Applied AI-suggested improvements to X"
  ```

- If the wizard's spells went awry, revert the changes:

  ```sh
  git reset --hard HEAD~1   # If you committed before using wiz
  # or
  git checkout -- .         # If you haven't committed the wizard's changes
  ```
Always review changes carefully before committing them to your repository. The AI is powerful but not infallible - your expertise is the final arbiter of which magical transformations to keep!
If you encounter errors like "API key not found" or "Unauthorized", check that:

- You have set the `ANTHROPIC_API_KEY` environment variable correctly
- The API key is valid and active in your Anthropic account
- You've formatted the environment variable correctly, without extra spaces
You can verify your environment variable is set correctly with:

```sh
# Linux/macOS
echo $ANTHROPIC_API_KEY

# Windows PowerShell
echo $env:ANTHROPIC_API_KEY
```
For issues related to token counting or display:

- Make sure you have the latest version of the tool installed
- The `--tokens` flag shows estimates based on the tiktoken library
- Image token counts are approximate, since they depend on image dimensions and content
If you see LiteLLM errors:

- Verify you have the `litellm` package installed (minimum version 1.63.0)
- This tool uses LiteLLM to connect to AI providers like Anthropic
- Make sure both the `litellm` and `tiktoken` packages are correctly installed
- Check that you're using a compatible version of Python (3.11+)
✨ If this magical tool helped you, consider giving it a star! ✨