A powerful, feature-rich command-line interface for interacting with Model Context Protocol servers. This client enables seamless communication with LLMs through integration with the CHUK Tool Processor and CHUK-LLM, providing tool usage, conversation management, and multiple operational modes.
The MCP CLI is built on a modular architecture with clean separation of concerns:
- CHUK Tool Processor: Async-native tool execution and MCP server communication
- CHUK-LLM: Unified LLM provider configuration and client management
- MCP CLI: Rich user interface and command orchestration (this project)
- Chat Mode: Conversational interface with streaming responses and automated tool usage
- Interactive Mode: Command-driven shell interface for direct server operations
- Command Mode: Unix-friendly mode for scriptable automation and pipelines
- Direct Commands: Run individual commands without entering interactive mode
- Streaming Responses: Real-time response generation with live UI updates
- Concurrent Tool Execution: Execute multiple tools simultaneously while preserving conversation order
- Smart Interruption: Interrupt streaming responses or tool execution with Ctrl+C
- Performance Metrics: Response timing, words/second, and execution statistics
- Rich Formatting: Markdown rendering, syntax highlighting, and progress indicators
- OpenAI: GPT models (`gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, etc.)
- Anthropic: Claude models (`claude-3-opus`, `claude-3-sonnet`, `claude-3-haiku`)
- Ollama: Local models (`llama3.2`, `qwen2.5-coder`, `deepseek-coder`, etc.)
- Custom Providers: Extensible architecture for additional providers
- Dynamic Switching: Change providers and models mid-conversation
- Automatic Discovery: Server-provided tools are automatically detected and catalogued
- Provider Adaptation: Tool names are automatically sanitized for provider compatibility
- Concurrent Execution: Multiple tools can run simultaneously with proper coordination
- Rich Progress Display: Real-time progress indicators and execution timing
- Tool History: Complete audit trail of all tool executions
- Streaming Tool Calls: Support for tools that return streaming data
- Environment Integration: API keys and settings via environment variables
- File-based Config: YAML and JSON configuration files
- User Preferences: Persistent settings for active providers and models
- Validation & Diagnostics: Built-in provider health checks and configuration validation
- Cross-Platform Support: Windows, macOS, and Linux with platform-specific optimizations
- Rich Console Output: Colorful, formatted output with automatic fallbacks
- Command Completion: Context-aware tab completion for all interfaces
- Comprehensive Help: Detailed help system with examples and usage patterns
- Graceful Error Handling: User-friendly error messages with troubleshooting hints
- Python 3.11 or higher
- API Keys (as needed):
  - OpenAI: `OPENAI_API_KEY` environment variable
  - Anthropic: `ANTHROPIC_API_KEY` environment variable
  - Custom providers: Provider-specific configuration
- Local Services (as needed):
  - Ollama: Local installation for Ollama models
  - MCP Servers: Server configuration file (default: `server_config.json`)
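Before installing, a quick sanity check (a minimal sketch using only standard shell; swap in the key variable for whichever provider you use) confirms the interpreter version and that a key is exported:
# Should report Python 3.11 or newer
python --version
# Prints "set" only if the key is exported (the value itself stays hidden)
echo "${OPENAI_API_KEY:+set}"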
- Clone the repository:
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
- Install the package:
pip install -e ".[cli,dev]"
- Verify installation:
mcp-cli --help
UV provides faster dependency resolution and better environment management:
# Install UV if not already installed
pip install uv
# Install dependencies
uv sync --reinstall
# Run with UV
uv run mcp-cli --help
The following global options are available for all modes and commands:
- `--server`: Specify server(s) to connect to (comma-separated)
- `--config-file`: Path to server configuration file (default: `server_config.json`)
- `--provider`: LLM provider (`openai`, `anthropic`, `ollama`, etc.)
- `--model`: Specific model to use (provider-dependent)
- `--disable-filesystem`: Disable filesystem access (default: enabled)
- `--api-base`: Override API endpoint URL
- `--api-key`: Override API key
- `--verbose`: Enable detailed logging
- `--quiet`: Suppress non-essential output
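These options compose with any subcommand; the invocation below is a representative sketch that assumes the default `server_config.json` is present:
# Pick a provider and model explicitly, suppressing non-essential output
mcp-cli --quiet tools --server sqlite --provider openai --model gpt-4o-mini
Defaults can also be set once via environment variables: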
export LLM_PROVIDER=openai # Default provider
export LLM_MODEL=gpt-4o-mini # Default model
export OPENAI_API_KEY=sk-... # OpenAI API key
export ANTHROPIC_API_KEY=sk-ant-... # Anthropic API key
export MCP_TOOL_TIMEOUT=120 # Tool execution timeout (seconds)
Provides a natural language interface with streaming responses and automatic tool usage:
# Default mode (no subcommand needed)
mcp-cli --server sqlite
# Explicit chat mode
mcp-cli chat --server sqlite
# With specific provider and model
mcp-cli chat --server sqlite --provider anthropic --model claude-3-sonnet
# With custom configuration
mcp-cli chat --server sqlite --provider openai --api-key sk-... --model gpt-4o
Command-driven shell interface for direct server operations:
mcp-cli interactive --server sqlite
# With provider selection
mcp-cli interactive --server sqlite --provider ollama --model llama3.2
Unix-friendly interface for automation and scripting:
# Process text with LLM
mcp-cli cmd --server sqlite --prompt "Analyze this data" --input data.txt
# Execute tools directly
mcp-cli cmd --server sqlite --tool list_tables --output tables.json
# Pipeline-friendly processing
echo "SELECT * FROM users LIMIT 5" | mcp-cli cmd --server sqlite --tool read_query --input -
Execute individual commands without entering interactive mode:
# List available tools
mcp-cli tools --server sqlite
# Show provider configuration
mcp-cli provider list
# Ping servers
mcp-cli ping --server sqlite
# List resources
mcp-cli resources --server sqlite
Chat mode provides the most advanced interface with streaming responses and intelligent tool usage.
# Simple startup
mcp-cli --server sqlite
# Multiple servers
mcp-cli --server sqlite,filesystem
# Specific provider configuration
mcp-cli --server sqlite --provider anthropic --model claude-3-opus
/provider # Show current configuration
/provider list # List all providers
/provider config # Show detailed configuration
/provider diagnostic # Test provider connectivity
/provider set openai api_key sk-... # Configure provider settings
/provider anthropic # Switch to Anthropic
/provider openai gpt-4o # Switch provider and model
/model # Show current model
/model gpt-4o # Switch to specific model
/models # List available models
/tools # List available tools
/tools --all # Show detailed tool information
/tools --raw # Show raw JSON definitions
/tools call # Interactive tool execution
/toolhistory # Show tool execution history
/th -n 5 # Last 5 tool calls
/th 3 # Details for call #3
/th --json # Full history as JSON
/conversation # Show conversation history
/ch -n 10 # Last 10 messages
/ch 5 # Details for message #5
/ch --json # Full history as JSON
/save conversation.json # Save conversation to file
/compact # Summarize conversation
/clear # Clear conversation history
/cls # Clear screen only
/verbose # Toggle verbose/compact display
/interrupt # Stop running operations
/servers # List connected servers
/help # Show all commands
/help tools # Help for specific command
/exit # Exit chat mode
- Real-time text generation with live updates
- Performance metrics (words/second, response time)
- Graceful interruption with Ctrl+C
- Progressive markdown rendering
- Automatic tool discovery and usage
- Concurrent execution with progress indicators
- Verbose and compact display modes
- Complete execution history and timing
- Seamless switching between providers
- Model-specific optimizations
- API key and endpoint management
- Health monitoring and diagnostics
Interactive mode provides a command shell for direct server interaction.
mcp-cli interactive --server sqlite
help # Show available commands
exit # Exit interactive mode
clear # Clear terminal
# Provider management
provider # Show current provider
provider list # List providers
provider anthropic # Switch provider
# Tool operations
tools # List tools
tools --all # Detailed tool info
tools call # Interactive tool execution
# Server operations
servers # List servers
ping # Ping all servers
resources # List resources
prompts # List prompts
Command mode provides Unix-friendly automation capabilities.
--input FILE # Input file (- for stdin)
--output FILE # Output file (- for stdout)
--prompt TEXT # Prompt template
--tool TOOL # Execute specific tool
--tool-args JSON # Tool arguments as JSON
--system-prompt TEXT # Custom system prompt
--raw # Raw output without formatting
--single-turn # Disable multi-turn conversation
--max-turns N # Maximum conversation turns
# Text processing
echo "Analyze this data" | mcp-cli cmd --server sqlite --input - --output analysis.txt
# Tool execution
mcp-cli cmd --server sqlite --tool list_tables --raw
# Complex queries
mcp-cli cmd --server sqlite --tool read_query --tool-args '{"query": "SELECT COUNT(*) FROM users"}'
# Batch processing with GNU Parallel
ls *.txt | parallel mcp-cli cmd --server sqlite --input {} --output {}.summary --prompt "Summarize: {{input}}"
The CLI automatically manages provider configurations using the CHUK-LLM library:
# Configure a provider
mcp-cli provider set openai api_key sk-your-key-here
mcp-cli provider set anthropic api_base https://api.anthropic.com
# Test configuration
mcp-cli provider diagnostic openai
# List available models
mcp-cli provider list
Providers are configured in `~/.chuk_llm/providers.yaml`:
openai:
api_base: https://api.openai.com/v1
default_model: gpt-4o-mini
anthropic:
api_base: https://api.anthropic.com
default_model: claude-3-sonnet
ollama:
api_base: http://localhost:11434
default_model: llama3.2
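Because the provider architecture is extensible, an additional OpenAI-compatible endpoint can be registered by extending the same schema. The entry below is a hypothetical sketch; the provider name, endpoint, and model are placeholders, not values shipped with the CLI:
# Hypothetical provider entry; replace name, endpoint, and model with real values
cat >> ~/.chuk_llm/providers.yaml <<'EOF'
my-provider:
  api_base: https://llm.example.com/v1
  default_model: my-model
EOF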
API keys are stored securely in `~/.chuk_llm/.env`:
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
Create a `server_config.json` file with your MCP server configurations:
{
"mcpServers": {
"sqlite": {
"command": "python",
"args": ["-m", "mcp_server.sqlite_server"],
"env": {
"DATABASE_PATH": "database.db"
}
},
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"],
"env": {}
},
"brave-search": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"env": {
"BRAVE_API_KEY": "your-brave-api-key"
}
}
}
}
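With that file in place, pass its path explicitly and name the servers to start, for example:
# Connect to two of the servers defined above
mcp-cli chat --config-file server_config.json --server sqlite,filesystem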
# Start with OpenAI
mcp-cli chat --server sqlite --provider openai --model gpt-4o
# In chat, switch to Anthropic for reasoning tasks
> /provider anthropic claude-3-opus
# Switch to Ollama for local processing
> /provider ollama llama3.2
# Compare responses across providers
> /provider openai
> What's the capital of France?
> /provider anthropic
> What's the capital of France?
# Database analysis workflow
> List all tables in the database
[Tool: list_tables] → products, customers, orders
> Show me the schema for the products table
[Tool: describe_table] → id, name, price, category, stock
> Find the top 10 most expensive products
[Tool: read_query] → SELECT name, price FROM products ORDER BY price DESC LIMIT 10
> Export this data to a CSV file
[Tool: write_file] → Saved to expensive_products.csv
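The same lookup can be scripted non-interactively in command mode (assuming the SQLite server exposes the `read_query` tool used above):
# Run the top-10 query directly, without entering chat mode
mcp-cli cmd --server sqlite --tool read_query \
  --tool-args '{"query": "SELECT name, price FROM products ORDER BY price DESC LIMIT 10"}'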
# Batch data processing
for file in data/*.csv; do
mcp-cli cmd --server sqlite \
--tool analyze_data \
--tool-args "{\"file_path\": \"$file\"}" \
--output "results/$(basename "$file" .csv)_analysis.json"
done
# Pipeline processing
cat input.txt | \
mcp-cli cmd --server sqlite --prompt "Extract key entities" --input - | \
mcp-cli cmd --server sqlite --prompt "Categorize these entities" --input - > output.txt
# Enable verbose mode for detailed timing
> /verbose
# Monitor tool execution times
> /toolhistory
Tool Call History (15 calls)
# | Tool | Arguments | Time
1 | list_tables | {} | 0.12s
2 | read_query | {"query": "SELECT..."} | 0.45s
...
# Check provider performance
> /provider diagnostic
Provider Diagnostics
Provider | Status | Response Time
openai | ✅ Ready | 234ms
anthropic | ✅ Ready | 187ms
ollama | ✅ Ready | 56ms
- "Missing argument 'KWARGS'" error:
  # Use equals-sign format
  mcp-cli chat --server=sqlite --provider=openai
  # Or add a double dash
  mcp-cli chat -- --server sqlite --provider openai
- Provider not found:
  mcp-cli provider diagnostic
  mcp-cli provider set <provider> api_key <your-key>
- Tool execution timeout:
  export MCP_TOOL_TIMEOUT=300  # 5 minutes
- Connection issues:
  mcp-cli ping --server <server-name>
  mcp-cli servers
Enable verbose logging for troubleshooting:
mcp-cli --verbose chat --server sqlite
mcp-cli --log-level DEBUG interactive --server sqlite
- API Keys: Stored securely in environment variables or protected files
- File Access: Filesystem access can be disabled with `--disable-filesystem`
- Tool Validation: All tool calls are validated before execution
- Timeout Protection: Configurable timeouts prevent hanging operations
- Server Isolation: Each server runs in its own process
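These protections map onto the flags and environment variables described earlier; a locked-down invocation might look like:
# Cap tool runtime at 60 seconds and disable filesystem access
export MCP_TOOL_TIMEOUT=60
mcp-cli chat --server sqlite --disable-filesystem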
- Concurrent Tool Execution: Multiple tools can run simultaneously
- Streaming Responses: Real-time response generation
- Connection Pooling: Efficient reuse of client connections
- Caching: Tool metadata and provider configurations are cached
- Async Architecture: Non-blocking operations throughout
Core dependencies are organized into feature groups:
- cli: Rich terminal UI, command completion, provider integrations
- dev: Development tools, testing utilities, linting
- chuk-tool-processor: Core tool execution and MCP communication
- chuk-llm: Unified LLM provider management
Install with specific features:
pip install "mcp-cli[cli]" # Basic CLI features
pip install "mcp-cli[cli,dev]" # CLI with development tools
We welcome contributions! Please see our Contributing Guide for details.
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
pip install -e ".[cli,dev]"
pre-commit install
pytest
pytest --cov=mcp_cli --cov-report=html
This project is licensed under the MIT License - see the LICENSE file for details.
- CHUK Tool Processor - Async-native tool execution
- CHUK-LLM - Unified LLM provider management
- Rich - Beautiful terminal formatting
- Typer - CLI framework
- Prompt Toolkit - Interactive input
- Model Context Protocol - Core protocol specification
- MCP Servers - Official MCP server implementations
- CHUK Tool Processor - Tool execution engine
- CHUK-LLM - LLM provider abstraction