"The Dao that can be told is not the eternal Dao." - Laozi
A sophisticated artificial intelligence system implementing a metacognitive architecture based on Global Workspace Theory. The system features persistent memory, self-reflection, advanced emotional processing, a dynamic attention mechanism, and real-time consciousness simulation through vector-based semantic search and LLM-powered introspection.
```mermaid
graph TB
    A[Input Text] --> B[SensoryCortex]
    B --> C[WorkingMemory]
    C --> M[AttentionEngine]
    M --> D[AssociativeEngine]
    M --> E[IntrospectionEngine]
    M --> J[EmotionalEngine]
    D --> F[LongTermMemory]
    E --> G[ResponseGenerator]
    J --> G
    F --> D
    E --> H[New Insights]
    J --> K[Emotional Memories]
    H --> F
    K --> J
    G --> I[Final Response]

    subgraph "Cognitive Cycle"
        C
        M
        D
        E
        J
    end

    subgraph "Memory Systems"
        F
        H
        K
    end

    subgraph "Emotional Processing"
        J
        K
        L[EmotionalState]
        J --> L
        L --> C
    end

    subgraph "Attention System"
        M
        N[AttentionState]
        O[AttentionFocus]
        M --> N
        N --> O
    end
```
- 🧮 SensoryCortex: AI-powered analysis of input intent, sentiment, and tone using Google Gemini
- 🧠 WorkingMemory: Central coordination hub implementing Global Workspace Theory
- 🎯 AttentionEngine: Dynamic attention allocation with balanced, focused, and exploratory strategies
- 🔍 AssociativeEngine: Vector-based semantic search for relevant memory associations
- 🤔 IntrospectionEngine: LLM-powered insight generation and paradox detection
- 💗 EmotionalEngine: Advanced emotional processing with PAD model and emotional memory
- 🎭 ResponseGenerator: Emotionally-aware response synthesis using context and memories
- 💾 LongTermMemory: Persistent vector database using ChromaDB with sentence transformers
- Persistent Memory: Vector-based storage with semantic similarity search
- Real-time Consciousness: Multi-cycle cognitive processing with stabilization
- AI-Powered Analysis: Google Gemini integration for natural language understanding
- 🔄 Dynamic Attention: Intelligent resource allocation with 3 attention strategies
- 📌 Attention Persistence: Focus tracking across cognitive cycles with capacity management
- 🔀 Context Switching: Adaptive attention reallocation based on processing needs
- ✅ Emotional Intelligence: PAD emotional model with AI-powered emotion analysis
- ✅ Emotional Memory: Persistent emotional memories with decay and relevance scoring
- ✅ Emotionally-Aware Responses: Context-sensitive emotional response generation
- MCP Integration: Works directly with Cursor IDE as a tool
- Multilingual Support: English and Russian language processing
- Self-Reflection: Generates insights, questions, and identifies paradoxes
- Production Ready: Comprehensive testing, error handling, and logging
- Python 3.9 or higher
- Google Gemini API key (for AI analysis)

1. Clone the repository

```bash
git clone https://github.com/netsky-devel/Metacognitive-Engine-AI-Consciousness-Simulation.git
cd Metacognitive-Engine-AI-Consciousness-Simulation
```

2. Install dependencies

```bash
pip install -r requirements.txt
```

3. Download the spaCy model

```bash
python -m spacy download en_core_web_sm
```

4. Set up the environment

```bash
cp .env.example .env  # Edit .env and add your GEMINI_API_KEY
```

5. Initialize the system

```bash
python test_system.py
```

Create a `.env` file in the project root:

```
GEMINI_API_KEY=your_gemini_api_key_here
```
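To confirm the key is actually visible at runtime, here is a minimal sketch; it assumes `python-dotenv` is the loader, which this README does not confirm:

```python
# Minimal check that GEMINI_API_KEY reaches the process.
# Assumes python-dotenv; adjust if the project loads .env differently.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
if not os.getenv("GEMINI_API_KEY"):
    raise RuntimeError("GEMINI_API_KEY is not set; check your .env file")
print("Gemini API key found")
```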
The system can be used directly in Cursor IDE as an MCP tool:

1. Start the MCP server:

```bash
python src/mcp_server.py
```

2. Configure Cursor to use the MCP tool (an example configuration follows this list)

3. Use the tool functions:
   - `add_memory`: Add new memories
   - `query_memories`: Search for relevant memories
   - `reflect_on_thought`: Generate insights from thoughts
   - `process_thought`: Advanced multi-cycle processing
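One plausible Cursor configuration, as a sketch: the file location (`.cursor/mcp.json`), the server name, and the launch command are assumptions here, so consult Cursor's MCP documentation for the authoritative format:

```json
{
  "mcpServers": {
    "metacognitive-engine": {
      "command": "python",
      "args": ["src/mcp_server.py"]
    }
  }
}
```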
```python
from src.engine.engine import MetacognitiveEngine
from src.engine.models.entry import Entry, EntryType
from src.engine.models.emotional_state import EmotionalState, EmotionType

# Initialize the engine with emotional processing and attention system
engine = MetacognitiveEngine(enable_emotions=True, enable_attention=True)

# Add memories
entry = Entry(
    content="Consciousness is the subjective experience of awareness",
    entry_type=EntryType.INSIGHT
)
engine.add_memory(entry)

# Process thoughts with full cognitive cycles (includes emotional processing)
response = engine.process_thought("I'm excited about AI consciousness research!")
print(response)

# Access emotional state (if emotions enabled)
if engine.emotional_engine:
    emotional_summary = engine.emotional_engine.get_emotional_state_summary()
    print(f"Current emotional state: {emotional_summary['current_state']}")

# Access attention state (if attention enabled)
if engine.attention_engine:
    attention_context = engine.attention_engine.get_attention_context()
    print(f"Active focuses: {attention_context['attention_summary']['total_focuses']}")
    print(f"Attention strategy: {attention_context['current_strategy']}")
    print(f"Usage: {attention_context['attention_summary']['usage_percentage']}")

# Legacy reflection mode
insights = engine.analyze_new_thought("How does AI relate to consciousness?")
for insight in insights:
    print(f"{insight.entry_type.name}: {insight.content}")
```
```python
from src.engine.models.emotional_state import EmotionalState, EmotionType

# Create emotional states
emotional_state = EmotionalState(
    valence=0.8,    # Positive emotion
    arousal=0.9,    # High energy
    dominance=0.7   # Feeling in control
)

# Add specific emotions
emotional_state.add_emotion(EmotionType.JOY, 0.9)
emotional_state.add_emotion(EmotionType.EXCITEMENT, 0.8)

# Get emotional insights
print(f"Dominant emotion: {emotional_state.get_dominant_emotion().value}")
print(f"Emotional quadrant: {emotional_state.get_emotional_quadrant()}")
print(f"Summary: {emotional_state.to_summary_string()}")

# Blend emotional states
other_state = EmotionalState(valence=-0.3, arousal=0.6)
other_state.add_emotion(EmotionType.WORRY, 0.7)
blended = emotional_state.blend_with(other_state, weight=0.3)
print(f"Blended state: {blended.to_summary_string()}")
```
```python
from src.engine.processors.attention_engine import AttentionEngine, AttentionStrategy
from src.engine.models.attention_state import AttentionType, AttentionPriority

# Initialize attention engine
attention_engine = AttentionEngine(capacity=1.0, strategy=AttentionStrategy.BALANCED)

# Manual attention allocation
attention_engine.allocate_attention(
    focus_type=AttentionType.MEMORY_SEARCH,
    weight=0.6,
    priority=AttentionPriority.HIGH,
    duration=30.0
)

# Get attention context
context = attention_engine.get_attention_context()
print(f"Strategy: {context['current_strategy']}")
print(f"Active focuses: {context['attention_summary']['total_focuses']}")
print(f"Capacity usage: {context['attention_summary']['usage_percentage']}")

# Processing recommendations
for rec in context['attention_guidance']['processing_recommendations']:
    print(f"Recommendation: {rec}")

# Switch strategies
attention_engine.set_strategy(AttentionStrategy.FOCUSED)      # 70% to primary focus
attention_engine.set_strategy(AttentionStrategy.EXPLORATORY)  # Wide exploration
```
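A strategy is effectively a weight distribution over the active focuses. The following sketch shows how the three strategies could assign weights; the percentages mirror this README's description, but the function itself is illustrative, not the engine's code:

```python
# Illustrative weight distributions; the engine's real allocation logic
# also accounts for priorities, durations, and capacity.
def strategy_weights(strategy: str, n_focuses: int) -> list[float]:
    if strategy == "balanced":
        return [1.0 / n_focuses] * n_focuses          # even split
    if strategy == "focused":
        secondary = 0.3 / max(n_focuses - 1, 1)       # 70% primary, 30% shared
        return [0.7] + [secondary] * (n_focuses - 1)
    # exploratory: spread attention thinly across all focus types
    return [min(0.2, 1.0 / n_focuses)] * n_focuses

print(strategy_weights("focused", 4))  # [0.7, 0.1, 0.1, 0.1]
```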
Start the FastAPI server:

```bash
uvicorn src.mcp_server:app --reload
```
- `POST /add` - Add new memory
- `POST /query` - Search memories
- `POST /reflect` - Reflect on thought (legacy)
- `POST /process` - Advanced cognitive processing with attention system
- `GET /list` - List all memories
- `POST /clear` - Clear all memories
```bash
# Add a memory
curl -X POST "http://localhost:8000/add" \
  -H "Content-Type: application/json" \
  -d '{"content": "AI systems require consciousness for true understanding", "entry_type": "insight"}'

# Process a thought
curl -X POST "http://localhost:8000/process" \
  -H "Content-Type: application/json" \
  -d '{"content": "How does consciousness emerge from neural networks?"}'

# Query memories
curl -X POST "http://localhost:8000/query" \
  -H "Content-Type: application/json" \
  -d '{"content": "consciousness emergence", "n_results": 3}'
```
Experience the new dynamic attention mechanism:

```bash
# Run the interactive attention system demo
python demo_attention_system.py
```
The demo showcases:
- Dynamic Attention Allocation: Real-time resource distribution
- Three Attention Strategies: Balanced, focused, and exploratory approaches
- Context-Aware Processing: Adaptive attention based on input complexity
- Capacity Management: Intelligent focus rebalancing and cleanup
- Processing Recommendations: AI-guided attention optimization
🎯 5 Attention Types: Memory search, insight generation, emotional processing, response generation, meta-cognition

🧠 Attention Strategies:
- Balanced: Even distribution (25% per focus)
- Focused: Primary focus (70%) + secondary (30%)
- Exploratory: Wide exploration across all types

📊 Dynamic Management:
- Capacity Tracking: Real-time usage monitoring
- Focus Expiration: Automatic cleanup of expired focuses
- Rebalancing: Intelligent resource redistribution
- Context Switching: Adaptive strategy changes

💾 Attention Persistence: Focus tracking across cognitive cycles
Experience the advanced emotional processing capabilities:

```bash
# Run the interactive emotional system demo
python demo_emotional_system.py
```
The demo showcases:
- Real-time Emotion Analysis: AI-powered detection of emotions in text
- Emotional State Transitions: How emotions evolve during conversations
- Emotional Memory: How past emotional experiences influence responses
- Emotionally-Aware Responses: Context-sensitive reply generation
- PAD Model: Pleasure-Arousal-Dominance emotional framework
🎯 18 Emotion Types: Joy, sadness, anger, fear, surprise, curiosity, excitement, and more

🧠 PAD Emotional Model:
- Valence: Negative (-1.0) → Positive (+1.0)
- Arousal: Calm (0.0) → Excited (1.0)
- Dominance: Submissive (0.0) → Dominant (1.0)

📊 Emotional Quadrants (see the mapping sketch below):
- Excited: High valence + high arousal
- Content: High valence + low arousal
- Distressed: Low valence + high arousal
- Depressed: Low valence + low arousal

💾 Emotional Memory: Persistent memories with decay and relevance scoring
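The quadrant follows from where valence and arousal fall; here is a sketch with assumed thresholds (the engine's actual cutoffs may differ):

```python
# Illustrative quadrant mapping from PAD values; thresholds are assumptions.
def emotional_quadrant(valence: float, arousal: float) -> str:
    if valence >= 0.0:
        return "excited" if arousal >= 0.5 else "content"
    return "distressed" if arousal >= 0.5 else "depressed"

print(emotional_quadrant(0.8, 0.9))   # excited
print(emotional_quadrant(-0.3, 0.2))  # depressed
```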
```bash
# Install test dependencies
pip install -r requirements.txt

# Run comprehensive test suite
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test categories
pytest -m unit          # Unit tests only
pytest -m integration   # Integration tests only
pytest -m "not slow"    # Exclude slow tests
```
- Unit Tests: Individual component testing
- Integration Tests: Full system workflow testing
- API Tests: FastAPI endpoint testing
- Memory Tests: Vector database and search testing
The test suite covers:
- ✅ All core components (Engine, Memory, Processors)
- ✅ NEW: Complete attention system (24 tests)
- ✅ NEW: Complete emotional system (25 tests)
- ✅ MCP server endpoints
- ✅ Error handling and edge cases
- ✅ Memory persistence and search
- ✅ Cognitive cycle processing
- ✅ AI integration (with mocking)
- ✅ Emotional state management and transitions
- ✅ Emotional memory with decay and relevance
- ✅ NEW: Attention allocation and strategy management
- ✅ NEW: Focus tracking and capacity management
The system uses optimized distance thresholds for vector search (a wiring sketch follows this list):
- Distance Threshold: 8.0 (optimal balance of precision and recall)
- Vector Model: `paraphrase-multilingual-mpnet-base-v2`
- Database: ChromaDB with cosine similarity
- Search Space: HNSW indexing for fast retrieval
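A sketch of how these pieces fit together using the public ChromaDB and sentence-transformers APIs; the collection name and filtering logic are illustrative, and the project's own wrapper lives under `src/engine/memory/`:

```python
import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")  # 768-dim vectors
client = chromadb.PersistentClient(path="data/")  # persistent storage on disk
collection = client.get_or_create_collection(
    name="memories",
    metadata={"hnsw:space": "cosine"},  # HNSW index with cosine distance
)

query_vec = model.encode("consciousness emergence").tolist()
results = collection.query(query_embeddings=[query_vec], n_results=5)

# Keep only hits under the distance threshold (8.0, per the tuning above)
for doc, dist in zip(results["documents"][0], results["distances"][0]):
    if dist <= 8.0:
        print(f"{dist:.3f}  {doc}")
```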
- Max Cycles: 3 (configurable)
- Stabilization: Automatic based on confidence scores
- Memory Integration: Real-time association retrieval
- Response Generation: Context-aware synthesis
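Conceptually, the cycle loop runs until the confidence score stabilizes or the cycle cap is hit. A sketch of that control flow follows; the function and threshold names are illustrative, not the engine's actual API:

```python
# Illustrative multi-cycle control flow; the engine's real stabilization
# criterion and interfaces may differ.
MAX_CYCLES = 3            # per the configuration above
STABILITY_EPSILON = 0.05  # assumed: stop once confidence stops moving

def run_cognitive_cycles(process_once, thought: str):
    previous_confidence = 0.0
    result = None
    for cycle in range(MAX_CYCLES):
        result, confidence = process_once(thought, cycle)
        if abs(confidence - previous_confidence) < STABILITY_EPSILON:
            break  # stabilized: further cycles are unlikely to change the answer
        previous_confidence = confidence
    return result
```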
```
scientific_paper/
├── src/
│   ├── engine/
│   │   ├── memory/        # Memory systems
│   │   ├── processors/    # Cognitive processors
│   │   ├── perception/    # Input analysis
│   │   ├── models/        # Data models
│   │   └── engine.py      # Main engine
│   └── mcp_server.py      # FastAPI/MCP server
├── tests/                 # Comprehensive test suite
├── data/                  # Persistent data storage
└── docs/                  # Additional documentation
```
- Global Workspace Theory: Central WorkingMemory coordination
- Producer-Consumer: Processor pipeline architecture
- Observer Pattern: Event-driven insight generation
- Strategy Pattern: Configurable processing strategies
- Repository Pattern: Abstracted memory persistence
- Fork the repository
- Create a feature branch
- Add comprehensive tests
- Ensure all tests pass
- Submit a pull request
```bash
# Format code
black src/ tests/

# Type checking
mypy src/

# Linting
flake8 src/ tests/

# Security scan
bandit -r src/
```
- Memory Search: < 100ms for 10K+ entries
- Cognitive Cycle: 2-5 seconds (with AI calls)
- Vector Generation: < 50ms per entry
- Response Generation: 1-3 seconds
- Vector Dimensions: 768 (sentence transformer)
- Storage: ~1KB per memory entry
- Indexing: HNSW with cosine similarity
- Retrieval: O(log n) average case
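These figures can be spot-checked against a running server with a simple timing loop; here is a sketch against the `/query` endpoint shown earlier (it measures end-to-end HTTP latency, not just the vector search):

```python
import time
import requests

start = time.perf_counter()
requests.post(
    "http://localhost:8000/query",
    json={"content": "consciousness", "n_results": 5},
)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"End-to-end query latency: {elapsed_ms:.1f} ms")
```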
Current Status: 9.7/10 - See detailed ROADMAP.md for the path to 10/10
Phase 1 (Next 1-2 weeks): Foundation Strengthening
- 95%+ Test Coverage (currently 75%)
- Performance Monitoring & observability
- Production Readiness enhancements
Phase 2 (1-2 months): Cognitive Enhancement
- Attention Mechanism: ✅ COMPLETED - Dynamic focus allocation with 3 strategies
- Emotional Processing: ✅ COMPLETED - PAD model with AI analysis
- Memory Consolidation: Sleep-like reorganization
Phase 3 (2-3 months): Advanced Cognition
- Causal Reasoning: Cause-effect understanding
- Multi-modal Input: Image and audio processing
- Meta-Cognitive Enhancement: Self-monitoring
Phase 4+ (3+ months): Social Consciousness & Scientific Validation
- Theory of Mind: Understanding other agents
- Scientific Validation: Peer-reviewed research
- Enterprise Scale: Production deployment
See ROADMAP.md for complete development plan.
This project is licensed under the MIT License - see the LICENSE file for details.
- Global Workspace Theory: Bernard Baars
- Sentence Transformers: Hugging Face
- Vector Database: ChromaDB team
- Language Models: Google Gemini AI
- MCP Protocol: Anthropic
For questions, issues, or contributions:
- GitHub Issues: Create an issue
- Documentation: See ARCHITECTURE.md
- Email: [email protected]