AI-powered meal planning app. Under development. Running at https://gsganden--meal-planner-web.modal.run/.
graph LR
User(("User"))
Modal["Modal (Hosting)"]
subgraph "Meal Planner Application"
direction TB
Web_Routing_Layer["Web Request Routing<br/>(routers/)<br/>FastHTML"]
UI_Components["UI Components<br/>(ui/)<br/>MonsterUI"]
subgraph "Business Logic (services/)"
direction TB
Webpage_Text_Extractor_Service["Webpage text extraction<br/>(extract_webpage_text.py)<br/>URL fetching, HTML cleaning"]
Recipe_Processing_Service["Recipe processing<br/>(process_recipe.py)<br/>Data cleaning & standardization"]
LLM_Service["LLM interactions<br/>(call_llm.py)<br/>Google Gemini, Instructor"]
end
API_Layer["Recipe CRUD API<br/>(api/recipes.py)<br/>FastAPI"]
Web_Routing_Layer -- "Renders" --> UI_Components
Web_Routing_Layer -- "Calls" --> Webpage_Text_Extractor_Service
Web_Routing_Layer -- "Calls" --> Recipe_Processing_Service
Web_Routing_Layer -- "Calls" --> LLM_Service
Web_Routing_Layer -- "Internal API Call" --> API_Layer
end
subgraph "External Resources"
direction TB
Database[("Database (SQLite)")]
External_Web_Pages["External Web Pages/URLs"]
Google_Gemini_Cloud["Google Gemini Cloud API"]
end
User --> Modal
Modal --> Web_Routing_Layer
Webpage_Text_Extractor_Service -- "Fetches content" --> External_Web_Pages
LLM_Service -- "AI Tasks" --> Google_Gemini_Cloud
API_Layer --> Database
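The diagram can be read as a single request flow. Here is a minimal sketch of that flow in Python; the function and model names are illustrative assumptions, not the project's actual definitions (the real ones live in routers/, services/, and api/):

from pydantic import BaseModel

class Recipe(BaseModel):
    name: str
    ingredients: list[str]
    instructions: list[str]

def extract_webpage_text(url: str) -> str:
    """services/extract_webpage_text.py: fetch the URL and strip the HTML down to text."""
    ...

def call_llm(text: str) -> Recipe:
    """services/call_llm.py: ask Google Gemini (via Instructor) for a structured Recipe."""
    ...

def process_recipe(recipe: Recipe) -> Recipe:
    """services/process_recipe.py: clean and standardize the extracted data."""
    ...

def save_recipe(recipe: Recipe) -> None:
    """api/recipes.py: persist the recipe through the FastAPI CRUD layer to SQLite."""
    ...

def import_recipe(url: str) -> Recipe:
    """routers/: what a FastHTML route handler does when a user submits a recipe URL."""
    recipe = process_recipe(call_llm(extract_webpage_text(url)))
    save_recipe(recipe)
    return recipe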
uv sync --all-groups
Get a Gemini API key and assign it to a GOOGLE_API_KEY environment variable in a dotenv file (.env).
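For example, the file would contain a line like this (placeholder value shown):

GOOGLE_API_KEY=your-gemini-api-key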
Install pre-commit hooks:
pre-commit install --hook-type pre-push -f
Run locally:
uv run modal serve deploy.py
Merging a PR into main
triggers deployment as part of a CI/CD process. That is our usual deployment flow, but we can deploy directly from a local machine in a pinch, e.g. for an urgent hotfix:
uv run modal deploy deploy.py
The app's database is in the Modal environment, so we need to run Alembic CLI commands through Modal like so:
uv run modal shell deploy.py::alembic_env -c "alembic history"
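The commands above all target deploy.py. The following is a minimal sketch of what such a file might contain, assuming the function names implied by the app URL and the alembic_env reference; the image contents and import path are illustrative assumptions, not the project's actual code:

import modal

app = modal.App("meal-planner")
image = modal.Image.debian_slim().pip_install("python-fasthtml", "fastapi", "alembic")

@app.function(image=image)
@modal.asgi_app()
def web():
    # Imported inside the function so the import resolves in the Modal container
    from meal_planner.main import app as asgi_app  # hypothetical import path
    return asgi_app

@app.function(image=image)
def alembic_env():
    # Target of `modal shell deploy.py::alembic_env`: an environment where the
    # Alembic CLI can reach the app's database
    pass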
Skip tests that make slow LLM calls:
uv run pytest
Run the tests that make slow LLM calls, retrying each failure once to handle occasional nondeterministic failures:
source .env && uv run pytest tests/test_ml_evals.py --runslow --reruns=1
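The --runslow flag is not built into pytest; it is typically wired up in conftest.py following the standard pytest recipe. A sketch of that recipe, assuming this project marks its LLM tests with @pytest.mark.slow:

import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False,
        help="also run tests marked as slow (e.g. real LLM calls)",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return  # --runslow given: run everything, including slow tests
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)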
Check test coverage with minimal LLM calls:
./run_fast_coverage.sh