
## Overview

Index and search images based on descriptions generated by a local multimodal LLM.

This application makes a directory of images searchable with text queries. It uses a local multimodal LLM (e.g., llama3.2-vision) via the ollama API to generate a description of each image, then writes the descriptions to a semantic database (chromadb).

The text embeddings chromadb computes over those descriptions allow the images to be retrieved with natural-language queries.
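The indexing step boils down to two calls: asking the local model for a caption through the ollama Python client, and adding that caption to a chromadb collection, which embeds it automatically. A minimal sketch of that flow, assuming the ollama Python client is used; the collection name, store path, and prompt below are illustrative, not taken from the project:

```python
import os
import ollama
import chromadb

# Persistent chromadb store; the path and collection name are placeholders.
client = chromadb.PersistentClient(path="./image_index")
collection = client.get_or_create_collection("images")

def index_directory(directory: str) -> None:
    for name in os.listdir(directory):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        path = os.path.join(directory, name)
        # Ask the local multimodal model to describe the image.
        response = ollama.chat(
            model="llama3.2-vision",
            messages=[{
                "role": "user",
                "content": "Describe this image in a few sentences.",
                "images": [path],
            }],
        )
        description = response["message"]["content"]
        # chromadb embeds the description with its default embedding function.
        collection.add(documents=[description], ids=[path])
```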

## Prerequisites

A running ollama server with a multimodal model pulled (e.g., `ollama pull llama3.2-vision`), plus the Python dependencies the script uses (at minimum `chromadb` and an ollama client).

## Use

To index a directory of images, run `main.py` with `--directory`:

```
python main.py --directory /path/to/images
```

To query the index, use `--query`:

```
python main.py --query "buoy"
```
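Under the hood, a query is a semantic search against the stored descriptions: chromadb embeds the query text and returns the closest matches. A rough sketch, reusing the hypothetical store path and collection name from the indexing sketch above:

```python
import chromadb

# Reopen the same persistent store used during indexing (placeholder names).
client = chromadb.PersistentClient(path="./image_index")
collection = client.get_or_create_collection("images")

# chromadb embeds the query text and returns the closest image descriptions.
results = collection.query(query_texts=["buoy"], n_results=5)

# Each returned id is the image path stored at indexing time.
for path, description in zip(results["ids"][0], results["documents"][0]):
    print(path, "->", description)
```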