Commit f318b7e: Add Bedrock feature to readme file
Smiley73 committed Jan 6, 2025
1 parent de3cf0f commit f318b7e
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
```diff
@@ -34,13 +34,13 @@
 <p align="center">
 <strong>LLM Vision</strong> is a Home Assistant integration that can analyze images, videos,
 live camera feeds and frigate events using the vision capabilities of multimodal LLMs.
-Supported providers are OpenAI, Anthropic, Google Gemini, Groq,
+Supported providers are OpenAI, Anthropic, Google Gemini, AWS Bedrock, Groq,
 <a href="https://github.com/mudler/LocalAI">LocalAI</a>,
 <a href="https://ollama.com/">Ollama</a> and any OpenAI compatible API.
 </p>

 ## Features
-- Compatible with OpenAI, Anthropic Claude, Google Gemini, Groq, [LocalAI](https://github.com/mudler/LocalAI), [Ollama](https://ollama.com/) and custom OpenAI compatible APIs
+- Compatible with OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock (Nova & Anthropic Claude), Groq, [LocalAI](https://github.com/mudler/LocalAI), [Ollama](https://ollama.com/) and custom OpenAI compatible APIs
 - Analyzes images and video files, live camera feeds and Frigate events
 - Remembers Frigate events and camera motion events so you can ask about them later
 - Seamlessly updates sensors based on image input
```
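For context, AWS Bedrock models such as Nova and Anthropic Claude are typically reached through the Bedrock Runtime Converse API. The sketch below (not the integration's actual code; the model ID and image bytes are placeholder assumptions) shows how a multimodal request carrying an image could be assembled in Python:

```python
# Sketch of an AWS Bedrock Converse request carrying an image, as a
# multimodal integration like this one might send. The model ID and
# image bytes are placeholders, not values from the integration.

def build_converse_request(prompt: str, image_bytes: bytes) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": "us.amazon.nova-lite-v1:0",  # placeholder model ID
        "messages": [
            {
                "role": "user",
                "content": [
                    {"text": prompt},
                    {"image": {"format": "jpeg",
                               "source": {"bytes": image_bytes}}},
                ],
            }
        ],
    }

request = build_converse_request("Describe this camera frame.", b"\xff\xd8")
# With AWS credentials configured, the request would be sent like so:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
print(sorted(request.keys()))  # → ['messages', 'modelId']
```

The payload shape (a `messages` list whose `content` mixes `text` and `image` parts) is what lets one request combine a prompt with a camera frame.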
