Commit
Update README.md
Update layout
Neet-Nestor authored May 25, 2024
1 parent e83d4b2 commit 8d3bf05
Showing 1 changed file with 3 additions and 2 deletions.
README.md

@@ -14,11 +14,12 @@
 
 </div>
 
-**WebLLM is a high-performance in-browser LLM inference engine** that directly
+## Overview
+WebLLM is a high-performance in-browser LLM inference engine that directly
 brings language model inference directly onto web browsers with hardware acceleration.
 Everything runs inside the browser with no server support and is accelerated with WebGPU.
 
-**WebLLM is fully compatible with [OpenAI API](https://platform.openai.com/docs/api-reference/chat).**
+WebLLM is **fully compatible with [OpenAI API](https://platform.openai.com/docs/api-reference/chat).**
 That is, you can use the same OpenAI API on **any open source models** locally, with functionalities
 including json-mode, function-calling, streaming, etc.

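The OpenAI-API compatibility described in the changed paragraph can be sketched as follows. This is a minimal, hedged example, not part of this commit: it assumes the `@mlc-ai/web-llm` npm package, its `CreateMLCEngine` entry point and `engine.chat.completions.create` method as documented in the WebLLM README, a WebGPU-capable browser, and a model id (`Llama-3-8B-Instruct-q4f32_1-MLC`) that may differ across releases.

```typescript
// Sketch only: assumes @mlc-ai/web-llm and a WebGPU-capable browser.
// Model id and API names follow WebLLM's published examples and may
// change between versions.
import * as webllm from "@mlc-ai/web-llm";

async function main(): Promise<void> {
  // Downloads and initializes the model entirely in the browser;
  // no inference server is involved.
  const engine = await webllm.CreateMLCEngine(
    "Llama-3-8B-Instruct-q4f32_1-MLC",
  );

  // Same request shape as OpenAI's chat completions API.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });

  console.log(reply.choices[0].message.content);
}

main();
```

The same `create` call also accepts OpenAI-style options for the features the README lists (streaming via `stream: true`, JSON mode, function calling), so client code written against the OpenAI API can be pointed at a local in-browser model with minimal changes.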
