From 19815b3b175044b9c289dc90b15fc23da582a473 Mon Sep 17 00:00:00 2001
From: lyie28
Date: Wed, 20 Sep 2023 14:16:01 +0200
Subject: [PATCH] docs: update roadmap section

---
 README.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index b8fcffa..705dbd9 100644
--- a/README.md
+++ b/README.md
@@ -69,14 +69,14 @@ BlindChat aims to serve two users:
 
 ## Roadmap
 
-- Revamping of Hugging Face Chat UI to make it entirely client-side (removal of telemetry, data sharing, server-side history of conversations, server-side inference, etc.) ✅
-- Integration of privacy-by-design inference with local model ✅
-- Local caching of conversations ⌛
-- Integration of more advanced local models (e.g. [phi-1.5](https://huggingface.co/microsoft/phi-1_5)) and more advanced inference (e.g. [Web LLM](https://github.com/mlc-ai/web-llm)) ⌛
-- Integration of privacy-by-design inference with remote enclaves using BlindLlama for powerful models such as [Llama 2 70b](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) & [Falcon 180b](https://huggingface.co/tiiuae/falcon-180B) ⌛
-- Integration with [LlamaIndex TS](https://github.com/run-llama/LlamaIndexTS) for local Retrieval Augmented Generation (RAG) ⌛
-- Internet search ⌛
-- Connectors to pull data from different sources ⌛
+- [x] Revamping of Hugging Face Chat UI to make it entirely client-side (removal of telemetry, data sharing, server-side history of conversations, server-side inference, etc.)
+- [x] Integration of privacy-by-design inference with local model
+- [x] Local caching of conversations
+- [ ] Integration of more advanced local models (e.g. [phi-1.5](https://huggingface.co/microsoft/phi-1_5)) and more advanced inference (e.g. [Web LLM](https://github.com/mlc-ai/web-llm))
+- [ ] Integration of privacy-by-design inference with remote enclaves using BlindLlama for powerful models such as [Llama 2 70b](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) & [Falcon 180b](https://huggingface.co/tiiuae/falcon-180B)
+- [ ] Integration with [LlamaIndex TS](https://github.com/run-llama/LlamaIndexTS) for local Retrieval Augmented Generation (RAG)
+- [ ] Internet search
+- [ ] Connectors to pull data from different sources
 
 (back to top)