Gemini Next Chat


Deploy your private Gemini application for free with one click, supporting Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini Pro and Gemini Pro Vision models.

English · 简体中文

Web App / Desktop App / Issues

Simple interface, supports image recognition and voice conversation

Supports Gemini 1.5 and Gemini 1.5 Flash multimodal models

Supports plugins, with built-in Web search, Web reader, Arxiv search, Weather and other practical plugins

Tray app: a cross-platform client that can stay in the menu bar, doubling your work efficiency

Note: If you run into problems while using the project, check the FAQ for known issues and solutions.


Features

  • Deploy for free with one click on Vercel in under 1 minute
  • Provides a very small (~4MB) cross-platform client (Windows/MacOS/Linux) that can stay in the menu bar to improve work efficiency
  • Supports multimodal models and can understand images, videos, audio and some text documents
  • Talk mode: lets you talk directly to Gemini
  • Visual recognition allows Gemini to understand the content of pictures
  • Assistant market with hundreds of curated system instructions
  • Supports plugins, with built-in Web search, Web reader, Arxiv search, Weather and other practical plugins
  • Conversation list, so you can keep track of important conversations or discuss different topics with Gemini
  • Full Markdown support: LaTeX formulas, code highlighting, and more
  • Automatically compresses contextual chat records to save tokens while supporting very long conversations
  • Privacy and security: all data is saved locally in the user's browser
  • Supports PWA and can run as an application
  • Well-designed UI, responsive design, supports dark mode
  • Extremely fast first-screen loading, with streaming responses
  • Supports static deployment on any website service that can host static pages, such as GitHub Pages, Cloudflare, Vercel, etc.
  • Multi-language support: English, 简体中文, 繁体中文, 日本語, 한국어, Español, Deutsch, Français, Português, Русский and العربية

Roadmap

  • Reconstruct the topic square and introduce a Prompt list
  • Use Tauri to package desktop applications
  • Plugin implementation based on functionCall
  • Support conversation list

Get Started

  1. Get Gemini API Key
  2. Click Deploy with Vercel
  3. Start using

Updating Code

If you want to keep your deployment up to date, check out the GitHub documentation to learn how to synchronize a forked project with upstream code.

You can star or watch this project, or follow the author, to get release notifications in time.

Environment Variables

GEMINI_API_KEY (optional)

Your Gemini API key. Required if you need to enable the server-side API.

GEMINI_API_BASE_URL (optional)

Default: https://generativelanguage.googleapis.com

Examples: http://your-gemini-proxy.com

Override the Gemini API request base URL. **To avoid leaking the server-side proxy URL, links in front-end pages will not be overridden.**

GEMINI_UPLOAD_BASE_URL (optional)

Default: https://generativelanguage.googleapis.com

Example: http://your-gemini-upload-proxy.com

Override the Gemini file upload API base URL. **To avoid leaking the server-side proxy URL, links in front-end pages will not be overridden.**

NEXT_PUBLIC_GEMINI_MODEL_LIST (optional)

Custom model list, default: all.

NEXT_PUBLIC_ASSISTANT_INDEX_URL (optional)

Default: https://chat-agents.lobehub.com

Examples: http://your-assistant-market-proxy.com

Override the assistant market API request base URL. The API link in the front-end interface will be updated accordingly.

NEXT_PUBLIC_UPLOAD_LIMIT (optional)

File upload size limit. There is no file size limit by default.

ACCESS_PASSWORD (optional)

Access password.

HEAD_SCRIPTS (optional)

Injected script code can be used for statistics or error tracking.

EXPORT_BASE_PATH (optional)

Only used to set the page base path in static deployment mode.
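
Putting several of these together, a minimal .env (or .env.local) sketch might look like the following; all values are placeholders taken from the examples above and should be replaced with your own:

# Server-side API key (required only if the server API is enabled)
GEMINI_API_KEY=AIzaSy...
# Optional proxies for API requests and file uploads
GEMINI_API_BASE_URL=http://your-gemini-proxy.com
GEMINI_UPLOAD_BASE_URL=http://your-gemini-upload-proxy.com
# Optional access control
ACCESS_PASSWORD=your-password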

Access Password

This project provides limited access control. Please add an environment variable named ACCESS_PASSWORD on the Vercel environment variables page.

After adding or modifying this environment variable, please redeploy the project for the changes to take effect.

Custom model list

This project supports custom model lists. Please add an environment variable named NEXT_PUBLIC_GEMINI_MODEL_LIST in the .env file or environment variables page.

The default model list is represented by all, and multiple models are separated by ,.

If you need to add a new model, write the model name directly, i.e. all,new-model-name, or prefix the model name with the + symbol, i.e. all,+new-model-name.

If you want to remove a model from the model list, use the - symbol followed by the model name to indicate removal, i.e. all,-existing-model-name. If you want to remove the default model list, you can use -all.

If you want to set a default model, you can use the @ symbol plus the model name to indicate the default model, that is, all,@default-model-name.
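
For example (the model names below are only illustrative; use whichever models your key has access to):

# Keep the default list, add a new model and make it the default
NEXT_PUBLIC_GEMINI_MODEL_LIST=all,+gemini-1.5-pro-latest,@gemini-1.5-flash-latest
# Replace the default list with just two models
NEXT_PUBLIC_GEMINI_MODEL_LIST=-all,gemini-1.5-pro-latest,gemini-1.5-flash-latest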

Development

If you have not installed pnpm:

npm install -g pnpm

# 1. install Node.js and pnpm first
# 2. configure local variables: change `.env.example` to `.env` or `.env.local`
# 3. run
pnpm install
pnpm dev

Requirements

NodeJS >= 18, Docker >= 20
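
You can check your local versions before starting (standard CLI flags, not specific to this project):

node -v    # should print v18 or later
pnpm -v
docker -v  # should print 20.x or later (only needed for Docker deployment)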

Deployment

Docker (Recommended)

The Docker version needs to be 20 or above; otherwise, Docker will report that the image cannot be found.

⚠️ Note: Most of the time, the Docker image lags behind the latest release by 1 to 2 days, so the "update exists" prompt may keep appearing after deployment; this is normal.

docker pull xiangfa/talk-with-gemini:latest

docker run -d --name talk-with-gemini -p 5481:3000 xiangfa/talk-with-gemini

You can also specify additional environment variables:

docker run -d --name talk-with-gemini \
   -p 5481:3000 \
   -e GEMINI_API_KEY=AIzaSy... \
   -e ACCESS_PASSWORD=your-password \
   xiangfa/talk-with-gemini

If you need to specify other environment variables, please add -e key=value to the above command to specify it.

Deploy using docker-compose.yml:

version: '3.9'
services:
   talk-with-gemini:
      image: xiangfa/talk-with-gemini
      container_name: talk-with-gemini
      environment:
         - GEMINI_API_KEY=AIzaSy...
         - ACCESS_PASSWORD=your-password
      ports:
         - 5481:3000
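
Then start the container with the standard Docker Compose command, run from the directory containing docker-compose.yml:

docker compose up -d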

Static Deployment

You can also build a static page version directly, and then upload all files in the out directory to any website service that supports static pages, such as GitHub Pages, Cloudflare, Vercel, etc.

pnpm build:export

If you deploy the project in a subdirectory and encounter resource loading failures when accessing, please add EXPORT_BASE_PATH=/path/project in the .env file or variable setting page.
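
For example, if the site will be served from a subdirectory such as /gemini (a hypothetical path), set the base path before exporting:

# .env
EXPORT_BASE_PATH=/gemini

pnpm build:export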

Thanks

Technology Stack

Inspiration

FAQ

Solution for “User location is not supported for the API use”

  1. Use Cloudflare AI Gateway to forward APIs. Currently, Cloudflare AI Gateway already supports Google Vertex AI related APIs. For how to use it, please refer to How to Use Cloudflare AI Gateway. This solution is fast and stable, and is recommended.

  2. Use Cloudflare Worker for API proxy forwarding. For detailed settings, please refer to How to Use Cloudflare Worker Proxy API. Note that this solution may not work properly in some cases.
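
With either approach, point the deployment at your proxy through the GEMINI_API_BASE_URL variable described above; the hostname below is a placeholder for your own gateway or worker:

GEMINI_API_BASE_URL=https://your-proxy.example.com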

Why can’t I upload common documents such as doc, excel, and ppt?

Currently, the Gemini 1.5 Pro and Gemini 1.5 Flash models support most images, audio, videos and some text files. For other document types, we will try to use LangChain.js later.

Star History

Star History Chart

LICENSE

MIT