Vicunia is a frontend for alpaca.cpp that provides a GUI for installing and chatting with Stanford Alpaca and other models from the llama.cpp family. It's available for Windows, Linux, and macOS.
See releases for the latest version.
- Go to the "Setup" tab
- Click "Download Model"
- In the options menu in the chat tab, make sure the path points to the correct folder (the Vicunia root directory + `resources/models`); see the sketch below.

On Windows, the Vicunia root directory looks something like this:
`C:\Users\yourusername\AppData\Local\Programs\vicunia`
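
A minimal sketch of how that models path can be derived per OS. Only the Windows root above is given by this README; the macOS and Linux roots below are placeholder assumptions, so check your actual install location:

```python
import os
import platform

def default_models_dir() -> str:
    """Guess the folder Vicunia reads models from (root dir + resources/models)."""
    home = os.path.expanduser("~")
    if platform.system() == "Windows":
        # Confirmed by the README: per-user install under AppData.
        root = os.path.join(home, "AppData", "Local", "Programs", "vicunia")
    elif platform.system() == "Darwin":
        # Placeholder guess for macOS; adjust to where you installed Vicunia.
        root = "/Applications/vicunia.app/Contents"
    else:
        # Placeholder guess for Linux; adjust to where you installed Vicunia.
        root = os.path.join(home, ".local", "share", "vicunia")
    return os.path.join(root, "resources", "models")

print(default_models_dir())
```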
Follow the guide on alpaca.cpp!
TL;DR: have a working, compiled alpaca.cpp folder with a model in your OS home directory (Windows: `C:\Users\%USERNAME%\alpaca.cpp`, Linux: `/home/%USERNAME%/alpaca.cpp`, macOS: `/Users/%USERNAME%/alpaca.cpp`). The compiled binaries (`chat.exe`, etc.) and the model should be in the root of the alpaca.cpp folder.
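
As a quick sanity check, here is a small sketch that verifies this layout. It assumes the usual ggml `.bin` model naming, which this README doesn't specify:

```python
import os
import platform

def check_alpaca_folder() -> None:
    """Verify ~/alpaca.cpp holds a chat binary and at least one model file."""
    folder = os.path.join(os.path.expanduser("~"), "alpaca.cpp")
    if not os.path.isdir(folder):
        print("folder not found:", folder)
        return
    binary = "chat.exe" if platform.system() == "Windows" else "chat"
    if not os.path.isfile(os.path.join(folder, binary)):
        print("missing compiled binary:", os.path.join(folder, binary))
    # Assumption: models are ggml .bin files, the usual alpaca.cpp format.
    models = [f for f in os.listdir(folder) if f.endswith(".bin")]
    print("models found:", models or "none")

check_alpaca_folder()
```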
- If the current alpaca.cpp throws errors, I have tested branches on my fork that work with Vicunia.
- On Windows, a CPU with AVX2 support is required (at least with the included binaries).
- If you update to a new version, try deleting `.vicunia-settings.json` from your home folder first to avoid issues; a small cleanup sketch follows this list.
- The Mac version is largely untested right now.
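
A minimal cleanup sketch, assuming the settings file sits directly in your home folder as described above:

```python
import os

# Vicunia stores its settings file in the user's home folder.
settings = os.path.join(os.path.expanduser("~"), ".vicunia-settings.json")

if os.path.exists(settings):
    os.remove(settings)
    print("removed", settings)
else:
    print("nothing to delete:", settings)
```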
Feel free to open an issue on GitHub!
- more options
- markdown renderer
- rename the executable to `chat` (macOS / Linux) or `chat.exe` (Windows)
- have the model file in the same folder
- paste the path to the `chat` folder in the options menu (see the sketch below)
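
A sketch of those steps, using hypothetical paths for your compiled binary and model (adjust them to your own build):

```python
import os
import platform
import shutil

def prepare_chat_folder(compiled_binary: str, model_file: str, dest: str) -> str:
    """Copy a compiled alpaca.cpp binary and a model into one folder,
    renaming the binary to chat / chat.exe as Vicunia expects."""
    os.makedirs(dest, exist_ok=True)
    target = "chat.exe" if platform.system() == "Windows" else "chat"
    shutil.copy2(compiled_binary, os.path.join(dest, target))
    shutil.copy2(model_file, os.path.join(dest, os.path.basename(model_file)))
    return dest  # paste this path into the Vicunia options menu

# Hypothetical example paths:
# prepare_chat_folder("alpaca.cpp/chat", "ggml-alpaca-7b-q4.bin", os.path.expanduser("~/alpaca-chat"))
```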