
koboldcpp-1.53

  • Added support for SSL. You can now import your own SSL certificate and serve KoboldCpp over HTTPS with --ssl [cert.pem] [key.pem] or via the GUI. The .pem files must be unencrypted; you can generate a self-signed certificate yourself with OpenSSL, e.g. openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 -config openssl.cnf -nodes (see the example after this list).
  • Added support for presence penalty (an alternative to rep pen) over the KAI API and in Lite. If presence penalty is set over the OpenAI API and rep_pen is not set, rep_pen defaults to 1.0 instead of 1.1. Both penalties can be used together, although this is probably not a good idea (see the request sketch after this list).
  • Added fixes for Broken Pipe error, thanks @mahou-shoujo.
  • Added fixes for aborting ongoing connections while streaming in SillyTavern.
  • Merged upstream support for Phi models and speedups for Mixtral.
  • The default non-BLAS batch size for GGUF models has been increased from 8 to 32.
  • Merged HIPBlas fixes from @YellowRoseCx.
  • Fixed an issue with building the convert tools in 1.52.

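A minimal sketch of the HTTPS setup described above. The openssl command is the one from the notes; the model filename mymodel.gguf is a placeholder, and on Linux/macOS the binary name will differ from koboldcpp.exe:

```
# Generate an unencrypted self-signed certificate and key (command from the notes above)
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 -config openssl.cnf -nodes

# Launch KoboldCpp and serve it over HTTPS using the generated files
koboldcpp.exe --model mymodel.gguf --ssl cert.pem key.pem
```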
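And a hedged sketch of using presence penalty over the KAI API with curl: /api/v1/generate is the standard KoboldAI generate route, but the exact field names shown (presence_penalty, rep_pen, max_length) are assumptions, so confirm them against the current API docs. Quoting is shown for a Unix-like shell:

```
# Request with presence penalty; rep_pen is set explicitly so both penalties are not stacked unintentionally
curl http://localhost:5001/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Once upon a time", "max_length": 80, "presence_penalty": 0.5, "rep_pen": 1.0}'
```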
To use, download and run koboldcpp.exe, which is a one-file pyinstaller build.
If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller.
If you're using AMD, you can try koboldcpp_rocm from YellowRoseCx's fork.

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once the model is loaded, you can connect with your browser (or use the full KoboldAI client) at:
http://localhost:5001
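
As a concrete illustration of launching and connecting, here is one possible invocation; the flag and endpoint names below are common KoboldCpp options but should be treated as assumptions and confirmed with --help:

```
# Load a GGUF model from the command line on the default port 5001
koboldcpp.exe --model mymodel.gguf --port 5001

# Once the model is loaded, point a browser at http://localhost:5001,
# or check the loaded model name over the API
curl http://localhost:5001/api/v1/model
```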

For more information, run the program from the command line with the --help flag.