
koboldcpp-1.0.8beta

Released by @LostRuins · 03 Apr 03:58

  • Rebranded to koboldcpp (formerly llamacpp-for-kobold). Library file names and references have been changed too; please let me know if anything is broken!
  • Added support for the original GPT4ALL.CPP format!
  • Added support for GPT-J formats, including the original 16bit legacy format as well as the 4bit version from Pygmalion.cpp
  • Switched the compiler flag from -O3 to -Ofast. This should increase generation speed even more, but I don't know if anything will break, so please let me know if it does.
  • Changed the default thread count to scale with the physical core count instead of os.cpu_count(). This will generally result in fewer threads being used, but it should be a better default for slower systems. You can override it manually with the --threads parameter (see the sketch after this list).
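For reference, here is a minimal sketch of how a launcher might derive such a default in Python, assuming the third-party psutil library for physical core detection; koboldcpp's actual implementation may differ:

```python
# Minimal sketch: derive a thread default from the physical core count
# rather than os.cpu_count(), which also counts logical (hyperthreaded) cores.
# Assumes the third-party psutil library; koboldcpp's actual logic may differ.
import os

try:
    import psutil
    physical_cores = psutil.cpu_count(logical=False)  # None if undetectable
except ImportError:
    physical_cores = None

# Fall back to the logical count, then to 1, if the physical count is unavailable.
default_threads = physical_cores or os.cpu_count() or 1
print(f"Defaulting to {default_threads} threads (override with --threads)")
```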

To use, download and run koboldcpp.exe.
Alternatively, drag and drop a compatible quantized model for llamacpp onto the .exe, or run it and select the model manually in the popup dialog.

Once the model is loaded, you can connect at http://localhost:5001 (or use the full KoboldAI client).
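
For reference, here is a minimal sketch of hitting the server from Python once a model is loaded. It assumes the KoboldAI-compatible /api/v1/generate endpoint, and the request fields shown are illustrative; check the API docs for the exact schema:

```python
# Minimal sketch: send a generation request to a running koboldcpp server.
# Assumes the KoboldAI-compatible /api/v1/generate endpoint; field names
# in the payload are illustrative and may not match every version.
import json
import urllib.request

payload = json.dumps({"prompt": "Once upon a time,", "max_length": 50}).encode()
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # prints the server's JSON response
```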