v0.8.0
Changes:
- llama.cpp updated to b1601
- added support for StableLM-3b-4e1t models
- added support for Qwen models
- added the ability to merge a LoRA adapter into the model
- added a progress bar for LoRA merging and training
- added the ability to save user templates
- added multiline input
- fixed many other bugs
** Metal support temporarily disabled for GPT2 models
** More about LoRA here: https://github.com/guinmoon/LLMFarm/blob/main/lora.md