diff --git a/Ollama_ColabV4.ipynb b/Ollama_ColabV4.ipynb
new file mode 100644
index 0000000..45347c4
--- /dev/null
+++ b/Ollama_ColabV4.ipynb
@@ -0,0 +1,183 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "gpuType": "T4",
+      "authorship_tag": "ABX9TyOL1tgFdQVPjxvlFymxVAOf",
+      "include_colab_link": true
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    },
+    "accelerator": "GPU"
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "view-in-github",
+        "colab_type": "text"
+      },
+      "source": [
+        "\"Open"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# Explore the Advanced Ollama-Companion: Streamlit Enhanced\n",
+        "---\n",
+        "\n",
+        "Welcome to the latest version of Ollama-Companion, now enhanced with Streamlit for a more interactive and intuitive user experience. As the developer behind this project, I'm excited to introduce a suite of new features that redefine how you interact with and manage language models.\n",
+        "\n",
+        "Key innovations in Ollama-Companion:\n",
+        "\n",
+        "1. **Quantization of Huggingface models via the UI**\n",
+        "   The foremost feature of Ollama-Companion is the ability to quantize Huggingface models through a user-friendly interface, letting you efficiently convert models into different formats for a variety of computational needs.\n",
+        "2. **Dynamic module integration**\n",
+        "   Seamlessly integrate a variety of modules as defined in shared.py, ensuring a modular and scalable approach to application development.\n",
+        "3. **Streamlit-powered user interface**\n",
+        "   The UI has been revamped using Streamlit to be more intuitive and responsive, making it easier to navigate and interact with the various features.\n",
+        "4. **Enhanced model interaction features**\n",
+        "   * **Modelfile Manager:** Beyond model selection, delve into the details of each model, with options to view, manage, and even delete model files as required.\n",
+        "   * **Interactive Modelfile Creator:** Customize your model files in real time for enhanced control and flexibility.\n",
+        "5. **Chat interface with LLAVA image analysis**\n",
+        "   The chat interface is equipped with LLAVA for image recognition and analysis, adding a dynamic, interactive dimension to your conversations with language models.\n",
+        "6. **Advanced configuration tools**\n",
+        "   * **Ollama API Configurator:** Manage Ollama API endpoints directly from the UI.\n",
+        "   * **LiteLLM Proxy and Public Endpoint:** Easily set up proxies and public endpoints for secure and efficient model sharing.\n",
+        "7. **Efficient model management**\n",
+        "   * **Fast model downloading:** Download models from Huggingface with improved speed.\n",
+        "   * **Quantization options:** Choose between high- or medium-precision GGUF formats for model conversion.\n",
+        "   * **Secure model uploads:** Upload models to Huggingface confidently with enhanced security measures.\n",
+        "8. **Security enhancements**\n",
+        "   * **Token encryption:** Protect your Huggingface token with an additional layer of encryption for increased data security."
+      ],
+      "metadata": {
+        "id": "SIujJTVa7hAK"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "### How to run\n",
+        "* Run the cell below to download all dependencies and start Ollama with a public URL. It installs a lot of dependencies, so stay posted.\n",
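+        "\n",
+        "Once the install cell finishes, it prints a public tunnel URL for the Streamlit UI. If you also expose the Ollama API through a public endpoint (configurable in the Companion UI), you can query it from Python. Below is a minimal sketch using Ollama's standard `/api/generate` endpoint; the tunnel URL and the `llama2` model name are placeholders, so substitute your own:\n",
+        "\n",
+        "```python\n",
+        "import requests\n",
+        "\n",
+        "# Placeholder public endpoint; replace with the URL printed by your tunnel.\n",
+        "OLLAMA_URL = \"https://your-tunnel-url.trycloudflare.com\"\n",
+        "\n",
+        "# Ollama's generate endpoint; assumes the model has already been pulled.\n",
+        "resp = requests.post(\n",
+        "    f\"{OLLAMA_URL}/api/generate\",\n",
+        "    json={\"model\": \"llama2\", \"prompt\": \"Why is the sky blue?\", \"stream\": False},\n",
+        "    timeout=120,\n",
+        ")\n",
+        "print(resp.json()[\"response\"])\n",
+        "```"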
+      ],
+      "metadata": {
+        "id": "3gQuQak3Ckh4"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Clone the repository\n",
+        "print(\"Cloning Ollama-Companion from git...\")\n",
+        "!git clone --branch Colab-installer https://github.com/Luxadevi/Ollama-Companion.git 2> /dev/null 1>&2\n",
+        "\n",
+        "# Install virtualenv\n",
+        "print(\"Installing some dependencies, please hold on...\")\n",
+        "!sudo apt install virtualenv 2> /dev/null 1>&2\n",
+        "\n",
+        "# Convert Windows line endings to Unix line endings in the install script\n",
+        "print(\"Converting line endings in install script...\")\n",
+        "!sed -i 's/\\r//' /content/Ollama-Companion/install.sh 2> /dev/null 1>&2\n",
+        "\n",
+        "# Make the install script executable and run it\n",
+        "print(\"Running the installation script and compiling Llama.cpp...\")\n",
+        "!chmod +x /content/Ollama-Companion/install.sh 2> /dev/null 1>&2\n",
+        "!/content/Ollama-Companion/install.sh 2> /dev/null 1>&2\n",
+        "\n",
+        "# Purge the pip cache and clean up temporary files\n",
+        "print(\"Cleaning up and finalizing setup...\")\n",
+        "!pip cache purge 2> /dev/null 1>&2\n",
+        "!find /tmp -type f -atime +1 -delete 2> /dev/null 1>&2\n",
+        "!sudo apt-get clean 2> /dev/null 1>&2\n",
+        "!sudo apt-get autoremove 2> /dev/null 1>&2\n",
+        "\n",
+        "# Run the application (this call blocks while the apps are running)\n",
+        "print(\"Starting all apps; run the cell below if you want to restart.\")\n",
+        "!python3 /content/Ollama-Companion/run_app.py\n",
+        "print(\"Apps stopped; run the cell below to restart.\")"
+      ],
+      "metadata": {
+        "colab": {
+          "base_uri": "https://localhost:8080/"
+        },
+        "id": "gQSkK1w82y52",
+        "outputId": "eb0b3805-9c1f-49c4-bf4d-e91e999944e2"
+      },
+      "execution_count": null,
+      "outputs": [
+        {
+          "output_type": "stream",
+          "name": "stdout",
+          "text": [
+            "Cloning Ollama-Companion from git...\n",
+            "Installing some dependencies, please hold on...\n",
+            "Converting line endings in install script...\n",
+            "Running the installation script and compiling Llama.cpp...\n",
+            "Cleaning up and finalizing setup...\n",
+            "Starting all apps; run the cell below if you want to restart.\n",
+            "Starting Cloudflare Tunnel...\n",
+            "Starting Streamlit App...\n",
+            "Tunnel URL: https://sleep-wonder-ban-shakira.trycloudflare.com\n",
+            "\n",
+            "Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.\n",
+            "\u001b[0m\n",
+            "\u001b[0m\n",
+            "\u001b[34m\u001b[1m  You can now view your Streamlit app in your browser.\u001b[0m\n",
+            "\u001b[0m\n",
+            "\u001b[34m  Network URL: \u001b[0m\u001b[1mhttp://172.28.0.12:8501\u001b[0m\n",
+            "\u001b[34m  External URL: \u001b[0m\u001b[1mhttp://34.148.122.138:8501\u001b[0m\n",
+            "\u001b[0m\n"
+          ]
+        }
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "**If the app stopped or you want to relaunch:**  \n",
+        "Run this cell to restart your LLM stack."
+      ],
+      "metadata": {
+        "id": "9QADK7qltjVd"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Relaunch the Ollama-Companion stack\n",
+        "!python3 /content/Ollama-Companion/run_app.py"
+      ],
+      "metadata": {
+        "id": "aCwfO2OT6W_8"
+      },
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}
\ No newline at end of file