Ollama Cheatsheet

Here is a comprehensive Ollama cheat sheet covering the most commonly used commands, with brief explanations:

Installation and Setup

macOS

Download Ollama for macOS from https://ollama.com/download, or install it with Homebrew:

brew install ollama

Windows

Download Ollama for Windows from https://ollama.com/download and run the installer.

Linux

curl -fsSL https://ollama.com/install.sh | sh
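After installation, a quick sanity check confirms the CLI is available on your PATH:

```bash
# Print the installed Ollama version
ollama --version
```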

Docker

Use the official image available at ollama/ollama on Docker Hub.
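A minimal sketch of running the container (the model name llama3.2 is only an example; substitute any model you intend to use):

```bash
# Start the Ollama server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run a model inside the running container
docker exec -it ollama ollama run llama3.2
```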

Running Ollama

Start Ollama:

ollama serve

Run a specific model:

ollama run <model_name>
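For example, assuming the llama3.2 model has already been pulled (model name and prompt are illustrative):

```bash
# Start an interactive chat session
ollama run llama3.2

# Or pass a one-off prompt and print the response to stdout
ollama run llama3.2 "Explain the difference between TCP and UDP in one paragraph."
```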

Model Library and Management

List locally downloaded models:

ollama list

Pull a model:

ollama pull <model_name>

Create a model from a Modelfile:

ollama create <model_name> -f <model_file>
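A minimal sketch of building a custom model from a Modelfile (the base model, parameter, system prompt, and model name are all illustrative):

```bash
# Write a simple Modelfile that customizes a base model
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM "You are a concise technical assistant."
EOF

# Build the custom model from the Modelfile
ollama create my-assistant -f ./Modelfile

# Use it like any other model
ollama run my-assistant
```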

Remove a model:

ollama rm <model_name>

Copy a model:

ollama cp <source_model> <new_model>

Advanced Usage

Multiline and Multimodal Input

For multiline prompts, wrap the text in triple quotes ("""). For multimodal models such as llava, pass image file paths directly in the prompt; the model reads the referenced images.
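A short sketch of both features (llava and the image path are assumptions; use any vision-capable model and a local image you actually have):

```bash
# Multimodal: ask a vision-capable model about a local image
ollama run llava "Describe what is in this image: ./photos/diagram.png"

# Multiline: inside an interactive `ollama run` session, wrap the prompt in
# triple quotes to span several lines:
#   >>> """Summarize these notes:
#   ... point one
#   ... point two
#   ... """
```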

REST API Examples

Generate a response:

curl http://localhost:11434/api/generate -d '{"model": "<model_name>", "prompt": "<prompt>"}'

Chat with a model:

curl http://localhost:11434/api/chat -d '{"model": "<model_name>", "messages": [{"role": "user", "content": "<message>"}]}'
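Both endpoints stream newline-delimited JSON by default; setting "stream": false returns a single response object. A sketch with a concrete (illustrative) model name:

```bash
# Request a single, non-streaming completion
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```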

VS Code Integration

VS Code extensions that support local models connect to the Ollama server, so start it first:

ollama serve

Run a model:

ollama run <model_name>
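Most editor integrations talk to the local REST API on port 11434, so a quick reachability check can save debugging time:

```bash
# List the models the local server currently has available
curl http://localhost:11434/api/tags
```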

AI Developer Scripts

Tools & Integrations

Community & Resources

Additional Tips

GPU Support (verify that containers can access an NVIDIA GPU):

podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable ubuntu nvidia-smi -L
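If the check above lists your GPU, the Ollama container itself can be started with GPU access. A sketch using Docker and the NVIDIA Container Toolkit (adapt the device flags for Podman/CDI as needed):

```bash
# Start the Ollama container with all NVIDIA GPUs attached
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```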

OpenShift:

oc new-project darmstadt-workshop
oc apply -f deployments/ollama.yaml
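After applying the manifest, a couple of follow-up checks (the service name ollama is an assumption; it depends on what deployments/ollama.yaml actually defines):

```bash
# Watch the Ollama pod start up
oc get pods -w

# Forward the service locally to try the API (assumes a Service named "ollama")
oc port-forward svc/ollama 11434:11434
```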

Debugging:

oc run mycurl --image=curlimages/curl -it -- sh
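From inside that throwaway curl pod you can probe the in-cluster API directly (the service name ollama is again an assumption):

```bash
# Inside the debug pod: list models served by the in-cluster Ollama service
curl http://ollama:11434/api/tags
```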

Useful Plugins

Additional Tools