🔌 Run Presenton with an Ollama Model (Fully Offline)

Presenton supports fully offline operation using open-source models via Ollama. This allows you to generate presentations without relying on cloud APIs, keeping your data private and costs low.

🚀 Example: Run Presenton with Ollama

docker run -it --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e IMAGE_PROVIDER="pexels" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -e CAN_CHANGE_KEYS="false" \
  -v "./app_data:/app_data" \
  ghcr.io/presenton/presenton:latest
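
Once the container is running, the UI is served at http://localhost:5000 (port 5000 on the host is mapped to port 80 inside the container). A quick way to confirm it is responding:

# The web UI should answer on the mapped port
curl -I http://localhost:5000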

🚀 Example: Run Presenton with your own Ollama server

docker run -it --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e OLLAMA_URL="http://XXXXXXXXXXXXX" \
  -e IMAGE_PROVIDER="pexels" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -e CAN_CHANGE_KEYS="false" \
  -v "./app_data:/app_data" \
  ghcr.io/presenton/presenton:latest
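
If Ollama runs on your host machine rather than inside the container, the right OLLAMA_URL depends on your setup. A minimal sketch, assuming a stock Ollama install listening on its default port 11434 (host.docker.internal resolves to the host on Docker Desktop; on Linux it must be added explicitly):

# On the host: pull the model and make sure Ollama is serving (port 11434 by default)
ollama pull llama3.2:3b
ollama serve

# From inside the container, the host is then typically reachable as:
#   OLLAMA_URL="http://host.docker.internal:11434"
# On Linux, add --add-host=host.docker.internal:host-gateway to the docker run
# command so that hostname resolves.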

🧾 Ollama Environment Variables

  • LLM="ollama" Selects Ollama as the LLM backend.
  • OLLAMA_MODEL Required. The Ollama model to use (e.g., llama3.2:3b, mistral, phi3). Example:
    OLLAMA_MODEL="llama3.2:3b"
    
  • OLLAMA_URL Optional. Set this if you’re running Ollama outside Docker or on a custom host. Example:
    OLLAMA_URL="http://XXXXXXXXXXXX"
    
  • PEXELS_API_KEY Required when IMAGE_PROVIDER="pexels". You can get a free API key at https://www.pexels.com/api/
✅ Add --gpus=all to enable GPU acceleration (see Using GPU).
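
For reference, the first example above with GPU acceleration enabled would look something like this (assuming the NVIDIA Container Toolkit is installed on the host):

docker run -it --name presenton --gpus=all -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e IMAGE_PROVIDER="pexels" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -e CAN_CHANGE_KEYS="false" \
  -v "./app_data:/app_data" \
  ghcr.io/presenton/presenton:latest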

🧠 Supported Ollama Models

Model               Size

Llama Models
llama3:8b           4.7 GB
llama3:70b          40 GB
llama3.1:8b         4.9 GB
llama3.1:70b        43 GB
llama3.1:405b       243 GB
llama3.2:1b         1.3 GB
llama3.2:3b         2 GB
llama3.3:70b        43 GB
llama4:16x17b       67 GB
llama4:128x17b      245 GB

Gemma Models
gemma3:1b           815 MB
gemma3:4b           3.3 GB
gemma3:12b          8.1 GB
gemma3:27b          17 GB

DeepSeek Models
deepseek-r1:1.5b    1.1 GB
deepseek-r1:7b      4.7 GB
deepseek-r1:8b      5.2 GB
deepseek-r1:14b     9 GB
deepseek-r1:32b     20 GB
deepseek-r1:70b     43 GB
deepseek-r1:671b    404 GB

Qwen Models
qwen3:0.6b          523 MB
qwen3:1.7b          1.4 GB
qwen3:4b            2.6 GB
qwen3:8b            5.2 GB
qwen3:14b           9.3 GB
qwen3:30b           19 GB
qwen3:32b           20 GB
qwen3:235b          142 GB

📌 Additional Notes

  • Use the OLLAMA_MODEL environment variable to select any supported model.
  • Ensure your system has enough RAM or GPU memory for the model you choose (see the quick check below).
  • Always include a PEXELS_API_KEY so Presenton can fetch stock images for your slides.
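
If you are unsure whether a model will fit, a quick memory check before pulling it (standard Linux and NVIDIA tooling; adjust for your platform):

# Check available system RAM
free -h

# Check total and free GPU memory (NVIDIA GPUs only)
nvidia-smi --query-gpu=memory.total,memory.free --format=csv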