Presenton supports running fully offline using open-source models via Ollama. This lets you generate presentations without relying on cloud APIs, keeping your data private and costs low.

🚀 Run Presenton with an Ollama Model

If you plan to run Ollama outside of Docker, make sure Ollama is installed and the models you want to use are already downloaded.
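
For example, to fetch a model on the host ahead of time (a minimal sketch, assuming Ollama is installed from https://ollama.com; llama3.2:3b matches the command below):

ollama pull llama3.2:3b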

To run Presenton with an Ollama model:

docker run -it --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -e CAN_CHANGE_KEYS="false" \
  -v "./user_data:/app/user_data" \
  ghcr.io/presenton/presenton:v0.3.0-beta

💡 Note: A valid Pexels API key is required for image generation when using Ollama models. You can get a free API key at https://www.pexels.com/api/

✅ Add --gpus=all to the docker run command to enable GPU acceleration (see Using GPU).
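
For example, the same command with GPU access enabled (a sketch; assumes the NVIDIA Container Toolkit is installed on the host):

docker run -it --gpus=all --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -e CAN_CHANGE_KEYS="false" \
  -v "./user_data:/app/user_data" \
  ghcr.io/presenton/presenton:v0.3.0-beta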

🧠 Supported Ollama Models

| Model | Size | Graph Support |
|-------|------|---------------|
| Llama Models | | |
| llama3:8b | 4.7 GB | ❌ No |
| llama3:70b | 40 GB | ✅ Yes |
| llama3.1:8b | 4.9 GB | ❌ No |
| llama3.1:70b | 43 GB | ✅ Yes |
| llama3.1:405b | 243 GB | ✅ Yes |
| llama3.2:1b | 1.3 GB | ❌ No |
| llama3.2:3b | 2 GB | ❌ No |
| llama3.3:70b | 43 GB | ✅ Yes |
| llama4:16x17b | 67 GB | ✅ Yes |
| llama4:128x17b | 245 GB | ✅ Yes |
| Gemma Models | | |
| gemma3:1b | 815 MB | ❌ No |
| gemma3:4b | 3.3 GB | ❌ No |
| gemma3:12b | 8.1 GB | ❌ No |
| gemma3:27b | 17 GB | ✅ Yes |
| DeepSeek Models | | |
| deepseek-r1:1.5b | 1.1 GB | ❌ No |
| deepseek-r1:7b | 4.7 GB | ❌ No |
| deepseek-r1:8b | 5.2 GB | ❌ No |
| deepseek-r1:14b | 9 GB | ❌ No |
| deepseek-r1:32b | 20 GB | ✅ Yes |
| deepseek-r1:70b | 43 GB | ✅ Yes |
| deepseek-r1:671b | 404 GB | ✅ Yes |
| Qwen Models | | |
| qwen3:0.6b | 523 MB | ❌ No |
| qwen3:1.7b | 1.4 GB | ❌ No |
| qwen3:4b | 2.6 GB | ❌ No |
| qwen3:8b | 5.2 GB | ❌ No |
| qwen3:14b | 9.3 GB | ❌ No |
| qwen3:30b | 19 GB | ✅ Yes |
| qwen3:32b | 20 GB | ✅ Yes |
| qwen3:235b | 142 GB | ✅ Yes |

✅ Graph Support means the model can generate charts and diagrams in presentations.

📌 Additional Notes

  • Use the OLLAMA_MODEL environment variable to select any supported model (see the example after this list).
  • Ensure your system has enough RAM or GPU memory to handle the model you choose.
  • Always include a PEXELS_API_KEY for full image generation functionality.
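
For example, to switch to a graph-capable model, only OLLAMA_MODEL needs to change (a sketch reusing the command from above; gemma3:27b is one of the graph-capable models from the table):

docker run -it --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="gemma3:27b" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -e CAN_CHANGE_KEYS="false" \
  -v "./user_data:/app/user_data" \
  ghcr.io/presenton/presenton:v0.3.0-beta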