# Using Ollama Models

Follow these steps to generate presentations using Ollama.
Presenton supports running fully offline using open-source models via Ollama. This allows you to generate presentations without relying on cloud APIs, keeping your data private and costs low.
## Run Presenton with an Ollama Model
Make sure you have Ollama installed and models downloaded if running them outside Docker.
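For example, with the Ollama CLI (the model name here is just one entry from the table below):

```bash
# Download the model once so it is available locally
ollama pull llama3.2:3b

# Verify the model is installed
ollama list
```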
To run Presenton with an Ollama model:
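A minimal sketch of the command (the image name, port mapping, `LLM` selector, and volume path here are assumptions; check the Presenton documentation for the exact values for your deployment):

```bash
# Sketch only: the image name, port mapping, LLM selector, and volume path are
# assumptions. OLLAMA_MODEL and PEXELS_API_KEY are the variables this guide describes.
docker run -it --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -v "./user_data:/app/user_data" \
  ghcr.io/presenton/presenton:latest
```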
> **Note:** A valid Pexels API key is required for image generation when using Ollama models. You can get a free API key at https://www.pexels.com/api/
Add `--gpus=all` to the `docker run` command to enable GPU acceleration (see Using GPU).
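For example, the same assumed command as above with GPU acceleration enabled:

```bash
# Same sketch as above, with the GPU flag added
docker run -it --gpus=all --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -v "./user_data:/app/user_data" \
  ghcr.io/presenton/presenton:latest
```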
## Supported Ollama Models
| Model | Size | Graph Support |
|---|---|---|
| **Llama Models** | | |
| llama3:8b | 4.7 GB | ❌ No |
| llama3:70b | 40 GB | ✅ Yes |
| llama3.1:8b | 4.9 GB | ❌ No |
| llama3.1:70b | 43 GB | ✅ Yes |
| llama3.1:405b | 243 GB | ✅ Yes |
| llama3.2:1b | 1.3 GB | ❌ No |
| llama3.2:3b | 2 GB | ❌ No |
| llama3.3:70b | 43 GB | ✅ Yes |
| llama4:16x17b | 67 GB | ✅ Yes |
| llama4:128x17b | 245 GB | ✅ Yes |
| **Gemma Models** | | |
| gemma3:1b | 815 MB | ❌ No |
| gemma3:4b | 3.3 GB | ❌ No |
| gemma3:12b | 8.1 GB | ❌ No |
| gemma3:27b | 17 GB | ✅ Yes |
| **DeepSeek Models** | | |
| deepseek-r1:1.5b | 1.1 GB | ❌ No |
| deepseek-r1:7b | 4.7 GB | ❌ No |
| deepseek-r1:8b | 5.2 GB | ❌ No |
| deepseek-r1:14b | 9 GB | ❌ No |
| deepseek-r1:32b | 20 GB | ✅ Yes |
| deepseek-r1:70b | 43 GB | ✅ Yes |
| deepseek-r1:671b | 404 GB | ✅ Yes |
| **Qwen Models** | | |
| qwen3:0.6b | 523 MB | ❌ No |
| qwen3:1.7b | 1.4 GB | ❌ No |
| qwen3:4b | 2.6 GB | ❌ No |
| qwen3:8b | 5.2 GB | ❌ No |
| qwen3:14b | 9.3 GB | ❌ No |
| qwen3:30b | 19 GB | ✅ Yes |
| qwen3:32b | 20 GB | ✅ Yes |
| qwen3:235b | 142 GB | ✅ Yes |
> **Graph Support** means the model can generate charts and diagrams in presentations.
## Additional Notes
- Use the `OLLAMA_MODEL` environment variable to select any supported model (see the sketch after this list).
- Ensure your system has enough RAM or GPU memory to handle the model.
- Always include a `PEXELS_API_KEY` for full image generation functionality.
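For instance, to select a graph-capable model (same assumed invocation as in the sketch above; per the table, llama3.3:70b needs roughly 43 GB of memory):

```bash
# Swap in a graph-capable model via OLLAMA_MODEL; ensure ~43 GB of RAM/VRAM is available
docker run -it --name presenton -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.3:70b" \
  -e PEXELS_API_KEY="your_pexels_api_key" \
  -v "./user_data:/app/user_data" \
  ghcr.io/presenton/presenton:latest
```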