Follow these steps to generate presentations using Ollama on Presenton.
LLM="ollama"
Select Ollama as the LLM backend.
OLLAMA_MODEL
Required. The Ollama model to use (e.g., `llama3.2:3b`, `mistral`, `phi3`).
Example: `OLLAMA_MODEL="llama3.2:3b"`
OLLAMA_URL
Optional. Set this if you're running Ollama outside Docker or on a custom host.
Example: `OLLAMA_URL="http://localhost:11434"`
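If you point `OLLAMA_URL` at a custom host, it is worth confirming the server is reachable before starting Presenton. A minimal check, assuming Ollama is listening on its default port (11434):

```shell
# Query Ollama's /api/tags endpoint, which lists locally available models.
# A JSON response means the server is up and the URL is usable as OLLAMA_URL.
curl -s http://localhost:11434/api/tags
```

If the request fails, verify that Ollama is running and that the host and port in `OLLAMA_URL` match your setup.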
You can get a free API key at https://www.pexels.com/api/
✅ Add `--gpus=all` to enable GPU acceleration (see Using GPU).
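The variables above come together in a single `docker run` invocation. The sketch below is illustrative only: the image name, port mapping, and volume path are assumptions, so confirm the exact values against the Presenton README before use.

```shell
# Sketch of running Presenton with Ollama as the LLM backend.
# Image name, ports, and volume are assumed values, not confirmed.
docker run -it --name presenton \
  -p 5000:80 \
  -e LLM="ollama" \
  -e OLLAMA_MODEL="llama3.2:3b" \
  -e PEXELS_API_KEY="your-pexels-api-key" \
  --gpus=all \
  -v "./user_data:/app/user_data" \
  ghcr.io/presenton/presenton:latest
```

Drop `--gpus=all` if no GPU is available; substitute any model from the table below for `OLLAMA_MODEL`.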
| Model | Size |
|---|---|
| **Llama Models** | |
| llama3:8b | 4.7 GB |
| llama3:70b | 40 GB |
| llama3.1:8b | 4.9 GB |
| llama3.1:70b | 43 GB |
| llama3.1:405b | 243 GB |
| llama3.2:1b | 1.3 GB |
| llama3.2:3b | 2 GB |
| llama3.3:70b | 43 GB |
| llama4:16x17b | 67 GB |
| llama4:128x17b | 245 GB |
| **Gemma Models** | |
| gemma3:1b | 815 MB |
| gemma3:4b | 3.3 GB |
| gemma3:12b | 8.1 GB |
| gemma3:27b | 17 GB |
| **DeepSeek Models** | |
| deepseek-r1:1.5b | 1.1 GB |
| deepseek-r1:7b | 4.7 GB |
| deepseek-r1:8b | 5.2 GB |
| deepseek-r1:14b | 9 GB |
| deepseek-r1:32b | 20 GB |
| deepseek-r1:70b | 43 GB |
| deepseek-r1:671b | 404 GB |
| **Qwen Models** | |
| qwen3:0.6b | 523 MB |
| qwen3:1.7b | 1.4 GB |
| qwen3:4b | 2.6 GB |
| qwen3:8b | 5.2 GB |
| qwen3:14b | 9.3 GB |
| qwen3:30b | 19 GB |
| qwen3:32b | 20 GB |
| qwen3:235b | 142 GB |
Use the `OLLAMA_MODEL` environment variable to select any supported model, and set `PEXELS_API_KEY` for full image generation functionality.