🌐 Example: Run Presenton with a Custom LLM

docker run -it --name presenton -p 5000:80 \
  -e LLM="custom" \
  -e CUSTOM_LLM_URL="http://XXXXXXXXXXX/v1" \
  -e CUSTOM_LLM_API_KEY="your_custom_api_key" \
  -e CUSTOM_MODEL="your-model-name" \
  -e IMAGE_PROVIDER="pexels" \
  -e PEXELS_API_KEY="xxxxxxxxxxx" \
  -e CAN_CHANGE_KEYS="false" \
  -v "./app_data:/app_data" \
  ghcr.io/presenton/presenton:latest

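Once the container is up, you can confirm it started correctly by tailing its logs and opening the mapped host port (5000 in the example above) in your browser:

docker logs -f presenton
# then open http://localhost:5000
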
🔧 Environment Variables for Custom LLM

| Variable | Description |
|----------|-------------|
| `LLM="custom"` | Use the `custom` value to enable OpenAI-compatible API support |
| `CUSTOM_LLM_URL` | Base URL of your OpenAI-compatible API (e.g. `http://XXXXXXXXXXX/v1`) |
| `CUSTOM_LLM_API_KEY` | API key used for authorization (sent as a Bearer header) |
| `CUSTOM_MODEL` | ID of the model to use (as defined by your API provider) |
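
These three CUSTOM_* values map onto a standard OpenAI-style request: the URL is the base path, the key is sent in the Authorization: Bearer header, and the model ID goes in the request body. As a rough sanity check before starting Presenton (the host below is the same placeholder as above, and exact routes depend on your provider), most OpenAI-compatible servers will answer a chat completions call like this:

curl http://XXXXXXXXXXX/v1/chat/completions \
  -H "Authorization: Bearer your_custom_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello"}]
  }'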