Deploy Piper to any infrastructure using Docker. This guide covers bare-metal servers, VMs, and private clouds.
## Prerequisites

- Docker Engine 24+ and Docker Compose v2
- PostgreSQL 16+ with the pgvector extension
- Redis 7+ or Valkey 8+
- A domain with DNS control (for SSL)
- An LLM provider API key (Groq, OpenAI, Anthropic, or Google)
## Architecture Overview
Piper runs as two containers from a single Docker image:
| Container | MODE | Purpose |
| --- | --- | --- |
| API | `api` | FastAPI server — handles HTTP requests, serves the REST API |
| Worker | `worker` | ARQ worker — processes background jobs (LLM calls, embeddings, email) |
The entrypoint script (`docker/entrypoint.sh`) selects the process based on the `MODE` environment variable. When `RUN_MIGRATIONS=true`, the API container automatically runs Alembic migrations before starting.
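A minimal sketch of how such a `MODE` switch might look — the module paths and commands below are illustrative assumptions, not the contents of Piper's actual `docker/entrypoint.sh`:

```shell
#!/bin/sh
# Illustrative entrypoint sketch: pick the process from $MODE.
# "app.main:app" and "app.worker.WorkerSettings" are assumed names.
set -e

if [ "$MODE" = "api" ]; then
    if [ "$RUN_MIGRATIONS" = "true" ]; then
        alembic upgrade head          # run migrations before serving
    fi
    exec uvicorn app.main:app --host 0.0.0.0 --port 8000
elif [ "$MODE" = "worker" ]; then
    exec arq app.worker.WorkerSettings
else
    echo "Unknown MODE: '$MODE' (expected 'api' or 'worker')" >&2
    exit 1
fi
```

Using `exec` matters here: it replaces the shell with the server process so signals like `SIGTERM` from Docker reach the application directly.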
```text
┌─────────────┐     ┌─────────────┐
│     API     │     │   Worker    │
│  MODE=api   │     │ MODE=worker │
│  port 8000  │     │             │
└──────┬──────┘     └──────┬──────┘
       │                   │
       ├───────────────────┤
       │                   │
  ┌────▼─────┐        ┌────▼─────┐
  │PostgreSQL│        │  Redis   │
  │+pgvector │        │ /Valkey  │
  └──────────┘        └──────────┘
```
## Quick Start with Docker Compose
Create a `docker-compose.yml` for your deployment:
```yaml
services:
  api:
    build:
      context: .
      dockerfile: docker/Dockerfile.backend
    environment:
      MODE: api
      RUN_MIGRATIONS: "true"
      DATABASE_URL: postgresql+asyncpg://piper:${DB_PASSWORD}@postgres:5432/piper
      REDIS_URL: redis://redis:6379
      JWT_SECRET: ${JWT_SECRET}
      ADMIN_API_SECRET: ${ADMIN_API_SECRET}
      SECRET_ENCRYPTION_KEY: ${SECRET_ENCRYPTION_KEY}
      PIPER_AGENT_PROVIDER: ${PIPER_AGENT_PROVIDER}
      PIPER_AGENT_API_KEY: ${PIPER_AGENT_API_KEY}
      FRONTEND_URL: https://app.yourcompany.com
      API_BASE_URL: https://api.yourcompany.com
      DEPLOYMENT_MODE: enterprise
    ports:
      - "8000:8000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  worker:
    build:
      context: .
      dockerfile: docker/Dockerfile.backend
    environment:
      MODE: worker
      DATABASE_URL: postgresql+asyncpg://piper:${DB_PASSWORD}@postgres:5432/piper
      REDIS_URL: redis://redis:6379
      JWT_SECRET: ${JWT_SECRET}
      ADMIN_API_SECRET: ${ADMIN_API_SECRET}
      SECRET_ENCRYPTION_KEY: ${SECRET_ENCRYPTION_KEY}
      PIPER_AGENT_PROVIDER: ${PIPER_AGENT_PROVIDER}
      PIPER_AGENT_API_KEY: ${PIPER_AGENT_API_KEY}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  postgres:
    image: pgvector/pgvector:pg17
    environment:
      POSTGRES_USER: piper
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: piper
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U piper"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  redis:
    image: redis:8-alpine
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  pgdata:
  redisdata:
```
Create a `.env` file with your secrets:
```
DB_PASSWORD=your-strong-db-password
JWT_SECRET=your-jwt-secret-min-32-chars
ADMIN_API_SECRET=your-admin-secret-min-32-chars
SECRET_ENCRYPTION_KEY=your-fernet-key
PIPER_AGENT_PROVIDER=piper
PIPER_AGENT_API_KEY=fw_xxxxx
```
Generate secure values:
```bash
# Generate random secrets
openssl rand -hex 32  # use for JWT_SECRET and ADMIN_API_SECRET

# Generate a Fernet key for SECRET_ENCRYPTION_KEY
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
```
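If you would rather generate all three values in one step without installing `cryptography`, the standard library is enough: a Fernet key is simply 32 random bytes encoded as URL-safe base64. The `generate_secrets` helper below is our own illustration, not part of Piper:

```python
import base64
import os
import secrets

def generate_secrets() -> dict[str, str]:
    """Generate the three random secrets Piper needs.

    JWT_SECRET and ADMIN_API_SECRET are arbitrary random strings
    (at least 32 chars); SECRET_ENCRYPTION_KEY must be a valid
    Fernet key: 32 random bytes, URL-safe base64 encoded.
    """
    return {
        "JWT_SECRET": secrets.token_hex(32),        # 64 hex characters
        "ADMIN_API_SECRET": secrets.token_hex(32),  # 64 hex characters
        "SECRET_ENCRYPTION_KEY": base64.urlsafe_b64encode(os.urandom(32)).decode(),
    }

if __name__ == "__main__":
    # Print in .env format so the output can be appended directly.
    for name, value in generate_secrets().items():
        print(f"{name}={value}")
```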
Save the `ADMIN_API_SECRET` — you’ll need it to create your first organization and admin user.
Start the stack:
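```bash
docker compose up -d
```

Once the health checks pass, `docker compose ps` should show all four services running; the API container applies migrations before it starts serving.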
## Configuration
See the Configuration page for the full environment variable reference. The critical variables are:
| Variable | Required | Description |
| --- | --- | --- |
| `DATABASE_URL` | Yes | PostgreSQL connection string (`postgresql+asyncpg://...`) |
| `REDIS_URL` | Yes | Redis connection string |
| `JWT_SECRET` | Yes | Secret for JWT signing (min 32 chars) |
| `ADMIN_API_SECRET` | Yes | Secret for Admin API authentication (min 32 chars) |
| `SECRET_ENCRYPTION_KEY` | Yes | Fernet key for encrypting stored credentials |
| `PIPER_AGENT_PROVIDER` | Yes | LLM provider: `piper` (managed), `groq`, `openai`, `anthropic`, `google`, `fireworks`, `together`, or `cerebras` |
| `PIPER_AGENT_API_KEY` | Yes | API key for the default LLM provider |
| `MODE` | Yes | `api` or `worker` — determines which process runs |
| `RUN_MIGRATIONS` | No | Set to `true` on the API container to auto-run migrations on startup |
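Misconfigured environment variables are the most common cause of a failed first boot, so a pre-flight check before deploying can save a debugging round-trip. This is a sketch of our own (`check_env` is not a Piper utility), encoding the constraints from the table above:

```python
import os

# Variables the table above marks as required.
REQUIRED = [
    "DATABASE_URL", "REDIS_URL", "JWT_SECRET", "ADMIN_API_SECRET",
    "SECRET_ENCRYPTION_KEY", "PIPER_AGENT_PROVIDER", "PIPER_AGENT_API_KEY",
    "MODE",
]

def check_env(env: dict[str, str]) -> list[str]:
    """Return human-readable problems; an empty list means the env looks sane."""
    problems = [f"{name} is not set" for name in REQUIRED if not env.get(name)]
    if len(env.get("JWT_SECRET", "")) < 32:
        problems.append("JWT_SECRET must be at least 32 characters")
    if len(env.get("ADMIN_API_SECRET", "")) < 32:
        problems.append("ADMIN_API_SECRET must be at least 32 characters")
    if env.get("DATABASE_URL") and not env["DATABASE_URL"].startswith("postgresql+asyncpg://"):
        problems.append("DATABASE_URL must use the postgresql+asyncpg:// scheme")
    if env.get("MODE") not in ("api", "worker"):
        problems.append("MODE must be 'api' or 'worker'")
    return problems

if __name__ == "__main__":
    for problem in check_env(dict(os.environ)):
        print("ERROR:", problem)
```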
## Database Setup
If you’re using the Docker Compose file above, PostgreSQL and the pgvector extension are set up automatically via the `pgvector/pgvector:pg17` image.
For an external PostgreSQL instance (managed or self-hosted):
```sql
-- Create the database
CREATE DATABASE piper;

-- Connect to the piper database, then enable pgvector
\c piper
CREATE EXTENSION IF NOT EXISTS vector;
```
For managed Redis services, use TLS:
```
REDIS_URL=rediss://:password@your-redis-host:6380
```
Migrations run automatically when the API container starts with `RUN_MIGRATIONS=true`. You don’t need to run them manually.
## Reverse Proxy and SSL
Place a reverse proxy in front of the API container to handle SSL termination. Any standard proxy works — Nginx, Caddy, or Traefik.
Example with Caddy (automatic HTTPS via Let’s Encrypt):
```
api.yourcompany.com {
    reverse_proxy localhost:8000
}
```
Example with Nginx:
```nginx
server {
    listen 443 ssl;
    server_name api.yourcompany.com;

    ssl_certificate     /etc/ssl/certs/your-cert.pem;
    ssl_certificate_key /etc/ssl/private/your-key.pem;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
## Bootstrapping
After deployment, create your first organization and admin user via the Admin API:
## Updating
To update Piper, pull the latest code and rebuild:
```bash
git pull
docker compose up -d --build
```
The containers rebuild with the latest code and migrations run automatically on startup.
## Monitoring

### Health Check
The API exposes a health endpoint:
```bash
curl http://localhost:8000/health
# Returns: {"status": "healthy"}
```
The Docker image includes a built-in `HEALTHCHECK` that polls this endpoint every 30 seconds.
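In deploy or CI scripts it is often useful to block until the API is actually ready rather than sleeping for a fixed time. A stdlib-only polling sketch (the helper names are ours, not Piper's), built around the `{"status": "healthy"}` response shown above:

```python
import json
import time
import urllib.error
import urllib.request

def is_healthy(body: bytes) -> bool:
    """Interpret the /health response body."""
    try:
        return json.loads(body).get("status") == "healthy"
    except (ValueError, AttributeError):
        return False

def wait_for_health(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll the health endpoint until it reports healthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if is_healthy(resp.read()):
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; keep polling
        time.sleep(interval)
    return False
```

Typical usage after `docker compose up -d` would be `wait_for_health("http://localhost:8000/health")`, exiting the deploy script with an error if it returns `False`.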
### Logs
```bash
# All services
docker compose logs -f

# API only
docker compose logs -f api

# Worker only
docker compose logs -f worker
```