# Docker Compose Deployment

A complete guide to deploying Productify with Docker Compose.

## Prerequisites

Before starting, ensure your system meets these requirements:

### System Requirements
- Operating System: Linux (Ubuntu 20.04+, Debian 11+), macOS (12+), or Windows 10/11 with WSL2
- Docker Engine: 20.10+ installed and running
- Docker Compose: 2.0+ (usually bundled with Docker Desktop)
- Memory: Minimum 4 GB RAM (8 GB recommended)
- Storage: Minimum 10 GB free disk space for images and database
- Network: Internet connection for downloading dependencies (first run)
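If you script your setup, the minimum versions above can be enforced programmatically. A minimal sketch using `sort -V` for version comparison; the example values are placeholders, and in a real check you would capture them from `docker version --format '{{.Server.Version}}'` and `docker compose version --short`:

```bash
#!/bin/sh
# version_ge A B: succeed when dotted version A is >= version B.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example values; in a real check, capture them from the docker CLI.
docker_version="24.0.7"
compose_version="2.24.6"

version_ge "$docker_version" "20.10" || { echo "Docker too old: $docker_version" >&2; exit 1; }
version_ge "$compose_version" "2.0"  || { echo "Compose too old: $compose_version" >&2; exit 1; }
echo "versions OK"
```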
### Verify Installation
```bash
# Check Docker version
docker --version
# Should show: Docker version 20.10.x or higher

# Check Docker Compose version
docker compose version
# Should show: Docker Compose version 2.x.x or higher

# Verify Docker is running
docker ps
# Should list running containers (or an empty list if none are running)
```

## Quick Start
### Minimal Setup
```yaml
version: "3.8"

services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: manager
      POSTGRES_USER: manager
      POSTGRES_PASSWORD: ${DB_PASSWORD:-changeme}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U manager"]
      interval: 10s
      timeout: 5s
      retries: 5

  manager:
    image: ghcr.io/productifyfw/manager:latest
    ports:
      - "8080:8080"
    environment:
      MODE: both
      DB_HOST: postgres
      DB_PASSWORD: ${DB_PASSWORD:-changeme}
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped

volumes:
  postgres_data:
```

Run:

```bash
docker compose up -d
```

## Step-by-Step Setup Guide
### 1. Clone the Repository

First, download the source code for the components you need:

```bash
# Clone individual components:
git clone https://github.com/ProductifyFW/manager.git
git clone https://github.com/ProductifyFW/proxy.git
git clone https://github.com/ProductifyFW/autoscaler.git
```

### 2. Start Manager Component
The manager component includes the GraphQL API, database, and frontend application.
```bash
cd manager

# Start all services (postgres DB + backend + frontend)
docker compose up -d

# View logs (optional)
docker compose logs -f

# Check service status
docker compose ps
```

The manager will be available at http://localhost:8080.

**Important ports:**

- `8080`: Manager backend API and frontend
- `5432`: PostgreSQL database (localhost only)
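Once the containers are starting, a script can block until the manager actually answers. A small sketch with a generic retry helper; it assumes the image serves the same `/health` endpoint that the production health checks probe:

```bash
#!/bin/sh
# retry N CMD...: run CMD up to N times, sleeping 2 seconds between attempts.
retry() {
  n=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$n" ] && return 1
    i=$((i + 1))
    sleep 2
  done
  return 0
}

# Usage after `docker compose up -d` (waits up to ~60s):
#   retry 30 curl -fsS http://localhost:8080/health >/dev/null \
#     && echo "manager is up at http://localhost:8080"
```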
::: warning Development Mode
In development mode, the manager runs without authentication. For production, always use the proxy for authenticated access.
:::
### 3. Start Proxy Component (Optional)
The proxy provides a unified entry point, authentication, and routing.
```bash
cd proxy

# Start proxy and Pocket ID
docker compose up -d

# Check status
docker compose ps

# Open Pocket ID setup in browser
open http://pocketid.localhost/setup
```

**Available services:**

- http://pocketid.localhost: Pocket ID authentication service
- http://manager.localhost: Manager application (with authentication)
- http://proxytest.localhost: Proxy test page
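On some systems (notably some Linux distributions), `*.localhost` subdomains do not resolve out of the box. A sketch that prints the `/etc/hosts` entries for the three hostnames above; appending them requires root and assumes the proxy listens on the loopback interface:

```bash
#!/bin/sh
# hosts_lines: print /etc/hosts entries for the proxy's local hostnames.
hosts_lines() {
  for host in pocketid.localhost manager.localhost proxytest.localhost; do
    printf '127.0.0.1 %s\n' "$host"
  done
}

# Usage (requires root): hosts_lines | sudo tee -a /etc/hosts
hosts_lines
```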
### 4. Start Autoscaler Optimizer (Optional)
The optimizer service can run standalone for testing:
```bash
cd autoscaler/optimizer

# Option 1: Using a Python virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
python -m optimizer.main

# Option 2: Using Docker
docker compose up optimizer
```

::: warning Autoscaler Limitations
Full autoscaler functionality requires Nomad. See Nomad Deployment for the complete setup.
:::
## Production Deployment
### Complete Stack
```yaml
version: "3.8"

services:
  # Database
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: ${DB_NAME:-manager}
      POSTGRES_USER: ${DB_USER:-manager}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=en_US.UTF-8"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-manager}"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    networks:
      - backend

  # Manager API instances
  manager-api:
    image: ghcr.io/productifyfw/manager:${VERSION:-latest}
    deploy:
      replicas: 3
    environment:
      PFY_RUN_MODE: api
      PFY_PORT: 8080
      PFY_ENV: ${ENV:-production}
      # Database
      PFY_DB_HOST: postgres
      PFY_DB_PORT: 5432
      PFY_DB_USER: ${DB_USER:-manager}
      PFY_DB_PASSWORD: ${DB_PASSWORD}
      PFY_DB_NAME: ${DB_NAME:-manager}
      PFY_DB_SSLMODE: ${DB_SSLMODE:-require}
      # PocketID
      PFY_POCKET_ID_HOST: ${POCKET_ID_HOST}
      PFY_POCKET_ID_API_KEY: ${POCKET_ID_API_KEY}
      # Cron
      PFY_CRON_BACKEND_HEARTBEAT_TIMEOUT: 2m
      PFY_CRON_TRIGGER_CHECK_INTERVAL: 1s
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
      interval: 10s
      timeout: 5s
      retries: 3
    restart: unless-stopped
    networks:
      - backend
      - frontend
      - monitoring
```
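Before deploying, it is worth rendering the file with its environment applied to catch interpolation and syntax errors early. A sketch wrapping `docker compose config` (the `--quiet` flag validates without printing the rendered file):

```bash
#!/bin/sh
# compose_lint FILE: validate a compose file together with the local .env.
compose_lint() {
  docker compose -f "$1" config --quiet && echo "$1 is valid"
}

# Usage: compose_lint docker-compose.yml
```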
## Pocket ID Authentication Setup
Pocket ID provides OpenID Connect (OIDC) authentication for the Productify platform. Follow these steps for complete configuration.
### 1. Initial Setup
After starting the containers, access the setup page:
1. Navigate to `http://pocketid.localhost/setup`
2. Create an administrator account:
   - Username
   - Email address
   - Password
3. Configure email server (optional but recommended for production)
4. Set up Passkey (optional, enhances security)
### 2. Generate API Key for Manager
The manager needs an API key to communicate with Pocket ID:
1. Log in to Pocket ID at `http://pocketid.localhost`
2. Go to **Settings > Admin > API Keys**
3. Click "Create New API Key"
4. Enter a name (e.g., "Manager Service")
5. **Copy the generated key** (it will only be shown once!)
6. Add it to the manager's `config.yml`:
```yaml
env: production
port: 8080
db:
  host: db
  port: 5432
  user: postgres
  password: password
  dbname: productify
  sslmode: require
pocket_id:
  host: http://pocketid:1411
  api_key: <paste-your-api-key-here>
```

### 3. Register OIDC Client for Proxy
To enable login through the proxy:
1. Go to **Settings > Admin > OIDC Clients**
2. Click "Create New Client"
3. Fill in the details:
   - Client Name: Manager Proxy (or any name)
   - Redirect URIs: `http://*.localhost/auth/oauth2/generic/authorization-code-callback`
   - Scopes: `openid`, `email`, `profile`
   - Grant Types: Authorization Code
4. Save and copy the Client ID and Client Secret
5. Update the proxy `Caddyfile`:
```
{
  security {
    oauth identity provider generic {
      realm generic
      driver generic
      client_id <paste-client-id-here>
      client_secret <paste-client-secret-here>
      scopes openid email profile
      # ... rest of config
    }
  }
}
```

### 4. Development Login (No Email)
If Passkey is not set up and email is not configured, use one-time access codes:
```bash
# In Docker Compose environment
docker compose exec pocketid /app/pocket-id one-time-access-token <username-or-email>

# The output URL can be opened in a browser to log in directly
```

### Security Best Practices
::: warning Production Security
- Always enable email verification in production
- Use strong passwords and MFA (Multi-Factor Authentication)
- Never store API keys or client secrets in version control
- Use environment variables or encrypted configuration files
- HTTPS is mandatory in production environments
- Regularly rotate API keys and secrets
:::

## Production Stack (continued)

These services complete the production compose file started above:

### Manager Executor

```yaml
  # Manager Executor (single instance)
  manager-executor:
    image: ghcr.io/productifyfw/manager:${VERSION:-latest}
    environment:
      MODE: executor
      # Database
      DB_HOST: postgres
      DB_PORT: 5432
      DB_USER: ${DB_USER:-manager}
      DB_PASSWORD: ${DB_PASSWORD}
      DB_NAME: ${DB_NAME:-manager}
      # Logging
      LOG_LEVEL: ${LOG_LEVEL:-info}
      LOG_FORMAT: json
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped
    networks:
      - backend
```

### Optimizer Service
```yaml
  optimizer:
    image: ghcr.io/productifyfw/optimizer:${VERSION:-latest}
    ports:
      - "8000:8000"
      - "9090:9090" # Metrics
    environment:
      CACHE_SIZE: ${OPTIMIZER_CACHE_SIZE:-10}
      FORECAST_HORIZON: ${OPTIMIZER_FORECAST_HORIZON:-60}
      LOG_LEVEL: ${LOG_LEVEL:-INFO}
    volumes:
      - ./optimizer-config.ini:/app/config.ini:ro
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped
    networks:
      - backend
```
### Proxy (Caddy)
```yaml
  proxy:
    image: caddy:2.7-alpine
    ports:
      - "80:80"
      - "443:443"
      - "2019:2019" # Admin API
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
      - ${CERTS_PATH:-./certs}:/certs:ro
    environment:
      ACME_EMAIL: ${ACME_EMAIL}
    depends_on:
      - manager-api
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:2019/health"]
      interval: 10s
      timeout: 5s
      retries: 3
    restart: unless-stopped
    networks:
      - frontend
```
### Volumes and Networks

```yaml
volumes:
  postgres_data:
  caddy_data:
  caddy_config:

networks:
  backend:
    internal: true
  frontend:
```
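The stack will not start sensibly without `DB_PASSWORD` and friends. A sketch that bootstraps a minimal `.env` with a random database password; apart from the password, the values are the defaults used throughout this guide, and `openssl` availability is assumed:

```bash
#!/bin/sh
set -eu
# Demo in a scratch directory; in real use, run this next to docker-compose.yml.
cd "$(mktemp -d)"

if [ ! -f .env ]; then
  cat > .env <<EOF
VERSION=1.0.0
DB_NAME=manager
DB_USER=manager
DB_PASSWORD=$(openssl rand -hex 32)
ENV=production
ACME_EMAIL=admin@example.com
EOF
  chmod 600 .env
fi
echo ".env ready: $(pwd)/.env"
```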
### Environment File (.env)
```bash
# Version
VERSION=1.0.0

# Database
DB_NAME=manager
DB_USER=manager
DB_PASSWORD=<secure-random-password>

# PocketID Integration
POCKET_ID_HOST=https://auth.example.com
POCKET_ID_API_KEY=<your-api-key>

# Environment
ENV=production

# Proxy
ACME_EMAIL=admin@example.com

# Optimizer
OPTIMIZER_CACHE_SIZE=10
OPTIMIZER_FORECAST_HORIZON=60
```

### Caddyfile
```
{
  admin :2019
  email {$ACME_EMAIL}
}

manager.example.com {
  reverse_proxy manager-api:8080 {
    lb_policy least_conn
    health_uri /health
    health_interval 10s
    health_timeout 5s
  }
}
```

## Development Setup
### Development Compose
```yaml
version: "3.8"

services:
  postgres:
    image: postgres:16
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: manager_dev
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
    volumes:
      - postgres_dev:/var/lib/postgresql/data

  manager:
    build:
      context: ./manager
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      MODE: both
      DB_HOST: postgres
      DB_USER: dev
      DB_PASSWORD: dev
      DB_NAME: manager_dev
      LOG_LEVEL: debug
    volumes:
      - ./manager:/app
    depends_on:
      - postgres
    command: ["air", "-c", ".air.toml"] # Hot reload

  optimizer:
    build:
      context: ./autoscaler/optimizer
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    environment:
      LOG_LEVEL: DEBUG
    volumes:
      - ./autoscaler/optimizer:/app

volumes:
  postgres_dev:
```

## Advanced Configuration
### Secrets Management
Using Docker Secrets:
```yaml
version: "3.8"

services:
  manager-api:
    image: ghcr.io/productifyfw/manager:latest
    secrets:
      - db_password
      - auth_secret
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_password
      AUTH_CLIENT_SECRET_FILE: /run/secrets/auth_secret

secrets:
  db_password:
    file: ./secrets/db_password.txt
  auth_secret:
    file: ./secrets/auth_secret.txt
```
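Compose only mounts the secret files; the application still has to read them. If the image's entrypoint supports the common `*_FILE` convention, it resolves each `VAR_FILE` into `VAR`. A sketch of such an entrypoint helper; that the Productify image follows this convention is an assumption, not documented behavior:

```bash
#!/bin/sh
# file_env VAR: if VAR_FILE is set and points at a file, export VAR with
# the file's contents (the common Docker *_FILE secret convention).
file_env() {
  _file=$(eval "printf '%s' \"\${${1}_FILE:-}\"")
  if [ -n "$_file" ] && [ -f "$_file" ]; then
    eval "$1=\$(cat \"\$_file\"); export $1"
  fi
}

# Demo: simulate the secret file that compose mounts under /run/secrets.
secret=$(mktemp)
printf 'supersecret' > "$secret"
DB_PASSWORD_FILE=$secret
file_env DB_PASSWORD
echo "$DB_PASSWORD"   # → supersecret
```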
### Resource Limits

```yaml
services:
  manager-api:
    image: ghcr.io/productifyfw/manager:latest
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2G
        reservations:
          cpus: "0.5"
          memory: 512M
```

### Logging
```yaml
services:
  manager-api:
    image: ghcr.io/productifyfw/manager:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
```

### Custom Networks
```yaml
networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
  backend:
    driver: bridge
    internal: true
    ipam:
      config:
        - subnet: 172.21.0.0/16
```

## Monitoring
### Prometheus
```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.path=/prometheus"
    networks:
      - backend

volumes:
  prometheus_data:
```

`prometheus.yml`:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "manager"
    static_configs:
      - targets: ["manager-api:9090"]
```

## Troubleshooting
### Port Conflicts
If you see "port already in use" errors:

```bash
# Check what's using the port (Linux/macOS)
sudo lsof -i :8080
sudo lsof -i :5432

# Find and stop the conflicting process
kill -9 <PID>
```

Or change the port mapping in `docker-compose.yml`:

```yaml
ports:
  - "8081:8080" # Use 8081 instead of 8080
```

### Docker Network Issues
If containers cannot reach each other:

```bash
# Restart Docker networking
docker compose down
docker network prune -f
docker compose up -d

# Check container connectivity
docker compose exec manager ping postgres
```

### Database Migration Errors
If the database schema is incorrect:

```bash
# Run migrations manually
docker compose exec manager make migrate

# Or recreate everything (WARNING: deletes all data)
docker compose down -v
docker compose up -d
```

### Container Logs
View logs for debugging:
```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f manager

# Last 100 lines
docker compose logs --tail=100 manager

# With timestamps
docker compose logs -f -t manager
```

### Health Check Failures
If health checks keep failing:
```bash
# Check service health
docker compose ps

# Inspect specific container
docker inspect <container-id>

# Check health check logs
docker compose logs manager | grep health

# Manually test the health endpoint
docker compose exec manager wget -O- http://localhost:8080/health
```

### Permission Issues
If you encounter permission errors with volumes:
```bash
# Fix volume permissions (Linux)
sudo chown -R $USER:$USER ./postgres_data
```

Or run with the correct user in `docker-compose.yml`:

```yaml
services:
  postgres:
    user: "${UID}:${GID}"
```

### Memory Issues
If containers are being killed due to OOM:
```bash
# Check Docker resource usage
docker stats
```

On macOS and Windows, increase the memory limit under **Docker Desktop > Settings > Resources > Memory**. Alternatively, set resource limits in `docker-compose.yml`:

```yaml
services:
  manager:
    deploy:
      resources:
        limits:
          memory: 2G
```

### Clean Restart
For a complete clean restart:
```bash
# Stop all containers
docker compose down

# Remove volumes (deletes all data!)
docker compose down -v

# Remove images
docker compose down --rmi all

# Clean up Docker system
docker system prune -af

# Start fresh
docker compose up -d
```

## Environment Variables
Create a `.env` file in the same directory as `docker-compose.yml`:

```bash
# Database
DB_NAME=manager
DB_USER=manager
DB_PASSWORD=your-secure-password-here

# Manager
VERSION=latest
ENV=production

# Pocket ID
POCKET_ID_HOST=http://pocketid:1411
POCKET_ID_API_KEY=your-api-key-here

# Email (optional)
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=noreply@example.com
SMTP_PASSWORD=your-smtp-password

# TLS
ACME_EMAIL=admin@example.com

# Logging
LOG_LEVEL=info

# Optimizer
OPTIMIZER_CACHE_SIZE=10
OPTIMIZER_FORECAST_HORIZON=60
```

::: warning
Never commit the `.env` file to version control. Add it to `.gitignore`:

```bash
echo ".env" >> .gitignore
```
:::

## Next Steps
- Nomad Deployment - Production-ready orchestration
- Manager Configuration - Detailed manager setup
- Proxy Configuration - Advanced proxy settings
- Autoscaler Setup - Intelligent scaling configuration
## Grafana

```yaml
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards:ro
      - ./grafana/datasources:/etc/grafana/provisioning/datasources:ro
    depends_on:
      - prometheus
    networks:
      - backend

volumes:
  grafana_data:
```

## Backup & Recovery
### Database Backup
```yaml
services:
  backup:
    image: postgres:16
    volumes:
      - ./backups:/backups
      - postgres_data:/data:ro
    environment:
      PGHOST: postgres
      PGUSER: ${DB_USER}
      PGPASSWORD: ${DB_PASSWORD}
      PGDATABASE: ${DB_NAME}
    command: >
      sh -c "pg_dump -Fc -f /backups/backup_$$(date +%Y%m%d_%H%M%S).dump"
    networks:
      - backend
```

Note the doubled `$$`: Compose would otherwise try to interpolate `$(date ...)` itself instead of passing it to the shell.

Cron job:

```bash
0 2 * * * docker compose run --rm backup
```
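Nightly dumps accumulate, so pair the cron job with a retention sweep. A sketch; the 14-day retention is an arbitrary choice:

```bash
#!/bin/sh
# prune_backups DIR DAYS: delete backup dumps in DIR older than DAYS days.
prune_backups() {
  find "$1" -name 'backup_*.dump' -mtime "+$2" -print -delete
}

# Usage: prune_backups ./backups 14
```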
### Restore

```bash
docker compose exec postgres pg_restore \
  -U manager \
  -d manager \
  -c \
  /backups/backup_20250101_020000.dump
```

## Deployment Commands
### Initial Deployment
```bash
# Create environment file
cp .env.example .env
nano .env

# Generate secrets
openssl rand -hex 32 > secrets/db_password.txt
openssl rand -hex 32 > secrets/auth_secret.txt

# Pull images
docker compose pull

# Start services
docker compose up -d

# Check status
docker compose ps
docker compose logs -f
```

### Updates
```bash
# Pull new images
docker compose pull

# Recreate containers
docker compose up -d

# View logs
docker compose logs -f manager-api
```

### Scaling
```bash
# Scale API instances
docker compose up -d --scale manager-api=5

# Do NOT scale the executor (it must remain a single instance)
```

### Health Checks
```bash
# Check all services
docker compose ps

# Health check endpoints
curl http://localhost:8080/health # Manager
curl http://localhost:8000/health # Optimizer
curl http://localhost:2019/health # Proxy

# Database connection
docker compose exec postgres pg_isready -U manager
```
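For scripting, the endpoints above can be centralized in one helper so monitoring scripts don't hard-code ports. A sketch:

```bash
#!/bin/sh
# health_url SERVICE: print the local health endpoint for a stack service.
health_url() {
  case "$1" in
    manager)   echo "http://localhost:8080/health" ;;
    optimizer) echo "http://localhost:8000/health" ;;
    proxy)     echo "http://localhost:2019/health" ;;
    *)         echo "unknown service: $1" >&2; return 1 ;;
  esac
}

# Usage: curl -fsS "$(health_url manager)"
```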
## Production Troubleshooting

### Container Won't Start

```bash
# View logs
docker compose logs manager-api

# Check environment
docker compose config

# Inspect container (Compose v2 container naming)
docker inspect productify-manager-api-1
```

### Database Connection Issues
```bash
# Test connection from Manager
docker compose exec manager-api sh -c 'nc -zv postgres 5432'

# Check database logs
docker compose logs postgres

# Verify credentials
docker compose exec postgres psql -U manager -c '\l'
```

### Network Issues
```bash
# List networks
docker network ls

# Inspect network
docker network inspect productify_backend

# Test connectivity
docker compose exec manager-api ping postgres
```

### Performance Issues
```bash
# View resource usage
docker stats

# Check container limits
docker inspect --format='{{.HostConfig.Memory}}' productify-manager-api-1
```

## Production Checklist
- [ ] Use specific image versions (not `latest`)
- [ ] Set strong passwords in `.env`
- [ ] Configure backup strategy
- [ ] Enable TLS/SSL on Proxy
- [ ] Set resource limits
- [ ] Configure log rotation
- [ ] Set up monitoring (Prometheus + Grafana)
- [ ] Test disaster recovery
- [ ] Document deployment process
- [ ] Set up health checks
- [ ] Configure firewall rules
- [ ] Enable auto-restart policies
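The first checklist item can be enforced mechanically. A sketch that fails when a compose file still references a `latest` tag; the regex is deliberately rough, so tighten it for your files:

```bash
#!/bin/sh
# check_latest FILE: succeed only when no image line mentions "latest".
check_latest() {
  ! grep -En 'image:.*latest' "$1"
}

# Usage: check_latest docker-compose.yml && echo "no latest tags"
```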