Manager Deployment
Complete deployment guide for the Productify Manager component.
Deployment Overview
The Manager runs in one of three modes:
- API Mode - Handles HTTP/GraphQL requests (stateless, scalable)
- Executor Mode - Runs the trigger execution loop (stateful, single instance)
- Both Mode - Combined API + Executor (development only)
Run Modes
The Manager supports three run modes, configured via the `PFY_RUN_MODE` environment variable or `run_mode` in `config.yml`:
API Mode
```yaml
run_mode: api
```

Or with environment variable:

```bash
export PFY_RUN_MODE=api
./manager
```

Characteristics:
- Stateless
- Horizontally scalable
- Load balanced
- No trigger execution
Executor Mode
```yaml
run_mode: executor
cron:
  metrics_port: 9090 # Prometheus metrics endpoint
```

Or with environment variable:

```bash
export PFY_RUN_MODE=executor
export PFY_CRON_METRICS_PORT=9090
./manager
```

Characteristics:
- Stateful (cron loop)
- Single instance only (database-level locking)
- Not load balanced
- Handles trigger execution
- Exposes Prometheus metrics on port 9090
Prometheus Metrics:
When running in executor mode, metrics are exposed at :9090/metrics for autoscaler integration:
- `pfy_executor_queue_all_total{app="<app-id>"}` - Total triggers queued per app
- `pfy_executor_queue_processed_total{app="<app-id>"}` - Successfully processed triggers per app
- `pfy_executor_queue_waiting{app="<app-id>"}` - Current waiting triggers per app
- `pfy_executor_queue_process_time_seconds{app="<app-id>"}` - Processing duration histogram per app
- `pfy_executor_active_triggers` - Currently active/enabled triggers
- `pfy_executor_backend_dispatch_total{status="success|error"}` - Backend dispatch results
Database-Level Locking:
The Executor uses database-level locking to ensure only one instance processes triggers at a time across the cluster. This prevents duplicate trigger executions even if multiple executor instances are accidentally started.
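The guide does not specify the exact locking mechanism, but a common way to get this kind of cluster-wide mutual exclusion on PostgreSQL (the database used in the deployments below) is a session-level advisory lock. The lock key `42` here is purely illustrative; the Manager's actual scheme may differ:

```sql
-- First executor instance (lock is held for the lifetime of its DB session):
SELECT pg_try_advisory_lock(42);  -- returns true: this instance runs triggers

-- Any second instance trying the same key while the first holds it:
SELECT pg_try_advisory_lock(42);  -- returns false: stays idle

-- The lock is released automatically when the holder's session ends,
-- so a crashed executor does not leave the lock stuck.
```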
Both Mode (Development)
```yaml
run_mode: both
```

Or with environment variable:

```bash
export PFY_RUN_MODE=both
./manager
```

Characteristics:
- Combined API + Executor
- Not recommended for production
- Convenient for development
Docker Deployment
API Instance
```bash
docker run -d \
  --name manager-api \
  -p 8080:8080 \
  -e PFY_RUN_MODE=api \
  -e PFY_DB_HOST=postgres \
  -e PFY_DB_PASSWORD=secret \
  ghcr.io/productifyfw/manager:latest
```

### Executor Instance
```bash
docker run -d \
  --name manager-executor \
  -p 9090:9090 \
  -e PFY_RUN_MODE=executor \
  -e PFY_CRON_METRICS_PORT=9090 \
  -e PFY_DB_HOST=postgres \
  -e PFY_DB_PASSWORD=secret \
  ghcr.io/productifyfw/manager:latest
```
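When scripting deployments, it helps to wait until an instance actually responds before pointing traffic or monitoring at it. A minimal sketch (the helper name and retry defaults are my own; the `/health` endpoint is documented later in this guide):

```shell
#!/bin/sh
# wait_for_health URL [RETRIES] - poll a health endpoint once per second
# until it returns HTTP 2xx, or give up after RETRIES attempts.
wait_for_health() {
  url="$1"
  retries="${2:-30}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

For example, `wait_for_health http://localhost:8080/health 60` after `docker run` returns only once the API answers.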
**Accessing metrics:**
```bash
curl http://localhost:9090/metrics
```

Docker Compose
```yaml
version: "3.8"
services:
  manager-api:
    image: ghcr.io/productifyfw/manager:latest
    deploy:
      replicas: 3
    ports:
      - "8080-8082:8080"
    environment:
      PFY_RUN_MODE: api
      PFY_DB_HOST: postgres
      PFY_DB_PASSWORD: ${DB_PASSWORD}
      AUTH_CLIENT_SECRET: ${AUTH_SECRET}
    depends_on:
      - postgres
    restart: unless-stopped

  manager-executor:
    image: ghcr.io/productifyfw/manager:latest
    ports:
      - "9090:9090"
    environment:
      PFY_RUN_MODE: executor
      PFY_CRON_METRICS_PORT: 9090
      PFY_DB_HOST: postgres
      PFY_DB_PASSWORD: ${DB_PASSWORD}
    depends_on:
      - postgres
    restart: unless-stopped

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: manager
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  postgres_data:
```

Nomad Deployment
```hcl
job "manager" {
  datacenters = ["dc1"]

  # API instances (scalable)
  group "api" {
    count = 3

    network {
      port "http" {
        to = 8080
      }
    }

    service {
      name = "manager-api"
      port = "http"

      check {
        type     = "http"
        path     = "/health"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "ghcr.io/productifyfw/manager:latest"
        ports = ["http"]
      }

      env {
        PFY_RUN_MODE = "api"
        PFY_PORT     = "${NOMAD_PORT_http}"
      }

      template {
        data        = <<EOH
PFY_DB_HOST={{ key "manager/db/host" }}
PFY_DB_PASSWORD={{ key "manager/db/password" }}
AUTH_CLIENT_SECRET={{ key "manager/auth/secret" }}
EOH
        destination = "secrets/env"
        env         = true
      }

      resources {
        cpu    = 500
        memory = 512
      }
    }
  }

  # Executor instance (single)
  group "executor" {
    count = 1

    network {
      port "metrics" {
        static = 9090
      }
    }

    service {
      name = "manager-executor-metrics"
      port = "metrics"
      tags = [
        "prometheus",
        "metrics",
      ]

      check {
        type     = "http"
        path     = "/metrics"
        interval = "30s"
        timeout  = "5s"
      }
    }

    task "executor" {
      driver = "docker"

      config {
        image = "ghcr.io/productifyfw/manager:latest"
        ports = ["metrics"]
      }

      env {
        PFY_RUN_MODE          = "executor"
        PFY_CRON_METRICS_PORT = "${NOMAD_PORT_metrics}"
      }

      template {
        data        = <<EOH
PFY_DB_HOST={{ key "manager/db/host" }}
PFY_DB_PASSWORD={{ key "manager/db/password" }}
EOH
        destination = "secrets/env"
        env         = true
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }
  }
}
```

Future Support
Kubernetes deployment manifests will be added in a future release.
Configuration
See Manager Configuration for complete configuration reference.
Minimum required environment variables:
```bash
# Database
PFY_DB_HOST=postgres.example.com
PFY_DB_PORT=5432
PFY_DB_USER=manager
PFY_DB_PASSWORD=<secure-password>
PFY_DB_NAME=manager_prod
PFY_DB_SSLMODE=require

# PocketID Integration
PFY_POCKET_ID_HOST=https://auth.example.com
PFY_POCKET_ID_API_KEY=<your-api-key>

# Server Configuration
PFY_ENV=production
PFY_RUN_MODE=api # Options: api, executor, both
PFY_PORT=8080
```

Health Checks
The Manager provides a health endpoint:
```bash
curl http://localhost:8080/health
```

Response:

```json
{
  "status": "healthy",
  "database": "connected",
  "version": "1.0.0",
  "mode": "api"
}
```

Monitoring
Metrics
API Mode Metrics (Future):
- Request rate (per endpoint)
- Response latency (p50, p95, p99)
- Error rate
- Database connection pool usage
Executor Mode Metrics (Available Now):
Prometheus metrics exposed at :9090/metrics:
- Queue Metrics - Triggers queued, processed, and waiting (per app)
- Processing Time - Histogram of trigger execution duration (per app)
- Active Triggers - Count of enabled triggers
- Backend Dispatch - Success/error counts for backend callbacks
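These metrics lend themselves to alerting as well as autoscaling. A hypothetical Prometheus alerting rule file (rule names and thresholds are illustrative, not part of the product):

```yaml
groups:
  - name: manager-executor
    rules:
      - alert: ExecutorQueueBacklog
        expr: sum(pfy_executor_queue_waiting) > 100
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Trigger queue backlog ({{ $value }} triggers waiting)"
      - alert: ExecutorDispatchErrors
        expr: rate(pfy_executor_backend_dispatch_total{status="error"}[5m]) > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Executor backend dispatch is failing"
```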
Prometheus Scrape Configuration:
```yaml
scrape_configs:
  - job_name: "manager-executor"
    static_configs:
      - targets: ["manager-executor:9090"]
    scrape_interval: 15s
```

Example PromQL Queries:

```promql
# Triggers waiting per application
pfy_executor_queue_waiting{app="app-uuid"}

# Trigger processing rate (per second)
rate(pfy_executor_queue_processed_total[5m])

# 95th percentile processing time
histogram_quantile(0.95, rate(pfy_executor_queue_process_time_seconds_bucket[5m]))

# Backend dispatch error rate
rate(pfy_executor_backend_dispatch_total{status="error"}[5m])
```

Logs
Structured JSON logging:
```json
{
  "level": "info",
  "msg": "Request completed",
  "method": "POST",
  "path": "/query",
  "status": 200,
  "duration_ms": 45,
  "timestamp": "2025-12-01T10:00:00Z"
}
```

Scaling
API Instances
Scale horizontally based on:
- CPU utilization (target 70%)
- Request rate
- Response latency
Nomad scaling policy:
```hcl
scaling {
  min = 2
  max = 10

  policy {
    evaluation_interval = "30s"
    cooldown            = "60s"
  }
}
```

Executor Instance
DO NOT SCALE - the Executor must run as a single instance only.
Troubleshooting
API Not Responding
Check:
- Container is running
- Port is accessible
- Database connection is valid
- Health check passes
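For Docker-based deployments, the first checks can be scripted. A minimal sketch (the `check_api` helper is my own; the container name `manager-api` and the `/health` endpoint come from earlier sections):

```shell
#!/bin/sh
# check_api - run the basic API diagnostics from this section and
# report the first failure. Returns 0 only if all checks pass.
check_api() {
  docker ps --filter name=manager-api --format '{{.Names}}' | grep -q manager-api \
    || { echo "container not running"; return 1; }
  curl -fsS http://localhost:8080/health >/dev/null 2>&1 \
    || { echo "health check failed"; return 1; }
  echo "ok"
}
```

If the container is running but the health check fails, `docker logs manager-api` is the next place to look.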
Executor Not Running Triggers
Verify:
- Mode is set to `executor` or `both`
- Database connection is valid
- Triggers are enabled
- Cron schedules are valid
Database Connection Issues
Debug:
- Connection string is correct
- Database is accessible from Manager
- Credentials are valid
- SSL/TLS configured properly
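A quick way to test the first three items from the Manager host is to connect with the same `PFY_DB_*` settings the Manager uses. A small helper (the `pfy_db_url` name is my own; a password containing special characters such as `@`, `:`, or `/` would need URL-encoding first):

```shell
#!/bin/sh
# pfy_db_url - assemble a PostgreSQL connection URL from the PFY_DB_*
# variables defined in this guide.
pfy_db_url() {
  echo "postgres://${PFY_DB_USER}:${PFY_DB_PASSWORD}@${PFY_DB_HOST}:${PFY_DB_PORT}/${PFY_DB_NAME}?sslmode=${PFY_DB_SSLMODE}"
}
# Example check (requires the psql client):
#   psql "$(pfy_db_url)" -c 'SELECT 1'
```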