Chapter 24: Deployment
Docker Deployment on Synology NAS
AdPriority runs on the same Synology NAS (192.168.1.26) that hosts the other Docker-based applications. It joins the shared postgres_default network to reach the PostgreSQL 16 container, and it uses a Cloudflare Tunnel to provide the HTTPS access that Shopify requires.
DEPLOYMENT ARCHITECTURE
=======================
Internet                                    Synology NAS (192.168.1.26)
--------                                    ---------------------------
Shopify Admin                               +---------------------------+
(embedded app)                              |  Docker Engine            |
      |                                     |                           |
      | HTTPS                               |  +---------------------+  |
      v                                     |  |     cloudflared     |  |
Cloudflare Tunnel ------------------------->|  | (tunnel container)  |  |
      |                                     |  +----------+----------+  |
      |                                     |             |             |
      +--- /api/*, /auth/*, /webhooks/ ---->|  +----------v----------+  |
      |                                     |  | adpriority-backend  |  |
      |                                     |  |   (Express + TS)    |  |
      |                                     |  |     Port: 3010      |  |
      |                                     |  +----------+----------+  |
      |                                     |             |             |
      +--- /* (UI assets) ----------------->|  +----------v----------+  |
                                            |  |  adpriority-admin   |  |
                                            |  |   (React + Vite)    |  |
                                            |  |     Port: 3011      |  |
                                            |  +---------------------+  |
                                            |                           |
                                            |  +---------------------+  |
                                            |  |  adpriority-worker  |  |
                                            |  |    (Bull queue)     |  |
                                            |  +----------+----------+  |
                                            |             |             |
                                            |  +----------v----------+  |
                                            |  |        redis        |  |
                                            |  |     Port: 6379      |  |
                                            |  +---------------------+  |
                                            |             |             |
                                            |  +----------v----------+  |
                                            |  |     postgres16      |  |
                                            |  |      (shared)       |  |
                                            |  |  DB: adpriority_db  |  |
                                            |  |     Port: 5432      |  |
                                            |  +---------------------+  |
                                            +---------------------------+
Docker Compose Structure
Development Configuration
# /volume1/docker/adpriority/docker-compose.yml
version: "3.8"

services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: adpriority-backend
    ports:
      - "3010:3010"
    volumes:
      - ./backend/src:/app/src
      - ./backend/prisma:/app/prisma
    environment:
      - NODE_ENV=development
      - PORT=3010
      - DATABASE_URL=postgresql://adpriority_user:${DB_PASSWORD}@postgres16:5432/adpriority_db
      - REDIS_URL=redis://redis:6379
      - SHOPIFY_CLIENT_ID=${SHOPIFY_CLIENT_ID}
      - SHOPIFY_CLIENT_SECRET=${SHOPIFY_CLIENT_SECRET}
      - SHOPIFY_SCOPES=read_products,write_products,read_inventory
      - HOST=https://${APP_DOMAIN}
      - GOOGLE_SHEETS_CREDENTIALS=${GOOGLE_SHEETS_CREDENTIALS}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
    depends_on:
      - redis
    networks:
      - postgres_default
      - adpriority
    restart: unless-stopped

  admin-ui:
    build:
      context: ./admin-ui
      dockerfile: Dockerfile
    container_name: adpriority-admin
    ports:
      - "3011:3011"
    volumes:
      - ./admin-ui/src:/app/src
    environment:
      - VITE_SHOPIFY_CLIENT_ID=${SHOPIFY_CLIENT_ID}
      - VITE_API_URL=https://${APP_DOMAIN}
    networks:
      - adpriority
    restart: unless-stopped

  worker:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: adpriority-worker
    command: ["npm", "run", "worker"]
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://adpriority_user:${DB_PASSWORD}@postgres16:5432/adpriority_db
      - REDIS_URL=redis://redis:6379
      - GOOGLE_SHEETS_CREDENTIALS=${GOOGLE_SHEETS_CREDENTIALS}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
    depends_on:
      - redis
    networks:
      - postgres_default
      - adpriority
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    container_name: adpriority-redis
    ports:
      - "6380:6379"
    volumes:
      - redis-data:/data
    networks:
      - adpriority
    restart: unless-stopped

volumes:
  redis-data:

networks:
  postgres_default:
    external: true
  adpriority:
    driver: bridge
Production Configuration
The production configuration removes volume mounts (no hot-reload), uses versioned images, and adds health checks.
# /volume1/docker/adpriority/docker-compose.prod.yml
version: "3.8"

services:
  backend:
    image: adpriority-backend:${VERSION:-latest}
    container_name: adpriority-backend
    ports:
      - "3010:3010"
    environment:
      - NODE_ENV=production
      - PORT=3010
      - DATABASE_URL=postgresql://adpriority_user:${DB_PASSWORD}@postgres16:5432/adpriority_db
      - REDIS_URL=redis://redis:6379
      - SHOPIFY_CLIENT_ID=${SHOPIFY_CLIENT_ID}
      - SHOPIFY_CLIENT_SECRET=${SHOPIFY_CLIENT_SECRET}
      - SHOPIFY_SCOPES=read_products,write_products,read_inventory
      - HOST=https://${APP_DOMAIN}
      - GOOGLE_SHEETS_CREDENTIALS=${GOOGLE_SHEETS_CREDENTIALS}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
    depends_on:
      redis:
        condition: service_healthy
    networks:
      - postgres_default
      - adpriority
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3010/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s

  admin-ui:
    image: adpriority-admin:${VERSION:-latest}
    container_name: adpriority-admin
    ports:
      - "3011:3011"
    environment:
      - VITE_SHOPIFY_CLIENT_ID=${SHOPIFY_CLIENT_ID}
      - VITE_API_URL=https://${APP_DOMAIN}
    networks:
      - adpriority
    restart: always

  worker:
    image: adpriority-backend:${VERSION:-latest}
    container_name: adpriority-worker
    command: ["npm", "run", "worker"]
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://adpriority_user:${DB_PASSWORD}@postgres16:5432/adpriority_db
      - REDIS_URL=redis://redis:6379
      - GOOGLE_SHEETS_CREDENTIALS=${GOOGLE_SHEETS_CREDENTIALS}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
    depends_on:
      redis:
        condition: service_healthy
    networks:
      - postgres_default
      - adpriority
    restart: always

  redis:
    image: redis:7-alpine
    container_name: adpriority-redis
    volumes:
      - redis-data:/data
    command: ["redis-server", "--appendonly", "yes"]
    networks:
      - adpriority
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  redis-data:

networks:
  postgres_default:
    external: true
  adpriority:
    driver: bridge
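The backend healthcheck above probes GET /health from inside the container (note that this requires curl to be present in the backend image). The endpoint itself is not reproduced in this chapter; a minimal sketch, assuming an Express router plus a Prisma client and an ioredis connection (the client names are illustrative), could look like this:

// health.ts -- illustrative /health route backing the Docker healthcheck
import { Router } from "express";
import { PrismaClient } from "@prisma/client";
import Redis from "ioredis";

const prisma = new PrismaClient();
const redis = new Redis(process.env.REDIS_URL ?? "redis://redis:6379");

export const healthRouter = Router();

healthRouter.get("/health", async (_req, res) => {
  try {
    // Report unhealthy if either the database or Redis is unreachable
    await prisma.$queryRaw`SELECT 1`;
    await redis.ping();
    res.status(200).json({ status: "ok" });
  } catch {
    res.status(503).json({ status: "unhealthy" });
  }
});

Returning 503 on failure lets Docker mark the container unhealthy after three failed probes, matching the retries setting above.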
Container Roles
| Container | Image | Role | Network |
|---|---|---|---|
| adpriority-backend | adpriority-backend | Express API server, handles HTTP requests, OAuth, webhooks | postgres_default + adpriority |
| adpriority-admin | adpriority-admin | Serves React/Polaris frontend (Nginx in production) | adpriority |
| adpriority-worker | adpriority-backend (same image, different entrypoint) | Processes Bull queue jobs: syncs, seasonal transitions, bulk operations | postgres_default + adpriority |
| adpriority-redis | redis:7-alpine | Job queue backend, session cache | adpriority |
| postgres16 | Shared (external) | PostgreSQL 16, database: adpriority_db | postgres_default |
The worker container uses the same Docker image as the backend but runs a different entrypoint (npm run worker) that starts the Bull queue processor instead of the Express server. This ensures the scoring engine and sync logic are identical between API-triggered and scheduled operations.
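The worker entrypoint is not shown here; a minimal sketch of what npm run worker might start, assuming Bull with an illustrative queue name of product-sync and an illustrative job name of full-sync, is:

// worker.ts -- illustrative Bull processor entrypoint (queue and job names are assumptions)
import Queue from "bull";

const redisUrl = process.env.REDIS_URL ?? "redis://redis:6379";

// The API container must enqueue jobs on the same queue name for the worker to pick them up
const syncQueue = new Queue("product-sync", redisUrl);

syncQueue.process("full-sync", async (job) => {
  // In the real app this calls the same scoring and sync modules used by the API
  console.log(`Running full sync for shop ${job.data.shopDomain}`);
});

syncQueue.on("failed", (job, err) => {
  console.error(`Job ${job.id} failed: ${err.message}`);
});

console.log("AdPriority worker started, waiting for jobs");

Because both containers are built from the same image, a change to the scoring code ships to the API and the worker in the same deploy.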
Cloudflare Tunnel Configuration
Shopify requires HTTPS callback URLs for OAuth redirects and webhook delivery. The Cloudflare Tunnel provides this without exposing ports on the NAS firewall.
Tunnel routing rules:
| Hostname | Service | Purpose |
|---|---|---|
| adpriority.nexusclothing.synology.me | http://localhost:3010 | API, auth, webhooks |
| adpriority.nexusclothing.synology.me (path: /*) | http://localhost:3011 | Admin UI assets |
In production mode, the backend both serves the API routes and proxies UI requests. In development, the Vite dev server runs separately on port 3011.
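The proxy mechanism is not spelled out in this chapter. One plausible sketch, assuming http-proxy-middleware and assuming the admin container is reachable as admin-ui:3011 on the adpriority network, registers the API routes first and lets everything else fall through to a catch-all proxy:

// Illustrative only: forward non-API traffic to the admin container in production
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// API, OAuth and webhook routers are mounted first so they are never proxied
// (apiRouter, authRouter and webhookRouter stand in for the real routers)
// app.use("/api", apiRouter);
// app.use("/auth", authRouter);
// app.use("/webhooks", webhookRouter);

if (process.env.NODE_ENV === "production") {
  // Anything that is not an API route is treated as an admin UI asset request
  app.use(
    createProxyMiddleware({
      target: "http://admin-ui:3011",
      changeOrigin: true,
    })
  );
}

app.listen(Number(process.env.PORT ?? 3010));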
Tunnel config (added to existing cloudflared configuration):
# Addition to existing tunnel config
ingress:
  - hostname: adpriority.nexusclothing.synology.me
    service: http://localhost:3010
  # ... existing rules ...
  - service: http_status:404
Environment Variables
Required Variables
| Variable | Description | Example |
|---|---|---|
| SHOPIFY_CLIENT_ID | App client ID from Partner Dashboard | abc123def456 |
| SHOPIFY_CLIENT_SECRET | App client secret (never log this) | shpss_xxxxxxxx |
| SHOPIFY_SCOPES | OAuth scopes | read_products,write_products,read_inventory |
| DATABASE_URL | PostgreSQL connection string | postgresql://adpriority_user:pass@postgres16:5432/adpriority_db |
| DB_PASSWORD | Database password (referenced in DATABASE_URL) | AdPrioritySecure2026 |
| REDIS_URL | Redis connection string | redis://redis:6379 |
| HOST | Public HTTPS URL (for OAuth redirects) | https://adpriority.nexusclothing.synology.me |
| APP_DOMAIN | Domain without protocol | adpriority.nexusclothing.synology.me |
| ENCRYPTION_KEY | AES-256 key for token encryption | 32-byte hex string |
| GOOGLE_SHEETS_CREDENTIALS | Service account JSON (base64 encoded) | Base64 string |
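A misconfigured ENCRYPTION_KEY or GOOGLE_SHEETS_CREDENTIALS tends to fail late and confusingly, so it is worth validating both at container startup. A minimal sketch (the file and export names are illustrative):

// env.ts -- illustrative startup validation of the required variables
const required = [
  "SHOPIFY_CLIENT_ID",
  "SHOPIFY_CLIENT_SECRET",
  "DATABASE_URL",
  "REDIS_URL",
  "HOST",
  "ENCRYPTION_KEY",
  "GOOGLE_SHEETS_CREDENTIALS",
];

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

// AES-256 needs a 32-byte key, i.e. 64 hex characters
if (!/^[0-9a-fA-F]{64}$/.test(process.env.ENCRYPTION_KEY!)) {
  throw new Error("ENCRYPTION_KEY must be a 32-byte hex string (64 hex characters)");
}

// The service-account JSON is stored base64 encoded; decode it once at startup
export const googleSheetsCredentials = JSON.parse(
  Buffer.from(process.env.GOOGLE_SHEETS_CREDENTIALS!, "base64").toString("utf8")
);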
Optional Variables
| Variable | Description | Default |
|---|---|---|
| PORT | Backend server port | 3010 |
| NODE_ENV | Runtime environment | development |
| LOG_LEVEL | Logging verbosity | info |
| SYNC_FREQUENCY_MINUTES | Default sync interval | 360 (6 hours) |
| NEW_ARRIVAL_DAYS | Days to consider a product "new" | 14 |
| SENTRY_DSN | Error tracking (optional) | Not set |
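The optional variables can simply fall back to the defaults in the table; a short sketch (the helper and settings names are illustrative):

// Illustrative defaults for the optional variables listed above
function intFromEnv(name: string, fallback: number): number {
  const parsed = Number.parseInt(process.env[name] ?? "", 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

export const settings = {
  port: intFromEnv("PORT", 3010),
  logLevel: process.env.LOG_LEVEL ?? "info",
  syncFrequencyMinutes: intFromEnv("SYNC_FREQUENCY_MINUTES", 360),
  newArrivalDays: intFromEnv("NEW_ARRIVAL_DAYS", 14),
};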
Environment File Template
# /volume1/docker/adpriority/.env.example
# Shopify App
SHOPIFY_CLIENT_ID=
SHOPIFY_CLIENT_SECRET=
SHOPIFY_SCOPES=read_products,write_products,read_inventory
# Database
DB_PASSWORD=AdPrioritySecure2026
DATABASE_URL=postgresql://adpriority_user:${DB_PASSWORD}@postgres16:5432/adpriority_db
# Redis
REDIS_URL=redis://redis:6379
# Application
HOST=https://adpriority.nexusclothing.synology.me
APP_DOMAIN=adpriority.nexusclothing.synology.me
PORT=3010
NODE_ENV=development
# Security
ENCRYPTION_KEY=
# Google
GOOGLE_SHEETS_CREDENTIALS=
# Version (for production images)
VERSION=latest
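ENCRYPTION_KEY is the AES-256 key used to encrypt Shopify access tokens at rest. The application's actual scheme is not shown in this chapter; purely as an illustration of what the 32-byte hex key is for, assuming AES-256-GCM via Node's crypto module (the cipher mode and payload layout are assumptions):

// Illustrative AES-256-GCM helpers; the real token-encryption code may differ
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = Buffer.from(process.env.ENCRYPTION_KEY ?? "", "hex"); // 32 bytes

export function encryptToken(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store IV + auth tag + ciphertext together as a single base64 string
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

export function decryptToken(encoded: string): string {
  const data = Buffer.from(encoded, "base64");
  const iv = data.subarray(0, 12);
  const tag = data.subarray(12, 28);
  const ciphertext = data.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}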
CI/CD Pipeline
GitHub Actions Workflow
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]
    tags: ["v*"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set version
        run: |
          if [[ "${{ github.ref }}" == refs/tags/* ]]; then
            echo "VERSION=${{ github.ref_name }}" >> $GITHUB_ENV
          else
            echo "VERSION=latest" >> $GITHUB_ENV
          fi

      - name: Build backend image
        run: |
          docker build -t adpriority-backend:${{ env.VERSION }} ./backend

      - name: Build admin image
        run: |
          docker build -t adpriority-admin:${{ env.VERSION }} ./admin-ui

      - name: Save images
        run: |
          docker save adpriority-backend:${{ env.VERSION }} | gzip > backend.tar.gz
          docker save adpriority-admin:${{ env.VERSION }} | gzip > admin.tar.gz

      - name: Deploy to NAS
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.NAS_HOST }}
          username: ${{ secrets.NAS_USER }}
          key: ${{ secrets.NAS_SSH_KEY }}
          source: "backend.tar.gz,admin.tar.gz"
          target: "/volume1/docker/adpriority/deploy/"

      - name: Load and restart
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.NAS_HOST }}
          username: ${{ secrets.NAS_USER }}
          key: ${{ secrets.NAS_SSH_KEY }}
          script: |
            cd /volume1/docker/adpriority
            docker load < deploy/backend.tar.gz
            docker load < deploy/admin.tar.gz
            docker-compose -f docker-compose.prod.yml up -d
            rm -f deploy/backend.tar.gz deploy/admin.tar.gz
Deployment Process
DEPLOYMENT PIPELINE
===================
Developer pushes to main branch
        |
        v
GitHub Actions triggered
        |
        +-- Build backend Docker image
        +-- Build admin-ui Docker image
        +-- Run tests (unit + integration)
        |
        v
If tests pass:
        |
        +-- Save images as tar.gz
        +-- SCP to NAS /volume1/docker/adpriority/deploy/
        +-- SSH: docker load images
        +-- SSH: docker-compose up -d (rolling restart)
        |
        v
Health check passes
        |
        v
Deployment complete
Rollback
If a deployment introduces issues:
# Quick rollback to previous version
cd /volume1/docker/adpriority
VERSION=v1.0.1 docker-compose -f docker-compose.prod.yml up -d
Image tags follow semantic versioning. The latest tag always points to the most recent main branch build. Tagged releases (v1.0.0, v1.0.1) are immutable and can be used for pinned rollbacks.
Database Backup
AdPriority data is backed up alongside the other databases on the postgres16 container.
# Manual backup
docker exec postgres16 pg_dump -U adpriority_user adpriority_db \
  > /volume1/docker/backups/adpriority-$(date +%Y%m%d).sql

# Automated daily backup (add to NAS cron; a crontab entry must be a single line, and % is escaped for cron)
0 3 * * * docker exec postgres16 pg_dump -U adpriority_user adpriority_db > /volume1/docker/backups/adpriority-$(date +\%Y\%m\%d).sql
Backups are retained for 30 days with weekly archives kept for 6 months.
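The retention policy has to be enforced separately from the dumps themselves. A hedged sketch of the 30-day cleanup (the weekly archive rotation is not covered), written as a small Node/TypeScript script that could run from the same NAS cron, assuming Node is available on the NAS:

// prune-backups.ts -- illustrative 30-day retention cleanup
import { readdirSync, statSync, unlinkSync } from "node:fs";
import { join } from "node:path";

const BACKUP_DIR = "/volume1/docker/backups";
const MAX_AGE_DAYS = 30;
const cutoff = Date.now() - MAX_AGE_DAYS * 24 * 60 * 60 * 1000;

for (const file of readdirSync(BACKUP_DIR)) {
  // Only touch daily AdPriority dumps; weekly archives are managed separately
  if (!file.startsWith("adpriority-") || !file.endsWith(".sql")) continue;
  const path = join(BACKUP_DIR, file);
  if (statSync(path).mtimeMs < cutoff) {
    unlinkSync(path);
    console.log(`Removed expired backup: ${file}`);
  }
}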