Chapter 10

Advanced Deployment

10.1 Production Docker Compose

The following Docker Compose configuration runs MeshOptixIQ, Neo4j, and an Nginx reverse proxy with automatic restarts and persistent volumes.

version: '3.8'

services:
  neo4j:
    image: neo4j:5-community
    restart: unless-stopped
    volumes:
      - neo4j-data:/data
      - neo4j-logs:/logs
    environment:
      NEO4J_AUTH: neo4j/${NEO4J_PASSWORD}
      NEO4J_PLUGINS: '["apoc"]'
      NEO4J_dbms_memory_heap_initial__size: 1G
      NEO4J_dbms_memory_heap_max__size: 4G
    healthcheck:
      # `neo4j status` does not reflect server health inside the official
      # image; probe the HTTP port instead.
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:7474 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  meshoptixiq:
    image: meshoptixiq/meshoptixiq:latest
    restart: unless-stopped
    depends_on:
      neo4j:
        condition: service_healthy
    ports:
      - "127.0.0.1:8000:8000"
    environment:
      GRAPH_BACKEND: neo4j
      NEO4J_URI: bolt://neo4j:7687
      NEO4J_USER: neo4j
      NEO4J_PASSWORD: ${NEO4J_PASSWORD}
      MESHOPTIXIQ_LICENSE_KEY: ${MESHOPTIXIQ_LICENSE_KEY}
      API_KEY: ${API_KEY}
      CORS_ORIGINS: https://meshoptixiq.yourdomain.com
    volumes:
      - meshoptixiq-cache:/app/cache
      - ./inventory.yaml:/app/configs/inventory.yaml:ro
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    depends_on:
      - meshoptixiq
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro

volumes:
  neo4j-data:
  neo4j-logs:
  meshoptixiq-cache:

Store secrets in a .env file (never committed to version control):

NEO4J_PASSWORD=strong-random-password-here
MESHOPTIXIQ_LICENSE_KEY=mq-prod-xxxxxxxxxx
API_KEY=strong-random-api-key-here
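Both secrets should be long random strings, never reused example values. One quick way to generate them, using Python's standard secrets module:

```python
import secrets

# 32 bytes of entropy yields a 43-character URL-safe token (letters,
# digits, "-" and "_" only), safe to paste into a .env file or a URI.
print("NEO4J_PASSWORD=" + secrets.token_urlsafe(32))
print("API_KEY=" + secrets.token_urlsafe(32))
```

Equivalent one-liners exist (e.g. openssl rand -base64 32); anything with at least 128 bits of entropy is fine.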

With the .env file in place, bring up the stack:

# Start the stack
docker compose --env-file .env up -d

# View logs
docker compose logs -f meshoptixiq

# Run collection inside the container
docker compose exec meshoptixiq meshq collect \
  --source /app/configs/inventory.yaml
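Before the first docker compose up, a small preflight check (a hypothetical helper script, not part of MeshOptixIQ) can confirm the .env file defines every variable the compose file references:

```python
import os

# Variables referenced by the compose file above.
REQUIRED = ["NEO4J_PASSWORD", "MESHOPTIXIQ_LICENSE_KEY", "API_KEY"]

def missing_vars(env_path: str = ".env") -> list:
    """Return the required variables that the .env file does not define."""
    if not os.path.exists(env_path):
        return list(REQUIRED)
    defined = set()
    with open(env_path) as fh:
        for raw in fh:
            line = raw.strip()
            # Skip blank lines and comments; KEY=VALUE lines define a key.
            if line and not line.startswith("#") and "=" in line:
                defined.add(line.split("=", 1)[0].strip())
    return [name for name in REQUIRED if name not in defined]
```

Run it before deploying and abort if the returned list is non-empty; docker compose otherwise silently substitutes empty strings for missing variables.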

10.2 PostgreSQL Backend

Switch to PostgreSQL by setting GRAPH_BACKEND=postgres and providing a POSTGRES_DSN. The PostgreSQL schema is created automatically on first ingest.

services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_DB: meshoptixiq
      POSTGRES_USER: meshoptixiq
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data

  meshoptixiq:
    image: meshoptixiq/meshoptixiq:latest
    environment:
      GRAPH_BACKEND: postgres
      POSTGRES_DSN: postgresql://meshoptixiq:${POSTGRES_PASSWORD}@postgres:5432/meshoptixiq
      POSTGRES_POOL_MIN: "2"
      POSTGRES_POOL_MAX: "10"
      MESHOPTIXIQ_LICENSE_KEY: ${MESHOPTIXIQ_LICENSE_KEY}
      API_KEY: ${API_KEY}
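One common pitfall: if POSTGRES_PASSWORD contains reserved URI characters (@, :, /), the DSN above will fail to parse. Percent-encode the password before embedding it; a minimal sketch using the standard library (build_dsn is a hypothetical helper, not a MeshOptixIQ API):

```python
from urllib.parse import quote

def build_dsn(user: str, password: str, host: str, port: int, db: str) -> str:
    """Build a PostgreSQL DSN, percent-encoding the password so reserved
    URI characters (@, :, /, ...) don't break parsing."""
    return f"postgresql://{user}:{quote(password, safe='')}@{host}:{port}/{db}"

print(build_dsn("meshoptixiq", "p@ss:w/rd", "postgres", 5432, "meshoptixiq"))
# → postgresql://meshoptixiq:p%40ss%3Aw%2Frd@postgres:5432/meshoptixiq
```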

10.3 Reverse Proxy & TLS

Always front the MeshOptixIQ API with a reverse proxy that terminates TLS in production. A minimal Nginx config:

server {
    listen 443 ssl;
    server_name meshoptixiq.yourdomain.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    # SSE-safe proxy settings
    location /events {
        proxy_pass         http://127.0.0.1:8000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_buffering    off;
        proxy_cache        off;
        proxy_read_timeout 3600s;
        chunked_transfer_encoding on;
    }

    location / {
        proxy_pass         http://127.0.0.1:8000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout 60s;
    }
}

# Redirect HTTP → HTTPS
server {
    listen 80;
    server_name meshoptixiq.yourdomain.com;
    return 301 https://$host$request_uri;
}

Let's Encrypt

Use Certbot with the Nginx plugin to obtain and auto-renew free TLS certificates:

certbot --nginx -d meshoptixiq.yourdomain.com

10.3b Microservices Deployment

For high-throughput or large-scale environments, MeshOptixIQ can be decomposed into five independently scalable services behind an nginx path router. This is the recommended architecture for Enterprise clusters.

Service   Port   Entry point                         Handles
api       8000   network_discovery.api.main:app      Admin, auth, users, alerts, webhooks, SSE, SOAR, k8s, health
ingest    8001   network_discovery.ingest_app:app    eBPF ingest, tracing spans, syslog, NetFlow/sFlow; Redis Stream workers
collect   8002   network_discovery.collect_app:app   Device collection dispatch, collectors, schedule executor
ai        8003   network_discovery.ai_app:app        Chat, NL query, AI suggestions, RCA
graph     8004   network_discovery.graph_app:app     Named queries, history, what-if, explore

The main API routes traffic to the appropriate service via environment variables:

MESHQ_COLLECT_SERVICE_URL=http://collect:8002
MESHQ_INGEST_SERVICE_URL=http://ingest:8001
MESHQ_GRAPH_SERVICE_URL=http://graph:8004

When these variables are set, the main API becomes a lightweight router: it retains the admin, auth, alerts, and SSE endpoints and proxies everything else. Each service can then be scaled independently with its own replica count.

The docker-compose.cluster.yml in the repository and the Helm chart (helm/meshoptixiq/) both implement this topology. nginx routes by URL prefix:

  • /ebpf, /tracing, /syslog, /flows → ingest (port 8001)
  • /collect, /collectors → collect (port 8002)
  • /chat, /ai → ai (port 8003)
  • /queries, /history, /explore, /graph → graph (port 8004)
  • Everything else → api (port 8000)
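The prefix rules above amount to an ordered lookup with a default. A sketch of that dispatch logic (illustrative only; the real routing lives in the nginx config):

```python
# URL-prefix → service mapping, mirroring the nginx rules above.
ROUTES = [
    (("/ebpf", "/tracing", "/syslog", "/flows"), "ingest"),    # port 8001
    (("/collect", "/collectors"), "collect"),                  # port 8002
    (("/chat", "/ai"), "ai"),                                  # port 8003
    (("/queries", "/history", "/explore", "/graph"), "graph"), # port 8004
]

def route(path: str) -> str:
    """Return the service that should handle the given URL path."""
    for prefixes, service in ROUTES:
        if path.startswith(prefixes):  # str.startswith accepts a tuple
            return service
    return "api"  # everything else → main API (port 8000)

print(route("/chat/completions"))  # → ai
print(route("/users"))             # → api
```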

Per-service API keys allow scoped authentication: MESHQ_COLLECT_SERVICE_KEY, MESHQ_INGEST_SERVICE_KEY, MESHQ_GRAPH_SERVICE_KEY. Each falls back to the root API_KEY when not set.
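The fallback can be expressed as a sketch (service_key is a hypothetical helper illustrating the documented behavior, not the actual implementation):

```python
import os
from typing import Optional

def service_key(service: str) -> Optional[str]:
    """Resolve the API key for a service ("COLLECT", "INGEST", or "GRAPH"),
    falling back to the root API_KEY when no scoped key is set."""
    return os.environ.get(f"MESHQ_{service}_SERVICE_KEY") or os.environ.get("API_KEY")
```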

10.4 Kubernetes / Helm Deployment

The MeshOptixIQ Helm chart provides a production-grade Kubernetes deployment with all required resources pre-configured. The chart is located at helm/meshoptixiq/ in the repository.

[Architecture diagram: an nginx load balancer with ip_hash sticky sessions fronts API Pods 1-3, scaled by an HPA (min 2, max 10). The API pods share Redis (snapshots, pub/sub, queue) and Neo4j (bolt://neo4j:7687); a Collector Workers Deployment (2 replicas) consumes tasks enqueued by a Dispatcher CronJob.]

Quick Install

# Install with required values
helm install meshoptixiq ./helm/meshoptixiq \
  --set api.key=YOUR_API_KEY_HERE \
  --set neo4j.password=YOUR_NEO4J_PASSWORD \
  --set neo4j.uri=bolt://neo4j:7687 \
  --set license.key=mq-prod-xxxxxxxxxx

# Upgrade an existing release
helm upgrade meshoptixiq ./helm/meshoptixiq \
  --set api.key=NEW_API_KEY \
  --reuse-values

Key Helm Values

Value                Required?  Description
api.key              Required   API authentication key
license.key          Required   MeshOptixIQ license key
neo4j.uri            Required   Neo4j Bolt URI
neo4j.password       Required   Neo4j password
redis.url            Optional   Redis URL; enables clustered mode
collector.enabled    Optional   Deploy collector worker Deployment (default: true)
collector.replicas   Optional   Number of collector worker pods (default: 2)
ingress.enabled      Optional   Create Ingress resource (default: false)
hpa.enabled          Optional   Enable HorizontalPodAutoscaler (default: true)
rbac.policyFile      Optional   Path to RBAC policy YAML mounted into the pod

Example values.yaml

# values.yaml — production Kubernetes deployment
replicaCount: 3

api:
  key: ""            # set via --set or sealed secret

license:
  key: ""            # set via --set or sealed secret

neo4j:
  uri: bolt://neo4j.default.svc.cluster.local:7687
  user: neo4j
  password: ""       # set via --set or sealed secret

redis:
  url: redis://redis.default.svc.cluster.local:6379

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
  hosts:
    - host: meshoptixiq.yourdomain.com
      paths: ["/"]

hpa:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

collector:
  enabled: true
  replicas: 2
  dispatcher:
    schedule: "0 */4 * * *"   # every 4 hours
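For context on the hpa block: Kubernetes computes desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to the configured bounds. A sketch of that arithmetic:

```python
import math

def desired_replicas(current: int, current_cpu: float, target_cpu: float,
                     min_r: int = 2, max_r: int = 10) -> int:
    """Standard HPA scaling formula: scale in proportion to the ratio of
    observed to target CPU utilization, clamped to [min_r, max_r]."""
    desired = math.ceil(current * current_cpu / target_cpu)
    return max(min_r, min(max_r, desired))

print(desired_replicas(3, 140, 70))  # load at 2x target → 6 replicas
print(desired_replicas(8, 200, 70))  # 23 uncapped, clamped to max → 10
```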

Resources Deployed

Running helm template with the chart renders the following Kubernetes resources:

  • ConfigMap — application environment configuration
  • Secret — API key, license key, database passwords
  • ServiceAccount — pod identity
  • Deployment (API) — the main FastAPI process, multi-replica
  • Service — ClusterIP service on port 8000
  • Ingress — optional; pre-configured with SSE affinity annotations (ip_hash / sticky sessions)
  • HPA — optional; CPU-based autoscaler
  • Deployment (collector) — worker processes that pop from the Redis collection queue
  • CronJob (dispatcher) — enqueues device collection tasks on a schedule

SSE Sticky Sessions

Server-Sent Events require that a browser's connection stays on the same pod across reconnects. The Helm chart's Ingress annotations pre-configure nginx.ingress.kubernetes.io/affinity: cookie for this purpose. If you use a different ingress controller, enable session affinity manually.