Production Deployment

Deploy Placino to production with high availability, security hardening, and monitoring in place.

Deployment Options

Docker Compose (Small)

Single-node deployment. Up to 100 concurrent users. Recommended for testing and small pilots.

Kubernetes (Large Scale)

Multi-node HA setup with auto-scaling. 1000+ concurrent users. Recommended for enterprise.

Kubernetes Deployment

Deploy to a Kubernetes cluster (AWS EKS, Google GKE, Azure AKS, or self-managed):

1. Cluster Prerequisites

• Kubernetes 1.24+
• 3+ worker nodes (8 CPU, 32GB RAM each minimum)
• Persistent volume provisioner (EBS, GCE Persistent Disk, etc.)
• Ingress controller (nginx, traefik)
• Cert-manager for TLS certificates

2. Create Namespace

kubectl create namespace placino
kubectl config set-context --current --namespace=placino

3. Store Secrets

# Generate encryption key
ENCRYPTION_KEY=$(openssl rand -base64 32)
# Create secret
kubectl create secret generic placino-secrets \
  --from-literal=encryption_key=$ENCRYPTION_KEY \
  --from-literal=db_password=your_postgres_password

4. Deploy Helm Chart

# Add Placino Helm repo
helm repo add placino https://charts.placino.com
# Install
helm install placino placino/placino \
  --namespace placino \
  --values values-prod.yaml

Helm Chart Configuration

Key values in values-prod.yaml:

# Global settings
replicas: 3
image: placino/core:latest  # pin a specific version tag in production
# PostgreSQL HA
postgresql:
  replicas: 3
  storageClass: ebs-gp3
  size: 100Gi
# Redis Cluster
redis:
  cluster:
    enabled: true
    nodes: 6
# Monitoring
prometheus:
  enabled: true
  retention: 15d

High Availability Setup

API Layer

Deploy 3+ replicas of core-api behind a load balancer. The Kubernetes Service distributes requests across replicas. Health checks run every 5 seconds.
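The 5-second health checks can be expressed as probes on the core-api container. A sketch, assuming the image exposes /health and /ready endpoints on port 8080 (both paths and the port are assumptions, not confirmed by the chart):

```yaml
# Probe sketch for the core-api container (paths and port are assumptions)
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 5
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
```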

Database Replication

PostgreSQL with 3-node synchronous replication. At least 2 replicas must acknowledge writes before commit. Automatic failover via Patroni.
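The "2 replicas must acknowledge" rule maps to Patroni's synchronous mode. A minimal sketch of the relevant DCS settings, assuming Patroni 2.x (exact keys depend on how the Helm chart templates them):

```yaml
# Patroni sketch: synchronous replication requiring 2 replica acks
bootstrap:
  dcs:
    synchronous_mode: true
    synchronous_node_count: 2
    postgresql:
      parameters:
        synchronous_commit: "on"
```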

Redis Cluster

6-node Redis cluster (3 primaries, 3 replicas). Automatic shard rebalancing on topology changes. Persistence enabled in cluster mode.

TLS/SSL

All services communicate over TLS. Certificates managed by cert-manager. Auto-renewal 30 days before expiry.
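With cert-manager installed, the external certificate can be declared as a Certificate resource. A sketch, assuming a ClusterIssuer named letsencrypt-prod already exists (the issuer name and secret name are assumptions):

```yaml
# cert-manager Certificate sketch (issuer and secret names are assumptions)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: placino-tls
  namespace: placino
spec:
  secretName: placino-tls
  dnsNames:
    - placino.your-domain.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```

cert-manager's default renewal window (two-thirds of a 90-day certificate's lifetime) lines up with the 30-days-before-expiry renewal described above.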

Security Hardening

Network Policies

# Only allow ingress from Ingress Controller
kubectl apply -f network-policies/

Restrict pod-to-pod communication. Only API service exposed externally.
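A policy in that directory might look like the following sketch, which admits traffic to the API pods only from the ingress controller's namespace (the pod and namespace labels are assumptions and must match your cluster):

```yaml
# NetworkPolicy sketch: API ingress only from the ingress-nginx namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-from-ingress-only
  namespace: placino
spec:
  podSelector:
    matchLabels:
      app: core-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```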

Resource Limits

Set CPU and memory requests/limits on all pods. Prevent resource contention and noisy neighbor issues.

requests:
  cpu: 500m
  memory: 512Mi
limits:
  cpu: 2000m
  memory: 2Gi

RBAC

Implement least-privilege Kubernetes RBAC. Placino pods run with minimal permissions. Service accounts scoped per namespace.
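A least-privilege sketch of what such a Role and binding could look like; the resource list, service account name, and Role name are illustrative assumptions:

```yaml
# RBAC sketch: service account may only read ConfigMaps in its namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: placino-reader
  namespace: placino
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: placino-reader-binding
  namespace: placino
subjects:
  - kind: ServiceAccount
    name: placino-api   # assumed service account name
    namespace: placino
roleRef:
  kind: Role
  name: placino-reader
  apiGroup: rbac.authorization.k8s.io
```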

Pod Security

Enforce via Pod Security Admission (PodSecurityPolicy was removed in Kubernetes 1.25): no root containers, read-only root filesystem, no privileged mode, no host networking.
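A container-level securityContext sketch enforcing these constraints (the UID is an assumption; hostNetwork: false is set separately at the pod level):

```yaml
# Container securityContext sketch (UID is an assumption)
securityContext:
  runAsNonRoot: true
  runAsUser: 10001
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```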

Encryption Key Management

Store encryption keys in AWS KMS, Google Cloud KMS, or Azure Key Vault. Never store raw keys in ConfigMaps or unencrypted etcd. Rotate keys every 90 days.

Monitoring & Observability

Prometheus Metrics

All microservices expose Prometheus metrics on /metrics. Scrape interval: 30 seconds. Retention: 15 days.
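If the cluster runs the Prometheus Operator (an assumption; plain scrape configs work too), the 30-second scrape can be declared as a ServiceMonitor. The label selector and port name below are illustrative:

```yaml
# ServiceMonitor sketch (requires Prometheus Operator; labels are assumptions)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: placino-services
  namespace: placino
spec:
  selector:
    matchLabels:
      app.kubernetes.io/part-of: placino
  endpoints:
    - port: http
      path: /metrics
      interval: 30s
```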

Grafana Dashboards

Pre-built dashboards for: Query latency, Privacy budget consumption, Ingestion throughput, Database replication lag, Redis cluster health.

Alerting Rules

Critical alerts: API unavailable, DB replication lag >10s, encryption key rotation failed, quota exceeded. Sent to Slack/PagerDuty.
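The replication-lag alert could be sketched as a PrometheusRule like the one below; the metric name assumes a postgres_exporter-style exporter and may differ in your setup:

```yaml
# PrometheusRule sketch for the >10s replication-lag alert
# (metric name is an assumption based on postgres_exporter)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: placino-alerts
  namespace: placino
spec:
  groups:
    - name: placino.critical
      rules:
        - alert: PostgresReplicationLagHigh
          expr: pg_replication_lag_seconds > 10
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: "PostgreSQL replication lag above 10s"
```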

Logging

Structured JSON logs shipped to ELK/Datadog. Log levels: DEBUG, INFO (default), WARN, ERROR. Audit logs are stored separately in a tamper-evident Merkle chain.

Scaling Strategies

Horizontal Pod Autoscaling

kubectl autoscale deployment core-api \
  --min=3 --max=10 \
  --cpu-percent=70

Scales from 3 to 10 replicas based on average CPU utilization, targeting 70%.
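The same autoscaler can be declared as a manifest, which is easier to version-control than the imperative command above:

```yaml
# Declarative equivalent of the kubectl autoscale command
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: core-api
  namespace: placino
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: core-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```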

Database Scaling

PostgreSQL read replicas for query-heavy workloads. Separate schema node (smaller) from data nodes. Connection pooling via PgBouncer.
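A minimal PgBouncer configuration for this setup might look like the sketch below; the hostnames, pool sizes, and database name are assumptions to adjust for your deployment:

```ini
; pgbouncer.ini sketch (hostnames, pool sizes, and db name are assumptions)
[databases]
placino = host=pg-primary port=5432 dbname=placino

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
auth_type = scram-sha-256
```

Transaction pooling keeps the number of backend connections low even under many short-lived API requests.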

Cost Optimization

Use spot instances for stateless services (query processors). Reserved instances for databases. Enable cluster autoscaler on node pools.

Backup & Disaster Recovery

Database Backups

Automated daily backups of PostgreSQL via pg_dump. Stored in S3 with cross-region replication. Test restore monthly.
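The nightly pg_dump could be driven by a CronJob along these lines. The image, bucket name, and credentials handling are assumptions; in practice you would need an image containing both pg_dump and the AWS CLI, plus an IAM role or secret for S3 access:

```yaml
# CronJob sketch for the daily backup (image, bucket, and env are assumptions)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pg-backup
  namespace: placino
spec:
  schedule: "0 2 * * *"   # daily at 02:00 UTC
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: your-registry/pg-backup:1.0   # hypothetical image with pg_dump + aws cli
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump "$DATABASE_URL" | gzip |
                  aws s3 cp - "s3://placino-backups/$(date +%F).sql.gz"
```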

RTO & RPO

RTO (Recovery Time Objective): <1 hour via Kubernetes failover. RPO (Recovery Point Objective): <15 minutes via streaming replication.

Disaster Recovery Plan

Documented playbook for: Total cluster failure, Database corruption, Encryption key loss, Ransomware attack. Test quarterly.

Verification & Testing

Post-deployment validation:

# Check pod status
kubectl get pods -n placino
# Test API endpoint
curl https://placino.your-domain.com/health
# Verify database replication
kubectl exec -it pg-primary-0 -- psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
# Load test with 100 concurrent users
ab -n 10000 -c 100 https://placino.your-domain.com/api/v1/projects