Complete Cloud-Native DevOps Implementation
Microservices • Docker • Kubernetes • Terraform • CI/CD • Monitoring • Security
# 1. Deploy everything with Terraform
cd terraform && terraform init && terraform apply -auto-approve
# 2. Configure /etc/hosts (sudo password once)
cd .. && chmod +x configure-hosts.sh && ./configure-hosts.sh
# 3. Access applications
curl http://vote.local # Vote: Cats vs Dogs
curl http://result.local # Real-time results
Simple, secure, and automated!
- Project Overview
- Project Structure
- How to Build & Deploy
- Architecture
- Quick Command Reference
- Problems Faced & Solutions
- Documentation Links
A distributed microservices voting application demonstrating modern cloud-native DevOps practices. Users vote between Cats and Dogs, with real-time results displayed across multiple services.
| Service | Technology | Purpose | Port |
|---|---|---|---|
| Vote | Python 3.11 + Flask | User voting interface | 8080 |
| Result | Node.js 18 + Socket.io | Real-time results display | 8081 |
| Worker | .NET 8.0 + C# | Process votes from queue | - |
| Redis | Redis 7 Alpine | Message queue | 6379 |
| PostgreSQL | Postgres 15 Alpine | Persistent storage | 5432 |
| Seed | Shell script | Generate 3000 test votes | - |
Development:
- Languages: Python, JavaScript, C#
- Frameworks: Flask, Express, Socket.io, Gunicorn
- Databases: Redis, PostgreSQL
DevOps:
- Containers: Docker, Docker Compose
- Orchestration: Kubernetes (Minikube), Terraform
- CI/CD: GitHub Actions
- Security: Snyk (SAST, SCA, Container, IaC)
- Monitoring: Prometheus, Grafana
tactful-votingapp-cloud-infra/
├── vote/                        # Python Flask voting service
│   ├── app.py                   # Main application logic
│   ├── requirements.txt         # Python dependencies
│   ├── Dockerfile               # Multi-stage build
│   ├── templates/index.html     # Voting UI
│   └── static/stylesheets/      # CSS styling
│
├── result/                      # Node.js result service
│   ├── server.js                # Express + Socket.io server
│   ├── package.json             # Node dependencies
│   ├── Dockerfile               # Multi-stage build
│   └── views/                   # Result dashboard UI
│
├── worker/                      # .NET worker service
│   ├── Program.cs               # Vote processing logic
│   ├── Worker.csproj            # .NET project file
│   └── Dockerfile               # Multi-stage build
│
├── seed-data/                   # Test data generator
│   ├── generate-votes.sh        # Shell script for votes
│   └── Dockerfile               # Alpine with curl
│
├── terraform/                   # Infrastructure as Code
│   ├── main.tf                  # Terraform configuration
│   ├── cluster.tf               # Minikube cluster setup
│   ├── seed.tf                  # Optional seed job
│   ├── variables.tf             # Input variables
│   └── outputs.tf               # Output values
│
├── k8s/                         # Kubernetes manifests
│   ├── manifests/               # Raw Kubernetes YAML
│   │   ├── 00-namespace.yaml            # Namespace with PSA
│   │   ├── 01-secrets.yaml              # DB credentials
│   │   ├── 02-configmap.yaml            # App configuration
│   │   ├── 03-postgres.yaml             # StatefulSet
│   │   ├── 04-redis.yaml                # Deployment
│   │   ├── 05-vote.yaml                 # 2 replicas
│   │   ├── 06-result.yaml               # 2 replicas
│   │   ├── 07-worker.yaml               # 1 replica
│   │   ├── 08-network-policies.yaml     # DB isolation
│   │   ├── 09-ingress.yaml              # vote.local, result.local
│   │   └── 10-seed.yaml                 # Job for test data
│   ├── helm/                    # Helm values
│   │   ├── postgresql-values-dev.yaml
│   │   ├── redis-values-dev.yaml
│   │   └── voting-app/          # Custom Helm chart
│   └── monitoring/              # Prometheus/Grafana configs
│
├── .github/workflows/           # CI/CD pipelines
│   ├── ci-cd.yml                # Main build pipeline
│   ├── security-scanning.yml    # Snyk scans
│   └── docker-compose-test.yml  # Integration tests
│
├── healthchecks/                # Health check scripts
│   ├── redis.sh                 # Redis health check
│   └── postgres.sh              # PostgreSQL health check
│
├── docker-compose.yml           # Local development setup
├── setup-sudoers.sh             # Passwordless sudo config
├── test-e2e.sh                  # End-to-end tests
└── snyk-full-scan.sh            # Security scanning
This section provides complete step-by-step build and deployment instructions for each phase.
Description: Build and run all services locally using Docker Compose for development and testing.
Requirements:
- Docker Engine 20.10+
- Docker Compose 2.0+
- 4GB+ RAM available
- Ports 8080, 8081, 5432, 6379 available
Step-by-Step Build & Deploy:
# Step 1: Navigate to project directory
cd /home/omar/Projects/tactful-votingapp-cloud-infra
# Step 2: Create .env file (optional, uses defaults if not provided)
cat > .env << EOF
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
REDIS_PASSWORD=
EOF
# Step 3: Build all Docker images
docker compose build
# This builds: vote, result, worker, seed-data
# Uses multi-stage builds for optimization
# Step 4: Start all services in detached mode
docker compose up -d
# Step 5: Verify all services are running
docker compose ps
# All services should show "Up" status
# Redis and PostgreSQL should show "(healthy)"
# Step 6: Test the application
# Vote interface
curl http://localhost:8080
# Or open in browser
# Result interface
curl http://localhost:8081
# Or open in browser
# Submit a test vote
curl -X POST http://localhost:8080 \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "vote=a"
# Step 7: (Optional) Generate test data
docker compose --profile seed up seed-data
# Generates 3000 votes: 2000 for Cats, 1000 for Dogs
# Step 8: View logs
docker compose logs -f
# Or specific service: docker compose logs -f vote
# Step 9: Cleanup when done
docker compose down -v
# Removes containers, networks, and volumes
What Gets Built:
- vote image (Python 3.11 + Flask + Gunicorn)
- result image (Node.js 18 + Express + Socket.io)
- worker image (.NET 8.0 + C# service)
- seed-data image (Alpine + curl)
Expected Results:
- Vote UI: http://localhost:8080
- Result UI: http://localhost:8081
- 5 running containers: vote, result, worker, redis, postgres
- Real-time vote updates visible in result dashboard
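Docker Compose reports Redis and PostgreSQL as healthy via the scripts in healthchecks/. As a rough sketch of what such checks typically contain (the actual scripts in this repo may differ):

# healthchecks/redis.sh — illustrative only; check the repo for the real script
redis-cli ping | grep -q PONG
# healthchecks/postgres.sh — illustrative only
pg_isready -U "${POSTGRES_USER:-postgres}" -d "${POSTGRES_DB:-postgres}"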
Description: Deploy complete application to local Minikube cluster using Terraform automation.
Requirements:
- Minikube 1.30+
- kubectl 1.27+
- Terraform 1.0+
- Docker 20.10+
- 8GB+ RAM available
- 4+ CPU cores
Step-by-Step Build & Deploy:
# Step 1: One-time setup for passwordless deployment
sudo ./setup-sudoers.sh
# This configures sudoers for /etc/hosts updates
# Only needs to be run once
# Step 2: Navigate to terraform directory
cd terraform
# Step 3: Initialize Terraform
terraform init
# Downloads required providers (Minikube, Docker, Kubernetes, Null)
# Step 4: Validate configuration
terraform validate
# Ensures all HCL syntax is correct
# Step 5: Review deployment plan (optional)
terraform plan
# Shows what will be created
# Step 6: Deploy everything automatically
terraform apply -auto-approve
# This will:
# - Create Minikube cluster (voting-app-dev)
# - Build Docker images in Minikube's daemon
# - Deploy PostgreSQL via Helm
# - Deploy Redis via Helm
# - Deploy vote, result, worker services
# - Configure ingress (vote.local, result.local)
# - Update /etc/hosts automatically
# Step 7: Wait for all pods to be ready
kubectl get pods -n voting-app --watch
# Press Ctrl+C when all pods show Running
# Step 8: Verify deployment
kubectl get all -n voting-app
# Should show: deployments, pods, services
# Step 9: Test the application
curl http://vote.local
curl http://result.local
# Or open in browser
# Step 10: (Optional) Generate test data
kubectl apply -f ../k8s/manifests/10-seed.yaml
# Creates a Job that generates 3000 test votes
# Step 11: Monitor pods
kubectl logs -f deployment/vote -n voting-app
kubectl logs -f deployment/result -n voting-app
kubectl logs -f deployment/worker -n voting-app
# Step 12: Cleanup when done
terraform destroy -auto-approve
# Removes all resources including Minikube cluster
What Gets Built:
- Minikube cluster (profile: voting-app-dev)
- Docker images (built in Minikube's daemon):
  - tactful-votingapp-cloud-infra-vote:latest
  - tactful-votingapp-cloud-infra-result:latest
  - tactful-votingapp-cloud-infra-worker:latest
- Kubernetes resources:
- Namespace: voting-app
- PostgreSQL StatefulSet (via Helm)
- Redis Deployment (via Helm)
- Vote Deployment (2 replicas)
- Result Deployment (2 replicas)
- Worker Deployment (1 replica)
- Services (ClusterIP and LoadBalancer)
- Ingress (vote.local, result.local)
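Terraform builds these images directly inside Minikube's Docker daemon, so no registry is involved. If you ever need to rebuild one by hand after a code change, the manual equivalent looks roughly like this, run from the repository root:

# Point the local Docker CLI at Minikube's daemon
eval $(minikube -p voting-app-dev docker-env)
# Rebuild the vote image in place
docker build -t tactful-votingapp-cloud-infra-vote:latest ./vote
# Restart the deployment so the pods pick up the rebuilt image
kubectl rollout restart deployment/vote -n voting-app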
Expected Results:
- Vote UI: http://vote.local
- Result UI: http://result.local
- All pods running: kubectl get pods -n voting-app
- Services accessible via ingress
Troubleshooting:
# If pods are not ready
kubectl describe pod <pod-name> -n voting-app
# If ingress not working
kubectl get ingress -n voting-app
minikube addons enable ingress -p voting-app-dev
# If /etc/hosts not updated
cat /etc/hosts | grep local
# Should see: <MINIKUBE_IP> vote.local result.local
Description: Automated build, test, scan, and container registry push via GitHub Actions.
Requirements:
- GitHub account
- GitHub repository
- GitHub Container Registry enabled
- Snyk account (for security scanning)
Step-by-Step Setup & Trigger:
# Step 1: Configure GitHub Secrets
# Go to: Settings → Secrets and variables → Actions
# Add these secrets:
# - SNYK_TOKEN (from snyk.io)
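# (Optional) The same secret can be added from the terminal instead of the web UI,
# assuming the GitHub CLI (gh) is installed and authenticated for this repository:
gh secret set SNYK_TOKEN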
# Step 2: Enable GitHub Container Registry
# Go to: Settings → Packages → Enable Container Registry
# Step 3: Make code changes
git checkout -b feature/my-changes
# Edit files as needed
# Step 4: Commit changes
git add .
git commit -m "feat: add new feature"
# Step 5: Push to trigger pipeline
git push origin feature/my-changes
# Step 6: Monitor pipeline in GitHub UI
# Go to: Actions tab in GitHub repository
# Watch the ci-cd.yml workflow
# Step 7: Review build results
# Pipeline will:
# - Build all 3 Docker images
# - Scan with Snyk for vulnerabilities
# - Run docker-compose integration tests
# - Tag images with commit SHA
# - Push to GitHub Container Registry (ghcr.io)
# Step 8: Create Pull Request
gh pr create --title "My Feature" --body "Description"
# Step 9: Merge when tests pass
gh pr merge --squash
# Step 10: Verify images in registry
# Go to: Packages tab in GitHub
# Should see: vote, result, worker packages
What Gets Built:
- Three Docker images tagged with commit SHA:
  - ghcr.io/<username>/vote:<commit-sha>
  - ghcr.io/<username>/result:<commit-sha>
  - ghcr.io/<username>/worker:<commit-sha>
- Images also tagged as :latest
CI/CD Pipeline Jobs:
- build-vote
  - Builds vote service
  - Scans with Snyk
  - Pushes to GHCR
- build-result
  - Builds result service
  - Scans with Snyk
  - Pushes to GHCR
- build-worker
  - Builds worker service
  - Scans with Snyk
  - Pushes to GHCR
- docker-compose-test
  - Runs integration tests
  - Validates all services work together
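Each build job follows the same build → scan → push pattern. A rough local approximation of what the workflow does for the vote service (the workflow in .github/workflows/ci-cd.yml is the source of truth; the Snyk CLI and a GitHub token with package write access are assumed):

# Build and tag the image with the current commit SHA
docker build -t ghcr.io/<username>/vote:$(git rev-parse HEAD) ./vote
# Scan the built image with Snyk (requires SNYK_TOKEN in the environment)
snyk container test ghcr.io/<username>/vote:$(git rev-parse HEAD) --file=vote/Dockerfile
# Log in to GitHub Container Registry and push
echo "$GITHUB_TOKEN" | docker login ghcr.io -u <username> --password-stdin
docker push ghcr.io/<username>/vote:$(git rev-parse HEAD)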
Expected Results:
- All CI/CD checks pass
- Images available in GitHub Container Registry
- Security scan reports (no critical vulnerabilities)
- Docker Compose tests pass
Description: Deploy monitoring and observability stack to track application metrics and logs.
Requirements:
- Kubernetes cluster running (from Phase 2)
- Helm 3.12+
- kubectl access
- 4GB+ additional RAM
Step-by-Step Build & Deploy:
# Step 1: Ensure Kubernetes cluster is running
kubectl cluster-info
kubectl get nodes
# Step 2: Add Prometheus Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# Step 3: Create monitoring namespace
kubectl create namespace monitoring
# Step 4: Install Prometheus + Grafana stack
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--wait \
--timeout 10m
# This installs:
# - Prometheus (metrics collection)
# - Grafana (visualization)
# - AlertManager (alerting)
# - Node Exporter (host metrics)
# - Kube State Metrics (K8s metrics)
# Step 5: Verify monitoring pods are running
kubectl get pods -n monitoring
# Wait for all pods to be Running
# Step 6: Get Grafana admin password
kubectl get secret -n monitoring prometheus-grafana \
-o jsonpath="{.data.admin-password}" | base64 --decode
echo
# Step 7: Access Grafana dashboard
kubectl port-forward -n monitoring \
svc/prometheus-grafana 3000:80
# Open browser: http://localhost:3000
# Username: admin
# Password: (from Step 6)
# Step 8: Access Prometheus UI
kubectl port-forward -n monitoring \
svc/prometheus-kube-prometheus-prometheus 9090:9090
# Open browser: http://localhost:9090
# Step 9: Import voting-app dashboards
# In Grafana:
# - Go to Dashboards → Import
# - Use dashboard ID: 315 (Kubernetes cluster monitoring)
# - Use dashboard ID: 1860 (Node Exporter Full)
# Step 10: Create custom dashboard for voting app
# In Grafana:
# - Create new dashboard
# - Add panels to track:
# - Vote count per second
# - Result service response time
# - Worker processing rate
# - Redis queue length
# - PostgreSQL connections
# Step 11: View metrics
# Query examples in Prometheus:
# - container_memory_usage_bytes{namespace="voting-app"}
# - rate(container_cpu_usage_seconds_total{namespace="voting-app"}[5m])
# Step 12: Cleanup monitoring stack
helm uninstall prometheus -n monitoring
kubectl delete namespace monitoring
What Gets Deployed:
- Prometheus (metrics database)
- Grafana (visualization)
- AlertManager (alerts)
- Node Exporter (node metrics)
- Kube State Metrics (K8s metrics)
- Service Monitors (metrics scraping)
Expected Results:
- Grafana UI: http://localhost:3000
- Prometheus UI: http://localhost:9090
- Real-time metrics from voting-app namespace
- Pre-built Kubernetes dashboards
- Alert rules configured
Useful Queries:
# CPU usage by pod
rate(container_cpu_usage_seconds_total{namespace="voting-app"}[5m])
# Memory usage by pod
container_memory_working_set_bytes{namespace="voting-app"}
# Pod restart count
kube_pod_container_status_restarts_total{namespace="voting-app"}
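The same queries can also be run from the command line against the Prometheus HTTP API while the port-forward from Step 8 is active, for example:

curl -G http://localhost:9090/api/v1/query \
  --data-urlencode 'query=rate(container_cpu_usage_seconds_total{namespace="voting-app"}[5m])'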
Execute phases in this order:
- Phase 1 → Develop and test locally with Docker Compose
- Phase 2 → Deploy to Kubernetes for production-like environment
- Phase 3 → Set up CI/CD for automated deployments
- Phase 4 → Add monitoring for observability
Each phase builds on the previous one, creating a complete DevOps pipeline.
User → Vote Service → Redis Queue → Worker → PostgreSQL → Result Service → User
Detailed Flow:
- User submits vote via http://vote.local
- Vote service stores vote in Redis queue (LPUSH)
- Worker polls Redis (BLPOP) and processes vote
- Worker inserts/updates vote in PostgreSQL (UPSERT by voter_id)
- Result service queries PostgreSQL every 1 second
- Result service pushes updates via WebSocket (Socket.io)
- User sees real-time results
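The queue and storage steps can be inspected by hand from inside the cluster. The Redis key and PostgreSQL table names below are assumptions based on the classic voting-app layout, so adjust them to whatever the vote and worker services actually use:

# Find the Redis and PostgreSQL pod names first
kubectl get pods -n voting-app
# Peek at the queue the vote service pushes to (key name assumed to be "votes")
kubectl exec -it <redis-pod> -n voting-app -- redis-cli LRANGE votes 0 -1
# Check the aggregated results the result service reads (table name assumed to be "votes")
kubectl exec -it <postgres-pod> -n voting-app -- \
  psql -U postgres -c "SELECT vote, COUNT(id) FROM votes GROUP BY vote;"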
Docker Compose:
- Frontend Network: vote, result (exposed to host)
- Backend Network: worker, redis, postgres (internal only)
Kubernetes:
- Namespace: voting-app
- NetworkPolicies: Postgres/Redis isolated, only worker can access (see the sketch below)
- Ingress: vote.local (vote service), result.local (result service)
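The concrete rules live in k8s/manifests/08-network-policies.yaml. The pattern is an ingress policy on each datastore that only admits the pods that need it; a minimal sketch for PostgreSQL (the pod labels are assumptions, not necessarily the ones used in the manifests):

kubectl apply -n voting-app -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-allow-worker
spec:
  podSelector:
    matchLabels:
      app: postgres          # assumed label on the PostgreSQL pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: worker    # only the worker may reach the database
      ports:
        - protocol: TCP
          port: 5432
EOF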
# Deploy
docker compose up --build -d
docker compose ps
# Test
curl http://localhost:8080 # Vote
curl http://localhost:8081 # Result
# Seed 3000 test votes
docker compose --profile seed up seed-data
# Cleanup
docker compose down -v
# One-time setup
sudo ./setup-sudoers.sh
# Deploy everything
cd terraform
terraform init
terraform apply -auto-approve
# Test
kubectl get pods -n voting-app
curl http://vote.local
curl http://result.local
# Seed data
kubectl apply -f ../k8s/manifests/10-seed.yaml
# Cleanup
terraform destroy -auto-approve
# Trigger pipeline
git push origin main
# Check status
gh run list --limit 5
gh run view --log
# Deploy monitoring stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
-n monitoring --create-namespace --wait
# Access dashboards
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# http://localhost:3000 (admin/prom-operator)
Issue: Flask app running with debug=True exposes sensitive information
- Severity: Medium
- File: vote/app.py
- Solution: Changed to debug=False
# Fixed in vote/app.py
if __name__ == "__main__":
app.run(host='0.0.0.0', port=80, debug=False) # Was True
Issue: Npgsql 8.0.3 had SQL injection vulnerabilities (CVE-2024-32056)
- Severity: High
- File: worker/Worker.csproj
- Solution: Upgraded to Npgsql 8.0.5
<!-- Fixed in worker/Worker.csproj -->
<PackageReference Include="Npgsql" Version="8.0.5" />
Issue: Multiple npm package vulnerabilities
- express 4.19.2 → 4.21.1
- body-parser 1.20.2 → 1.20.3
- ws 8.17.0 → 8.18.0
- Solution: Updated package.json and ran npm update
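A typical way to apply and verify bumps like these from the result/ directory:

cd result
# Install the versions pinned in package.json
npm install
# Or bump dependencies within their allowed semver ranges
npm update
# Confirm no known vulnerabilities remain
npm audit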
Issue: Terraform couldn't configure /etc/hosts without password
- Impact: Blocked automation, required manual intervention
- Solution: Created setup-sudoers.sh for one-time passwordless sudo setup
# One-time setup
sudo ./setup-sudoers.sh
# Creates /etc/sudoers.d/terraform-hosts with permissions for:
# - sudo sed -i '/vote\.local/d' /etc/hosts
# - sudo tee -a /etc/hosts
Issue: Seed job was running during every terraform apply
- Impact: 3000 test votes added every time
- Solution: Made the seed job optional via a variable; enable it with terraform apply -var="run_seed=true"
# seed.tf
variable "run_seed" { default = false }
resource "null_resource" "run_seed" {
count = var.run_seed ? 1 : 0
# ...
}
Issue: Pods failing with "must not set securityContext.runAsUser to 0"
- Solution: Set runAsNonRoot: true and runAsUser: 1000 in all manifests
# Fixed in all k8s manifests
securityContext:
runAsNonRoot: true
runAsUser: 1000
allowPrivilegeEscalation: false
Issue: PostgreSQL pod stuck in Pending state
- Cause: No storage class configured in Minikube
- Solution: Minikube automatically provisions hostPath PVs
# Verify PVC bound
kubectl get pvc -n voting-app
# Should show: Bound status
Issue: vote.local and result.local not accessible
- Cause: /etc/hosts not configured, ingress addon not enabled
- Solution:
  - Enable ingress: minikube addons enable ingress -p voting-app-dev
  - Configure /etc/hosts: Terraform now does this automatically
For more detailed step-by-step guides, see:
- TEST-ALL-PHASES.md - Complete testing guide for all phases with detailed commands and validation steps
Note: This README provides complete build instructions for each phase. The TEST-ALL-PHASES.md file contains additional testing scenarios and troubleshooting steps.