Kubernetes Container Orchestration: Complete Production Guide
Deploy and manage containerized applications at scale with Kubernetes orchestration, auto-scaling, service discovery, and rolling updates.
Master Kubernetes for deploying, scaling, and managing containerized applications in production environments.
Kubernetes Architecture
Control Plane Components:
- API Server: Frontend for Kubernetes control plane
- etcd: Distributed key-value store for cluster data
- Scheduler: Assigns pods to nodes based on resources
- Controller Manager: Runs controller processes
- Cloud Controller Manager: Cloud-specific control logic
Node Components:
- kubelet: Agent ensuring containers run in pods
- kube-proxy: Network proxy managing network rules
- Container Runtime: Docker, containerd, or CRI-O
Key Concepts:
- Pod: Smallest deployable unit (one or more containers)
- Deployment: Manages replica sets and rolling updates
- Service: Stable network endpoint for pods
- Namespace: Virtual cluster for resource isolation
- ConfigMap: Configuration data for applications
- Secret: Sensitive data like passwords and tokens
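For orientation, the smallest of these units, a Pod, can be written as a minimal manifest (the image tag here is illustrative); in practice you rarely create bare Pods, since Deployments manage them for you:

```yaml
# A minimal Pod manifest. Production workloads should be wrapped
# in a Deployment or StatefulSet rather than created directly.
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: app
    image: nginx:1.25
    ports:
    - containerPort: 80
```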
Getting Started
Local Development
Minikube:
# Install minikube
brew install minikube
# Start local cluster
minikube start --cpus 4 --memory 8192
# Check cluster status
kubectl cluster-info
kubectl get nodes
Kind (Kubernetes in Docker):
# Create cluster
kind create cluster --name dev-cluster
# Load local image
kind load docker-image myapp:latest --name dev-cluster
Cloud Kubernetes
Google GKE:
gcloud container clusters create production \
  --num-nodes 3 \
  --machine-type n1-standard-4 \
  --region us-central1
AWS EKS:
eksctl create cluster \
  --name production \
  --region us-east-1 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 3
Azure AKS:
az aks create \
  --resource-group myResourceGroup \
  --name production \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2
Deploying Applications
Basic Deployment
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: myregistry/web-app:v1.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
Deploy:
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get pods
kubectl logs pod-name
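The resource requests and limits in the manifest use Kubernetes quantity notation ("250m" CPU means 250 millicores, "256Mi" is a power-of-two mebibyte suffix). A small sketch of the conversion arithmetic (illustrative helpers, not an official API):

```python
def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity ("250m" or "2") into whole cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000  # millicores -> cores
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Convert a memory quantity ("256Mi", "1Gi") into bytes."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)  # plain bytes

print(parse_cpu("250m"))      # 0.25
print(parse_memory("256Mi"))  # 268435456
```

The scheduler reserves capacity based on requests, while limits are enforced at runtime, so keeping the two close avoids noisy-neighbor surprises.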
Service Exposure
LoadBalancer Service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Ingress Controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx  # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
Configuration Management
ConfigMaps
Create from file:
kubectl create configmap app-config \
  --from-file=config.properties
Use in deployment:
spec:
  containers:
  - name: app
    env:
    - name: CONFIG_PATH
      value: /config/config.properties
    volumeMounts:
    - name: config
      mountPath: /config
  volumes:
  - name: config
    configMap:
      name: app-config
Secrets
Create secret:
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=SecurePassword123
Use as environment variables:
env:
- name: DB_USERNAME
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: username
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: password
Mount as files:
volumeMounts:
- name: credentials
  mountPath: /etc/secrets
  readOnly: true
volumes:
- name: credentials
  secret:
    secretName: db-credentials
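Note that Secret values are only base64-encoded in the API, not encrypted: anyone with read access to the object can recover them, which is why encryption at rest and tight RBAC matter (see the checklist below). A quick illustration:

```python
import base64

# Kubernetes stores Secret data base64-encoded, not encrypted.
# Decoding is trivial for anyone who can read the Secret object.
encoded = base64.b64encode(b"SecurePassword123").decode()
print(encoded)

decoded = base64.b64decode(encoded).decode()
print(decoded)  # SecurePassword123
```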
Auto-Scaling
Horizontal Pod Autoscaler (HPA)
CPU-based scaling:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
Custom metrics scaling:
metrics:
- type: Pods
  pods:
    metric:
      name: http_requests_per_second
    target:
      type: AverageValue
      averageValue: "1000"
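The HPA control loop scales using the documented formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. A sketch of that rule:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float,
                     min_replicas: int = 3, max_replicas: int = 20) -> int:
    """Core HPA scaling rule: ceil(current * observed/target), clamped."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 5 pods averaging 90% CPU against the 70% target -> scale out to 7
print(desired_replicas(5, 90, 70))  # 7
```

The real controller adds stabilization windows and tolerance bands around this formula to avoid flapping, but the core arithmetic is the same.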
Vertical Pod Autoscaler (VPA)
Automatic resource adjustment:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  updatePolicy:
    updateMode: "Auto"
Cluster Autoscaler
Node-level scaling:
- Adds nodes when pods can’t be scheduled
- Removes underutilized nodes
- Cloud-provider specific implementation
- GKE, EKS, AKS support built-in
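The autoscaler also respects pod-level hints. For example, this cluster-autoscaler annotation (applied per pod) prevents the node hosting it from being scaled down:

```yaml
# Opt a pod out of scale-down eviction, e.g. for jobs that
# must not be interrupted mid-run.
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```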
Rolling Updates and Rollbacks
Rolling Update Strategy
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
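The arithmetic behind these settings: at most maxUnavailable pods may be down at any moment, and at most replicas + maxSurge pods may exist at once. A quick check (plain arithmetic, not a Kubernetes API):

```python
# With replicas=5, maxUnavailable=1, maxSurge=2, a rollout keeps
# at least 5 - 1 = 4 pods serving and creates at most 5 + 2 = 7 pods.
replicas, max_unavailable, max_surge = 5, 1, 2

min_available = replicas - max_unavailable
max_total = replicas + max_surge
print(min_available, max_total)  # 4 7
```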
Update image:
kubectl set image deployment/web-app \
  app=myregistry/web-app:v1.1
kubectl rollout status deployment/web-app
Blue-Green Deployment
Blue version (current):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      version: blue
Green version (new):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      version: green
Switch traffic:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
    version: green  # Changed from blue to green
Canary Deployment
Gradual rollout with traffic splitting:
# 90% of traffic to stable
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-stable
spec:
  replicas: 9
---
# 10% of traffic to canary
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-canary
spec:
  replicas: 1
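Because both Deployments' pods sit behind the same Service selector, traffic splits roughly in proportion to replica counts; precise percentage splits need ingress-level weighting or a service mesh. The approximation:

```python
# Approximate traffic share under round-robin Service load balancing:
# each pod receives an equal slice, so the canary share is
# canary_replicas / total_replicas.
stable, canary = 9, 1
canary_share = canary / (stable + canary)
print(f"canary receives ~{canary_share:.0%} of traffic")  # ~10%
```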
Rollback
# View rollout history
kubectl rollout history deployment/web-app
# Rollback to previous version
kubectl rollout undo deployment/web-app
# Rollback to specific revision
kubectl rollout undo deployment/web-app --to-revision=3
Storage
Persistent Volumes
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: fast-ssd
Use in pod:
spec:
  containers:
  - name: postgres
    volumeMounts:
    - mountPath: /var/lib/postgresql/data
      name: postgres-storage
  volumes:
  - name: postgres-storage
    persistentVolumeClaim:
      claimName: postgres-pvc
StatefulSets
For stateful applications:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
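The serviceName field must reference a headless Service, which gives each replica a stable DNS identity (postgres-0.postgres, postgres-1.postgres, and so on). A minimal example:

```yaml
# Headless Service backing the StatefulSet: clusterIP: None
# makes DNS return per-pod records instead of a single VIP.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432
```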
Networking
Network Policies
Restrict ingress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
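A common companion is a namespace-wide default-deny policy, so only explicitly allowed traffic (like the frontend rule above) gets through:

```yaml
# Deny all ingress to every pod in the namespace; individual
# NetworkPolicies then whitelist the traffic you actually want.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```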
Service Mesh (Istio)
Traffic management, security, observability:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
  - web-app
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Chrome.*"
    route:
    - destination:
        host: web-app
        subset: v2
      weight: 50
    - destination:
        host: web-app
        subset: v1
      weight: 50
Monitoring and Logging
Prometheus for Metrics
ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: web-app-metrics
spec:
  selector:
    matchLabels:
      app: web
  endpoints:
  - port: metrics
    interval: 30s
Grafana Dashboards
Pre-built dashboards:
- Kubernetes cluster monitoring
- Pod resource usage
- Node performance
- Application-specific metrics
Centralized Logging
EFK Stack (Elasticsearch, Fluentd, Kibana):
- Fluentd collects logs from nodes
- Elasticsearch stores and indexes
- Kibana provides visualization
Loki (Grafana Loki):
- Lightweight log aggregation
- Integrates with Prometheus
- Cost-effective alternative to EFK
Security Best Practices
RBAC (Role-Based Access Control)
Service Account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
subjects:
- kind: ServiceAccount
  name: app-service-account
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Pod Security Standards
Baseline security:
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
Image Security
Private registry:
imagePullSecrets:
- name: registry-credentials
Image scanning:
- Trivy for vulnerability scanning
- Policy enforcement with OPA/Gatekeeper
- Admission controllers for validation
Helm Package Manager
Chart Structure
mychart/
  Chart.yaml          # Chart metadata
  values.yaml         # Default values
  templates/          # Kubernetes manifests
    deployment.yaml
    service.yaml
    ingress.yaml
Deployment with Helm
# Install chart
helm install myapp ./mychart
# Upgrade with new values
helm upgrade myapp ./mychart \
  --set image.tag=v1.1 \
  --set replicas=5
# Rollback
helm rollback myapp 1
values.yaml
replicaCount: 3
image:
  repository: myregistry/app
  tag: v1.0
  pullPolicy: IfNotPresent
service:
  type: LoadBalancer
  port: 80
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
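Templates reference these values through Go templating; an illustrative excerpt of what templates/deployment.yaml might contain:

```yaml
# templates/deployment.yaml (excerpt) -- Helm substitutes
# {{ .Values.* }} expressions from values.yaml or --set overrides.
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
```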
CI/CD Integration
GitOps with ArgoCD
Application manifest:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
spec:
  project: default
  source:
    repoURL: https://github.com/company/app-manifests
    targetRevision: HEAD
    path: k8s/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Jenkins Pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push myapp:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl set image deployment/web-app app=myapp:${BUILD_NUMBER}'
            }
        }
    }
}
Troubleshooting
Common Issues
Pods not starting:
kubectl describe pod pod-name
kubectl logs pod-name
kubectl logs pod-name --previous # Previous container logs
Networking issues:
kubectl exec -it pod-name -- ping service-name
kubectl exec -it pod-name -- nslookup service-name
Resource constraints:
kubectl top nodes
kubectl top pods
kubectl describe node node-name
Event debugging:
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get events --field-selector involvedObject.name=pod-name
Production Checklist
- Multi-zone cluster deployment
- Resource requests and limits defined
- Liveness and readiness probes configured
- HPA enabled for scalability
- Network policies implemented
- RBAC properly configured
- Secrets encrypted at rest
- Image pull secrets configured
- Monitoring and alerting setup
- Centralized logging enabled
- Backup and disaster recovery plan
- Cost optimization reviewed
- Security scanning integrated
- CI/CD pipeline automated
Bottom Line
Kubernetes provides robust container orchestration at scale. Its steep learning curve is justified by the operational benefits: automated scaling, self-healing, and declarative configuration. Start with managed services (GKE, EKS, AKS) to minimize operational overhead, use Helm for package management and ArgoCD for GitOps workflows, and invest in monitoring, logging, and security from day one.