API Gateway Patterns for Microservices Architecture
Master API gateway patterns including authentication, rate limiting, routing, and service mesh integration. Build scalable, secure microservices with Kong, Nginx, and AWS API Gateway.
API gateways serve as the single entry point for microservices architectures, handling cross-cutting concerns like authentication, rate limiting, and request routing. Well-designed gateways reduce latency, improve security, and simplify client integration.
Core Responsibilities
Request Routing
Direct traffic to appropriate backend services:
Path-Based Routing:
# nginx.conf
location /api/v1/users {
    proxy_pass http://user-service:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

location /api/v1/orders {
    proxy_pass http://order-service:8080;
}

location /api/v1/products {
    proxy_pass http://product-service:8080;
}
Header-Based Routing (A/B Testing):
-- Kong plugin
local version = kong.request.get_header("X-API-Version")
if version == "2.0" then
    kong.service.set_upstream("api-v2-service")
else
    kong.service.set_upstream("api-v1-service")
end
Authentication & Authorization
Centralize security enforcement:
JWT Validation:
// Express middleware
const jwt = require('jsonwebtoken');

function authenticateToken(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];
  if (!token) {
    return res.status(401).json({ error: 'Authentication required' });
  }
  jwt.verify(token, process.env.JWT_SECRET, (err, user) => {
    if (err) {
      return res.status(403).json({ error: 'Invalid token' });
    }
    req.user = user;
    next();
  });
}

app.use('/api', authenticateToken);
OAuth 2.0 Integration:
- Validate access tokens with identity provider
- Support multiple grant types (authorization code, client credentials)
- Token introspection for revocation checking
- Refresh token handling
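To make the introspection bullet concrete, here is a minimal sketch of how a gateway might evaluate an RFC 7662 token-introspection response after calling the identity provider. The response field names (`active`, `exp`, `scope`) come from the RFC; the function name and scope format are illustrative assumptions.

```python
import time

def token_is_valid(introspection: dict, required_scope=None) -> bool:
    """Decide whether to admit a request based on an RFC 7662
    introspection response (illustrative gateway-side check)."""
    # The IdP marks revoked or unknown tokens as inactive.
    if not introspection.get("active", False):
        return False
    # Reject tokens past their expiry (exp is seconds since epoch).
    exp = introspection.get("exp")
    if exp is not None and exp <= time.time():
        return False
    # Optionally require a scope, e.g. "orders:read".
    if required_scope is not None:
        scopes = introspection.get("scope", "").split()
        if required_scope not in scopes:
            return False
    return True
```

Caching introspection results briefly (seconds, not minutes) keeps revocation checking useful without adding a round trip to every request.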
Rate Limiting
Protect services from overload:
Token Bucket Algorithm (Kong):
plugins:
  - name: rate-limiting
    config:
      minute: 100
      hour: 1000
      policy: local
      fault_tolerant: true
      hide_client_headers: false
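Kong's rate-limiting plugin counts requests per fixed window (minute, hour); the token-bucket algorithm the heading names is worth seeing directly, since many gateways offer it as an alternative. A minimal single-process sketch (class and parameter names are illustrative):

```python
import time

class TokenBucket:
    """Token-bucket limiter: the bucket refills at `rate` tokens/second
    up to `capacity`; each request consumes one token, and an empty
    bucket means the request should be rejected (HTTP 429)."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Unlike fixed windows, a token bucket absorbs short bursts (up to `capacity`) while still enforcing the long-run rate.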
Redis-Based Distributed Rate Limiting:
import time
import redis

redis_client = redis.Redis(host='redis', port=6379)

def rate_limit(user_id, limit=100, window=60):
    """Sliding-window rate limiter backed by a Redis sorted set."""
    key = f"rate_limit:{user_id}"
    now = time.time()
    window_start = now - window
    # Drop entries that fell outside the window
    redis_client.zremrangebyscore(key, 0, window_start)
    # Count requests still inside the window
    request_count = redis_client.zcard(key)
    if request_count >= limit:
        return False
    # Record the current request; the member should be unique per request
    # (timestamps can collide under load -- append a request ID in production).
    # Note: this check-then-add sequence is not atomic; move it into a Lua
    # script if concurrent requests for the same user must be counted exactly.
    redis_client.zadd(key, {str(now): now})
    redis_client.expire(key, window)
    return True
Advanced Patterns
Request/Response Transformation
Modify payloads for client compatibility:
Request Transformation:
-- Kong plugin: transform legacy format to new API
local body = kong.request.get_body()
if body and body.old_field then
    body.new_field = body.old_field
    body.old_field = nil
end
kong.service.request.set_body(body)
Response Aggregation (Backend for Frontend):
async function getOrderDetails(orderId) {
  // Fetch the order first so we know which customer to look up
  const order = await orderService.getOrder(orderId);
  const [customer, items] = await Promise.all([
    customerService.getCustomer(order.customerId),
    inventoryService.getOrderItems(orderId)
  ]);
  return {
    order: {
      ...order,
      customer: {
        name: customer.name,
        email: customer.email
      },
      items: items.map(item => ({
        ...item,
        availableQuantity: item.inventory
      }))
    }
  };
}
Circuit Breaking
Prevent cascading failures:
// Go circuit breaker with gobreaker
import (
    "time"

    "github.com/sony/gobreaker"
)

cb := gobreaker.NewCircuitBreaker(gobreaker.Settings{
    Name:        "ProductService",
    MaxRequests: 3,
    Interval:    time.Minute,
    Timeout:     5 * time.Second,
    ReadyToTrip: func(counts gobreaker.Counts) bool {
        failureRatio := float64(counts.TotalFailures) / float64(counts.Requests)
        return counts.Requests >= 3 && failureRatio >= 0.6
    },
})

// Route calls through the breaker; Execute fails fast once the circuit is open
result, err := cb.Execute(func() (interface{}, error) {
    return productService.GetProduct(productId)
})
Fallback Responses:
async function getProduct(productId) {
  try {
    return await circuitBreaker.fire(
      () => productService.getProduct(productId)
    );
  } catch (err) {
    // Return cached data or a degraded default response
    return await cache.get(`product:${productId}`) || {
      id: productId,
      available: false,
      message: "Product details temporarily unavailable"
    };
  }
}
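The closed/open/half-open state machine behind both examples above can be sketched in a few lines. This is an illustrative single-threaded version, not a production breaker; the thresholds loosely mirror the gobreaker settings shown earlier.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: trips open after `failure_threshold`
    consecutive failures, fails fast while open, and probes with one
    request (half-open) after `reset_timeout` seconds."""

    def __init__(self, failure_threshold=3, reset_timeout=5.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # allow one probe request through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            # A failed probe, or too many consecutive failures, opens the circuit.
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        # Any success closes the circuit and resets the failure count.
        self.failures = 0
        self.state = "closed"
        return result
```

Failing fast while open is the point: callers get an immediate error (and can serve a fallback) instead of queuing up behind a dying service.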
Caching
Reduce backend load and improve latency:
HTTP Caching Headers:
# requires a matching cache zone in the http {} block, e.g.
# proxy_cache_path /var/cache/nginx keys_zone=product_cache:10m;
location /api/v1/products {
    proxy_pass http://product-service;
    proxy_cache product_cache;
    proxy_cache_valid 200 5m;
    proxy_cache_valid 404 1m;
    proxy_cache_key "$request_uri";
    add_header X-Cache-Status $upstream_cache_status;
}
GraphQL Response Caching:
const responseCachePlugin = require('apollo-server-plugin-response-cache');
const { RedisCache } = require('apollo-server-cache-redis');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [
    responseCachePlugin({
      sessionId: (requestContext) =>
        requestContext.request.http.headers.get('user-id') || null,
      shouldReadFromCache: (requestContext) =>
        requestContext.request.http.method === 'GET',
      shouldWriteToCache: (requestContext) =>
        requestContext.request.http.method === 'GET'
    })
  ],
  cache: new RedisCache({
    host: 'redis',
    port: 6379
  })
});
Service Mesh Integration
Istio Gateway
Service mesh for advanced traffic management:
# virtual-service.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: product-service
spec:
  hosts:
    - product.example.com
  gateways:
    - product-gateway
  http:
    - match:
        - headers:
            x-api-version:
              exact: v2
      route:
        - destination:
            host: product-service
            subset: v2
          weight: 100
    - route:
        - destination:
            host: product-service
            subset: v1
          weight: 100
---
# gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: product-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: product-cert
      hosts:
        - product.example.com
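Istio evaluates `http` match rules in order and falls back to weighted routes. The selection logic the VirtualService above expresses can be sketched as follows (subset and header names mirror the YAML; this is an illustration of the routing semantics, not Istio's implementation):

```python
import random

def pick_subset(headers: dict, weights: dict) -> str:
    """Choose a destination subset: exact-match header routing first,
    then weighted random selection among the fallback routes."""
    # First match rule: x-api-version: v2 routes to the v2 subset.
    if headers.get("x-api-version") == "v2":
        return "v2"
    # Fallback rule: pick a subset in proportion to its weight.
    total = sum(weights.values())
    roll = random.uniform(0, total)
    cumulative = 0.0
    for subset, weight in weights.items():
        cumulative += weight
        if roll <= cumulative:
            return subset
    return subset  # guard against floating-point edge cases
```

Shifting the fallback weights (e.g. `{"v1": 90, "v2": 10}`) is how the same VirtualService drives a canary rollout.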
mTLS Enforcement
Secure service-to-service communication:
# peer-authentication.yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT
Observability
Distributed Tracing
Track requests across services:
OpenTelemetry Integration:
const { trace } = require('@opentelemetry/api');
// Tracer provider and exporter setup (e.g. Jaeger or OTLP via the
// OpenTelemetry Node SDK) is assumed to be configured at startup.
const tracer = trace.getTracer('api-gateway');

app.use((req, res, next) => {
  const span = tracer.startSpan('http_request', {
    attributes: {
      'http.method': req.method,
      'http.url': req.url,
      'http.user_agent': req.headers['user-agent']
    }
  });
  // Expose the trace context so it can be propagated downstream
  req.traceContext = span.spanContext();
  res.on('finish', () => {
    span.setAttribute('http.status_code', res.statusCode);
    span.end();
  });
  next();
});
Metrics Collection
Monitor gateway performance:
Prometheus Metrics:
import (
    "net/http"
    "strconv"
    "time"

    "github.com/prometheus/client_golang/prometheus"
)

var (
    requestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "gateway_request_duration_seconds",
            Help:    "Request duration in seconds",
            Buckets: prometheus.DefBuckets,
        },
        []string{"method", "endpoint", "status"},
    )
    requestCounter = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "gateway_requests_total",
            Help: "Total number of requests",
        },
        []string{"method", "endpoint", "status"},
    )
)

func init() {
    prometheus.MustRegister(requestDuration, requestCounter)
}

// statusRecorder captures the status code written by downstream handlers
type statusRecorder struct {
    http.ResponseWriter
    status int
}

func (r *statusRecorder) WriteHeader(code int) {
    r.status = code
    r.ResponseWriter.WriteHeader(code)
}

func metricsMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        recorder := &statusRecorder{ResponseWriter: w, status: 200}
        next.ServeHTTP(recorder, r)

        duration := time.Since(start).Seconds()
        status := strconv.Itoa(recorder.status)
        requestDuration.WithLabelValues(r.Method, r.URL.Path, status).Observe(duration)
        requestCounter.WithLabelValues(r.Method, r.URL.Path, status).Inc()
    })
}
Security Best Practices
Input Validation
Prevent injection attacks:
const Joi = require('joi');

const schemas = {
  createUser: Joi.object({
    email: Joi.string().email().required(),
    name: Joi.string().min(2).max(100).required(),
    age: Joi.number().integer().min(0).max(120)
  }),
  getProduct: Joi.object({
    productId: Joi.string().uuid().required()
  })
};

function validate(schema) {
  return (req, res, next) => {
    const { error } = schema.validate(req.body);
    if (error) {
      return res.status(400).json({
        error: 'Validation failed',
        details: error.details.map(d => d.message)
      });
    }
    next();
  };
}

app.post('/api/users', validate(schemas.createUser), createUser);
CORS Configuration
Control cross-origin requests:
const cors = require('cors');

const corsOptions = {
  origin: function (origin, callback) {
    const allowedOrigins = [
      'https://app.example.com',
      'https://admin.example.com'
    ];
    // Allow non-browser clients (no Origin header) and whitelisted origins
    if (!origin || allowedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  credentials: true,
  maxAge: 86400 // cache preflight responses for 24 hours
};

app.use(cors(corsOptions));
Technology Comparison
Kong:
- Plugin ecosystem
- High performance (OpenResty/Nginx)
- GraphQL and gRPC support
- Enterprise features (RBAC, analytics)
AWS API Gateway:
- Fully managed
- Native AWS service integration
- Auto-scaling
- Pay-per-request pricing
Nginx:
- Maximum performance
- Flexible configuration
- Proven at scale
- Requires more manual setup
Traefik:
- Native Kubernetes integration
- Automatic service discovery
- Let’s Encrypt integration
- Modern, cloud-native design
Implementation Strategy
Phase 1: Basic Gateway (Week 1-2)
- Deploy gateway infrastructure
- Configure basic routing
- Implement authentication
Phase 2: Resilience (Week 3-4)
- Add rate limiting
- Configure circuit breakers
- Implement caching
Phase 3: Observability (Week 5-6)
- Distributed tracing
- Metrics collection
- Dashboards and alerts
Phase 4: Optimization (Week 7-8)
- Performance tuning
- Security hardening
- Documentation
API gateways are critical infrastructure for microservices. Partner with experts to design and implement production-grade gateways that scale with your business.