Ingress vs Load Balancer in Kubernetes: When to Use Each
When exposing applications in Kubernetes, you have multiple options for routing external traffic to your services. Two commonly confused concepts are Ingress and LoadBalancer services. While both handle external traffic, they serve different purposes and operate at different layers of the networking stack. Understanding when to use each approach will help you design more efficient and cost-effective Kubernetes architectures.
Prerequisites
You'll need a running Kubernetes cluster with kubectl configured. Basic knowledge of Kubernetes services and networking concepts will help you understand the examples in this guide.
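If you want to confirm that kubectl can reach your cluster before trying the examples, a quick check like this works:
# Verify connectivity to the cluster and list its nodes
kubectl cluster-info
kubectl get nodes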
Understanding LoadBalancer Services
A LoadBalancer service is a Kubernetes service type that provisions an external load balancer from your cloud provider. This creates a direct path from the internet to your application:
apiVersion: v1
kind: Service
metadata:
  name: web-app-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
When you create this service, your cloud provider automatically provisions a load balancer with a public IP address. This load balancer forwards traffic directly to your pods through the service.
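You can watch the provisioning happen; the external address usually shows as pending for a minute or two before the cloud provider assigns it:
# The EXTERNAL-IP column switches from <pending> to a public address once provisioning finishes
kubectl get service web-app-loadbalancer --watch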
Understanding Ingress
Ingress is a Kubernetes resource that manages external HTTP and HTTPS access to services within your cluster. Unlike LoadBalancer services, Ingress operates at the application layer (Layer 7) and requires an Ingress Controller to function:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 80
This Ingress resource defines routing rules but doesn't create infrastructure by itself. You need an Ingress Controller like NGINX, Traefik, or cloud-specific controllers to implement these rules.
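For example, if you install the community NGINX Ingress Controller (the install command appears in the migration section below), you can confirm it is running and registered as an IngressClass before creating Ingress resources. The ingress-nginx namespace here assumes the standard manifest install; adjust it for other controllers:
# Check the controller pods and the registered IngressClasses
kubectl get pods -n ingress-nginx
kubectl get ingressclass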
Key Differences Explained
The fundamental difference lies in their scope and capabilities. LoadBalancer services work at the network layer (Layer 4), handling raw TCP/UDP traffic. They provide a 1:1 mapping between external load balancers and services:
# Each LoadBalancer service gets its own external IP
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api-lb
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
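Listing both services side by side makes the 1:1 mapping visible; each one reports its own external address once the cloud provider finishes provisioning:
# Each service shows a separate EXTERNAL-IP (or <pending> while provisioning)
kubectl get services frontend-lb api-lb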
In contrast, Ingress operates at the application layer (Layer 7) and can route multiple services through a single entry point based on hostnames, paths, and other HTTP attributes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress
spec:
  rules:
    - host: frontend.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
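Once an Ingress Controller picks up this resource, both hostnames are served from the same address, and you can exercise the host-based routing with curl. The address shown by kubectl belongs to the controller's load balancer, not to either service:
# Both hosts share the single ADDRESS reported for the Ingress
kubectl get ingress multi-service-ingress
# Simulate the two hostnames against that shared address (replace <INGRESS-IP>)
curl -H "Host: frontend.example.com" http://<INGRESS-IP>/
curl -H "Host: api.example.com" http://<INGRESS-IP>/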
Cost and Resource Implications
LoadBalancer services can become expensive quickly because each service provisions a separate cloud load balancer. If you have ten services that need external access, you'll pay for ten load balancers:
# This creates multiple expensive cloud load balancers
kubectl apply -f service1-loadbalancer.yaml
kubectl apply -f service2-loadbalancer.yaml
kubectl apply -f service3-loadbalancer.yaml
# ... and so on
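A quick way to see how many cloud load balancers your cluster is paying for is to list every LoadBalancer service across namespaces:
# Each line here corresponds to a billed cloud load balancer
kubectl get services --all-namespaces | grep LoadBalancer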
Ingress typically uses a single load balancer (for the Ingress Controller) and routes traffic to multiple services based on rules. This approach significantly reduces costs when exposing multiple applications:
# One Ingress can handle multiple services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cost-effective-ingress
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: shop-api
                port:
                  number: 8080
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog-service
                port:
                  number: 80
Advanced Features and Capabilities
Ingress controllers offer sophisticated routing capabilities that LoadBalancer services cannot provide. You can implement SSL termination, URL rewriting, and request routing based on headers or other HTTP attributes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    cert-manager.io/cluster-issuer: 'letsencrypt-prod'
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1(/|$)(.*)
            # Regex paths need ImplementationSpecific pathType with the NGINX controller
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-v1
                port:
                  number: 8080
          - path: /v2(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-v2
                port:
                  number: 8080
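With the hypothetical api-v1 and api-v2 services above, the rewrite-target annotation strips the version prefix before the request reaches the backend:
# /v1/users is rewritten to /users before it hits the api-v1 service
curl https://api.example.com/v1/users
# /v2/orders is rewritten to /orders before it hits the api-v2 service
curl https://api.example.com/v2/orders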
LoadBalancer services handle traffic more simply but cannot perform application-layer operations like SSL termination or path-based routing.
When to Use LoadBalancer Services
LoadBalancer services work best for specific scenarios where you need direct, low-level network access. Use LoadBalancer services when:
- You need to expose non-HTTP protocols (like databases, message queues, or custom TCP/UDP applications)
- You require the highest possible performance with minimal latency
- You're running a single service that needs external access
- You need to preserve client IP addresses at the network layer (typically by setting externalTrafficPolicy: Local)
Here's an example of exposing a PostgreSQL database:
apiVersion: v1
kind: Service
metadata:
  name: postgres-external
spec:
  type: LoadBalancer
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
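Once the external IP is assigned, clients outside the cluster can connect with a standard PostgreSQL client. The address placeholder and user name here are only illustrative:
# Look up the EXTERNAL-IP assigned to the service
kubectl get service postgres-external
# Connect from outside the cluster (replace <EXTERNAL-IP> and the user with your own)
psql -h <EXTERNAL-IP> -p 5432 -U appuser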
When to Use Ingress
Ingress excels at managing HTTP/HTTPS traffic for web applications and APIs. Choose Ingress when:
- You're exposing multiple HTTP/HTTPS services
- You need advanced routing based on hostnames, paths, or headers
- You want to terminate SSL at the edge
- You need to implement rate limiting, authentication, or other middleware
- Cost optimization is important (multiple services through one load balancer)
This example shows a typical web application setup:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: '100'
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-api
                port:
                  number: 8080
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: admin-panel
                port:
                  number: 3000
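The auth-secret annotation above expects a secret named basic-auth to exist in the same namespace. One way to create it, assuming the htpasswd tool is available and using an example user name, is:
# Create an htpasswd file for an example user (you will be prompted for a password)
htpasswd -c auth admin
# Store it as the basic-auth secret referenced by the Ingress; the key must be named 'auth'
kubectl create secret generic basic-auth --from-file=auth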
Hybrid Approaches
You can combine both approaches in the same cluster. Use LoadBalancer services for specific needs and Ingress for general web traffic:
# LoadBalancer for database access
apiVersion: v1
kind: Service
metadata:
  name: redis-external
spec:
  type: LoadBalancer
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
---
# Ingress for web applications
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-services
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-frontend
                port:
                  number: 80
Performance Considerations
LoadBalancer services typically offer better performance for high-throughput applications because they operate at the network layer with less processing overhead. The traffic path is more direct:
Internet → Cloud Load Balancer → Kubernetes Service → Pod
Ingress introduces additional processing layers, especially when using features like SSL termination or complex routing rules:
Internet → Cloud Load Balancer → Ingress Controller → Kubernetes Service → Pod
However, modern Ingress controllers are highly optimized and the performance difference is often negligible for most applications.
Security and Compliance
Ingress controllers offer more security features out of the box, including SSL termination, authentication, rate limiting, and source IP filtering. Some controllers can also enforce Web Application Firewall (WAF) rules directly from the Ingress configuration (see the ModSecurity sketch after the next example):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-app
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: '10.0.0.0/8,192.168.0.0/16'
    nginx.ingress.kubernetes.io/limit-rps: '10'
spec:
  rules:
    - host: secure.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: secure-app
                port:
                  number: 80
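If your NGINX Ingress Controller build ships with ModSecurity, WAF rules can be switched on per Ingress with annotations like the following. Treat this as a sketch, since availability depends on how the controller was built and configured:
# Enable ModSecurity and the OWASP core rule set for this Ingress (ingress-nginx annotations)
nginx.ingress.kubernetes.io/enable-modsecurity: 'true'
nginx.ingress.kubernetes.io/enable-owasp-core-rules: 'true'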
LoadBalancer services require additional components or cloud provider features to implement similar security measures.
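For comparison, the closest built-in option for a LoadBalancer service on most cloud providers is restricting source CIDRs with spec.loadBalancerSourceRanges, which the cloud load balancer or its firewall enforces. A minimal sketch, with an illustrative service name:
apiVersion: v1
kind: Service
metadata:
  name: restricted-lb
spec:
  type: LoadBalancer
  # Only these CIDRs may reach the load balancer; enforcement depends on the cloud provider
  loadBalancerSourceRanges:
    - 10.0.0.0/8
    - 192.168.0.0/16
  selector:
    app: secure-app
  ports:
    - port: 80
      targetPort: 8080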
Migration Strategies
If you're currently using multiple LoadBalancer services and want to reduce costs, you can migrate to Ingress gradually. Start by identifying HTTP/HTTPS services that can benefit from Ingress routing:
# First, install an Ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
# Create Ingress resources for your HTTP services
kubectl apply -f your-ingress-config.yaml
# Test the new routing
curl -H "Host: yourapp.example.com" http://ingress-controller-ip/
# Once verified, remove the old LoadBalancer services
kubectl delete service old-loadbalancer-service
Next Steps
You now understand the key differences between Ingress and LoadBalancer services in Kubernetes. Consider exploring specific Ingress controllers like NGINX, Traefik, or cloud-specific options to find the best fit for your use case. Look into service mesh technologies like Istio for even more advanced traffic management capabilities.
Good luck with your Kubernetes networking setup!