Load Balancing and HTTP Routing
Overview
Load balancing and HTTP routing in CloudOps for Kubernetes is accomplished by using AWS network load balancers with the HAProxy Ingress Controller.
To understand how these technologies work together to handle a request, look at how the ActiveMQ and Cortex services inside CloudOps for Kubernetes use them.
ActiveMQ
Because certain integrations require access to ActiveMQ, and because ActiveMQ is a TCP-based service, Elastic Path provides access to it through an external load balancer.
In the preceding diagram, you can see the single ActiveMQ pod running in the Live Namespace. The Deployment of ActiveMQ is paired with a Service and an Ingress similar to the following example:
apiVersion: v1
kind: Service
metadata:
  name: ep-activemq-production-service
  labels:
    app: ep-activemq-production
spec:
  type: ClusterIP
  ports:
    - name: ep-activemq-port-61616
      port: 61616
      protocol: TCP
    - name: ep-activemq-port-8161
      port: 8161
      protocol: TCP
  selector:
    app: ep-activemq-production
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/load-balance: "least_conn"
    ingress.kubernetes.io/secure-backends: "false"
    ingress.kubernetes.io/use-proxy-protocol: "true"
    ingress.kubernetes.io/whitelist-source-range: var.activemq_allowed_cidr
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/waf: "modsecurity"
  name: "activemq-ingress"
spec:
  tls:
The HAProxy ingress controller forwards connections from its network load balancer to the ClusterIP Kubernetes Service, which in turn forwards them to the ActiveMQ Pod. The network load balancer is configured to forward external traffic to the correct port on the correct node in the Kubernetes cluster. The request terminates at the ActiveMQ Pod.
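The ingress controller itself is typically exposed through a Kubernetes Service of type LoadBalancer, which is what provisions the AWS network load balancer. The names and namespace in the following sketch are illustrative assumptions, not the exact manifests used by CloudOps for Kubernetes:
# Illustrative sketch only: names and namespace are assumed, not the CloudOps for Kubernetes manifests.
apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress            # assumed name for the ingress controller Service
  namespace: ingress-controller    # assumed namespace
  annotations:
    # Ask AWS for a network load balancer instead of the default classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: haproxy-ingress           # assumed label on the ingress controller Pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443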
The Ingress created for accessing ActiveMQ is only accessible from addresses within an IP address CIDR (Classless Inter-Domain Routing) range. This range is provided as a parameter to the create-or-delete-activemq-container Jenkins job. Any service that requires access to ActiveMQ must connect from an address within this CIDR range.
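For example, if the Jenkins job were given a range such as 203.0.113.0/24 (a documentation CIDR used here purely for illustration), the rendered annotation on the Ingress would look like the following:
metadata:
  annotations:
    # Only clients whose source address falls inside this CIDR range may connect.
    ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"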
Pods inside the Kubernetes cluster, such as Cortex, can connect to the ClusterIP Service by name. The Service forwards the connection to a random ActiveMQ Pod behind it. Connections are only forwarded to Pods whose readiness probes are passing.
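For example, a Pod in the same namespace can reach the broker through the Service name alone, while a Pod in another namespace would use the namespace-qualified DNS name. The environment variable name below is an illustrative assumption, not a setting defined by CloudOps for Kubernetes:
# Illustrative sketch: the environment variable name is assumed.
env:
  - name: ACTIVEMQ_BROKER_URL
    # Same-namespace Pods can use the short Service name; cross-namespace Pods
    # would use ep-activemq-production-service.live.svc.cluster.local instead.
    value: "tcp://ep-activemq-production-service:61616"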
For more information about Kubernetes Services, see Publishing Services (ServiceTypes) in the Kubernetes documentation.
Cortex
The Cortex API is the entry point into the Self Managed Commerce solution. Because it is an HTTP API, it requires HTTP routing.
In the preceding diagram, there are two Ingress Controller Pods running. A load balancer forwards external connections to ports 80 and 443 on the Ingress Controller Pods.
The Ingress Controller Pods provide HTTP routing for Services running inside the Kubernetes cluster. When an Ingress Controller Pod receives a connection, it checks the received Host header against the Host rules of all Ingresses in all namespaces of the Kubernetes cluster. If a matching Ingress is found, the Ingress Controller Pod proxies the HTTP connection to the backend Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: production-ingress
  namespace: live
  annotations:
    ingress.kubernetes.io/load-balance: "least_conn"
    ingress.kubernetes.io/secure-backends: "false"
    ingress.kubernetes.io/use-proxy-protocol: "true"
    ingress.kubernetes.io/whitelist-source-range: var.cortex_allowed_cidrs
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/waf: "modsecurity"
spec:
  tls:
    - hosts:
        - live.production.ep.mycompany.com
  rules:
    - host: live.production.ep.mycompany.com
      http:
        paths:
          - path: /cortex
            backend:
              serviceName: ep-cortex-service
              servicePort: 8080
The preceding YAML is an example Ingress for Cortex. For example, for a connection to https://live.production.ep.mycompany.com/cortex, the Ingress Controller would perform the following steps:
- Find the Host header of the request and check for a matching spec.tls.hosts value. If there is a matching value, the Ingress Controller terminates the TLS connection and proxies an unencrypted connection to a backend. In the example, there is a match on live.production.ep.mycompany.com.
- Compare the Host header against the spec.rules.host field. If no match is found, the request is proxied to the default backend. In the example, there is a match on live.production.ep.mycompany.com.
- Check the request path against the spec.rules.http.paths values. If a match is found, the request is proxied to the listed Kubernetes Service. If no match is found, it is proxied to the default backend. In this example, there is a match on /cortex, so the HTTP connection is proxied to the ep-cortex-service Service.
The connection to the ClusterIP Service for Cortex results in a random Cortex Pod being selected to receive it. Only Cortex Pods with a passing readiness probe are selectable, so only Pods that have the capacity to handle the additional connection receive it.
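The readiness check that gates this selection is a standard Kubernetes readiness probe on the Cortex container. The path, port, and timings in the following sketch are illustrative assumptions rather than the values configured by CloudOps for Kubernetes:
# Illustrative sketch: path, port, and timings are assumed values.
readinessProbe:
  httpGet:
    path: /cortex/healthcheck   # assumed health endpoint
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 3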