Load balancing and HTTP routing in CloudOps for Azure are accomplished using Azure load balancers together with the HAProxy Ingress Controller. To understand how these technologies work together to handle a request, look at how the ActiveMQ and Cortex services inside CloudOps for Azure use them.
Because ActiveMQ is a TCP-based service and certain integrations require access to it, Elastic Path exposes ActiveMQ through an external load balancer.
In the preceding diagram, you can see the single ActiveMQ Pod running in the Live Namespace. The ActiveMQ Deployment is paired with a Service that looks like the following:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ep-activemq-production-service
  labels:
    app: ep-activemq-production
spec:
  type: LoadBalancer
  ports:
    - name: ep-activemq-port-61616
      port: 61616
      protocol: TCP
    - name: ep-activemq-port-8161
      port: 8161
      protocol: TCP
  selector:
    app: ep-activemq-production
  loadBalancerSourceRanges:
    - 188.8.131.52/32
```
Because the ActiveMQ Kubernetes Service is set to `LoadBalancer`, `NodePort` and `ClusterIP` Services for ActiveMQ are automatically created. An Azure external load balancer is also created and configured to forward traffic to the `NodePort` Service. The `NodePort` Service binds the ports of the ActiveMQ Pod to the IP of each virtual machine that it runs on. As a result, the Azure external load balancer knows which virtual machines to send connections to, and those connections are forwarded from the virtual machine into the Pod.
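To illustrate how these layers fit together, the following sketch shows what the Service might look like after Kubernetes and Azure populate it. This is a hypothetical view: the `clusterIP`, `nodePort`, and external IP values are placeholders, not values from a real deployment.

```yaml
# Hypothetical view of the Service after creation; all IPs and the nodePort are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: ep-activemq-production-service
spec:
  type: LoadBalancer
  clusterIP: 10.0.113.25        # in-cluster virtual IP used by other Pods
  ports:
    - name: ep-activemq-port-61616
      port: 61616
      nodePort: 31616           # port opened on every virtual machine in the cluster
      protocol: TCP
status:
  loadBalancer:
    ingress:
      - ip: 52.0.0.10           # public IP of the Azure external load balancer
```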
This external load balancer is only accessible to addresses within an IP address CIDR (Classless Inter-Domain Routing) range provided as a parameter to the `deploy-or-delete-ep-stack` Jenkins job, which is set as the `loadBalancerSourceRanges` value in the preceding YAML example. Any service requiring access to ActiveMQ must fall within this CIDR range.
Pods inside the AKS cluster, such as Cortex, can connect to the `ClusterIP` Service by name. This forwards the connection to a random Pod behind the Service whose readiness probe indicates that it is able to handle an additional connection.
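For example, a Cortex Pod could reach ActiveMQ through the Service's cluster DNS name. The following container spec fragment is a minimal sketch; the environment variable name and image are illustrative assumptions, not the actual Cortex configuration.

```yaml
# Hypothetical container spec fragment; the variable name and image are illustrative.
containers:
  - name: ep-cortex
    image: example/ep-cortex:latest
    env:
      - name: ACTIVEMQ_BROKER_URL
        # <service-name>.<namespace> resolves through cluster DNS to the ClusterIP
        value: "tcp://ep-activemq-production-service.live:61616"
```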
The Cortex API is the entry point into the Elastic Path Commerce solution. Because it is an HTTP API, it requires HTTP routing.
In the preceding diagram, there are two Ingress Controller Pods running in the Ingress Controller Namespace. Similar to ActiveMQ, the Service for the Ingress Controllers is of the `LoadBalancer` type. This results in a load balancer for ports 80 and 443 being created for the Ingress Controller Pods.
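A minimal sketch of what that Service could look like is shown below; the name and selector label are illustrative assumptions, and the exact values created by the HAProxy Ingress Controller Helm chart may differ.

```yaml
# Hypothetical Service for the Ingress Controller Pods; name and labels are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
  namespace: haproxy-ingress-global
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
    - name: https
      port: 443
      protocol: TCP
  selector:
    app: haproxy-ingress
```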
The Ingress Controllers provide HTTP routing for Services running inside the AKS cluster. Whenever an Ingress Controller receives a connection, it checks the Host header of the request for a matching host rule in all of the Ingresses across all the namespaces of the Kubernetes cluster. If a matching Ingress is found, the Ingress Controller proxies the connection to the backend Service.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: production-ingress
  namespace: live
  annotations:
    ingress.kubernetes.io/whitelist-source-range: 184.108.40.206/32
    ingress.kubernetes.io/secure-backends: "false"
    ingress.kubernetes.io/load-balance: "least_conn"
    ingress.kubernetes.io/use-proxy-protocol: "true"
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
    - hosts:
        - live.productionaks.ep.mycompany.com
  rules:
    - host: live.productionaks.ep.mycompany.com
      http:
        paths:
          - path: /studio
            backend:
              serviceName: ep-cortex-service
              servicePort: 8080
          - path: /cortex
            backend:
              serviceName: ep-cortex-service
              servicePort: 8080
```
In the preceding YAML, there is an example Ingress for Cortex. If you had a connection on `https://live.productionaks.ep.mycompany.com/cortex`, the Ingress Controller would do the following:
- Find the Host header of the request and check for a matching `spec.tls.hosts` value. If one is found, it would terminate the TLS connection and proxy an unencrypted connection to a backend. In this example, there is a match on `live.productionaks.ep.mycompany.com`.
- Compare the Host header against the `spec.rules.host` field. If no match is found, the request is proxied to the default backend. In this example, there is a match on `live.productionaks.ep.mycompany.com`.
- Check the request path against the `spec.rules.http.paths` values. If a match is found, the request is proxied to the listed Service. If no match is found, it is proxied to the default backend. In this example, there is a match on `/cortex`.
This connection on the `ClusterIP` Service for Cortex results in a random Cortex Pod being selected to receive it. Because only Cortex Pods with a passing readiness probe are selectable, only Pods with the capacity to handle an additional connection receive one.
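As an illustration, a readiness probe on the Cortex container might look like the following sketch; the endpoint path and timing values are assumptions, not the actual Cortex configuration.

```yaml
# Hypothetical readiness probe fragment; the path and timings are illustrative.
readinessProbe:
  httpGet:
    path: /cortex/healthcheck   # assumed health endpoint on the Cortex port
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 3
```

While the probe fails, the Pod is removed from the Service's endpoints, so no new connections are routed to it.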
In cases where the connection is proxied to the default backend, this refers to a particular Service and Deployment, both named `haproxy-ingress-global-default-backend`, in the `haproxy-ingress-global` Namespace. This Deployment and Service are created by the HAProxy Ingress Controller Helm chart when the Ingress Controller is installed during the bootstrap process.