Load balancing and HTTP routing in CloudOps for Kubernetes are accomplished using Azure load balancers, or AWS network and application load balancers, together with the Ambassador API Gateway.
To understand how these technologies work together and handle a request, look at how the ActiveMQ and Cortex services inside of CloudOps for Kubernetes use them.
To support certain integrations that require access to ActiveMQ, and because ActiveMQ is a TCP-based service, Elastic Path provides access to it using an external load balancer.
In the preceding diagram, you can see the single ActiveMQ pod running in the Live Namespace. The Deployment of ActiveMQ is paired with a Service that looks like:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v2
      kind: Mapping
      name: activemq_mapping
      prefix: /
      service: ep-activemq-epactivemq-service.default:8161
      host: activemq.commercelive.productionaks.ep.mycompany.com
  name: ep-activemq-production-service
  labels:
    app: ep-activemq-production
spec:
  type: LoadBalancer
  ports:
    - name: ep-activemq-port-61616
      port: 61616
      protocol: TCP
  externalTrafficPolicy: Cluster
  selector:
    app: ep-activemq-production
```
Because the ActiveMQ Kubernetes Service type is set to LoadBalancer, NodePort and ClusterIP Kubernetes Services for ActiveMQ are automatically created. The NodePort Service configures networking on each node in the Kubernetes cluster. Each node forwards connections from its network interface to the ClusterIP Kubernetes Service. Connections to the ClusterIP are forwarded to the ActiveMQ Pod.
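This layering is visible on the Service object itself once the cloud provider has provisioned the load balancer. The following is a minimal sketch of the fields Kubernetes populates; the clusterIP, nodePort, and load balancer address values shown here are placeholders, not values from a real cluster.

```yaml
# Abbreviated view of the ActiveMQ Service after provisioning (placeholder values).
apiVersion: v1
kind: Service
metadata:
  name: ep-activemq-production-service
spec:
  type: LoadBalancer
  clusterIP: 10.0.83.120            # virtual IP used by Pods inside the cluster
  ports:
    - name: ep-activemq-port-61616
      port: 61616                   # port exposed on the ClusterIP and the load balancer
      nodePort: 31616               # port opened on every node, allocated automatically
      targetPort: 61616             # port on the ActiveMQ Pod
      protocol: TCP
status:
  loadBalancer:
    ingress:
      - hostname: a1b2c3d4.elb.us-east-1.amazonaws.com   # external load balancer address
```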
A load balancer is created and configured to forward external traffic to the correct port on the correct node in the Kubernetes cluster. The request ultimately terminates at the ActiveMQ Pod.
This external load balancer is only accessible from addresses within an IP address CIDR (Classless Inter-Domain Routing) range. This range is provided as a parameter to the create-or-delete-activemq-container Jenkins job. Any service requiring access to ActiveMQ must be within this CIDR range.
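In Kubernetes, this kind of source-address restriction is commonly expressed through the Service's loadBalancerSourceRanges field, which the cloud provider translates into firewall or security group rules. The snippet below is only a sketch of that mechanism; the CIDR value is a placeholder, and the Jenkins job may apply the restriction through a different mechanism, such as provider-specific annotations.

```yaml
# Sketch: restricting load balancer access to a CIDR range (placeholder value).
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # only clients in this range can reach the external load balancer
```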
Pods inside the Kubernetes cluster, such as the Cortex Pods, can connect to the ClusterIP Service by name. The Service forwards the connection to a random ActiveMQ Pod behind it. Connections are only forwarded to Pods whose readiness probe is passing.
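For example, an in-cluster client can reach ActiveMQ through the Service's DNS name rather than a Pod IP. The broker URL below is a hypothetical illustration, assuming the Service runs in the live namespace and that the client reads the URL from an environment variable; the variable name and URL used by the actual Cortex deployment may differ.

```yaml
# Hypothetical container configuration connecting to ActiveMQ by Service name.
env:
  - name: ACTIVEMQ_BROKER_URL
    value: "tcp://ep-activemq-production-service.live:61616"   # <service>.<namespace>:<port>
```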
For more information about Kubernetes Services, see Publishing Services (ServiceTypes) in the Kubernetes documentation.
The Cortex API is the entry point into the Elastic Path Commerce solution. Because it is an HTTP API, it requires HTTP routing.
In the preceding diagram, there are two Ingress Controller Pods running. A load balancer forwards external connections to ports 80 and 443 on the Ingress Controller Pods.
The Ingress Controller Pods provide HTTP routing for Services running inside the Kubernetes cluster. When an Ingress Controller Pod receives a connection, it checks the received Host header for a matching Host rule across all of the Ingresses in all namespaces of the Kubernetes cluster. If a matching Ingress is found, the Ingress Controller Pod proxies the HTTP connection to the backend Service.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: production-ingress
  namespace: live
  annotations:
    kubernetes.io/ingress.class: "ambassador"
spec:
  tls:
    - hosts:
        - live.productionaks.ep.mycompany.com
  rules:
    - host: live.productionaks.ep.mycompany.com
      http:
        paths:
          - path: /cortex
            backend:
              serviceName: ep-cortex-service
              servicePort: 8080
```
In the preceding YAML, there is an example Ingress for Cortex. For a connection to https://live.productionaks.ep.mycompany.com/cortex, the Ingress Controller would do the following:
- Find the Host header of the request and check for a matching spec.tls.hosts value. If one is found, it would terminate the TLS connection and proxy an unencrypted connection to a backend. In this example, there is a match on live.productionaks.ep.mycompany.com.
- Compare the Host header against the spec.rules.host field. If no match is found, the request is proxied to the default backend. In this example, there is a match on live.productionaks.ep.mycompany.com.
- Check the request path against the spec.rules.http.paths values. If a match is found, the request is proxied to the listed Kubernetes Service. If no match is found, it is proxied to the default backend. In this example, there is a match on /cortex, so the HTTP connection is proxied to the ep-cortex-service Service, sketched after this list.
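The ep-cortex-service named in the Ingress backend is a regular ClusterIP Service. The manifest below is a hypothetical sketch of what such a Service could look like, assuming the Cortex Pods are labelled app: ep-cortex and listen on port 8080; the labels and ports in a real deployment may differ.

```yaml
# Hypothetical backend Service for Cortex referenced by the Ingress.
apiVersion: v1
kind: Service
metadata:
  name: ep-cortex-service
  namespace: live
spec:
  type: ClusterIP
  ports:
    - port: 8080          # port named in the Ingress backend
      targetPort: 8080    # container port on the Cortex Pods
      protocol: TCP
  selector:
    app: ep-cortex        # assumed label on the Cortex Pods
```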
A connection to the ClusterIP Service for Cortex results in a random Cortex Pod being selected to receive it. Only Cortex Pods with a passing readiness probe are selectable, so only Pods with the capacity to handle the additional connection will receive it.
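Readiness is determined by the readiness probe declared on the Cortex containers. The fragment below is a generic sketch of an HTTP readiness probe; the probe path, port, and timings used by the actual Cortex Deployment are assumptions here.

```yaml
# Hypothetical readiness probe on the Cortex container spec.
readinessProbe:
  httpGet:
    path: /cortex/healthcheck   # assumed health endpoint
    port: 8080
  initialDelaySeconds: 60       # allow Cortex time to start
  periodSeconds: 10             # re-check readiness every 10 seconds
```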