Preparing to Set Up CloudOps for Kubernetes
The following section describes the preparation you must complete before you are ready to initialize the infrastructure in your AWS account. You will need to prepare some tools and artifacts, and you will need to gather some values that you will later enter into the docker-compose.override.yml file. We suggest making notes of the required values as you go along.
Prerequisite
Ensure that you have set up an operations workstation where the setup process will be run. As part of the preparation you will need to update files on that workstation. For more details about creating an operations workstation, see Operations Workstation.
Obtain Access to Elastic Path Artifacts
Ensure that you have access to clone the Elastic Path Git repositories containing the Elastic Path code. Those are read-only repositories provided by Elastic Path and are hosted at Elastic Path Source Code Repositories.
Also ensure that you have credentials that are able to log in to the Self Managed Commerce public Nexus artifact repository.
If you do not have access, contact your project lead or contact Elastic Path at access@elasticpath.com.
Confirm Compatibility
Select Self Managed Commerce and CloudOps for Kubernetes versions that are compatible by reviewing the Compatibility of CloudOps for Kubernetes documentation.
Host Copies of the Elastic Path Code
You must clone the Git repositories for Self Managed Commerce and CloudOps for Kubernetes and host them in a Git repository hosting service of your choice. Hosting these copies is necessary and allows you to extend or customize the code to meet your unique requirements. The hosting service must be accessible from systems inside your AWS account and not be blocked by a firewall.
CloudOps for Kubernetes Git Repositories
The following two Git repositories comprise CloudOps for Kubernetes. You must clone them and then host them in your Git repository hosting service.
Make note of the ssh URLs used to clone the repositories and the branches you will use.
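The clone-and-host step can be sketched as a mirror clone followed by a mirror push. The snippet below uses throwaway local bare repositories as stand-ins for the Elastic Path repository and your hosting service so that it runs anywhere; in practice, substitute the real ssh URLs you noted above.

```shell
set -e
work="$(mktemp -d)"

# Stand-ins: "upstream.git" plays the read-only Elastic Path repository,
# "hosted.git" plays your own Git hosting service.
git init -q --bare "$work/upstream.git"
git init -q --bare "$work/hosted.git"
git clone -q "$work/upstream.git" "$work/seed"
git -C "$work/seed" -c user.email=ops@example.com -c user.name=ops \
    commit -q --allow-empty -m "seed commit"
git -C "$work/seed" push -q origin HEAD

# The actual workflow: mirror-clone the source repository, then push the
# mirror (all branches and tags) to your hosting service.
git clone -q --mirror "$work/upstream.git" "$work/cloudops-for-kubernetes.git"
git -C "$work/cloudops-for-kubernetes.git" push -q --mirror "$work/hosted.git"
```

A plain `git clone` followed by `git push --all --tags` achieves a similar result if your hosting service pre-initializes repositories with a default branch that you do not want `--mirror` to overwrite.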
Self Managed Commerce Git Repository
The following Git repository provides the Self Managed Commerce base product. Your team will also use this repository for ongoing Self Managed Commerce development. You or your Self Managed Commerce development team must clone it and then host it in your Git repository hosting service.
note
Identify and make note of the Self Managed Commerce version that you will use with CloudOps for Kubernetes.
Provisioning a Git SSH Key
The provided Jenkins jobs will check out CloudOps and Self Managed Commerce source code from your Git repositories. To do so, they will require an SSH key that has read access to those repositories. Identify or create a read-only Git SSH key that the Jenkins jobs will use as part of the CloudOps for Kubernetes build system.
Ensure that the Git key is long-lived and active as long as you use CloudOps for Kubernetes. When bootstrapping CloudOps for Kubernetes, the Git SSH key is added to Jenkins and will be used by the Jenkins instance after you complete initialization. Selecting the SSH key is an important consideration for the longer-term stability and functionality of CloudOps for Kubernetes.
We suggest using a key that is not associated with a particular user; instead, use a deploy or deployment key.
note
Do not use a passphrase or passcode to protect the SSH Key. CloudOps for Kubernetes does not support password-protected SSH keys.
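For example, a passphrase-less key can be generated with ssh-keygen. The directory and file name below are examples only; any location works, as long as you note the path for later use:

```shell
# -N "" sets an empty passphrase, as required by CloudOps for Kubernetes.
gitkeydir="$(mktemp -d)"   # in practice, a stable directory such as ~/cloudops-keys
ssh-keygen -t rsa -b 4096 -N "" -C "cloudops-deploy-key" \
    -f "$gitkeydir/kubernetes_git_id_rsa"
chmod 600 "$gitkeydir/kubernetes_git_id_rsa"
```

Register the contents of the generated kubernetes_git_id_rsa.pub file as a read-only deploy key in your Git hosting service.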
Copy the Git SSH Private Key
Place a copy of the Git SSH private key file on the operations machine. Make note of the full path to the file. Later you will update the volumes section of the docker-compose.override.yml file by replacing /path/to/kubernetes_git_id_rsa with the actual path to your Git SSH private key.
Clone the CloudOps for Kubernetes Code
Using Git, clone a local copy of the cloudops-for-kubernetes repository to a directory on the operations machine.
Basic Cloud Prerequisites
The following are common cloud-related requirements for all CloudOps for Kubernetes accounts.
Create a New AWS Account
To ensure that CloudOps for Kubernetes does not modify existing infrastructure, Elastic Path strongly recommends creating a new AWS account or sub-account. Make note of the Account ID.
For more information, see the AWS documentation on creating an AWS account in your organization.
Create an IAM User and Access Key
The CloudOps for Kubernetes bootstrap process requires an Identity and Access Management (IAM) user and Access Key. The Access Key is used during the bootstrap process to create AWS cloud resources, such as:
- Elastic Kubernetes Service (EKS) cluster in your account
- Elastic Container Registry (ECR) repositories
- Route53 zone for DNS
Create an Identity and Access Management (IAM) user in your AWS account, then create an AccessKey and SecretKey for the IAM user. For more information, see the IAM User and Permissions documentation.
Select an AWS Region
Select the AWS region that you will use for the CloudOps infrastructure. Only regions that support the required services can be used. For a list of all of the AWS services, see Requirements for Running CloudOps for Kubernetes.
note
Elastic Path does not recommend the us-east-1 (North Virginia) US Standard Region. This region operates in slightly different ways, causing subtle issues not experienced in other regions. If you require data locality on the US east coast, use us-east-2 (Ohio).
Increase AWS vCPU Limit
New AWS accounts have default limits that cap the amount of resources users can consume. One limit that Elastic Path customers can hit is the vCPU limit: AWS may prevent the infrastructure deployment process from provisioning the desired number of EC2 instances. If you are using a new AWS account, your limits may be set low. As a precaution, and to avoid future problems, you can request that your vCPU limit be increased.
If you are uncertain how many vCPU cores you will need, a good initial vCPU limit is 96 vCPU cores. If you are using the c5.2xlarge instance type, each instance has eight (8) vCPUs. The minimum cluster size might be one (1) node, but depending on how many Self Managed Commerce environments you deploy, the cluster may scale up past 10 nodes.
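As a quick sanity check, the maximum node count a quota allows is simply the quota divided by the vCPUs per instance:

```shell
vcpu_limit=96        # requested vCPU quota
vcpus_per_node=8     # c5.2xlarge instances have 8 vCPUs each
max_nodes=$((vcpu_limit / vcpus_per_node))
echo "$max_nodes"    # → 12
```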
When you make your request you will need to specify the AWS region. For more information about quotas and how to request quota increases, see AWS Service Quotas.
Generating an SSH Key Pair for the Kubernetes Cluster Nodes
The Amazon Elastic Kubernetes Service (EKS) allows for compute nodes to be accessed using an SSH key pair. The public key needs to be provided when the Kubernetes cluster is created by the bootstrap process.
To enable your Kubernetes nodes to be accessed using SSH:
1. Create an SSH key pair. For more information about how to create an SSH key pair, see:
- Linux or macOS: https://www.ssh.com/ssh/keygen/
- Windows: https://www.ssh.com/ssh/putty/windows/puttygen
2. Use the newly created public key to configure the CloudOps for Kubernetes bootstrap process.
You will later specify this value in the aws_eks_ssh_key setting of the docker-compose.override.yml file used by the bootstrap process.
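On Linux or macOS, a suitable key pair can be generated with ssh-keygen; the path below is only an example. The contents of the public key file are what you will later paste into the aws_eks_ssh_key setting:

```shell
ekskeydir="$(mktemp -d)"   # in practice, a stable directory you keep backed up
ssh-keygen -t rsa -b 4096 -N "" -C "eks-node-access" \
    -f "$ekskeydir/eks_node_key"
cat "$ekskeydir/eks_node_key.pub"   # value for aws_eks_ssh_key
```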
Domain Name and DNS Considerations
Decide on a DNS sub-domain to use for the CloudOps resources that you will deploy in this account. An example of a subdomain is epc-non-prod.mycompany.com.
Be prepared to create a DNS NS record in the parent DNS domain, which in the above example is mycompany.com. You need access to modify the parent domain. Configure the NS record after the CloudOps setup steps have completed. The NS record in the parent domain delegates DNS for the sub-domain to the Amazon Route53 hosted zone that will be created during setup.
Basic Security and Access Control Prerequisites
CloudOps for Kubernetes deploys Jenkins and Nexus servers, which are required for Self Managed Commerce builds, continuous integration, and deployment. Access to these services is controlled by allow-lists. Each service has its own allow-list for additional flexibility.
Jenkins Access
Identify one or more CIDR values to control who will be allowed to access the Jenkins service that will be deployed by CloudOps for Kubernetes.
If it will just be you to begin with, then:
- Determine your public IP address. For example, you may simply search Google for "My IP". That search will return your public IP address.
- Append /32 to your public IP address. For example, if your public IP address is 1.2.3.4, then the correct CIDR value is 1.2.3.4/32.
You will later specify this value in the TF_VAR_jenkins_allowed_cidr setting of the docker-compose.override.yml file used by the bootstrap process.
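The two steps above amount to appending /32 to the address; a placeholder IP is used here:

```shell
my_ip="1.2.3.4"        # substitute your actual public IP address
cidr="${my_ip}/32"
echo "$cidr"           # → 1.2.3.4/32
```

A /32 suffix admits exactly one host; a wider range such as an office network's CIDR admits more addresses. The same format applies to the Nexus allow-list in the next section.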
Nexus Access
Identify one or more CIDR values to control who will be allowed to access the Nexus service that can be deployed by CloudOps for Kubernetes.
If it will just be you to begin with, then:
- Determine your public IP address. For example, you may simply search Google for "My IP". That search will return your public IP address.
- Append /32 to your public IP address. For example, if your public IP address is 1.2.3.4, then the correct CIDR value is 1.2.3.4/32.
You will later specify this value in the TF_VAR_nexus_allowed_cidr setting of the docker-compose.override.yml file used by the bootstrap process.
Optional Configuration Choices
The following optional configurations are available for you to consider.
(Optional) Deploy with ModSecurity WAF
CloudOps for Kubernetes bundles a web application firewall (WAF) that you may include. This can be configured during bootstrap but can also be enabled later via a Jenkins job. For more information, see the Manage the Web Application Firewall section.
(Optional) Provide a TLS Certificate and Key File
By default, a Let’s Encrypt TLS certificate is created and installed. The Let’s Encrypt certificate is managed by a local Kubernetes service called certificate-manager and is auto-renewed.
Elastic Path recommends using the default Let’s Encrypt TLS certificate. However, you may specify your own pre-provisioned TLS certificate instead and have the bootstrap process add it to the Kubernetes cluster and assign it to the Ingress Controllers.
warning
If you use your own certificate, you are responsible for renewing and re-installing it. Using your own certificate is optional.
If you choose to use a pre-provisioned certificate, you must ensure the following:
- The certificate must be a wildcard certificate.
- The certificate must be in PEM format. For more information, see the OpenSSL Cookbook.
- You must copy the certificate and the private key to the operations workstation.
The HTTPS certificate must be for a wildcard DNS domain, not for a single DNS domain name. The Common Name field is used to refer to DNS domain names. In the wildcard certificate, the Common Name field combines the TF_VAR_kubernetes_cluster_name and TF_VAR_domain values in the docker-compose.yml file used when bootstrapping the cluster. The wildcard common name must follow the pattern *.${TF_VAR_kubernetes_cluster_name}.${TF_VAR_domain}. For example, if you choose the name hub for the Kubernetes cluster and use the domain epc-non-prod.mycompany.com, the common name for the certificate should be *.hub.epc-non-prod.mycompany.com
The wildcard certificate must be valid for the following domain name patterns:
*.${TF_VAR_domain}
*.${TF_VAR_kubernetes_cluster_name}.${TF_VAR_domain}
*.central${TF_VAR_kubernetes_cluster_name}.${TF_VAR_domain}
*.commerce${TF_VAR_kubernetes_cluster_name}.${TF_VAR_domain}
For general information about TLS certificates see the OpenSSL documentation.
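You can confirm the Common Name and domain coverage of a PEM certificate with openssl. The snippet first generates a throwaway self-signed certificate purely so the example is self-contained (the -addext option requires OpenSSL 1.1.1 or later); with a real pre-provisioned certificate, you would run only the final inspection command against your own cert.pem. The names follow the hub / epc-non-prod.mycompany.com example above.

```shell
set -e
certdir="$(mktemp -d)"
# Throwaway self-signed certificate; a real certificate comes from your CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$certdir/key.pem" -out "$certdir/cert.pem" \
  -subj "/CN=*.hub.epc-non-prod.mycompany.com" \
  -addext "subjectAltName=DNS:*.epc-non-prod.mycompany.com,DNS:*.hub.epc-non-prod.mycompany.com"

# Inspect the Common Name and the subjectAltName entries; check them
# against the required wildcard patterns listed above.
openssl x509 -in "$certdir/cert.pem" -noout -subject -ext subjectAltName
```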
To have the bootstrap process use the certificate, look for the certificate-related settings within the volumes section at the end of the docker-compose.override.yml file used by the bootstrap process.