Post Bootstrap Tasks
Update DNS Name Servers
The bootstrap process creates a DNS zone within the cloud being used. To get DNS lookups working against the bootstrapped environment, you must create an NS (name server) record with the DNS provider that controls the parent domain of the domain name chosen when bootstrapping CloudOps for Kubernetes.
For example, suppose you control the domain mycompany.com and the TF_VAR_domain value in the docker-compose.yml file used during the bootstrap process is set to azure-dev.mycompany.com. In that case, create an NS record in the DNS servers for mycompany.com that points to the Azure DNS servers listed in the NS record named @ in the Azure DNS zone that serves records for azure-dev.mycompany.com.
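How the NS record is created depends on the DNS provider that hosts the parent domain. As a minimal sketch, assuming the parent zone mycompany.com also happens to be hosted in Azure DNS (the resource group name parent-zone-rg and the name server value are hypothetical), the delegation could be added with the Azure CLI, repeating the command once for each name server of the child zone:
az network dns record-set ns add-record \
  --resource-group parent-zone-rg \
  --zone-name mycompany.com \
  --record-set-name azure-dev \
  --nsdname ns1-02.azure-dns.com.
Other DNS providers offer equivalent ways to add NS records through their own consoles or APIs.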
warning
AWS Only: If you are using CloudOps for Kubernetes to acquire a publicly signed SSL certificate, you must re-run docker-compose after updating the DNS name servers and before deploying any other Elastic Path infrastructure. If you do not perform this step, your bootstrap workspace might enter an inconsistent state, preventing you from re-running docker-compose later to acquire updates.
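The exact invocation depends on how you ran the bootstrap originally; as a sketch, re-running with the same configuration might look like:
# from the directory containing the docker-compose.yml used for bootstrapping,
# using the same environment values as the original run
docker-compose up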
Azure Specific Steps
Finding the Name Server Values Using Azure CLI
- To find the name server values of your DNS zone using the Azure CLI, run the following command, adjusting the parameter values as required:
az network dns zone show \
  --resource-group value_from_TF_VAR_azure_resource_group_name \
  --name value_from_TF_VAR_domain
If value_from_TF_VAR_azure_resource_group_name was substituted with azuredevrg and value_from_TF_VAR_domain was substituted with azure-dev.mycompany.com, the command would produce a response such as:
{
  "etag": "00000000-0000-0000-1538-8eL0kmVU0C2v",
  "id": "/subscriptions/1oSE9fN0-3X0g-sY54-3Fh4-8eL0kmVU0C2v/resourceGroups/azuredevrg/providers/Microsoft.Network/dnszones/azure-dev.mycompany.com",
  "location": "global",
  "maxNumberOfRecordSets": 5000,
  "name": "azure-dev.mycompany.com",
  "nameServers": [
    "ns1-02.azure-dns.com.",
    "ns2-02.azure-dns.net.",
    "ns3-02.azure-dns.org.",
    "ns4-02.azure-dns.info."
  ],
  "numberOfRecordSets": 3,
  "registrationVirtualNetworks": null,
  "resolutionVirtualNetworks": null,
  "resourceGroup": "azuredevrg",
  "tags": {},
  "type": "Microsoft.Network/dnszones",
  "zoneType": "Public"
}
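To pull out only the name server values (for example, to paste into the parent domain's NS record), the same command accepts a JMESPath query. Using the sample values above:
az network dns zone show \
  --resource-group azuredevrg \
  --name azure-dev.mycompany.com \
  --query nameServers \
  --output tsv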
Find Name Server Values Using Azure Portal
To find the name server values through the Azure Portal, check the DNS zone resource created by the bootstrap process; the name servers are listed on the zone's overview page.
AWS Specific Steps
Finding the Name Server Values Using AWS CLI
- To find the name server values of your DNS zone using the AWS CLI, run the following command, adjusting the parameter values as required:
aws route53 get-hosted-zone --id output_aws_route53_zone_id_from_bootstrap_container
The output_aws_route53_zone_id_from_bootstrap_container value should be substituted with the value of the aws_route53_zone_id key output by the bootstrap container in either setup or show mode. Alternatively, use the aws route53 list-hosted-zones command to find the zone ID and use the string after the last / in the value of Id.
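As a sketch of the alternative approach, a JMESPath query can look up the zone by name (the domain below is the sample value used in the output that follows; note the required trailing dot):
aws route53 list-hosted-zones \
  --query "HostedZones[?Name=='aws-dev.mycompany.com.'].Id" \
  --output text
This prints a value such as /hostedzone/Z8L2YASH0EP3DV; the zone ID is the string after the last /.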
The aws route53 get-hosted-zone command should output a response such as:
{
  "HostedZone": {
    "Id": "/hostedzone/Z8L2YASH0EP3DV",
    "Name": "aws-dev.mycompany.com.",
    "CallerReference": "terraform-201905164829483200000001",
    "Config": {
      "Comment": "Managed by Terraform",
      "PrivateZone": false
    },
    "ResourceRecordSetCount": 3
  },
  "DelegationSet": {
    "NameServers": [
      "ns-1300.awsdns-34.org",
      "ns-513.awsdns-00.net",
      "ns-448.awsdns-56.com",
      "ns-1706.awsdns-21.co.uk"
    ]
  }
}
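To list only the name servers (the values needed for the NS record at the parent domain), a query such as the following can be used with the same zone ID:
aws route53 get-hosted-zone \
  --id Z8L2YASH0EP3DV \
  --query "DelegationSet.NameServers" \
  --output text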
Find Name Server Values Using AWS Console
To find the name server values through the AWS Console, check the Route 53 hosted zone created by the bootstrap process; the name servers are listed in the NS record of the zone.
Log On To the Kubernetes Cluster
One useful tool for managing resources inside the Kubernetes cluster is kubectl. It allows users to:
- View and edit the configuration of resources deployed into Kubernetes
- Show lifecycle events for Kubernetes resources
- View container logs
- Run commands inside running containers
- Create secure tunnels from a developer workstation to ports on running containers
To use kubectl, you must first log on to the Kubernetes cluster.
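Once you are logged on (see the cloud-specific steps below), typical commands corresponding to the capabilities above look like the following; the namespace and pod name are hypothetical:
# view resources and their configuration
kubectl get pods --namespace default
kubectl describe pod my-pod --namespace default
# show lifecycle events and container logs
kubectl get events --namespace default
kubectl logs my-pod --namespace default
# run a command inside a running container
kubectl exec -it my-pod --namespace default -- sh
# tunnel a local port to a port on a running container
kubectl port-forward pod/my-pod 8080:8080 --namespace default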
Logging On To AKS Cluster
To log on to the AKS (Azure Kubernetes Service) cluster:
Replace value_from_TF_VAR_azure_resource_group_name and TF_VAR_kubernetes_cluster_name with the values from the docker-compose.yml file used by the bootstrap container, then run:
az aks get-credentials --resource-group value_from_TF_VAR_azure_resource_group_name --name TF_VAR_kubernetes_cluster_name
note
This becomes the default cluster for all kubectl commands unless otherwise defined.
To validate the cluster logon, run:
kubectl config get-contexts
It will produce output similar to the following example:
CURRENT   NAME       CLUSTER    AUTHINFO                          NAMESPACE
*         azuredev   azuredev   clusterUser_azuredevrg_azuredev
Define which AKS cluster to use in the kubectl CLI, either:
- Run individual commands with the --context <name> flag (replacing the <name> value with the NAME value from the table above)
- Change the default AKS cluster by running kubectl config use-context <name> (replacing the <name> value with the NAME value from the table above)

For more information, see Configure Access to Multiple Clusters.
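For example, with the sample values used earlier on this page (resource group azuredevrg and a cluster named azuredev), logging on and switching context might look like:
az aks get-credentials --resource-group azuredevrg --name azuredev
kubectl config use-context azuredev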
Logging On To EKS Cluster
To log on to the EKS (Elastic Kubernetes Service) cluster:
Collect the following values from the docker-compose.yml file used by the bootstrap process:
- TF_VAR_aws_access_key_id
- TF_VAR_aws_secret_access_key
- TF_VAR_kubernetes_cluster_name
- TF_VAR_aws_region
Configure the AWS CLI with a profile that uses the same TF_VAR_aws_access_key_id and TF_VAR_aws_secret_access_key values from the previous step.

Run the following command, substituting values for the parameters as shown:
eksctl utils write-kubeconfig \
  --profile=profile_name_from_second_step \
  --name value_from_TF_VAR_kubernetes_cluster_name \
  --region value_from_TF_VAR_aws_region
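As a sketch with hypothetical values (an AWS CLI profile named cloudops, a cluster named hub, and the us-east-1 region), the command and a quick validation might look like:
eksctl utils write-kubeconfig \
  --profile=cloudops \
  --name hub \
  --region us-east-1
kubectl config get-contexts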
Accessing Kubernetes Dashboard
The Kubernetes Dashboard is a service running inside a Pod in the Kubernetes cluster. It provides a way to:
- See which Kubernetes resources exist
- View and edit the configuration of some of those Kubernetes resources
- View the logs of some of the Kubernetes resources
- View coarse metrics on the Pods running in the cluster
To access the dashboard:
- Complete the cloud-appropriate section of Log On To the Kubernetes Cluster. This only needs to be done once per Kubernetes cluster.
- Open a new shell/terminal, as that shell will be unavailable for other commands while the proxy runs.
- Run kubectl proxy
- Open your web browser pointing to:
  - Azure: http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
  - AWS: http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
At this point, the dashboard login screen appears.
To log in to the dashboard, you first need to get the login token. To do so:
For Azure:
- For steps on accessing the Azure dashboard, see Sign in to the dashboard (Kubernetes 1.16+).
note
There is an open issue with the Azure Kubernetes dashboard where the dashboard displays all items as empty. To fix this, run the following commands:
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard --user=clusterUser
For AWS:
- Collect the TF_VAR_kubernetes_cluster_name and TF_VAR_aws_region values from the docker-compose.yml file used during the bootstrap process.
- Configure the AWS CLI with a profile that has the same TF_VAR_aws_access_key_id and TF_VAR_aws_secret_access_key values as the docker-compose.yml file used during the bootstrap process.
- Run:
aws eks get-token \
  --cluster-name value_from_TF_VAR_kubernetes_cluster_name \
  --profile profile_from_previous_step \
  --region value_from_TF_VAR_aws_region
The command should show output similar to:
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2019-07-29T21:35:27Z",
    "token": "k8s-aws-v1.DC3OSFFtrGuRgQ9eabSXLykOChsc7iKWaHk_J3SOVIFA6Jzwnne9jQFbHgtGc4CEepuX4xxS4MQOwbfQSapGyVCt3WGjTozyK0o8vJ690rj3K5tBAAQgN9j8TZDS6rl6mZG7YxxUKIzrKANCcBYbg3BKmM9eg7pshOwo8cPguosNi9S6yJiD4gvfVNQ7EAyh3h6kYXvGMXOV1frmNff1seH5qRoeEOEJMMAvseiqCmL3wN05r0Jd0sfxFxxeEaHLQNMfbnPOF3PSqDJFix3IS2Y67BVQRdipXd5qcHgYK2yhSvRpCXl04vr84cn8ec5kEQmaEmOWk4oHbPkn5ut1e9zV8xv07m6lDDq0y2kqpg8BuXIBpi3aJS9BuceQxI9qMjgWVzrwIh85NJ4MK7asIK9TMmuI9ZX5jbGsFBfrrrkp8hGAGlOXGOaA61I6E4jtrasmV0CmfDUeDaV16L6s"
  }
}
- Copy the token value (a shortcut for extracting the token directly is sketched after these steps).
- Return to your web browser showing the Kubernetes Dashboard login screen.
- Select "Token" and paste the copied token into the Enter token field, then press the SIGN IN button.
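As a convenience, the token can be extracted directly with a JMESPath query instead of being copied out of the JSON by hand; the cluster name, profile, and region below are the same hypothetical values used in the EKS logon sketch above:
aws eks get-token \
  --cluster-name hub \
  --profile cloudops \
  --region us-east-1 \
  --query status.token \
  --output text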
Accessing Nexus
A Nexus 2 pod is deployed into the Kubernetes cluster as part of the bootstrap process. This Nexus server is used to cache artifacts from external Maven repositories and to store Elastic Path artifacts.
To access the Nexus server:
- Complete the steps in Update DNS Name Servers.
- Collect the TF_VAR_domain and TF_VAR_kubernetes_cluster_name settings in the docker-compose.yml file for the bootstrap container.
- Substitute the settings values into a URL with the format https://nexus.central<TF_VAR_kubernetes_cluster_name>.<TF_VAR_domain>/nexus
- For example, if TF_VAR_kubernetes_cluster_name is hub and TF_VAR_domain is ep.cloud.mycompany.com, the Nexus URL becomes: https://nexus.centralhub.ep.cloud.mycompany.com/nexus
To log in to Nexus, use the following credentials, as set in Sonatype's official Nexus 2 Docker image:
- User: admin
- Password: admin123
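As a quick check that the server is reachable and the credentials work, the Nexus 2 status endpoint can be queried; the hostname below is the hypothetical example from above:
curl -u admin:admin123 https://nexus.centralhub.ep.cloud.mycompany.com/nexus/service/local/status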
Accessing Jenkins
A Jenkins server is also deployed during the bootstrap process. To access the web UI for Jenkins:
Ensure that you have completed the steps listed in Update DNS Name Servers.
Collect the TF_VAR_domain and TF_VAR_kubernetes_cluster_name settings provided in the docker-compose.yml file for the bootstrap container.

Substitute these values into a URL with the format http://jenkins.central<TF_VAR_kubernetes_cluster_name>.<TF_VAR_domain>.

For example, if TF_VAR_kubernetes_cluster_name is hub and TF_VAR_domain is ep.cloud.mycompany.com, the Jenkins URL becomes: http://jenkins.centralhub.ep.cloud.mycompany.com.
To log in to Jenkins, use the following credentials:
- User: admin
- Password: El4stic123
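Similarly, a quick check that Jenkins is reachable and the credentials work can be made against its JSON API; the hostname below is the hypothetical example from above:
curl -u admin:El4stic123 http://jenkins.centralhub.ep.cloud.mycompany.com/api/json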
The Jenkins configuration file where the user and password are stored is cloudops-for-kubernetes/bootstrap/jenkins-helm-values.yaml.tmpl. The file is a Terraform-templated version of a Helm chart values.yaml file.
If you access Jenkins immediately after the bootstrap process completes, you will see a Jenkins job named "bootstrap" running. This job is responsible for populating the Jenkins server with the other jobs and for triggering the build of the Jenkins agent Docker images. You can find the source for the bootstrap job in Jenkins at cloudops-for-kubernetes/jenkins/jobs/bootstrap/bootstrap.groovy.
Next Steps
With the post-bootstrap tasks complete, continue to building the deployment package.