Manage Terraform Workspaces
After you deploy or upgrade CloudOps for Kubernetes, some resources might be managed by Terraform workspaces that no longer follow the current workspace naming conventions. The other Jenkins jobs cannot manage these resources. To manage these workspaces and resources, use the following Jenkins jobs.
List Terraform Workspaces
While managing CloudOps for Kubernetes, you can use multiple backends to manage Terraform state. You can also list all of the Terraform workspaces that you have created while managing CloudOps for Kubernetes.
To perform this task, run the Jenkins job `list-terraform-workspaces` with the following parameters:

- `cloudOpsForKubernetesRepoURL`
- `cloudOpsForKubernetesBranch`

The job outputs to the console log a list of the Terraform workspaces that currently exist in the Terraform backend.
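For reference, the same information can be read directly with the Terraform CLI from a working directory that is initialized against the same backend. This is a manual sketch for inspection only; the Jenkins job is the supported path:

```shell
# From a directory whose backend configuration points at the same
# Terraform backend used by CloudOps for Kubernetes:
terraform init            # connect to the configured backend
terraform workspace list  # the currently selected workspace is marked with '*'
```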
Copy Terraform Workspaces
Because Terraform workspace naming has changed over time, you might need to update the name of a particular Terraform workspace so that the updated Jenkins jobs can manage it. To update a workspace name, first copy the workspace into a new Terraform workspace that follows the updated naming convention by running the Jenkins job `copy-terraform-workspace`, which takes the following parameters:

- `sourceWorkspaceName`
- `destWorkspaceName`
- `cloudOpsForKubernetesRepoURL`
- `cloudOpsForKubernetesBranch`
The `sourceWorkspaceName` parameter must correspond to an existing Terraform workspace that manages resources. The `destWorkspaceName` parameter is the name of the new Terraform workspace that you want to use to manage those resources.
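Conceptually, copying a workspace amounts to duplicating its state into a new workspace. A rough manual sketch of that underlying operation with the Terraform CLI (the workspace names here are illustrative, and the Jenkins job remains the supported way to do this):

```shell
# Illustrative manual equivalent of copying workspace state.
terraform workspace select old-workspace-name
terraform state pull > source-state.tfstate   # download the source state

terraform workspace new new-workspace-name    # create the destination workspace
terraform state push source-state.tfstate     # upload the state into it
```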
Warning: If the workspace specified by `destWorkspaceName` is not empty, this job overwrites the existing Terraform workspace files. Overwriting can cause problems managing those files, create Terraform locks that are not released, or lead to other unpredictable behaviour.
Delete Terraform Workspaces
After copying resources to a new Terraform workspace, clean up the old Terraform workspaces that previously managed those resources so that the resources do not get out of sync with Terraform. To delete a Terraform workspace while preserving the resources that it managed, use the Jenkins job `delete-terraform-workspace`, which takes the following parameters:

- `targetWorkspaceName`
- `cloudOpsForKubernetesRepoURL`
- `cloudOpsForKubernetesBranch`
The `targetWorkspaceName` parameter is the name of the Terraform workspace that you want to delete.
Warning: If you run this job on the only Terraform workspace that manages a set of resources, you lose the ability to manage those resources using CloudOps for Kubernetes.
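For reference, deleting a workspace while leaving its real infrastructure in place corresponds to the Terraform CLI's forced workspace deletion, which discards the state without destroying the resources it tracked. A manual sketch (workspace name illustrative):

```shell
terraform workspace select default           # you cannot delete the active workspace
terraform workspace delete -force old-name   # -force permits deleting a workspace with
                                             # a non-empty state; the underlying cloud
                                             # resources are left untouched
```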
Delete Terraform Managed Resources
You might have leftover Terraform workspaces and resources that you cannot delete through the individual management jobs. You can delete them using the Jenkins job `delete-terraform-resources`, which takes the following parameters:

- `targetWorkspaceName`
- `clusterName`
- `cloudOpsForKubernetesRepoURL`
- `cloudOpsForKubernetesBranch`
The `targetWorkspaceName` parameter is the Terraform workspace that you want to delete. The `clusterName` parameter is the Kubernetes cluster that manages the resources you want to delete.
Warning: This job deletes the Terraform workspace and all of its underlying resources.
Warning: Do not use this job to delete the `bootstrap` workspace. Doing so might delete your CloudOps for Kubernetes installation, including your ability to manage any existing CloudOps for Kubernetes infrastructure.
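Conceptually, this kind of cleanup resembles a `terraform destroy` followed by deletion of the emptied workspace. The sketch below is a rough approximation of what such a cleanup involves, not necessarily the job's exact implementation, and the workspace name is illustrative:

```shell
terraform workspace select target-workspace
terraform destroy -auto-approve              # destroys every resource tracked in this
                                             # workspace's state, without prompting
terraform workspace select default
terraform workspace delete target-workspace  # remove the now-empty workspace
```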
Destroy Terraform Remote State
After running docker-compose to update from CloudOps for Kubernetes v2.5.x, you must run the job `terraform-destroy-remote-state`. This job is required because the Terraform provider does not resolve legacy Terraform workspaces correctly. When you run this job, the process traverses all CloudOps for Kubernetes Terraform workspaces and deletes the root module output variables obtained from the bootstrap workspace. These data objects are re-created on subsequent runs of each individual resource management Jenkins job. This job does not create any Terraform resources and does not delete any existing Terraform resource objects from your Terraform backend.
Note: You do not need to run this job if you are only updating the bootstrap and backend workspaces.
Run the Jenkins job `terraform-destroy-remote-state` with the following parameters:

- `cloudOpsForKubernetesRepoURL`
- `cloudOpsForKubernetesBranch`
Backing up the Terraform Remote State
The Terraform state is an important component of the CloudOps for Kubernetes infrastructure-as-code implementation. It is good practice to back up the Terraform remote state periodically, especially before making significant changes or applying CloudOps for Kubernetes updates. A backup allows you to recover the state if anything goes wrong during the upgrade process. For information about Terraform state, see the Terraform documentation.
The Terraform remote state is stored in the AWS S3 bucket first created by the CloudOps for Kubernetes setup process.
1. Create a new S3 bucket through the S3 console using the **Copy settings from existing bucket - optional** configuration. This mirrors the configuration of the source bucket.
   Note: Give the backup bucket a distinctive name to separate it from the Terraform-controlled S3 bucket.
2. Ensure that the bucket is created with the Access Control List (ACL) enabled and `object_ownership` set to `ObjectWriter`.
3. Run the following AWS CLI command, substituting values for the placeholders as shown:
   ```shell
   aws s3 sync s3://<SOURCE_S3_ORIGINAL_BUCKET> s3://<TARGET_S3_BACKUP_BUCKET>
   ```
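Before running the real copy, you can preview what `aws s3 sync` would transfer by adding the standard `--dryrun` flag. The bucket names below are placeholders:

```shell
# Lists the copy operations that would be performed, without
# transferring any objects.
aws s3 sync s3://<SOURCE_S3_ORIGINAL_BUCKET> s3://<TARGET_S3_BACKUP_BUCKET> --dryrun
```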
If something unexpected happens during the upgrade process and the Terraform remote state is lost, you can use the backup bucket to return your CloudOps for Kubernetes environment to its state before the upgrade.
1. In the `docker-compose.override.yml` file, set the value of `TF_VAR_aws_backend_s3_bucket` to the name of the backup bucket.
2. Add a `lifecycle` configuration block with `prevent_destroy` set to `true` inside the Terraform resource `aws_s3_bucket.terraform_backend` in the `cloudops-for-kubernetes/bootstrap/terraform-backend/aws.tf` file. It should look like:
   ```hcl
   resource "aws_s3_bucket" "terraform_backend" {
     bucket = var.aws_backend_s3_bucket

     tags = {
       Name = var.aws_backend_s3_bucket
     }

     lifecycle {
       prevent_destroy = true
     }
   }
   ```
3. Run `docker-compose up --build` with `TF_VAR_bootstrap_mode` set to `setup`.
Connecting Directly to Terraform Backend
You can connect directly to the Terraform backend created by CloudOps for Kubernetes. Configure the connection using the `docker-compose.override.yml` file that was used to set up the environment.
1. In the `docker-compose.override.yml` file, set the value of `TF_VAR_bootstrap_mode` to `create-terraform-files`.
2. At the end of the `docker-compose.override.yml` file, follow the instructions to mount the CloudOps for Kubernetes code to the path `localcode` within the setup container.
3. Run `docker-compose up --build`. This process generates `credentials.tfvars`, `bootstrap.tfvars`, and `backend.tf` files that you can use to connect to the Terraform backend that was created during the setup process.

Note: Before you can manage the resources on your Kubernetes cluster, you must authenticate with the EKS cluster using the following command:
```shell
eksctl utils write-kubeconfig \
  --cluster ${TF_VAR_kubernetes_cluster_name} \
  --region ${TF_VAR_aws_region} \
  --timeout=10m
```
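If `eksctl` is not available, the AWS CLI can write an equivalent kubeconfig entry for the same EKS cluster, assuming the same environment variables are set:

```shell
# Alternative to eksctl: update the local kubeconfig via the AWS CLI.
aws eks update-kubeconfig \
  --name "${TF_VAR_kubernetes_cluster_name}" \
  --region "${TF_VAR_aws_region}"
```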
Managing the bootstrap workspace
After you create the Terraform files, you can manage the bootstrap workspace.
1. Go to the `bootstrap/terraform` folder. You will see the newly generated `bootstrap.tfvars` and `credentials.tf` files, and the de-templatized `aks.tf` file.
2. Run `terraform apply`.
3. Add a variable to the `bootstrap.tfvars` file, specifying the git key that you used during setup.
   Note: You can find this value appended to the end of the file. For example:
   ```hcl
   git_credential_private_key_path = "~/cloud-ops-kubernetes/id_rsa"
   ```
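If `terraform apply` prompts for variable values, you can pass the generated variable files explicitly. The file names below are the ones generated in the previous section; whether your version also produces a `credentials.tfvars` file to pass with a second `-var-file` flag depends on your setup:

```shell
cd bootstrap/terraform
terraform init                              # uses the generated backend.tf
terraform apply -var-file=bootstrap.tfvars  # supply the generated variables
```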