Migrating a Database from CloudOps for AWS to CloudOps for Kubernetes
If you have an existing database from a CloudOps for AWS based deployment of Self Managed Commerce, you can migrate the database and its existing schema into CloudOps for Kubernetes.
Note:
Ensure that the database is suitable for use with CloudOps for Kubernetes. The external database instance must be a MySQL variant that is compatible with your version of Self Managed Commerce.
For more information on the database requirements for the latest Commerce version, see Databases.
Ensure that the schema and data from the existing database are for the same Commerce version as the CloudOps for Kubernetes deployment of Self Managed Commerce, to avoid compatibility issues.
The schema must have been created by the data-pop tool so that the required Liquibase records are present. These records allow the data-pop tool in CloudOps for Kubernetes to update the schema in the future.
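One way to confirm that the Liquibase records are present is to query the standard Liquibase tracking table directly. This is a hedged sketch: the host, schema, and user names are placeholders, not values from this deployment.

```shell
# Placeholder connection details -- substitute your own.
DB_HOST=mysql.example.com
DB_NAME=commercedb
DB_USER=commerceuser

# Liquibase records applied changesets in the DATABASECHANGELOG table.
# A non-zero count indicates the schema carries Liquibase history,
# which the data-pop tool needs in order to update it later.
mysql -h "$DB_HOST" -u "$DB_USER" -p "$DB_NAME" \
  -e "SELECT COUNT(*) FROM DATABASECHANGELOG;"
```

If the table does not exist or is empty, the schema was likely not created by the data-pop tool and is not suitable for migration.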
- Run the create-or-delete-database-and-user-in-external-database-instance Jenkins job with the following parameters set:
  - TF_VAR_use_existing_schema set to true
  - TF_VAR_database_name set to the schema used in the Commerce deployment from CloudOps for AWS
  - TF_VAR_database_username set to the username used in the Commerce deployment from CloudOps for AWS
  - TF_VAR_database_password set to the password of the username used in the Commerce deployment from CloudOps for AWS
Warning:
TF_VAR_use_existing_schema must be set to true when migrating the database. Otherwise, this job creates a new empty schema and the existing data is not copied into it.
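The job above can also be triggered remotely through the standard Jenkins buildWithParameters endpoint. This is a sketch under assumptions: the Jenkins URL, credentials, and parameter values are placeholders, and your Jenkins instance must allow API-token authentication.

```shell
# Placeholder Jenkins host and credentials -- substitute your own.
JENKINS_URL=https://jenkins.example.com
JENKINS_USER=admin
JENKINS_TOKEN=changeme

# Trigger the job with its parameters. TF_VAR_use_existing_schema
# must be true so the existing schema is reused, not recreated.
curl -X POST -u "$JENKINS_USER:$JENKINS_TOKEN" \
  "$JENKINS_URL/job/create-or-delete-database-and-user-in-external-database-instance/buildWithParameters" \
  --data-urlencode "TF_VAR_use_existing_schema=true" \
  --data-urlencode "TF_VAR_database_name=commercedb" \
  --data-urlencode "TF_VAR_database_username=commerceuser" \
  --data-urlencode "TF_VAR_database_password=secret"
```

Running the job from the Jenkins web UI with the same parameter values is equivalent.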
After you have migrated the database to CloudOps for Kubernetes, do not run the run-data-pop-tool Jenkins job with the job parameter dataPopToolCommand set to reset-db against the database. If you do, the schema is recreated and the original data is lost.