Creating and using a staging deployment¶
When working on LTD Keeper, it’s sometimes useful to create a separate Kubernetes deployment for testing that’s isolated from production. For example, a staging deployment can be used to test breaking changes like SQL migrations. Ideally this would be an automated integration test. But for now, this page gives a playbook for creating an isolated staging deployment.
Duplicating the production database¶
Creating the staging CloudSQL instance¶
Create a CloudSQL instance for staging called ltd-sql-staging, if one is not already available.
Visit the project’s CloudSQL console dashboard, https://console.cloud.google.com/sql/, and click Create instance.
Choose a Second Generation CloudSQL instance.
Configure the instance to match the production CloudSQL instance:

- Name: ltd-sql-staging.
- DB version: MySQL 5.6.
- Region: us-central1-b.
- Type: db-g1-small.
- Disk: 10 GB.
Disable backups and binary logging (not needed for staging).
Note
This staging DB inherits the credentials from the production database. There’s no need to create a new root user password.
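If you prefer the command line, an equivalent instance can be created with the gcloud CLI. The following is only a sketch: flag names vary between gcloud releases, the us-central1 region corresponds to the us-central1-b zone used above, and the console steps remain the canonical path.

gcloud sql instances create ltd-sql-staging \
    --database-version=MYSQL_5_6 \
    --tier=db-g1-small \
    --region=us-central1 \
    --storage-size=10GB \
    --no-backup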
Restore the production DB backup to the staging DB¶
Follow CloudSQL documentation on restoring to a different instance.
The backup should be a recent one from the production database.
The target instance is ltd-sql-staging.
Since this is a temporary staging DB, it’s safe to overwrite any existing data.
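With the gcloud CLI the restore looks roughly like this (a sketch; ltd-sql stands in for the production instance’s actual name, and BACKUP_ID comes from the list command):

gcloud sql backups list --instance=ltd-sql
gcloud sql backups restore BACKUP_ID --restore-instance=ltd-sql-staging --backup-instance=ltd-sql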
You can connect to this staging database in the same way as the production database.
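For example, the Cloud SQL proxy can open a local connection to the staging instance (a sketch; PROJECT is your Google Cloud project ID, and the connection name assumes the us-central1 region):

cloud_sql_proxy -instances=PROJECT:us-central1:ltd-sql-staging=tcp:3306
mysql --host=127.0.0.1 --port=3306 --user=root -p

Run the proxy in a separate shell and use the production root password, since the credentials are inherited from the production backup.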
Further reading¶
Creating and using a Kubernetes namespace for staging¶
Create a namespace¶
Check what namespaces currently exist in the cluster:
kubectl get namespaces
Currently, production is deployed to the default namespace; we use an ltd-staging (or similar) namespace for integration testing.
To create the ltd-staging namespace using the kubernetes/ltd-staging-namespace.yaml resource template in the Git repository:
kubectl create -f ltd-staging-namespace.yaml
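If the template isn’t at hand, a namespace with the same name can also be created directly (a sketch; the repository’s template may set additional labels):

kubectl create namespace ltd-staging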
Confirm that an ltd-staging namespace exists:
kubectl get namespaces --show-labels
Create the context¶
Contexts allow you to switch between clusters and namespaces when working with kubectl.
First, look at the existing contexts (these are configured locally):
kubectl config get-contexts
If there’s only a context for the lsst-the-docs cluster and default namespace, then we can create a context for the ltd-staging namespace:
kubectl config set-context ltd-staging --namespace=ltd-staging --cluster=$CLUSTER_NAME --user=$CLUSTER_NAME
where $CLUSTER_NAME matches the cluster and user names of the existing default context.
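One way to capture that value without retyping it (a sketch that assumes the default context is the only entry in your kubeconfig):

CLUSTER_NAME="$(kubectl config view -o jsonpath='{.contexts[0].context.cluster}')"
echo $CLUSTER_NAME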
It’s also convenient to create a named context for the default namespace:
kubectl config set-context ltd-default --cluster=$CLUSTER_NAME --user=$CLUSTER_NAME
Switch to the staging context¶
kubectl config use-context ltd-staging
You can confirm which context (and thus which namespace) you’re working in with:
kubectl config current-context
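To also see the namespace bound to the current context (a sketch; --minify limits the output to the active context):

kubectl config view --minify -o jsonpath='{.contexts[0].context.namespace}'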
Further reading¶
Deploying to the staging namespace¶
LTD Keeper can be deployed into the ltd-staging namespace using the same pattern described in Configuring LTD Keeper on Kubernetes and Initial Kubernetes deployment.
Some modifications, described below, are needed to re-configure the deployment for staging.
Modifying configuration and secrets¶
Secrets and other resources need to be customized for the staging namespace:
Modifications to kubernetes/keeper-secrets-staging.yaml:

- db-url should point to the new ltd-sql-staging database.

Modifications to kubernetes/keeper-config-staging.yaml:

- server-name should point to a staging URL, like keeper-staging.lsst.codes. Remember to create a new DNS record pointing to the nginx-ssl-proxy.
- cloud-sql-instance should point to the new ltd-sql-staging database.
Note
It may be necessary to update kubernetes/ssl-proxy-secrets.yaml if you aren’t using a wildcard cert.
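Once those files are edited, creating the staging configuration and secrets follows the same kubectl pattern as the rest of the deployment (a sketch; it assumes you are working from the repository’s kubernetes/ directory with the ltd-staging context active):

kubectl create -f keeper-config-staging.yaml
kubectl create -f keeper-secrets-staging.yaml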
Warning
With the staging deployment, as currently implemented, the database is independent of production, but the resources in the S3 bucket are not: the S3 bucket is specified in DB tables that are copied from the production DB, so the staging deployment still points at the production bucket.