Stopping and Starting the Cluster while Keeping the Cluster State Intact

It is possible to stop and start a cluster while keeping the cluster state intact.

Stateful Sets

TIBCO Cloud™ API Management - Local Edition uses the following stateful sets, which use persistent volumes to store data (a quick way to inspect them is shown after the list):
  • NoSQL stateful sets store the registry information, OAuth token data, and content management service (CMS) data.
  • MySQL stateful sets store the API configuration data.
  • Log Service stateful sets store the different kinds of logs and metrics, including traffic access logs.
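
You can inspect these stateful sets and the persistent volume claims backing them with standard kubectl commands. This is a minimal sketch, assuming the cluster runs in the current namespace and uses the single-zone resource names shown later in this section:

user-MBP:manifest-single-zone user$ kubectl get statefulsets   # lists cache-set-0, mysql-set-0, log-set-0, cass-set-0
user-MBP:manifest-single-zone user$ kubectl get pvc            # lists the persistent volume claims bound to each set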

When a new API Management - Local Edition cluster is created on Kubernetes, the persistent volumes are attached to the respective pods. Local Edition provides a script, stop-tml-cluster.sh, that stops the cluster by bringing down the Kubernetes pods without deleting the persistent data. A corresponding script, start-tml-cluster.sh, restarts a stopped cluster.

For "tethered" mode of configuration, the API configuration data can be recovered even if the MySQL persistent volume is deleted as it gets synchronized from the MOM account. However, in "untethered" mode of configuration, the data is completely lost if the MySQL persistent volume is deleted.

Any logs collected in the Log Service persistent volume are lost if the log volume is deleted; forward logs to external destinations to keep an external backup. Likewise, any OAuth data collected in the NoSQL persistent volume is lost if the NoSQL volume is deleted.

To completely delete the Local Edition cluster, including persistent volumes, use the delete-tml-cluster.sh script. This removes all the Local Edition pods and persistent volumes. This action cannot be reverted and should be used only with the express intention of deleting the cluster.
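
After a full deletion, you can confirm that nothing is left behind. This is a minimal sketch, assuming the same working directory as the other examples in this section:

user-MBP:manifest-single-zone user$ ./delete-tml-cluster.sh
user-MBP:manifest-single-zone user$ kubectl get pods   # should list no Local Edition pods once deletion completes
user-MBP:manifest-single-zone user$ kubectl get pvc    # should list no Local Edition persistent volume claims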

The Kubernetes specification provides mechanisms for backing up volume data. Consult your cloud Kubernetes provider's documentation for details.
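
For example, on a provider that supports the Kubernetes CSI snapshot API, a point-in-time backup of the MySQL volume can be requested with a VolumeSnapshot object. This is an illustrative sketch only, not a Local Edition feature: the class name csi-snapshot-class is a placeholder for whatever VolumeSnapshotClass your provider installs, and the claim name matches the PVC output shown below.

user-MBP:manifest-single-zone user$ kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: sqlvol-backup
spec:
  volumeSnapshotClassName: csi-snapshot-class    # placeholder; provider-specific
  source:
    persistentVolumeClaimName: sqlvol-mysql-set-0-0    # the MySQL claim from the PVC check below
EOF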

Stopping a Cluster

To stop the Local Edition cluster, run the stop-tml-cluster.sh script.

For example:
user-MBP:manifest-single-zone user$ ./stop-tml-cluster.sh
secret "cm-property" deleted
secret "cm-jks" deleted
secret "cm-crt" deleted
secret "cm-key" deleted
deployment.apps "cm-deploy-0" deleted
secret "tm-property" deleted
secret "tm-jks" deleted
deployment.apps "tm-deploy-0" deleted
secret "cache-property" deleted
statefulset.apps "cache-set-0" deleted
secret "sql-property" deleted
statefulset.apps "mysql-set-0" deleted
secret "log-property" deleted
statefulset.apps "log-set-0" deleted
secret "nosql-property" deleted
statefulset.apps "cass-set-0" deleted
service "cm-svc-0" deleted
service "tm-svc" deleted
service "cache-svc-0" deleted
service "mysql-svc-0" deleted
service "log-svc-0" deleted
service "cass-svc-0" deleted

Persistent Volumes Check

The persistent volume claims should still be in the 'Bound' state after stopping the cluster.

For example:
user-MBP:manifest-single-zone user$ kubectl get pvc 
NAME                     STATUS   VOLUME                    CAPACITY   ACCESSMODES   STORAGECLASS          AGE
cachevol-cache-set-0-0   Bound    cache-1-local-pv-node-1   2Gi        RWO           cache-storage-class   9m
logvol-log-set-0-0       Bound    log-1-local-pv-node-1     2Gi        RWO           log-storage-class     11m
nosqlvol-cass-set-0-0    Bound    cass-1-local-pv-node-1    2Gi        RWO           nosql-storage-class   13m
sqlvol-mysql-set-0-0     Bound    db-1-local-pv-node-1      2Gi        RWO           sql-storage-class     10m
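
If a claim shows a status other than Bound (for example, Pending or Lost), describe it to see the underlying events; the claim name here is taken from the output above:

user-MBP:manifest-single-zone user$ kubectl describe pvc sqlvol-mysql-set-0-0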

Starting a Cluster

To start the Local Edition cluster, run the start-tml-cluster.sh script.

For example:
user-MBP:manifest-single-zone user$ ./start-tml-cluster.sh 
 
Starting cassandra service.
service/cass-svc-0 created
Deploying cassandra pods.
secret/nosql-property created
statefulset.apps/cass-set-0 created
cass-set-0-0   0/1     Running   0          10s
cass-set-0-0   0/1     Running   0          15s
cass-set-0-0   1/1     Running   0          20s

Starting CM service.
service/cm-svc-0 created
Deploying CM pods.
secret/cm-property created
secret/cm-jks created
secret/cm-crt created
secret/cm-key created
deployment.apps/cm-deploy-0 created
cm-deploy-0-86c7d7dcbf-zz7kt   1/1     Running   0          10s
 
Starting log service.
service/log-svc-0 created
Deploying log pods.
secret/log-property created
Error from server (AlreadyExists): secrets "log-resource" already exists
statefulset.apps/log-set-0 created
log-set-0-0                    0/1     ContainerCreating   0          0s
log-set-0-0                    1/1     Running   0          5s
 
Starting DB service.
service/mysql-svc-0 created
Deploying DB pods.
secret/sql-property created
statefulset.apps/mysql-set-0 created
mysql-set-0-0                  0/1     ContainerCreating   0          1s
mysql-set-0-0                  1/1     Running   0          6s
 
Starting memcache service.
service/cache-svc-0 created
Deploying memcache pods.
secret/cache-property created
statefulset.apps/cache-set-0 created
cache-set-0-0                  0/1     ContainerCreating   0          0s
cache-set-0-0                  1/1     Running   0          5s
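
Once the script finishes, all Local Edition pods should reach the Running state with their previous persistent volumes reattached. A quick verification, assuming the default namespace:

user-MBP:manifest-single-zone user$ kubectl get pods   # all pods should report STATUS Running
user-MBP:manifest-single-zone user$ kubectl get pvc    # all claims should still report STATUS Bound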