Deployment Sequence

The following section describes the sequence to follow when deploying an API Management - Local Edition cluster using custom scripts or any combination of Continuous Integration (CI) and Continuous Delivery (CD) practices.

Note:
  • TIBCO Cloud™ API Management - Local Edition must be deployed in a specific sequence because of dependencies between the pods. Each pod also advertises its state to the registry. Because of this registry requirement, the NoSQL pods must be deployed before any other pod.
  • If you use the deployment scripts provided by TIBCO, you can ignore the following deployment sequence.

Prerequisites

  • The service pods must be created in the following order:
    1. NoSQL
    2. Configuration Manager
    3. Log
    4. MySQL
    5. Cache
    6. Traffic Manager
    7. Reporting
  • Run compose.sh with the topology-specific manifest file to generate the deployment scripts.
  • Depending on the deployment topology, the generated deployment scripts are located in the following directories:
    1. Single zone: <tml-base>/docker-deploy/<platform-type>/k8s/manifest-single-zone
    2. Multi-zone: <tml-base>/docker-deploy/<platform-type>/k8s/manifest-multi-zone
    3. Single host: <tml-base>/docker-deploy/<platform-type>/k8s/manifest-single-host
  • For a multi-zone deployment, apply the resources for each tml type starting from index 0, then index 1, and so on for every zone, as sketched after the note below.
Note: For OpenShift, replace kubectl with oc.
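
The commands in the procedure below use ${index} as a placeholder for the zone index. For a multi-zone deployment, a minimal sketch of applying one manifest type across two zones (the zone count and file name are illustrative only):

    for index in 0 1; do
        kubectl apply -f storage-classes-${index}.yaml
    done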

Procedure

  1. Create the storage classes.
    kubectl apply -f storage-classes-${index}.yaml

    Check if the storage classes are created successfully as follows:

    kubectl get storageclass
    Note: For a multi-zone deployment, apply the storage classes starting from index 0, then 1, and so on for each zone.

    Output will be as follows:

    cache-storage-0       kubernetes.io/aws-ebs   9d
    cache-storage-1       kubernetes.io/aws-ebs   9d
    gp2 (default)         kubernetes.io/aws-ebs   10d
    log-storage-0         kubernetes.io/aws-ebs   9d
    log-storage-1         kubernetes.io/aws-ebs   9d
    nosql-storage-0       kubernetes.io/aws-ebs   9d
    nosql-storage-1       kubernetes.io/aws-ebs   9d
    reporting-storage-0   kubernetes.io/aws-ebs   9d
    reporting-storage-1   kubernetes.io/aws-ebs   9d
    sql-storage-0         kubernetes.io/aws-ebs   9d
    sql-storage-1         kubernetes.io/aws-ebs   9d
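
    To inspect the provisioner and parameters of a generated class in more detail, you can describe it (the class name below is taken from the sample output and may differ in your deployment):

    kubectl describe storageclass nosql-storage-0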
  2. Create resource secrets.
    kubectl create secret generic nosql-resource --from-file=resources/tml-nosql
    kubectl create secret generic cm-resource --from-file=resources/tml-cm
    kubectl create secret generic log-resource --from-file=resources/tml-log
    kubectl create secret generic sql-resource --from-file=resources/tml-sql
    kubectl create secret generic cache-resource --from-file=resources/tml-cache
    kubectl create secret generic tm-resource --from-file=resources/tml-tm
  3. Create the property and certificate secrets as follows:
    kubectl create secret generic cluster-property --from-file=tml_cluster_properties.json
    kubectl create secret generic zones-property --from-file=tml_zones_properties.json
    kubectl create secret generic papi-property --from-file=tml_papi_properties.json
    kubectl create secret generic nosql-property --from-file=tml_nosql_properties.json
    kubectl create secret generic cm-property --from-file=tml_cm_properties.json
    kubectl create secret generic cm-jks --from-file=tml-cm.jks
    kubectl create secret generic cm-crt --from-file=tml-cm-crt.pem
    kubectl create secret generic cm-key --from-file=tml-cm-key.pk8
    kubectl create secret generic tm-property --from-file=tml_tm_properties.json
    kubectl create secret generic tm-jks --from-file=tml-tm.jks
    kubectl create secret generic tm-trust-jks --from-file=tml-tm-trust.jks
    kubectl create secret generic sql-property --from-file=tml_sql_properties.json
    kubectl create secret generic log-property --from-file=tml_log_properties.json
    kubectl create secret generic cache-property --from-file=tml_cache_properties.json
    To deploy tml-reporting, create the following secrets:
    kubectl create secret generic reporting-resource --from-file=resources/tml-reporting
    kubectl create secret generic fluentd-conf --from-file=resources/tml-reporting/fluentd/conf
    kubectl create secret generic fluentd-plugin --from-file=resources/tml-reporting/fluentd/plugin
    kubectl create secret generic grafana-resource --from-file=resources/tml-reporting/grafana
    kubectl create secret generic grafana-user-dashboards --from-file=resources/tml-reporting/grafana/dashboards/CustomDashboards
    kubectl create secret generic grafana-operations --from-file=resources/tml-reporting/grafana/dashboards/MasheryReporting/operations
    kubectl create secret generic grafana-summary --from-file=resources/tml-reporting/grafana/dashboards/MasheryReporting/summary
    kubectl create secret generic prometheus-resource --from-file=resources/tml-reporting/prometheus
    kubectl create secret generic loki-resource --from-file=resources/tml-reporting/loki
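
    To confirm that the secrets were created, you can list them and inspect an individual entry (cm-property is one of the secrets created above):

    kubectl get secrets
    kubectl describe secret cm-property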
  4. Optional: If you want to create an image pull secret, run the following command:
    kubectl create secret docker-registry tmgc-registry-key \
        --docker-server=<registry host> \
        --docker-username=<registry username> \
        --docker-password=<registry password>
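
    The pods can use this secret to pull images only if it is referenced in their specifications. If the generated manifests do not already reference tmgc-registry-key, one common approach (an assumption here, not taken from the generated scripts) is to attach the secret to the namespace's default service account:

    kubectl patch serviceaccount default \
        -p '{"imagePullSecrets": [{"name": "tmgc-registry-key"}]}'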
  5. Deploy all services as follows:
    kubectl apply -f cass-svc-${index}.yaml
    kubectl apply -f log-svc-${index}.yaml
    kubectl apply -f mysql-svc-${index}.yaml
    kubectl apply -f cache-svc-${index}.yaml
    kubectl apply -f cm-svc-${index}.yaml
    kubectl apply -f tm-svc.yaml
    Note: The reporting service can be deployed in only a single zone.

    Sample commands for the us-central1-a zone are as follows:

    kubectl apply -f reporting-svc-us-central1-a.yaml
    kubectl apply -f reporting-svc-external-us-central1-a.yaml
  6. Check if the services have been successfully created as follows:
    kubectl get services

    Output will be as follows:

    NAME              TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                                                       AGE
    cache-svc-0       ClusterIP      None            <none>                                                                    11211/TCP,11212/TCP,11213/TCP,11214/TCP,11215/TCP,11216/TCP   9d
    cass-svc-0        ClusterIP      None            <none>                                                                    9042/TCP                                                      9d
    cm-svc-0          LoadBalancer   10.100.59.255   aa5a3174ced1f4813b1ee93c4d811f6b-1898460514.us-east-1.elb.amazonaws.com   8443:31577/TCP,7080:31954/TCP,7443:31677/TCP                  9d
    kubernetes        ClusterIP      10.100.0.1      <none>                                                                    443/TCP                                                       10d
    log-svc-0         ClusterIP      None            <none>                                                                    24224/TCP,24220/TCP,24221/TCP,24222/TCP                       9d
    mysql-svc-0       ClusterIP      None            <none>                                                                    3306/TCP,33061/TCP                                            9d
    reporting-app-0   LoadBalancer   10.100.175.69   a6987a86fa3ff4bfc8f27073ee2217db-774902445.us-east-1.elb.amazonaws.com    3000:32563/TCP                                                7d
    reporting-svc-0   ClusterIP      None            <none>                                                                    3000/TCP                                                      9d
    tm-svc            LoadBalancer   10.100.174.66   a010ad409036a45a2ab99782f4a8706f-169206996.us-east-1.elb.amazonaws.com    80:32479/TCP,443:31313/TCP,8083:31806/TCP                     9d
    tm-svc-internal   LoadBalancer   10.100.54.230   a830c47aea66f4ac1be79c3875d51f21-1622402344.us-east-1.elb.amazonaws.com   80:30371/TCP,443:30671/TCP,8083:31015/TCP                     9d
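
    If the EXTERNAL-IP column shows <pending> for a LoadBalancer service, the cloud load balancer is still being provisioned; you can watch until the addresses appear, for example:

    kubectl get services -w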
  7. Deploy the NoSQL pod.
    kubectl apply -f nosql-pod-${index}.yaml
    Note: After applying the NoSQL pod, the following log output indicates that the Cassandra nodes are up and running:
    kubectl logs cass-set-0-0

    Output will be as follows:

    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/opt/mashery/containeragent/lib/tpcl/ch.qos.logback.classic_1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/opt/mashery/containeragent/lib/tpcl/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    INFO  [main] 2021-08-07 01:17:01,138 Gossiper.java:1701 - No gossip backlog; proceeding
    SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
    containeragent (pid 1276) is running...

    If there are multiple Cassandra pods, log output similar to the following can be seen:

    INFO  [main] 2021-07-30 07:47:23,503 StorageService.java:1449 - JOINING: Starting to bootstrap...
    INFO  [main] 2021-07-30 07:47:23,545 StreamResultFuture.java:90 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6] Executing streaming plan for Bootstrap
    INFO  [StreamConnectionEstablisher:1] 2021-07-30 07:47:23,554 StreamSession.java:266 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6] Starting streaming to /192.168.8.107
    INFO  [StreamConnectionEstablisher:1] 2021-07-30 07:47:23,557 StreamCoordinator.java:264 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6, ID#0] Beginning stream session with /192.168.8.107
    INFO  [STREAM-IN-/192.168.8.107:7000] 2021-07-30 07:47:23,568 StreamResultFuture.java:187 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6] Session with /192.168.8.107 is complete
    INFO  [StreamConnectionEstablisher:2] 2021-07-30 07:47:23,575 StreamSession.java:266 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6] Starting streaming to /192.168.19.160
    INFO  [StreamConnectionEstablisher:2] 2021-07-30 07:47:23,580 StreamCoordinator.java:264 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6, ID#0] Beginning stream session with /192.168.19.160
    INFO  [STREAM-IN-/192.168.19.160:7000] 2021-07-30 07:47:23,588 StreamResultFuture.java:187 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6] Session with /192.168.19.160 is complete
    INFO  [STREAM-IN-/192.168.19.160:7000] 2021-07-30 07:47:23,593 StreamResultFuture.java:219 - [Stream #60732d20-f10a-11eb-9dee-6fb0c9b1b7b6] All sessions completed
    INFO  [main] 2021-07-30 07:47:23,599 StorageService.java:1449 - JOINING: Finish joining ring
    INFO  [STREAM-IN-/192.168.19.160:7000] 2021-07-30 07:47:23,606 StorageService.java:1508 - Bootstrap completed! for the tokens [7635535438842737552, 6244423451386580644, -807892735552018602, 1796903238882583306, -959993705797373391, -4431754111306089197, -7456837306080462267, -5393060984937455814, 7879194275152297156, -6332960200985977913, -566523580936125161, -2328256701323223797, 3534523922730543018, 3768424131407991242, -2080329565188323460, -1951378018385099350, 4110905109422783803, -1123641547224386520, -957528312907090424, -31625905873932128, 6347044696158661417, 2642191943803334943, -1292908263869730037, -6202545713550248129, 2028717097417634416, 2283340219657285471, 5821394578554330785, 6084908708426866506, 5343046259693124960, -4159072616317009065, 6907637449959885635, -5941635556830205750]
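
    Before deploying the next pod, you can wait for the NoSQL StatefulSet rollout to complete; the StatefulSet name cass-set-${index} is inferred from the sample pod names above and may differ in your deployment:

    kubectl rollout status statefulset/cass-set-${index} --timeout=10m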
  8. Deploy the Configuration Manager.
    kubectl apply -f cm-pod-${index}.yaml
  9. Wait for the Cassandra (NoSQL) database to become ready and then check the registry log.
    [2021-08-09T06:40:00,889Z] RegistryService [QuartzScheduler_Worker-2] INFO  com.tibco.mashery.tmgc.registry.cron.CassandraStatusCheckerJob - total nodes 3, healthy nodes 3, percentage 100%
    [2021-08-09T06:40:00,889Z] RegistryService [QuartzScheduler_Worker-2] INFO  com.tibco.mashery.tmgc.registry.cron.CassandraStatusCheckerJob - Cassandra is healthy and Registry keyspaces available : true
    [2021-08-09T06:40:30,008Z] RegistryService [QuartzScheduler_Worker-3] INFO  com.tibco.mashery.tmgc.registry.cron.OrphanTmgcCheckerJob - Orphan check completed successfully in the clusterId 6753d591-e0d1-4b6d-a96f-cd4b169bf1e8 and zoneId d8dffe8d-5f27-45f8-8162-6d1c6783bf8f
    [2021-08-09T06:42:00,864Z] RegistryService [QuartzScheduler_Worker-2] INFO  com.tibco.mashery.tmgc.registry.cron.CassandraStatusCheckerJob - nodetool status
     Datacenter: dc1
    ===============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load       Tokens       Owns (effective)  Host ID                               Rack
    UN  192.168.6.8     3.06 MiB   32           100.0%            588018d1-0fe6-4e7d-9f51-67661048822d  rack1
    UN  192.168.30.112  2.97 MiB   32           100.0%            bd0f3a65-791b-49bd-bc50-23b6ef92b97a  rack1
    UN  192.168.18.112  3.05 MiB   32           100.0%            61f63784-573d-4772-ae46-cc3cea5d3ea1  rack1
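
    Which pod emits the RegistryService log is an assumption here; as a sketch, assuming it can be read from the Configuration Manager pod (the deployment name cm-deploy-${index} is inferred from the sample output in the Result section), you could filter for the status-checker entries as follows:

    kubectl logs deploy/cm-deploy-${index} --tail=200 | grep CassandraStatusCheckerJob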
  10. Deploy the Log pod.
    kubectl apply -f log-pod-${index}.yaml
  11. Deploy the MySQL pod.
    kubectl apply -f sql-pod-${index}.yaml
  12. Deploy the Cache pod.
    kubectl apply -f cache-pod-${index}.yaml
  13. Deploy the Traffic Manager pod.
    kubectl apply -f tm-pod-${index}.yaml
  14. Optional: If you want to deploy reporting services, carry out the following steps.
    • Label one of the nodes with kubectl label node <node> node-name=reporting (a verification sketch follows after these steps).
      Note: For multi-zone deployments, the labeled node must be in the first zone.
    • Deploy reporting pods.

      kubectl apply -f reporting-pod-0.yaml
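
    To confirm that the node label was applied, you can list the matching nodes (the label key and value are the ones used above):

    kubectl get nodes -l node-name=reporting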

Result

Check the status of all the pods.
kubectl get pods
Output
NAME                           READY   STATUS    RESTARTS   AGE
cache-set-0-0                  1/1     Running   0          2d5h
cache-set-0-1                  1/1     Running   0          2d5h
cass-set-0-0                   1/1     Running   0          2d5h
cass-set-0-1                   1/1     Running   0          2d5h
cass-set-0-2                   1/1     Running   0          2d5h
cm-deploy-0-7b6b96bbf5-sbxmd   1/1     Running   0          2d5h
log-set-0-0                    1/1     Running   0          2d5h
log-set-0-1                    1/1     Running   0          2d5h
mysql-set-0-0                  1/1     Running   0          2d5h
reporting-set-0-0              1/1     Running   0          2d5h
tm-deploy-0-64fb5f79b4-qsjc4   1/1     Running   0          2d5h
tm-deploy-0-64fb5f79b4-rhvzn   1/1     Running   0          2d5h
tm-deploy-0-64fb5f79b4-s6cr4   1/1     Running   0          2d5h
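
If any pod is not in the Running state, the usual first checks are to describe the pod and inspect its logs (replace <pod-name> with the name of the failing pod from the list above):

kubectl describe pod <pod-name>
kubectl logs <pod-name>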