Validating a Mashery Local Cluster

To verify the Kubernetes and TIBCO Mashery Local clusters:
  1. Open a new terminal and start the Kubernetes proxy for the dashboard by executing the command:
    kubectl proxy
    You can then access the dashboard at the following URL: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login. Click Skip if prompted for authentication.
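Before opening the dashboard URL, you can confirm the proxy is actually serving. A minimal sketch (the live commands assume kubectl is configured for this cluster; port 8001 is kubectl proxy's default):

```shell
# Succeeds if an HTTP status code indicates the proxy answered at all
# (2xx/3xx), fails otherwise.
is_proxy_up() {
  case "$1" in
    2??|3??) return 0 ;;
    *)       return 1 ;;
  esac
}

# Live usage (uncomment on a machine with cluster access):
# kubectl proxy &
# sleep 2
# code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8001/version)
# is_proxy_up "$code" && echo "proxy reachable on :8001"
```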
  2. View the Mashery Local containers on the dashboard:

    Click Pods.

    Select the Cluster Manager pod (its name starts with cm-deploy).

    Click Exec to open a terminal in the Cluster Manager container.

    Run the following commands to get started with the Cluster Manager CLI:
    alias cm=clustermanager
    cm help
    The CLI prints the list of available commands. A common sequence for inspecting the cluster looks like this:
    [root@cm-deploy-5b47b8b757-vnznb builder]# cm list clusters
    Cluster ID                            Cluster Name       
    ------------------------------------- --------------------
    4a883074-d4d1-48f6-b742-e6fb36a03af1  Tibco Mashery Gat...
    [root@cm-deploy-5b47b8b757-vnznb builder]# cm use cluster 4a883074-d4d1-48f6-b742-e6fb36a03af1
    Using cluster [Tibco Mashery Local Reference Cluster]
    [root@cm-deploy-5b47b8b757-vnznb builder]# cm list zones
    Using cluster [Tibco Mashery Local Reference Cluster]
    Zone ID                               Zone Name          
    ------------------------------------- --------------------
    77110662-e471-4fa5-9e1d-8c90847440c0  local              
    [root@cm-deploy-5b47b8b757-vnznb builder]# cm use zone 77110662-e471-4fa5-9e1d-8c90847440c0
    Using cluster [Tibco Mashery Local Reference Cluster]
    Using Zone [local]
    [root@cm-deploy-5b47b8b757-vnznb builder]# cm list components
    Using cluster [Tibco Mashery Local Reference Cluster]
    Using Zone [local]
    Component ID                          Component Type       Component Name       Component Status     Component Host       Component Agent Port   Component Service Port(s) 
    ------------------------------------- -------------------- -------------------- -------------------- -------------------- ---------------------- ---------------------------
    a80da2cc-f8dd-4576-a774-107e6cfcc565  sql                  sql                  registered           10.244.1.5           9080                   3306                      
    6efff6cc-c8df-4bde-ad74-8c856b2992fd  nosql                nosql-2ece1ca0-08... unused_tml_id                            0                                                
    86bf8edf-ee46-4b4f-9cf5-b2fa47f118b5  cache                cache                registered           10.244.1.6           9080                   11212,11211,11213,11214,11215,11216
    873e1450-622c-45a8-84c6-565dd0bbc1c0  trafficmanager       tm                   ACTIVE               10.244.1.8           9080                   8080                      
    be69180c-5602-4977-81d8-a17bd51e3434  trafficmanager       tm                   registered           10.244.1.7           9080                   8080                      
    855dfab0-9894-4931-a14f-3a6849082c50  logservice           log                  registered           10.244.1.4           9080                   24224                     
    [root@cm-deploy-5b47b8b757-vnznb builder]#
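The transcript above can also be driven from your workstation with kubectl exec rather than the dashboard's Exec terminal. A minimal sketch, assuming only the cm-deploy pod-name prefix visible in the prompts; the namespace, context, and whether `use cluster` state persists across separate exec sessions depend on your deployment:

```shell
# Pull the first UUID out of a `cm list clusters` / `cm list zones` table,
# skipping the header and separator rows.
first_id() {
  grep -Eo '^[0-9a-f]{8}-[0-9a-f-]+' | head -n 1
}

# Live usage (uncomment with access to the cluster):
# CM_POD=$(kubectl get pods -o name | grep cm-deploy | head -n 1)
# CLUSTER_ID=$(kubectl exec "$CM_POD" -- clustermanager list clusters | first_id)
# kubectl exec "$CM_POD" -- clustermanager use cluster "$CLUSTER_ID"
# kubectl exec "$CM_POD" -- clustermanager list components
# (If `use cluster` does not persist across separate exec sessions, run the
#  whole sequence inside a single `kubectl exec -it "$CM_POD" -- bash`.)
```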
To verify that the sample data is loaded successfully:
  1. Log in to the sql container and list the contents of the /var/lib/mysql directory.
  2. A file named "data.zip_<timestamp+randomnumber>" should be present at that location. For example:
    [root@mysql-set-0-0 builder]# ls /var/lib/mysql | grep data.zip
    data.zip_20180918205248
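The same check can be run from your workstation without opening a terminal in the container. A sketch, assuming the sql pod name mysql-set-0-0 shown in the prompt above:

```shell
# Succeeds if an `ls` listing (on stdin) contains a data.zip_<timestamp> entry,
# which indicates the sample data was loaded.
data_loaded() {
  grep -q '^data\.zip_[0-9]'
}

# Live usage (uncomment with cluster access; pod name from the prompt above):
# if kubectl exec mysql-set-0-0 -- ls /var/lib/mysql | data_loaded; then
#   echo "sample data loaded"
# fi
```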

Logging into a Kubernetes Node

Open a new terminal on your local machine, navigate to tmgc-deploy/onprem/k8s/manifest-single-zone and then run vagrant ssh <machine name>.
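The step above can be sketched as follows; the machine names come from the Vagrantfile and are not assumed here, so list them first:

```shell
# Extract machine names from `vagrant status` output, whose machine rows
# look like "name  state (provider)".
machine_names() {
  awk '/\([a-z_]+\)$/ { print $1 }'
}

# Live usage (uncomment on the workstation where you ran vagrant up):
# cd tmgc-deploy/onprem/k8s/manifest-single-zone
# vagrant status | machine_names   # shows valid values for <machine name>
# vagrant ssh <machine name>
```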