Installing and Running Mashery Local for Docker Using Kubernetes

To install and run Mashery Local for Docker using Kubernetes on Amazon Web Services (AWS) cloud, ensure your configuration meets the proper pre-requisites, then follow the steps below. The following instructions use Kubernetes Operations (kops), the recommended tool for creating and managing Kubernetes clusters.

Prerequisites

  • Mac OS or Linux local working environment
  • AWS account with full access to the AWS APIs
  • AWS EC2 Console
  • AWS Command Line Interface (CLI) installed and configured
    • AWS configurations set up, such as default region, access key, and secret key
    • Verify that the "aws" command is on your path and that you can run simple AWS CLI commands, for example:
      aws ec2 describe-vpcs
  • Local Docker environment ready (either connect to a docker-machine that is up and running, or run docker host on the machine).
    Note: You need this to upload docker images even if you are using pre-built images from S3.
  • Mashery Local for Docker images built locally (for instructions on building Mashery Local for Docker images, see Installing Mashery Local for Docker).
    Note: If you have custom adapters, see the Customizing for Kubernetes section.
  • Docker images verified (no critical errors during the build, and the images can be seen with the command "docker images").
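The CLI prerequisites above can be sanity-checked with a quick preflight script. This is an illustrative sketch, not part of the product; it only checks that the tools are on your path.

```shell
# Illustrative preflight check: verify the prerequisite CLIs are on PATH
missing=""
for tool in aws docker; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing tools:$missing"
else
  echo "all prerequisite tools found"
fi
```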

Procedure

  1. Install Kops.
    On OS X, use the following command:
    brew update && brew install kops
  2. Create the cluster.
    1. Use the following commands as an example to create the cluster definition:
      export NAME=kubeml411.rkdemo.com
      export KOPS_STATE_STORE=s3://rkdemo-state-store
      kops create cluster --zones us-east-1a $NAME
      Note: The cluster name must use a public domain suffix. In this example, rkdemo.com is a registered domain.
    2. Update the slave nodes' instance type by editing the cluster node configuration:
      kops edit ig --name=$NAME nodes

      The default size listed under the spec.machineType property is t2.medium; change it to m3.large or higher. Additionally, the values of the maxSize and minSize properties should be increased from 2 to 3, so that each Mashery Local node (the master and two slaves) will be deployed on a separate host.

      Warning! Leaving the minSize and maxSize property values at 2 causes both slaves to be deployed to the same physical host, which leads to TCP port conflicts.

      Tip! Use vi commands to save and exit the editor.
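After these edits, the relevant portion of the instance group spec might look like the following. The values shown are the ones recommended above; other fields in the spec are unchanged and omitted here.

```yaml
# Example values only; leave the rest of the instance group spec as generated
spec:
  machineType: m3.large
  minSize: 3
  maxSize: 3
```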



      Finally, apply the changes using the following command:
      kops update cluster $NAME --yes

      If successful, the new instances appear in your EC2 Console on AWS.

  3. Install the Kubernetes Dashboard UI.
    1. Use the following command to install the Dashboard service:
      kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
    2. Access the dashboard app at https://api.<cluster-name>/ui, for example, https://api.kubeml411.rkdemo.com/ui
      Note: Use the following command to obtain the password for the admin user:
      export PATH=<path to kubernetes-directory>/client/bin:.:$PATH
    3. Alternatively, you can access the Kubernetes console UI at <cluster server url>/ui, for example: https://34.205.42.112/ui
  4. Deploy Mashery Local.
    1. Create a private Amazon EC2 Container Registry (ECR) for Mashery Local for Docker, for example:
      aws ecr create-repository --repository-name <registry name>

      for example:
      aws ecr create-repository --repository-name tibco/mlce
      Note: If you have never used AWS ECS before, you will need to go to the AWS ECS dashboard and follow the "Getting Started" step.
    2. Go to the examples/kubernetes directory extracted from the Mashery Local for Docker 4.1 (or later) release, modify aws-env.sh with the planned configuration, and set the environment variables with the command:
      . aws-env.sh
      Note:

      The ML_REGISTRY_NAME is the registry name used in step 4a.

      The ML_REGISTRY_HOST can be found with the following command:
      aws ecr get-login --registry-ids `aws ecr describe-repositories --repository-names "$ML_REGISTRY_NAME" |grep registryId |cut -d ":" -f 2|tr -d ' ",'`|awk -F'https://' '{print $2}'
      
      Or, from the AWS ECS dashboard, go to Repositories > Repository URI. For example, with repository URI "12345603243.dkr.ecr.us-east-1.amazonaws.com/tibco/mlce", the ML_REGISTRY_NAME is tibco/mlce, and the ML_REGISTRY_HOST is 12345603243.dkr.ecr.us-east-1.amazonaws.com.
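The same split of the repository URI into host and name can be sketched in pure shell, using the example URI from above:

```shell
# Derive ML_REGISTRY_HOST and ML_REGISTRY_NAME from the example repository URI
uri="12345603243.dkr.ecr.us-east-1.amazonaws.com/tibco/mlce"
ML_REGISTRY_HOST=${uri%%/*}    # everything before the first "/"
ML_REGISTRY_NAME=${uri#*/}     # everything after the first "/"
echo "$ML_REGISTRY_HOST"
echo "$ML_REGISTRY_NAME"
```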
    3. Add or set login credentials in <home>/.docker/config.json using the command:
      aws ecr get-login --no-include-email | sh -
      Note: For the get-login command, the --no-include-email option must be specified for Docker version 17.06 or later, otherwise the command will fail.
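The version rule in this note can be sketched as follows. The version string here is a stand-in; in practice it would come from `docker version --format '{{.Client.Version}}'`.

```shell
# Sketch: include --no-include-email only for Docker client 17.06 or later
ver="17.09.1"              # stand-in value for illustration
major=${ver%%.*}
rest=${ver#*.}
minor=${rest%%.*}
minor=${minor#0}           # drop a leading zero (e.g. "09" -> "9")
flag=""
if [ "$major" -gt 17 ] || { [ "$major" -eq 17 ] && [ "${minor:-0}" -ge 6 ]; }; then
  flag="--no-include-email"
fi
echo "aws ecr get-login $flag | sh -"
```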
    4. Load Docker images.
      1. Verify Mashery Local for Docker images with the correct tag are in the current docker host with the command:
      docker images
      The tag should match the environment variable ML_IMAGE_TAG.
      2. Execute the following script to load images to the ECR Docker registry:
      upload-images.sh
    5. Execute the following script to store the Docker registry key as Kubernetes "Secret":
      set-registry-key.sh
    6. Execute the following script to store MOM host and key as Kubernetes "Secret":
      set-mom-secret.sh create <MOM key> <MOM secret>
      Note: If you want to enable HTTPS or OAuth, see the section Customizing for Kubernetes for additional configuration steps.
    7. Create storage classes for Mashery Local for Docker persistent stores:
      set-storage-classes.sh
    8. Create Mashery Local Traffic Manager service and Mashery Local Master service:
      set-ml-services.sh
      You can check the services with the following commands:
      kubectl describe service ml-traffic-manager
      kubectl describe service ml-master

      The ml-traffic-manager is configured with a load balancer. You can find the load balancer DNS name with the following command:

      kubectl describe service ml-traffic-manager|grep Ingress|awk -F' ' '{print $3}'
      
      The load balancer can also be found on the AWS EC2 dashboard Load Balancers list.
      Note: API invocation should be done solely via the AWS ELB (Elastic Load Balancer). The ELB configuration uses the internal IPs of the customer nodes for load balancing, so invoking API calls directly via the public IP addresses of the master or slave nodes is not an option.
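The Ingress extraction pipeline shown above can be exercised against a captured sample line; the ELB hostname in this sketch is made up:

```shell
# Same grep/awk pipeline, applied to one captured line of
# `kubectl describe service ml-traffic-manager` output
sample='LoadBalancer Ingress:   a1b2c3d4-1234567890.us-east-1.elb.amazonaws.com'
lb=$(printf '%s\n' "$sample" | grep Ingress | awk -F' ' '{print $3}')
echo "$lb"
```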
    9. Deploy Mashery Local master instance:
      deploy-master.sh
      You can check the ML instance pods with the command:
      kubectl get pods
      The ML master pod has a name like ml-master-...... When it's fully up, you should see 4/4 under the READY column with STATUS "Running" for the master instance pod.
      You can check the startup init instance log with the following command:
      kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c ml-cm -- cat /var/log/mashery/init-instance.log
      
      When it's fully ready to serve traffic, you should see something like the following:
      ....
       
      Register status: Content-Type: application/json Status: 200 {"results": [{"results": [{"address": "10.0.22.98"}], "error": null}, {"results": [{"area_name": "Roger"}], "error": null}, {"results": [{"credentials_updated": true}], "error": null}, {"results": [{"name": "ml-master-4209822619-sxq40", "id": 0}], "error": null}, {"results": [{"is_master": true}], "error": null}], "error": null}
       
      ****  04/06 05:27:38  Register instance succeeded
       
      Load service result:
       
      Load service result:
       
      Load service result: 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [SERVICES] 04/06/17 05:27:45 - 04/06/17 05:27:47: 254 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [KEYS] 04/06/17 05:27:47 - 04/06/17 05:27:55: 10963 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [APPS] 04/06/17 05:27:55 - 04/06/17 05:28:23: 6884 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [CLASSES] 04/06/17 05:28:23 - 04/06/17 05:28:23: 0 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [PACKAGES] 04/06/17 05:28:23 - 04/06/17 05:29:54: 28824 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [PACKAGEKEYS] 04/06/17 05:29:54 - 04/06/17 05:30:17: 5553 records (Success)
       
      ****  04/06 05:30:17  Service info loaded
       
      Load cache output first ten lines: - Trying to load mapi data for spkey: m8hxx3wxy5wjyjhfzc328wqh key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::2011w25DeveloperJay key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::2011w25DeveloperRoger key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::3skjegt4ddpam6a5r8sfgpkz key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::4q5t7z4gduy388z9nk5tmptm key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::4tzw5p5h5mx8gr8ez6m34wak key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::5s8ds7dcyj7cjz4h9h5tv7ev key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::5yy6dkjbq7sr922j4wt6u2hc key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::6mbcz48nabrz682xn2hdmhzn key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::8tng6tk5bzhpfqexn525cqnj
       
      ****  04/06 05:31:01  Cache Loaded
       
      ****  04/06 05:31:01  Ping Traffic Manager succeeded
       
      ****  04/06 05:31:01  Setting status ready
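The readiness markers in this log can also be checked mechanically. The following sketch greps a captured excerpt (the sample lines are taken from the output above):

```shell
# Sketch: scan a captured init-instance.log excerpt for the final ready marker
log='****  04/06 05:30:17  Service info loaded
****  04/06 05:31:01  Cache Loaded
****  04/06 05:31:01  Setting status ready'
if printf '%s\n' "$log" | grep -q 'Setting status ready'; then
  status="ready"
else
  status="starting"
fi
echo "master status: $status"
```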
      When the ML master instance containers are up, you can find the ML master instance node public IP with the following command:
      kubectl describe node `kubectl get pods -o wide |grep ml-master |awk -F' ' '{print $7}'`|\
      awk '/Addresses/ {for(i=1; i<=6; i++) {getline; print}{print "\n"}}'
       
       
      InternalIP:    172.20.41.66
      LegacyHostIP:  172.20.41.66
      ExternalIP:    184.73.16.126
      InternalDNS:   ip-172-20-41-66.ec2.internal
      ExternalDNS:   ec2-184-73-16-126.compute-1.amazonaws.com
      Hostname:      ip-172-20-41-66.ec2.internal
       
       
      InternalIP:    172.20.54.44
      LegacyHostIP:  172.20.54.44
      ExternalIP:    54.160.43.6
      InternalDNS:   ip-172-20-54-44.ec2.internal
      ExternalDNS:   ec2-54-160-43-6.compute-1.amazonaws.com
      Hostname:      ip-172-20-54-44.ec2.internal
       
       
      InternalIP:    172.20.58.180
      LegacyHostIP:  172.20.58.180
      ExternalIP:    34.207.81.153
      InternalDNS:   ip-172-20-58-180.ec2.internal
      ExternalDNS:   ec2-34-207-81-153.compute-1.amazonaws.com
      Hostname:      ip-172-20-58-180.ec2.internal
      If you need to access the Mashery Local instance Cluster Manager UI, open port 5480 for UI access. For convenience, you can open the port for all minion nodes in the cluster with the following command:
      aws ec2 authorize-security-group-ingress --group-id `aws ec2 describe-security-groups \
      --filters "Name=group-name,Values=nodes.${NAME}" | jq -r '.SecurityGroups[0].GroupId'` \
      --protocol tcp --port 5480 --cidr 0.0.0.0/0
      
      Or you can open the port individually as needed with an additional security group through the AWS UI or CLI.

      Then you can log in to the ML master instance Cluster Manager UI at https://<ML master instance node ip>:5480.

      You can get into any ML master instance container with the following command:
      kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c <container name> -- /bin/bash
      

      The container names are: ml-db, ml-mem, ml-tm, ml-cm.
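The exec pattern above can be wrapped in a small helper for repeated use. `ml_exec` is a hypothetical convenience function, not part of the product; it requires bash for the `"${@:2}"` argument slice.

```shell
# Hypothetical wrapper around the kubectl exec pattern above (bash)
ml_exec() {
  local pod
  pod=$(kubectl get pods | grep ml-master | cut -d " " -f 1)
  kubectl exec -ti "$pod" -c "$1" -- "${@:2}"
}
# usage: ml_exec ml-tm ls -l /var/log/trafficmgr/access
```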

      You can also execute a simple remote command on a container directly:
      kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c <container name> -- <remote command>
       
      for example:
      
      kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c ml-tm -- ls -l /var/log/trafficmgr/access
      

      At any time, you can also open the Kubernetes dashboard UI to check progress, such as the deployments, replica sets, services, pods, and containers and their logs.

    10. Deploy Mashery Local slave instances:
      deploy-slaves.sh
      You can check the Mashery Local instance pods with the command:
      kubectl get pods
      The Mashery Local slave instance pods are named ml-slave-0, ml-slave-1, ml-slave-2, and so on.

      When a slave instance is fully up, you should see 4/4 under the READY column with STATUS "Running" for its pod.
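A quick way to see how many slave pods are fully up is to count the 4/4 Running lines. This sketch runs on captured sample output; the pod states shown are made up:

```shell
# Sketch: count fully-ready slave pods in captured `kubectl get pods` output
pods='ml-slave-0   4/4   Running   0   5m
ml-slave-1   4/4   Running   0   5m
ml-slave-2   3/4   Running   0   2m'
ready=$(printf '%s\n' "$pods" | grep -c '4/4.*Running')
echo "$ready of 3 slave pods fully ready"
```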

      You can check the startup init instance log with the following command:
      kubectl exec -ti `kubectl get pods |grep <slave pod name> |cut -d " " -f 1` -c ml-cm -- cat /var/log/mashery/init-instance.log
       
      for example:
       
      kubectl exec -ti `kubectl get pods |grep ml-slave-0 |cut -d " " -f 1` -c ml-cm -- cat /var/log/mashery/init-instance.log
      
      You can find the Mashery Local slave instance node IP with the following command:
      kubectl describe node `kubectl get pods -o wide |grep <slave pod name> |awk -F' ' '{print $7}'` |grep Addresses |cut -d "," -f 3
      Then, log in to the ML slave instance Cluster Manager UI at https://<ML slave instance node ip>:5480.
      Note: If you did not open port 5480 for all nodes in the previous step, you need to open the port for each ML slave instance individually with an additional security group through the AWS UI or CLI.
      You can get into any ML slave instance container with the following command:
      kubectl exec -ti `kubectl get pods |grep <slave pod name> |cut -d " " -f 1` -c <container name> -- /bin/bash
      
      The container names are: ml-db, ml-mem, ml-tm, ml-cm.
      You can also execute a simple remote command on a container directly:
      kubectl exec -ti `kubectl get pods |grep <slave pod name> |cut -d " " -f 1` -c <container name> -- <remote command>
      
      for example:
       
      kubectl exec -ti `kubectl get pods |grep ml-slave-0 |cut -d " " -f 1` -c ml-tm -- ls -l /var/log/trafficmgr/access
      
      At any time, you can also open the Kubernetes dashboard UI to check progress, such as the stateful sets, services, pods, and containers and their logs.

      By default, the deployment is configured to run two slave instances.

      You can use the following command to increase or reduce the number of slaves:
      kubectl patch statefulset ml-slave --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value":<the desired replica number>}]'
      
      However, you must have enough worker nodes to run all the slave instances.
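This constraint can be sketched as a simple comparison. Both counts here are stand-ins; in practice the worker count would come from something like `kubectl get nodes`.

```shell
# Sketch: check there are enough worker nodes before scaling the slaves
desired=3
workers=3
if [ "$desired" -le "$workers" ]; then
  msg="enough worker nodes for $desired slaves"
else
  msg="add worker nodes before scaling"
fi
echo "$msg"
```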
    11. Test the traffic, using the following example commands:
      export LB=`kubectl describe service ml-traffic-manager|grep Ingress|awk -F' ' '{print $3}'` && echo $LB
       
      curl -H 'Host: roger.api.perfmom.mashspud.com' http://$LB/testep?api_key=funjsgx8m5bsew2jngpdanxf
    12. Cleanup or undeploy Mashery Local instances.
      To undeploy Mashery Local slave instances:
      deploy-slaves.sh delete
      To undeploy Mashery Local master instances:
      deploy-master.sh delete
    13. Shut down the Kubernetes cluster. For a cluster created with kops (as in step 2 of this procedure), use the following command:
      kops delete cluster $NAME --yes