Installing and Running Mashery Local for Docker with Kubernetes

To install and run Mashery Local for Docker with Kubernetes on the Amazon Web Services (AWS) cloud, ensure your environment meets the prerequisites, then follow the steps below.

Prerequisites

  • Mac OS or Linux local working environment
  • AWS account with full access to the AWS APIs
  • AWS Command Line Interface (CLI) installed and configured
    • AWS configurations set up, such as default region, access key, and secret key
    • Verify that the "aws" command is on your path and that you can run simple AWS CLI commands, for example:
      aws ec2 describe-vpcs
  • Local Docker environment ready (either connect to a docker-machine that is up and running, or run a Docker host on the local machine).
    Note: You need this to upload Docker images even if you are using pre-built images from S3.
  • Mashery Local for Docker images built locally. (For instructions on building Mashery Local for Docker images, see Installing Mashery Local for Docker.)
    Note: If you have custom adapters, see the Customizing for Kubernetes section.
  • Docker images verified (no critical errors during the build, and the images can be seen with the command "docker images"). A quick sanity-check sketch follows this list.
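
  As a quick sanity check before you begin, you can verify the AWS CLI and the local Docker environment from a shell. This is a minimal sketch that uses only commands already referenced in this guide:
    # Confirm the AWS CLI is on the path and configured
    aws ec2 describe-vpcs
    # Confirm the local Docker host is reachable
    docker info
    # Confirm the Mashery Local images were built, and note their tags
    docker images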

Procedure

  1. Install Kubernetes and set up the cluster.

    Since Kubernetes clusters can be set up in many different ways and can be shared by many applications, it is the user's responsibility to have the cluster set up and ready for deployment. Verify that there are enough m3.large (or larger) nodes for all planned Mashery Local instances (each Mashery Local instance requires a node).

    The following steps are an example of how to set up a Kubernetes cluster on AWS, based on instructions from Kubernetes Getting Started Guide for AWS.
    Note: This setup only works with Kubernetes 1.5 releases and does not work for 1.6+ releases.
    1. Deploy the Kubernetes cluster on AWS. Follow the instructions in the Kubernetes Getting Started Guide for AWS.
      Note: If you have installed minikube or kubectl before, it is best to remove them and do a clean install (remove any trace of a previous Kubernetes installation):
      • remove/rename the kubectl binary
      • remove/rename ~/.kube
      • remove/rename the kube* files in ~/.ssh
      • install in a clean directory
    2. You can override some Kubernetes default settings by setting environment variables. Export these variables before the install. Typically, you need to set the following:
      export KUBERNETES_RELEASE=v1.5.3
      Note:

      You must set KUBERNETES_RELEASE to v1.5.3 if using the installation steps from the Kubernetes Getting Started Guide for AWS.

      There are other ways to install Kubernetes, such as using Kubernetes Operations (kops), but those are outside the scope of this step.

      export AWS_ACCESS_KEY_ID=<your AWS_ACCESS_KEY_ID>
      export AWS_SECRET_ACCESS_KEY=<your AWS_SECRET_ACCESS_KEY>
      export AWS_DEFAULT_REGION=us-east-1
      
      export KUBERNETES_PROVIDER=aws
      export NUM_NODES=3
      export MASTER_SIZE=m3.medium
      export NODE_SIZE=m3.large
      export AWS_S3_REGION=us-east-1
      export KUBE_AWS_ZONE=us-east-1e
      export AWS_S3_BUCKET=masheryml-kubernetes-artifacts
      export KUBE_AWS_INSTANCE_PREFIX=k8s
      Note:

      Check the AWS zone availability first (from the AWS EC2 dashboard > Service Health > Availability Zone Status). You must choose a zone that is available; otherwise you may get the following error: "An error occurred (InternetGatewayLimitExceeded) when calling the CreateInternetGateway operation: The maximum number of internet gateways has been reached."

      For the Kubernetes worker nodes, NODE_SIZE must be set to m3.large or larger to run an MLCE instance, unless you restrict the resources each instance can allocate in the deployment configuration (a sketch of such a restriction follows this note).

      For the Kubernetes master node, use m3.medium for clusters of fewer than 5 nodes, m3.large for 6-10 nodes, and m3.xlarge for more than 10 nodes.
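
      To illustrate restricting resources, a Kubernetes container spec accepts resource requests and limits. The following fragment is a hypothetical example with illustrative values, not the shipped Mashery Local deployment configuration:
      # Hypothetical container-spec fragment (illustrative values only)
      resources:
        requests:
          cpu: "1"
          memory: 4Gi
        limits:
          cpu: "2"
          memory: 6Gi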

    3. For a first-time installation, change to the directory where you want to install Kubernetes, then execute one of the following:
      #Using wget
      wget -q -O - https://get.k8s.io | bash
      or
      #Using cURL
      curl -sS https://get.k8s.io | bash
    4. Add the appropriate binary folder to your shell PATH to access kubectl:
      export PATH=<path to kubernetes-directory>/client/bin:.:$PATH
    5. Verify the cluster setup with the following command:
      kubectl config view
      You should see something similar to the following:
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: REDACTED
          server: https://34.205.42.112
        name: aws_k8s
       
      ...
       
       
      contexts:
      - context:
          cluster: aws_k8s
          user: aws_k8s
        name: aws_k8s
       
      ...
       
      current-context: aws_k8s
      kind: Config
      preferences: {}
      users:
      - name: aws_k8s
        user:
          client-certificate-data: REDACTED
          client-key-data: REDACTED
          token: lL5WiMPIjxBfdl1e9OkoCx23zM1Mwep8
      - name: aws_k8s-basic-auth
        user:
          password: TNYNPNf6LgCCEfQW
          username: admin
       
      ...
      If the script did not create and start the cluster, run kubernetes/cluster/kube-up.sh to create and start it.
    6. You can access the Kubernetes console UI with the following URL: <cluster server url>/ui
      In the previous example, this is: https://34.205.42.112/ui
    7. Log in to the console with the aws_k8s-basic-auth user "admin" and its password.
      Note:
      For your convenience, you can use the following command to find the server URL directly:
      kubectl config view -o=json|jq '.clusters[] | select(.name=="aws_'${KUBE_AWS_INSTANCE_PREFIX}'") |.cluster.server' |sed -e 's/"//g'
      
      Use the following command to find the password for the admin user:
      kubectl config view -o=json|jq '.users[] | select(.name=="aws_'${KUBE_AWS_INSTANCE_PREFIX}'-basic-auth") |.user.password' |sed -e 's/"//g'
      
  2. Create a private Amazon EC2 Container Registry (ECR) repository for Mashery Local for Docker:
    aws ecr create-repository --repository-name <registry name>
    For example:
    aws ecr create-repository --repository-name tibco/mlce
    Note: If you have never used AWS ECS before, you need to go to the AWS ECS dashboard and follow the "Getting Started" steps.
  3. Go to the examples/kubernetes directory extracted from the Mashery Local for Docker 4.1 release, modify aws-env.sh with the planned configuration, and set the environment variables with the command:
    . aws-env.sh
    Note:

    The ML_REGISTRY_NAME is the registry name used in step 2.

    The ML_REGISTRY_HOST can be found with the following command:
    aws ecr get-login --registry-ids `aws ecr describe-repositories --repository-names "$ML_REGISTRY_NAME" |grep registryId |cut -d ":" -f 2|tr -d ' ",'`|awk -F'https://' '{print $2}'
    
    Or, from the AWS ECS dashboard, go to Repositories > Repository URI. For example, with repository URI "12345603243.dkr.ecr.us-east-1.amazonaws.com/tibco/mlce", the ML_REGISTRY_NAME is tibco/mlce, and the ML_REGISTRY_HOST is 12345603243.dkr.ecr.us-east-1.amazonaws.com.
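    For reference, a hypothetical aws-env.sh might look like the following; the variable names are the ones referenced in this procedure, and the values are placeholders based on the example above:
    # Hypothetical aws-env.sh contents (placeholder values)
    export ML_REGISTRY_NAME=tibco/mlce
    export ML_REGISTRY_HOST=12345603243.dkr.ecr.us-east-1.amazonaws.com
    export ML_IMAGE_TAG=<your image tag>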
  4. Add or set login credentials in <home>/.docker/config.json using the command:
    aws ecr get-login | sh -
  5. Load Docker images.
    1. Verify that the Mashery Local for Docker images with the correct tag are in the current Docker host with the command:
      docker images
      The tag should match the environment variable ML_IMAGE_TAG.

    2. Execute the following script to load images to the ECR Docker registry:
      upload-images.sh
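      In outline, loading images into ECR follows the standard Docker tag-and-push flow. The following is a sketch of that flow for a single image under the variable names above (with a hypothetical <local image> placeholder), not the contents of the shipped upload-images.sh script:
      # Illustrative tag-and-push for one image (not the shipped script)
      docker tag <local image>:$ML_IMAGE_TAG $ML_REGISTRY_HOST/$ML_REGISTRY_NAME:$ML_IMAGE_TAG
      docker push $ML_REGISTRY_HOST/$ML_REGISTRY_NAME:$ML_IMAGE_TAG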
  6. Execute the following script to store the Docker registry key as Kubernetes "Secret":
    set-registry-key.sh
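    For reference, Kubernetes stores registry credentials as a secret of type docker-registry. A manual equivalent would look something like the following sketch, with a hypothetical secret name (the name the script actually uses may differ):
    # Sketch only; the shipped script may use a different secret name
    kubectl create secret docker-registry ml-registry-key \
      --docker-server=https://$ML_REGISTRY_HOST \
      --docker-username=AWS \
      --docker-password=<password from "aws ecr get-login"> \
      --docker-email=none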
  7. Execute the following script to store MOM host and key as Kubernetes "Secret":
    set-mom-secret.sh create <MOM key> <MOM secret> [<MOM host>]
    Note:
    If you want to enable HTTPS or OAuth, see the section Customizing for Kubernetes for additional configuration steps.
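    For reference, credentials of this kind are typically held in a generic Kubernetes secret. A manual equivalent would look something like the following sketch, with a hypothetical secret name and keys (the names the script actually uses may differ):
    # Sketch only; hypothetical secret name and keys
    kubectl create secret generic ml-mom-secret \
      --from-literal=mom.key=<MOM key> \
      --from-literal=mom.secret=<MOM secret> \
      --from-literal=mom.host=<MOM host>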
  8. Create storage classes for Mashery Local for Docker persistent stores:
    set-storage-classes.sh
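    For reference, on AWS a storage class for persistent stores is typically backed by EBS. A minimal sketch of such a manifest, with a hypothetical name and illustrative parameters (see set-storage-classes.sh for the actual definitions):
    # storage-class.yaml: a minimal sketch, not the shipped definition
    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1   # v1beta1 on Kubernetes 1.5
    metadata:
      name: ml-gp2
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
    You would apply such a manifest with "kubectl create -f storage-class.yaml".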
  9. Create Mashery Local Traffic Manager service and Mashery Local Master service:
    set-ml-services.sh
    You can check the services with the following commands:
    kubectl describe service ml-traffic-manager
    kubectl describe service ml-master

    The ml-traffic-manager service is configured with a load balancer. You can find the load balancer DNS name with the following command:

    kubectl describe service ml-traffic-manager|grep Ingress|awk -F' ' '{print $3}'
    
    The load balancer can also be found on the AWS EC2 dashboard Load Balancers list.
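    For reference, the load balancer comes from a Kubernetes service of type LoadBalancer, which provisions an AWS ELB automatically. A minimal sketch of such a service, with a hypothetical selector and illustrative ports (see set-ml-services.sh for the actual definition):
    # Minimal LoadBalancer service sketch (illustrative, not the shipped definition)
    apiVersion: v1
    kind: Service
    metadata:
      name: ml-traffic-manager
    spec:
      type: LoadBalancer        # tells Kubernetes to provision an AWS ELB
      selector:
        app: ml-traffic-manager # hypothetical pod label
      ports:
      - port: 80                # illustrative port
        targetPort: 80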
  10. Deploy Mashery Local master instance:
    deploy-master.sh
    You can check the ML instance pods with the command:
    kubectl get pods
    The ML master pod has a name like ml-master-...... When it's fully up, you should see 4/4 under the READY column with STATUS "Running" for the master instance pod.
    You can check the startup init instance log with the following command:
    kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c ml-cm -- cat /var/log/mashery/init-instance.log
    
    When it's fully ready to serve traffic, you should see something like the following:
    ....
     
    Register status: Content-Type: application/json Status: 200 {"results": [{"results": [{"address": "10.0.22.98"}], "error": null}, {"results": [{"area_name": "Roger"}], "error": null}, {"results": [{"credentials_updated": true}], "error": null}, {"results": [{"name": "ml-master-4209822619-sxq40", "id": 0}], "error": null}, {"results": [{"is_master": true}], "error": null}], "error": null}
     
    ****  04/06 05:27:38  Register instance succeeded
     
    Load service result:
     
    Load service result:
     
    Load service result: 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [SERVICES] 04/06/17 05:27:45 - 04/06/17 05:27:47: 254 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [KEYS] 04/06/17 05:27:47 - 04/06/17 05:27:55: 10963 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [APPS] 04/06/17 05:27:55 - 04/06/17 05:28:23: 6884 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [CLASSES] 04/06/17 05:28:23 - 04/06/17 05:28:23: 0 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [PACKAGES] 04/06/17 05:28:23 - 04/06/17 05:29:54: 28824 records (Success) 70a0b42e-2b9a-4f60-a4d6-8c5503894043 [PACKAGEKEYS] 04/06/17 05:29:54 - 04/06/17 05:30:17: 5553 records (Success)
     
    ****  04/06 05:30:17  Service info loaded
     
    Load cache output first ten lines: - Trying to load mapi data for spkey: m8hxx3wxy5wjyjhfzc328wqh key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::2011w25DeveloperJay key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::2011w25DeveloperRoger key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::3skjegt4ddpam6a5r8sfgpkz key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::4q5t7z4gduy388z9nk5tmptm key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::4tzw5p5h5mx8gr8ez6m34wak key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::5s8ds7dcyj7cjz4h9h5tv7ev key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::5yy6dkjbq7sr922j4wt6u2hc key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::6mbcz48nabrz682xn2hdmhzn key: MAPI_m8hxx3wxy5wjyjhfzc328wqh::8tng6tk5bzhpfqexn525cqnj
     
    ****  04/06 05:31:01  Cache Loaded
     
    ****  04/06 05:31:01  Ping Traffic Manager succeeded
     
    ****  04/06 05:31:01  Setting status ready
    When the ML master instance containers are up, you can find the ML master instance node public IP with the following command:
    kubectl describe node `kubectl get pods -o wide |grep ml-master |awk -F' ' '{print $7}'` |grep Addresses |cut -d "," -f 3
    
    If you need to access the Mashery Local instance Cluster Manager UI, you must open port 5480. For convenience, you can open the port for all minion nodes in the cluster with the following command:
    aws ec2 authorize-security-group-ingress --group-id `aws ec2 describe-security-groups --filters "Name=group-name,Values=kubernetes-minion-${KUBE_AWS_INSTANCE_PREFIX}" |jq -r '.SecurityGroups[0].GroupId'` --protocol tcp --port 5480 --cidr 0.0.0.0/0
    
    Or you can open the port individually as needed with an additional security group through the AWS UI or CLI.

    Then you can log in to the ML master instance Cluster Manager UI at https://<ML master instance node ip>:5480.

    You can get into any ML master instance container with the following command:
    kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c <container name> -- /bin/bash
    

    The container names are: ml-db, ml-mem, ml-tm, ml-cm.

    You can also execute simple remote commands on a container directly:
    kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c <container name> -- <remote command>
     
    for example:
    
    kubectl exec -ti `kubectl get pods |grep ml-master |cut -d " " -f 1` -c ml-tm -- ls -l /var/log/trafficmgr/access
    

    At any time, you can also open the Kubernetes dashboard UI to check progress: deployments, replica sets, services, pods, and containers and their logs.

  11. Deploy Mashery Local slave instances:
    deploy-slaves.sh
    You can check the Mashery Local instance pods with the command:
    kubectl get pods
    The Mashery Local slave instance pods are named ml-slave-0, ml-slave-1, ml-slave-2, and so on.

    When a slave instance is fully up, you should see 4/4 under the READY column with STATUS "Running" for its pod.

    You can check the startup init instance log with the following command:
    kubectl exec -ti `kubectl get pods |grep <slave pod name> |cut -d " " -f 1` -c ml-cm -- cat /var/log/mashery/init-instance.log
     
    for example:
     
    kubectl exec -ti `kubectl get pods |grep ml-slave-0 |cut -d " " -f 1` -c ml-cm -- cat /var/log/mashery/init-instance.log
    
    You can find the Mashery Local slave instance node IP with the following command:
    kubectl describe node `kubectl get pods -o wide |grep <slave pod name> |awk -F' ' '{print $7}'` |grep Addresses |cut -d "," -f 3
    Then, log in to the ML slave instance Cluster Manager UI at https://<ML slave instance node ip>:5480.
    Note: If you did not open port 5480 for all nodes in the previous step, you need to open the port for each ML slave instance individually with an additional security group through the AWS UI or CLI.
    You can get into any ML slave instance container with the following command:
    kubectl exec -ti `kubectl get pods |grep <slave pod name> |cut -d " " -f 1` -c <container name> -- /bin/bash
    
    The container names are: ml-db, ml-mem, ml-tm, ml-cm.
    You can also execute simple remote commands on a container directly:
    kubectl exec -ti `kubectl get pods |grep <slave pod name> |cut -d " " -f 1` -c <container name> -- <remote command>
    
    for example:
     
    kubectl exec -ti `kubectl get pods |grep ml-slave-0 |cut -d " " -f 1` -c ml-tm -- ls -l /var/log/trafficmgr/access
    
    At any time, you can also open the Kubernetes dashboard UI to check progress: stateful sets, services, pods, and containers and their logs.

    By default, the deployment is configured to run two slave instances.

    You can use the following command to increase or reduce the number of slaves:
    kubectl patch statefulset ml-slave --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value":<the desired replica number>}]'
    
    However, you must have enough worker nodes to run all the slave instances.
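    For example, to scale to three slave instances:
    kubectl patch statefulset ml-slave --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value":3}]'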
  12. Test the traffic, using the following example commands:
    export LB=`kubectl describe service ml-traffic-manager|grep Ingress|awk -F' ' '{print $3}'` && echo $LB
     
    curl -H 'Host: roger.api.perfmom.mashspud.com' http://$LB/testep?api_key=funjsgx8m5bsew2jngpdanxf
  13. Clean up or undeploy Mashery Local instances.
    To undeploy Mashery Local slave instances:
    deploy-slaves.sh delete
    To undeploy the Mashery Local master instance:
    deploy-master.sh delete
  14. Shut down the Kubernetes cluster using the following command (if the example steps in Step 1 were used):
    kubernetes/cluster/kube-down.sh