Example Cloud Deployments with CLI

The following examples demonstrate deploying Mashery Local for Docker to cloud environments (Azure and AWS for now) using the Command Line Interface (CLI). The advantage of the CLI is that the deployment process can be scripted and repeated, so an environment can be replicated reliably.
Note: These examples illustrate the concepts; they are not production-grade deployments. The scripts used in these examples are in the examples directory.

In each example, two Mashery Local instances are created - one for the Mashery Local master and one for the Mashery Local slave. Each Mashery Local instance is deployed to a cloud virtual machine (VM) instance (each VM instance corresponding to a docker host). Then, both the master and the slave are connected to Mashery On-Prem Manager and are ready to handle traffic.

Cloud Deployment Prerequisites

  • Docker, docker-machine, and docker-compose installed locally (for example, on a Mac).
  • An AWS or Azure account has been set up and the corresponding CLI installed.
  • Verify that the command aws or az is on your path and that you can run some simple CLI commands, for example:
    • az group list
    • aws ec2 describe-vpcs
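The prerequisite checks above can be sketched as a small shell helper; check_cmds is a hypothetical name used here for illustration, not part of the example scripts:

```shell
# check_cmds: report whether each named CLI tool is on the PATH.
check_cmds() {
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "OK: $cmd"
    else
      echo "MISSING: $cmd"
    fi
  done
}

# For this deployment you would run:
#   check_cmds docker docker-machine docker-compose aws az
```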

Deploying Mashery Local to Cloud Environment

  1. Follow the instructions to build the Mashery Local for Docker images locally with docker-machine. Verify the images are correct (no errors during the build, and the images can be seen with the command docker images).
  2. Prepare a directory that contains the three image gz files, the (optionally modified) docker-compose.yml file, and azure-ml.sh, aws-ml.sh, and init-ml.sh. Set the MOM credentials and make the scripts executable:

    export MOM_KEY="<MOM key>"
    export MOM_SECRET="<MOM secret>"

    chmod a+x *.sh
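As a sketch, the directory contents from step 2 can be checked before running the cloud scripts; verify_dir is a hypothetical helper, and the image archive names match the ones used later in this example:

```shell
# verify_dir: check that a prepared directory contains the files step 2
# requires (image archives, compose file, and the three scripts).
verify_dir() {
  dir="$1"
  missing=0
  for f in ml_db.tar.gz ml_mem.tar.gz ml_core.tar.gz \
           docker-compose.yml azure-ml.sh aws-ml.sh init-ml.sh; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then echo "all files present"; fi
}

# verify_dir .
```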

  3. Set the following environment variables for Azure or AWS:
    For Azure:
    • export AZURE_USER="<azure user>"
    • export AZURE_PWD="<azure password>"
    • export AZURE_SUBSCRIPTION_ID="<your azure subscription id>"
    • export AZURE_IMAGE="<azure image for the location>"
    • (Default value "canonical:UbuntuServer:16.04.0-LTS:latest" for location "westus" if not specified.)
    • export ML_LOC_NAME="<azure location for ML>"
    • (Default value "westus" if not specified)
    • export ML_RESOURCE_GROUP="<azure resource group name for ML>"
    • (Default value "mlResourceGroup" if not specified)
    • export ML_DNS_NAME="<public IP DNS name>"
    • (Default value "testml" if not specified)
    For AWS:
    • export AWS_ACCESS_KEY_ID="<AWS_ACCESS_KEY_ID>"
    • export AWS_SECRET_ACCESS_KEY="<AWS_SECRET_ACCESS_KEY>"
    • export AWS_DEFAULT_REGION="<default AWS region>"
    • (Default value "us-east-1" if not specified)
    • export AWS_ZONE="<AWS zone>"
    • (Default value "e" if not specified)
    • export AWS_AVAILABILITY_ZONE="<AWS availability zone>"
    • (Default value "us-east-1e" if not specified)
    • export AWS_AMI="<AWS AMI id>"
    • (Default value "ami-efe09bf8" if not specified)
    The AWS AMI must be an Ubuntu 16.04.* image available in the region, and the AWS availability zone must exist for that region.
    Note: For eu-central-1, do NOT use ami-9346bcfc because it has some extra software running inside that would cause performance degradation. Any public AMI should be carefully examined before being used.

    Verify that the output format is json in your default configuration (check the file <home>/.aws/config or use the command "aws configure" to verify).
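One way to script that check is a small grep over the config file; check_output_format is a hypothetical helper, and it assumes the usual "output = json" line that aws configure writes:

```shell
# check_output_format: verify that an AWS CLI config file sets the default
# output format to json. Pass the path, e.g. "$HOME/.aws/config".
check_output_format() {
  if grep -Eq '^output[[:space:]]*=[[:space:]]*json' "$1"; then
    echo "output format is json"
  else
    echo "output format is NOT json"
  fi
}

# check_output_format "$HOME/.aws/config"
```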

  4. Run the following scripts:

    For Azure, run the script azure-ml.sh.

    For AWS, run the script aws-ml.sh.

    It takes some time for the scripts to run (about 45 minutes for Azure and 100 minutes for AWS) because images must be loaded to the cloud twice - once each for the master and slave.
    Note: It's better to save the output to a file for later examination, for example:

    ./aws-ml.sh 2>&1 |tee /tmp/aws-ml.out

    If the script hangs while docker is loading images, try the following:
    1. Ctrl-C out of the script.
    2. Execute the following commands (they take more than an hour):
      
      docker-machine scp ml_db.tar.gz aws-ml-master:/tmp
      docker-machine ssh aws-ml-master sudo docker load -i /tmp/ml_db.tar.gz
      docker-machine scp ml_mem.tar.gz aws-ml-master:/tmp
      docker-machine ssh aws-ml-master sudo docker load -i /tmp/ml_mem.tar.gz
      docker-machine scp ml_core.tar.gz aws-ml-master:/tmp
      docker-machine ssh aws-ml-master sudo docker load -i /tmp/ml_core.tar.gz
      
      docker-machine scp ml_db.tar.gz aws-ml-slave1:/tmp
      docker-machine ssh aws-ml-slave1 sudo docker load -i /tmp/ml_db.tar.gz
      docker-machine scp ml_mem.tar.gz aws-ml-slave1:/tmp
      docker-machine ssh aws-ml-slave1 sudo docker load -i /tmp/ml_mem.tar.gz
      docker-machine scp ml_core.tar.gz aws-ml-slave1:/tmp
      docker-machine ssh aws-ml-slave1 sudo docker load -i /tmp/ml_core.tar.gz
      
      ./init-ml.sh aws  2>&1 |tee /tmp/init-ml.out
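The repetitive scp/load commands above can be condensed into a loop. This is a sketch: the run wrapper and the DRY_RUN switch are added here for illustration and are not part of the example scripts.

```shell
# run: execute a command, or just print it when DRY_RUN=1.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi
}

# load_images: copy each image archive to a docker-machine host and load it,
# mirroring the per-file commands above.
load_images() {
  host="$1"
  for img in ml_db.tar.gz ml_mem.tar.gz ml_core.tar.gz; do
    run docker-machine scp "$img" "$host:/tmp"
    run docker-machine ssh "$host" sudo docker load -i "/tmp/$img"
  done
}

# load_images aws-ml-master
# load_images aws-ml-slave1
```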
    If something goes wrong with AWS and you need to rerun aws-ml.sh, perform the following cleanup first:
    docker-machine rm aws-ml-master aws-ml-slave1
    (if that fails, force removal:)
    docker-machine rm -f aws-ml-master aws-ml-slave1
    
    aws ec2 delete-key-pair --key-name aws-ml-master
    aws ec2 delete-key-pair --key-name aws-ml-slave1
    
    aws elb delete-load-balancer --load-balancer-name MLLB
    aws ec2 delete-vpc --vpc-id <vpc id value>
    If you just cleaned up the previous install and the AWS instances were "Terminated" but not yet completely removed, you may get the following error when you rerun aws-ml.sh:

    An error occurred (InvalidInstance) when calling the RegisterInstancesWithLoadBalancer operation: EC2 instance i-xxxx... is not in running state.

    As a workaround, if you do not want to wait until those AWS instances are completely removed, you can modify aws-ml.sh and init-ml.sh to replace the words ML-Master and ML-Slave1 with something else before executing aws-ml.sh again.
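That rename can be done with sed; rename_instances is a hypothetical helper, "ML-Master2" and "ML-Slave2" are just example replacement names, and .bak backups of the originals are kept:

```shell
# rename_instances: replace ML-Master / ML-Slave1 in the given scripts with
# example replacement names, keeping .bak backups of the originals.
rename_instances() {
  sed -i.bak 's/ML-Master/ML-Master2/g; s/ML-Slave1/ML-Slave2/g' "$@"
}

# rename_instances aws-ml.sh init-ml.sh
```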

Recreating Master and Slave

If the docker-machines (AWS or Azure instances) are fine and the Docker images are loaded, and you just want a clean new master and slave, perform the following steps:
  • For AWS: ./init-ml.sh aws 2>&1 |tee /tmp/init-ml.out
  • For Azure: ./init-ml.sh azure 2>&1 |tee /tmp/init-ml.out
Note:

A load balancer is created in both the Azure and AWS cases, and traffic goes through the load balancer.

For Azure, to access the UI, you can use the docker host IP or the load balancer IP with port 5480 for the master and port 5481 for slave1. For AWS, to access the UI, use the instance's public IP. You can find the public IP in the AWS console or use the following command for the master:

aws ec2 describe-instances --filters 'Name=tag:Name,Values=ML-Master'|grep PublicIpAddress |cut -d '"' -f 4

For a slave (Slave1, for example), use the command:

aws ec2 describe-instances --filters 'Name=tag:Name,Values=ML-Slave1'|grep PublicIpAddress |cut -d '"' -f 4
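The grep/cut pipeline can be factored into a helper so it is not repeated per instance; extract_public_ip is a hypothetical name used here for illustration:

```shell
# extract_public_ip: pull the PublicIpAddress field out of
# "aws ec2 describe-instances" JSON output, as the pipelines above do.
extract_public_ip() {
  grep '"PublicIpAddress"' | cut -d '"' -f 4
}

# Usage:
#   aws ec2 describe-instances --filters 'Name=tag:Name,Values=ML-Master' | extract_public_ip
# The AWS CLI can also select the field itself with its --query option:
#   aws ec2 describe-instances --filters 'Name=tag:Name,Values=ML-Master' \
#     --query 'Reservations[].Instances[].PublicIpAddress' --output text
```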

There are other enhancements that could be made, for example, modifying the security rules to restrict internal traffic to within the virtual network only, or creating the docker-machines without public IP addresses (all required access from outside could go through the load balancer).