Creating a Mashery Local Cluster

Follow the steps below to create a Mashery Local cluster.

Note: This environment uses host pinning to achieve statefulness: you must specify the host name on which each Mashery Local stateful component (NoSQL, Log Service, SQL DB, and Cache) should run in the corresponding deployment file, then run the deployment step by step. Repeat the steps below for each zone.
  1. Deploy NoSQL components first. The first NoSQL container is deployed as a seed.
    1. Open the tmgc-nosql.yml file and replace "${HOST_NAME}" with the name of the host.

      The host name is the value shown in the "HOSTNAME" column in the output of the docker node ls command.

      Then deploy the stack:

      docker stack deploy -c tmgc-nosql.yml nosqlstack --with-registry-auth
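The placeholder substitution can be scripted. A minimal sketch, assuming GNU sed and a hypothetical host name (the stand-in template line is created only so the sketch is self-contained when the real file is absent):

```shell
# Sketch: substitute the "${HOST_NAME}" placeholder before deploying.
# The host name below is hypothetical; use the value from the HOSTNAME
# column of `docker node ls`. `sed -i` as written assumes GNU sed.
HOST_NAME="node-1.example.com"
[ -f tmgc-nosql.yml ] || printf 'node.hostname == ${HOST_NAME}\n' > tmgc-nosql.yml  # stand-in template line
sed -i "s/\${HOST_NAME}/${HOST_NAME}/g" tmgc-nosql.yml
grep 'node.hostname' tmgc-nosql.yml
```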
    2. Run docker service ls.
      Make sure the deployment is successful (showing "1/1" in the fourth column of the output, "REPLICAS") before proceeding. For example:
      k6rpc2tmey91        nosqlstack_nosql_seed   replicated          1/1                 mashbuilder-1.company.com/tml/tml-nosql:v5.2.0.160
      If you have more than one NoSQL component and want each deployed on a separate node, open the tmgc-nosql-ring.yml file, replace "${HOST_NAME}" with the desired host, set replicas to 1 (replicas: 1) on line 9, and run the following command after each change:
      docker stack deploy -c tmgc-nosql-ring.yml nosqlnonseedstack --with-registry-auth
      
      

      Sample successful deployment:

    Note: Make sure to provide a different stack name for each instance (names should start with nosqlnonseedstack: nosqlnonseedstack2 for the second container, and so on). For example:
    docker stack deploy -c tmgc-nosql-ring.yml nosqlnonseedstack2 --with-registry-auth
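The repeated edit-and-deploy cycle for non-seed NoSQL instances can be sketched as a loop. Host names are hypothetical, and the deploy commands are only echoed as a dry run (drop the echo to actually deploy):

```shell
# Sketch: generate one deploy command per NoSQL non-seed instance.
# Host names are hypothetical; commands are echoed as a dry run.
HOSTS="node-2.example.com node-3.example.com"
# Stand-in template so the sketch is self-contained when the real file is absent.
[ -f tmgc-nosql-ring.yml ] || printf 'replicas: 1\nnode.hostname == ${HOST_NAME}\n' > tmgc-nosql-ring.yml
i=0
for host in $HOSTS; do
  i=$((i + 1))
  # Stack names: nosqlnonseedstack, nosqlnonseedstack2, ...
  if [ "$i" -gt 1 ]; then stack="nosqlnonseedstack$i"; else stack="nosqlnonseedstack"; fi
  sed "s/\${HOST_NAME}/${host}/g" tmgc-nosql-ring.yml > "tmgc-nosql-ring.${host}.yml"
  echo docker stack deploy -c "tmgc-nosql-ring.${host}.yml" "$stack" --with-registry-auth
done
```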
  2. Deploy Configuration Manager (cm). Because this component is stateless, it can run on any node. Deploy it using the following command:
    docker stack deploy -c tmgc-cm.yml cmstack --with-registry-auth

    Run docker service ls; the output should look like the sample below. Make sure the service is deployed successfully (showing "1/1" in the fourth column of the output) before proceeding.

    Sample successful output:
    j4x64bis9qmb   cmstack_tmlcm   replicated   1/1   mashbuilder-1.company.com/tml/tml-cm:v5.2.0.160
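The replica check described above can be scripted. A minimal sketch that scans docker service ls output and fails if any service is not at full replicas, illustrated here on the sample line from this step:

```shell
# Sketch: fail if any service line reports fewer running replicas than desired.
# Reads `docker service ls` output on stdin; skips lines without an N/N column.
check_replicas() {
  awk '$4 ~ /\// { split($4, r, "/"); if (r[1] != r[2]) bad = 1 } END { exit bad }'
}

# Illustrated on the sample line shown above:
printf '%s\n' 'j4x64bis9qmb   cmstack_tmlcm   replicated   1/1   mashbuilder-1.company.com/tml/tml-cm:v5.2.0.160' \
  | check_replicas && echo 'all replicas up'
```

In a live cluster you would pipe the real output instead: docker service ls | check_replicas.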
  3. Create a tar.gz of the resources folder by executing the following:
    tar -czf resources.tar.gz "resources/"
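A quick way to sanity-check the archive is to list it back after creating it. The sketch below creates a stand-in resources/ folder only if none exists, so it is self-contained:

```shell
# Sketch: package resources/ and list the archive back to verify its contents.
[ -d resources ] || { mkdir -p resources && echo sample > resources/sample.txt; }  # stand-in folder
tar -czf resources.tar.gz resources/
tar -tzf resources.tar.gz | head   # every entry should be under resources/
```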
  4. Deploy the Log Service. Open the tmgc-log.yml file, replace "${HOST_NAME}" with the desired host for each log instance, and set replicas to 1 (replicas: 1) on line 6, then run the following command after each change:
    docker stack deploy -c tmgc-log.yml logstack --with-registry-auth
    Note: Make sure to provide a different stack name for each instance (names should start with logstack: logstack2 for the second container, and so on). For example:
    docker stack deploy -c tmgc-log.yml logstack2 --with-registry-auth
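The two template edits in this step can be combined into one pass. A sketch assuming GNU sed and a hypothetical host name (the stand-in template line is for illustration only):

```shell
# Sketch: set the host and force replicas to 1 in one pass (GNU sed -i).
HOST_NAME="node-2.example.com"    # hypothetical host
[ -f tmgc-log.yml ] || printf 'replicas: 3\nnode.hostname == ${HOST_NAME}\n' > tmgc-log.yml  # stand-in template
sed -i -e "s/\${HOST_NAME}/${HOST_NAME}/g" -e 's/replicas: [0-9]*/replicas: 1/' tmgc-log.yml
grep -E 'replicas|hostname' tmgc-log.yml
```

Note that this rewrites any replicas line, not only line 6; with the real template, check the result before deploying.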
  5. Deploy the SQL Service. Open the tmgc-sql.yml file and replace "${HOST_NAME}" with the desired host for the SQL Service, then run the following command (by design, there can be only one SQL Service per zone):
    ./deploy-sql-pod.sh
  6. Deploy the Cache Service. Open the tmgc-cache.yml file, replace "${HOST_NAME}" with the desired host for each cache instance, and set replicas to 1 (replicas: 1) on line 6, then run the following command after each change:
    docker stack deploy -c tmgc-cache.yml cachestack --with-registry-auth
    Note: Make sure to provide a different stack name for each instance (names should start with cachestack: cachestack2 for the second container, and so on). For example:
    docker stack deploy -c tmgc-cache.yml cachestack2 --with-registry-auth
  7. Deploy the Traffic Manager (TM) Service by running the following command. The Swarm manager distributes Traffic Manager across the available nodes.
    ./deploy-tm-pod.sh
  8. When the deployment is complete, check which containers were deployed on the master/client nodes using the following command:
    docker ps

    Use the following command to log in to the node:

    ssh docker@<master-ip>

    Then use docker exec to get into a pod on that node, for example:

    docker exec -it <container-id> bash

When the Mashery Local cluster is created, it creates a load balancer: