Deploying a Cluster on On-Premises Bare Metal using Swarm

  1. Navigate to the folder tmgc-deploy/onprem/swarm/manifest-onprem-swarm. Place all the tml images that you downloaded earlier in this folder.
    Note:
    If you built the TIBCO Mashery Local images using tml-installer, you need to copy those images from the build machine to this location on the deployment host. If your build and deployment hosts are the same, you can copy the images from the tml-installer container into this location using the docker cp command, as shown below (assuming this is the first build):
    docker cp tml-installer:/var/jenkins_home/jobs/tml-build-docker/builds/1/archive/tml-docker/tml-nosql.tar.gz .
    docker cp tml-installer:/var/jenkins_home/jobs/tml-build-docker/builds/1/archive/tml-docker/tml-sql.tar.gz .
    docker cp tml-installer:/var/jenkins_home/jobs/tml-build-docker/builds/1/archive/tml-docker/tml-log.tar.gz .
    docker cp tml-installer:/var/jenkins_home/jobs/tml-build-docker/builds/1/archive/tml-docker/tml-cm.tar.gz .
    docker cp tml-installer:/var/jenkins_home/jobs/tml-build-docker/builds/1/archive/tml-docker/tml-cache.tar.gz .
    docker cp tml-installer:/var/jenkins_home/jobs/tml-build-docker/builds/1/archive/tml-docker/tml-tm.tar.gz .
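    If the build machine and the deployment host are different, the copy can be done over SSH. This is a minimal sketch using placeholder values (user, host, and destination path are not specified in the source and must be adjusted to your environment):
    # Run from the build machine after the docker cp commands above
    scp tml-*.tar.gz <user>@<Deployment Host>:<path>/tmgc-deploy/onprem/swarm/manifest-onprem-swarm/
    # On the deployment host, confirm that all six archives are present
    ls -1 tml-*.tar.gz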
  2. Prepare the swarm cluster. The number of machines in the cluster depends on your requirements. All machines must be able to communicate with each other over the network. Ports 2376, 2377, and 7946 must be open on each host for the swarm cluster to work (see the firewall sketch below).
    Note: Mashery has observed an issue with IPv6, so it is recommended to run the Docker daemon over IPv4 on each node. Otherwise, the cluster might not work as expected.
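    A minimal sketch for opening the required ports, assuming the hosts run firewalld (adapt to iptables or your distribution's firewall as needed; swarm uses port 7946 over both TCP and UDP):
    # Run on each node in the cluster
    sudo firewall-cmd --permanent --add-port=2376/tcp --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp
    sudo firewall-cmd --reload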
  3. SSH into the machine that you want to run as the swarm manager.
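    For example, assuming SSH access to the chosen node (placeholder values):
    ssh <user>@<Manager IP>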
  4. Start the swarm manager on the selected node. Execute the following:
    docker swarm init --advertise-addr <Manager IP>
    This produces an output command (starting with docker swarm join) that needs to be executed on all other worker nodes.
    SSH into the other nodes and run the docker swarm join command (the output of step 4), advertising each node's own IP. Use the following command as a reference on the other nodes:
    docker swarm join --token <TOKEN> <Manager IP>:2377 --advertise-addr <Node IP>
    Sample command based on the above output:
    docker swarm join --token SWMTKN-1-0xuxeo975vwq2dprcy77loazmyqqqr9muzw29yh0xvopq8398z-3zo8r2hb6o3xw51cyogz3z0k6 10.107.138.60:2377 --advertise-addr 10.107.138.62
    Expected output:
    This node joined a swarm as a worker.
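    If you no longer have the join command printed by docker swarm init, the worker join command can be reprinted on the manager. This uses the standard Docker CLI command docker swarm join-token (not specific to Mashery Local):
    # Run on the manager node to reprint the worker join command and token
    docker swarm join-token worker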
  5. Verify that the cluster was created successfully by running the following command on the manager node. The status should be Ready for all the nodes.

    Do not proceed until all the nodes are in Active status.
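    The command itself is not reproduced in the source; docker node ls is the standard Docker command for listing swarm nodes and their status, and is assumed here:
    # Run on the manager node; STATUS should be Ready and AVAILABILITY should be Active for every node
    docker node ls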

  6. If the swarm cluster was created successfully, create an overlay network named "ml5" for container networking. Execute the following on the manager node:
    docker network create -d overlay --attachable ml5
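    A quick check that the network was created, using the standard docker network ls command with a name filter matching the network created above:
    # Run on the manager node; ml5 should be listed with the overlay driver
    docker network ls --filter name=ml5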
  7. Copy all the images to each machine and load them into the local Docker image store. Run the following commands to load the tml images on each machine in the cluster; a loop-based sketch follows the individual commands.

    Docker load image commands:

    docker load -i tml-nosql.tar.gz
    docker load -i tml-cm.tar.gz
    docker load -i tml-log.tar.gz
    docker load -i tml-sql.tar.gz
    docker load -i tml-cache.tar.gz
    docker load -i tml-tm.tar.gz