Creating a Local Edition Cluster

Important: The Docker Swarm cluster relies on host pinning to achieve statefulness. That is, for each TIBCO Cloud™ API Management - Local Edition stateful component (NoSQL, Log, SQL, and Cache), you must specify the host name where you want it to run in the corresponding deployment file, and then run the deployment step by step.
  1. Create a tar file in the resources/<tml-type> folder by executing the following script:
    ./create_resources.sh

    Run ls to list the folder contents and confirm that the generated files are present.
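
    A minimal sketch of this step, assuming it is run from the resources/<tml-type> folder (the exact file names produced depend on your installation):

      # create the resource tar files used by the stack deployments below
      ./create_resources.sh
      # confirm that the generated files are present
      ls -l
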
  2. Deploy the NoSQL components first. The first NoSQL component is deployed as a seed.
    1. Open the tmgc-nosql.yml file and replace "${HOST_NAME}" with the name of the host.

      The name of the host is the value shown in the "HOSTNAME" column of the docker node ls output (see step 6 in Creating a Docker Swarm Cluster). (For an on-premises VirtualBox setup, you can use "master" as the host name because the VM is created with the name "master".)

    2. Run the following command:
      docker stack deploy -c tmgc-nosql.yml nosqlstack --with-registry-auth
    3. Run the following command:
      docker service ls

      The output should appear as follows:

      k6rpc2tmey91   nosqlstack_nosql_seed   replicated    1/1    tmlbuilder.company.com/tml/tml-nosql:v5.2.0.160

      The "1/1" in the REPLICAS column indicates that the service has deployed successfully. Do not proceed until the output shows "1/1".

      If you have more than one NoSQL component and want each to be deployed on a separate node, open the tmgc-nosql-ring.yml file, replace "${HOST_NAME}" with the desired host, and set replicas to 1 (replicas: 1). Then run the following command after each change:

      docker stack deploy -c tmgc-nosql-ring.yml nosqlnonseedstack --with-registry-auth

      Provide a different stack name for each instance (nosqlnonseedstack for the first non-seed container, nosqlnonseedstack2 for the second, and so on), for example:

      docker stack deploy -c tmgc-nosql-ring.yml nosqlnonseedstack2 --with-registry-auth
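
      This per-node edit-and-deploy cycle can also be scripted. The sketch below is a minimal example only: it assumes GNU sed and two hypothetical non-seed hosts, node2 and node3, taken from the HOSTNAME column of docker node ls; it edits a per-host copy of the file rather than re-editing tmgc-nosql-ring.yml itself, and each copy must still have replicas set to 1:

      n=0
      for host in node2 node3; do
        n=$((n+1))
        # work on a per-host copy so the original template stays intact
        cp tmgc-nosql-ring.yml "tmgc-nosql-ring-$host.yml"
        sed -i "s/\${HOST_NAME}/$host/g" "tmgc-nosql-ring-$host.yml"
        # first stack is nosqlnonseedstack, second is nosqlnonseedstack2, and so on
        suffix=$([ "$n" -eq 1 ] && echo "" || echo "$n")
        docker stack deploy -c "tmgc-nosql-ring-$host.yml" "nosqlnonseedstack$suffix" --with-registry-auth
      done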
  3. Deploy the Configuration Manager.
    1. Run the following command:
      docker stack deploy -c tmgc-cm.yml cmstack --with-registry-auth 
    2. Run the following command:
      docker service ls

      Example output:

      j4x64bis9qmb   cmstack_tmlcm   replicated   1/1   tmlbuilder.company.com/tml/tml-cm:v5.2.0.160

      Ensure that the REPLICAS column shows "1/1" (which indicates a successful deployment) before proceeding.
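
      If you prefer to script this check, the following minimal sketch polls docker service ls until the Configuration Manager reports "1/1"; the service name cmstack_tmlcm is taken from the example output above:

      # poll the REPLICAS value for cmstack_tmlcm until it reads 1/1
      until docker service ls --filter name=cmstack_tmlcm --format '{{.Replicas}}' | grep -q '^1/1$'; do
        echo "waiting for cmstack_tmlcm to reach 1/1 ..."
        sleep 5
      done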

  4. Deploy the Log Service.
    1. Open the tmgc-log.yml file, replace "${HOST_NAME}" with the name of the host for each Log Service instance, and set replicas to 1 (replicas: 1).
    2. Run the following command after each change:
      docker stack deploy -c tmgc-log.yml logstack --with-registry-auth
      Note: Provide a different stack name for each instance (logstack for the first container, logstack2 for the second, and so on), for example:
      docker stack deploy -c tmgc-log.yml logstack2 --with-registry-auth
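      The sketch below shows the cycle concretely for two hypothetical hosts, node2 and node3 (substitute host names from docker node ls); it assumes GNU sed and edits per-host copies of the file rather than re-editing tmgc-log.yml, and each copy must still have replicas set to 1:

      cp tmgc-log.yml tmgc-log-node2.yml
      sed -i 's/${HOST_NAME}/node2/g' tmgc-log-node2.yml
      docker stack deploy -c tmgc-log-node2.yml logstack --with-registry-auth

      cp tmgc-log.yml tmgc-log-node3.yml
      sed -i 's/${HOST_NAME}/node3/g' tmgc-log-node3.yml
      docker stack deploy -c tmgc-log-node3.yml logstack2 --with-registry-auth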
  5. Deploy the SQL Service.
    1. Open the tmgc-sql.yml file and replace "${HOST_NAME}" with the name of the host where you want to deploy the SQL Service.
    2. Run the following command:
      ./deploy-sql-pod.sh

    Note that there can only be one SQL Service per zone.
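
    After the script finishes, you can verify the deployment from the service list and confirm that it shows "1/1". The grep pattern below is an assumption; the actual service name is created by deploy-sql-pod.sh:

      docker service ls | grep -i sql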

  6. Deploy the Cache Service.
    1. Open the tmgc-cache.yml file, replace "${HOST_NAME}" with the name of the host for each Cache Service instance, and set replicas to 1 (replicas: 1).
    2. Run the following command after each change:
      docker stack deploy -c tmgc-cache.yml cachestack --with-registry-auth
      Note: Provide a different stack name for each instance (cachestack for the first container, cachestack2 for the second, and so on), for example:
      docker stack deploy -c tmgc-cache.yml cachestack2 --with-registry-auth
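      To confirm that each cache stack landed on the intended host, you can inspect task placement. The sketch below assumes the stack names cachestack and cachestack2 used above:

      docker stack ps cachestack --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'
      docker stack ps cachestack2 --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'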
  7. Deploy the Traffic Manager Service by running the following command:
    ./deploy-tm-pod.sh

    The Traffic Manager instances are distributed across the available nodes by the Swarm manager.
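
    To see where Swarm has placed the Traffic Manager tasks, you can list the tasks on each node. The grep pattern below is an assumption about the service name created by deploy-tm-pod.sh; adjust it to match the name shown by docker service ls:

      docker node ps $(docker node ls -q) | grep -i tm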