Deploying a New Cluster on On-premises VirtualBox using Swarm

Files to modify

Only single-zone deployment is supported for on-premises deployments.

After building the Mashery Local images, copy the /var/jenkins_home/tmgc-deploy folder from the tml-installer container to the host from which you want to run the deployment. If your build host and deployment host are the same, the following sample command copies the tmgc-deploy folder from the container to the current working directory on the local machine. Sample command:
docker cp tml-installer:/var/jenkins_home/tmgc-deploy .
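If the deployment host is a different machine, one option is to copy the folder out of the container as shown above and then transfer it with scp (the user, host, and destination path below are placeholders, not values defined by the product):
scp -r tmgc-deploy <user>@<deployment-host>:<destination-path>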
The following file in the tml-installer container is a reference deployment manifest for deployment in a single zone:
tmgc-deploy/onprem/swarm/manifest-onprem-swarm.json

Swarm Configuration

The on-premises deployment creates two VirtualBox instances: one for the Swarm manager node and one for the worker node.

The cluster creation script (create-local-swarm-cluster.sh) creates one Swarm manager node VM and the given number of worker VMs on which the Mashery Local containers are deployed.

Each VM is allocated 3 GB of memory, so ensure that the host has at least 6-7 GB of memory available.
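Before running the cluster creation script, you may want to check the available memory on the host, and afterwards you can confirm that the VMs exist. The following generic commands are only an illustration and are not part of the Mashery Local tooling:
free -h
VBoxManage list runningvms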

Edit the reference deployment file tmgc-deploy/onprem/swarm/manifest-onprem-swarm.json and set the cluster name and zone:
"tml_cluster_name": "Tibco Mashery Local Reference Cluster",
"tml_zones": ["local"],

Mashery Local Components Configuration

The following settings can be customized. The tml_image_tag value must be updated to match your Docker image build.
Variable Prefix   Description
tml_cm            Mashery Local Cluster Manager Component
tml_tm            Mashery Local Traffic Manager Component
tml_cache         Mashery Local Cache Component
tml_sql           Mashery Local SQL Component
tml_log           Mashery Local Log Component
tml_nosql         Mashery Local NoSQL (Cassandra) Component

"tml_image_tag": "v5.0.0.1",

"tml_cm_count": 1,
"tml_tm_count": 1,
"tml_cache_count": 1,
"tml_sql_count": 1,
"tml_log_count": 1,
"tml_nosql_count": 1,


"tml_tm_http_enabled": "true",
"tml_tm_http_port": 80,
"tml_tm_https_enabled": "true",
"tml_tm_https_port": 443,
"tml_tm_oauth_enabled": "true",
"tml_tm_oauth_port": 8083
,
"cassandra_max_heap": "512M",
"cassandra_replication_factor": 1

Setting up Mashery Service Configuration Data

Mashery Local offers the option of importing service configuration data offline. A sample data.zip is provided with the TIBCO Mashery Local build that can be loaded into the database during TIBCO Mashery Local cluster creation.

To load the sample data:
  1. Copy the sample data.zip (located at tmgc-deploy/sample_data/data.zip) into the tmgc-deploy/properties/ folder.
  2. The data from data.zip will then be loaded into the database automatically when the TIBCO Mashery Local cluster is created.
Note: Make sure that the apiKey and apiSecret values are empty in the tml_sql_properties.json in the tmgc-deploy/properties/ folder if you want to use the offline data loading feature. They are blank by default.
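For example, step 1 above (copying the sample data into place) can be done with:
cp tmgc-deploy/sample_data/data.zip tmgc-deploy/properties/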
TIBCO Mashery Local also offers the capability to sync data from an active MoM host at the time of cluster creation. To load the data using the MoM sync configuration, specify the following three properties in the tml_sql_properties.json:
"mom-host": "<MOM_HOST>",
"apiKey": "<MOM_API_KEY>",
"apiSecret": "<MOM_API_SECRET>",
The tml_sql_properties.json is located in the tmgc-deploy/properties/ folder.
Note: Do not place the sample data.zip in the tmgc-deploy/properties/ folder if you are loading the data using the MoM sync configuration.

Generating Deployment Scripts and Configuration

For single-zone deployment, run the following command; the generated deployment scripts and configuration are placed in the folder "manifest-onprem-swarm":
./compose.sh manifest-onprem-swarm.json
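As a quick sanity check (optional, not required by the deployment process), you can list the generated folder:
ls manifest-onprem-swarm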