Clustering Architecture and Components

Certain components of TIBCO MDM can be run as multiple instances to share load and provide redundancy, whereas other components must be run as a single instance.

The following diagram depicts the clustering architecture of TIBCO MDM.

Clustering Architecture
Clustering Components
Load Balancer

To deploy clustered web servers, use a load balancer. A load balancer distributes HTTP requests from browsers and web service requests evenly across the cluster members. This guide does not describe how to set up a load balancer; consult the appropriate vendor documentation.

Clustering of web servers is optional; if the web servers are not clustered, a load balancer is not required.

Web Server

Web servers receive HTTP requests and forward them to the application server. As mentioned earlier, a load balancer is required to cluster web servers.

Alternatively, a single web server can be set up to load balance HTTP requests across multiple application servers without a dedicated load balancer.
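
For illustration, the following is a minimal sketch of such a setup, assuming Apache HTTP Server 2.4 with mod_proxy_balancer; the host names, ports, and the /eml context path are assumptions and must be adapted to your environment.

# Hypothetical balancer definition appended to the Apache configuration.
# Requires the proxy, proxy_http, proxy_balancer, and lbmethod_byrequests
# modules to be loaded.
cat >> /etc/httpd/conf.d/mdm-balancer.conf <<'EOF'
<Proxy "balancer://mdmcluster">
    BalancerMember "http://mdmapp1.example.com:8080"
    BalancerMember "http://mdmapp2.example.com:8080"
</Proxy>
ProxyPass        "/eml" "balancer://mdmcluster/eml"
ProxyPassReverse "/eml" "balancer://mdmcluster/eml"
EOF

In practice, sticky sessions (for example, stickysession=JSESSIONID) are typically required so that a user session stays on one application server; refer to the web server documentation for details.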

Application Server

You can install one or more instances of the TIBCO MDM application on a single computer running an application server, or install one instance on each of several machines, each running in its own application server.

For clustering, multiple application servers must be deployed to balance the load and provide redundancy. All application servers in the cluster must have the same JVM version and compatible JVMs. Ensure that:

  • Each application server has an independent JNDI registry.
  • Each server has a unique port assigned for JNDI registry in the Configurator.
  • Each application server has its own logging setup, with a separate logging configuration and log files located in a directory on a local file system.
  • The configuration is stored centrally: each application server instance pulls its configuration from the central cluster configuration instance, which is referenced by the MQ_CONFIG_FILE environment variable, typically a file named ConfigValues.xml. Each application instance retrieves its relevant portion of the central configuration by identifying itself through its unique node ID (see the sketch after this list).
  • The Node ID (or NODE_ID environment variable) is set uniquely for each application server instance and matches the member name in the Configurator.

    For cluster configuration with JBoss, refer to the following link:
https://docs.jboss.org/author/display/AS71/AS7+Cluster+Howto
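
The following is a minimal sketch of the per-node settings in an application server startup script; the paths and the member name Member1 are assumptions.

# Hypothetical per-node settings; adapt the paths and names to your
# installation. MQ_CONFIG_FILE points every node at the same central
# ConfigValues.xml.
export MQ_CONFIG_FILE=/shared/mdm/config/ConfigValues.xml

# NODE_ID must be unique per instance and must match the member name
# defined in the Configurator (for example, Member2 on the second node).
export NODE_ID=Member1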

Database Server

A database server persists and queries TIBCO MDM data. All application servers in the cluster must be connected to one active database instance.

Note: If you need to cluster the Oracle database using RAC, contact Customer Support.

If the application server (for example, WebSphere) supports transparent failover between active and standby database servers, TIBCO MDM can connect to the standby database server. Any industry-standard database clustering technology can be used to cluster the database. In the case of a database failover and restart, the application servers can reconnect to the database without requiring a restart.
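
For illustration, a failover-capable Oracle JDBC URL might look like the following sketch; the host names, port, and service name are assumptions, and the data source itself is defined in the application server configuration rather than in a script.

# Illustrative only: an Oracle Net descriptor with client-side failover.
# The hosts db1 and db2, port 1521, and the service name "mdm" are
# assumptions.
JDBC_URL='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=db1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=db2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=mdm)))'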

Messaging Server

A messaging server is used for internal application server synchronization as well as for external communication with back-end systems.

All application servers should be connected to at least one active messaging server (for example, TIBCO EMS). The messaging servers themselves can be clustered; to configure clustering, refer to the documentation for your messaging server.

Multiple standby messaging servers can be configured using the messaging configuration inside the ConfigValues.xml configuration instance. When the primary messaging server fails, all open connections to it are transparently rerouted to the standby server. During the reconnection phase, the TIBCO MDM server can encounter errors; however, the rollover to the standby server typically completes quickly.

If the messaging server goes down, the application servers can be configured to attempt reconnection to the messaging server for a configurable interval. After that time frame, the application server has to be restarted.
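
As an illustration, with TIBCO EMS the primary and standby servers are usually listed in a single fault-tolerant URL; the host names and port below are assumptions, and the exact property in ConfigValues.xml is maintained through the Configurator.

# Illustrative only: a TIBCO EMS fault-tolerant URL naming the primary and
# standby servers. The client retries the listed URLs in order when the
# active server becomes unavailable.
JMS_PROVIDER_URL='tcp://ems-primary:7222,tcp://ems-standby:7222'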

Note: It is possible to configure the application in such a way that different instances can use a segregated, dedicated JMS server. This configuration may be used to create prioritized processing zones. Consult Customer Support for additional information.

If WebSphere MQ is used as the messaging server, the number of JMS sessions that can be created must be increased. To do so, add the following CHANNELS section to the qm.ini file of the queue manager used by the cluster (for example, on Linux or UNIX machines, qm.ini might be located in the /var/mqm/qmgrs/<QMgrName> directory).

CHANNELS:
   MaxChannels=400

Set MaxChannels to 400 or a higher value, depending on the number of channels required.

File Stores

The file stores are set up as follows:
  • The MQ_COMMON_DIR directory is shared by all application servers. Ensure that all servers are set up so that MQ_COMMON_DIR points to the same location. The location can be mapped to a different logical directory name on each server (see the sketch after this list).

    For example, one application server can mount MQ_COMMON_DIR to /home/mdm/common, and another one can mount MQ_COMMON_DIR to /export/vsamin/commondir. In addition, a Communicator running on its own machine can mount MQ_COMMON_DIR to /mdm6/commondir, provided all of them point to the same physical file store.

  • The MQ_HOME directory can be set up in any one of the following ways:
    • Each application server has its own MQ_HOME and it is not shared with other application servers.
    • MQ_HOME is shared by all application servers. This typically involves a single installation image of TIBCO MDM, shared across the cluster machines through a remote file system.
  • The MQ_CONFIG_FILE file represents the central configuration store for the entire cluster and contains the configuration for every instance. To set up the logging configuration for each cluster member, define the cluster in the Configurator, and define the relevant logging configuration for each member in Member > Logging. Also, define the MQ_LOG environment variable in the application server startup script so that it points to a directory on a local file system.
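
The following is a minimal sketch of how two nodes might mount the shared store and set the related environment variables, assuming NFS; the server name and export path are assumptions, while the mount points reuse the examples given above.

# Illustrative only: on node 1, mount the shared store and point
# MQ_COMMON_DIR at the local mount point.
mount -t nfs filer.example.com:/export/mdm/common /home/mdm/common
export MQ_COMMON_DIR=/home/mdm/common

# On node 2 the same export can be mounted at a different local path:
#   mount -t nfs filer.example.com:/export/mdm/common /export/vsamin/commondir
#   export MQ_COMMON_DIR=/export/vsamin/commondir

# MQ_LOG, by contrast, must point to a local file system, unique to each node.
export MQ_LOG=/var/log/mdm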

Note: You can configure the message recovery system to write failed messages to a local file system or a network file system. For more details, refer to TIBCO MDM System Administration.