
Deployment Architecture
TIBCO API Exchange Gateway is deployed as a cluster of engines that act as a single logical gateway. The engines in the cluster can run on a single server or in a distributed environment across multiple physical or virtual servers, providing fine-grained control over the cluster deployment topology.
TIBCO Rendezvous is used for communication between most of the runtime components of both the gateway operational layer and the gateway management layer. The gateway operational layer and all its components share a single set of configuration files. Therefore, the configuration files should be stored on a shared storage device that is accessible to each of the runtime components.
If multiple instances of the Core Engine are deployed in a cluster, multiple Cache Agents are also instantiated. This deployment has a single distributed cache that is shared across all the Core Engines to support the association and response cache functionality.
Single Server Deployment Architecture
In its simplest, non-highly-available form, TIBCO API Exchange Gateway can run on a single server. The following figure displays the deployment of the software components as a single server instance:
Figure 3 Single Server Deployment
This configuration provides the complete gateway functionality, including the optional operational reporting and analytics provided by the TIBCO Spotfire® Server components. The TIBCO Spotfire Professional client software running on a Windows workstation and the TIBCO Spotfire WebPlayer Server software running on a Windows server are not shown in the diagram.
The protocol termination components, the Module for Apache HTTP Server (optional) and TIBCO Enterprise Message Service for JMS transport (optional), need to be deployed and managed with their standard operations management tools.
The Module for Apache HTTP Server that is part of TIBCO API Exchange Gateway is deployed as a normal module for the Apache HTTP server. This module turns the Apache HTTP requests into TIBCO Rendezvous messages for communication with the Core Engine. If the Apache HTTP Server is deployed within a DMZ, you should configure the TIBCO Rendezvous Routing Daemon. The TIBCO Rendezvous Routing Daemon forwards the TIBCO Rendezvous messages from the DMZ network through the firewall to the internal network where the TIBCO API Exchange Gateway components are deployed.
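As a rough illustration of this request-to-message conversion, the following Java sketch publishes a Rendezvous message using the TIBCO Rendezvous Java API (tibrvj). The subject name, message fields, and daemon parameters are assumptions for illustration only; the Apache module uses its own internal subjects and message format.

import com.tibco.tibrv.Tibrv;
import com.tibco.tibrv.TibrvMsg;
import com.tibco.tibrv.TibrvRvdTransport;

// Conceptual sketch of republishing an HTTP request as a Rendezvous message.
// Assumes the tibrvj library is on the classpath; subject and field names
// are illustrative assumptions, not the module's internal format.
public class RvPublishSketch {
    public static void main(String[] args) throws Exception {
        Tibrv.open(Tibrv.IMPL_NATIVE);
        TibrvRvdTransport transport =
                new TibrvRvdTransport("7500", ";", "tcp:7500"); // service, network, daemon
        TibrvMsg msg = new TibrvMsg();
        msg.setSendSubject("example.gateway.request");          // assumed subject
        msg.update("httpMethod", "GET");
        msg.update("uri", "/orders/42");
        transport.send(msg);
        transport.destroy();
        Tibrv.close();
    }
}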
Runtime Components
The runtime components of TIBCO API Exchange Gateway are deployed as a single application that can span multiple host servers. The runtime components are the Core Engine (optionally with a Cache Agent), the Global Throttle Manager, the Central Logger, and the Cache Cleanup Agent.
The management layer components, the Central Logger and the Global Throttle Manager, communicate with the Core Engine using Rendezvous messages at run time.
The Core Engine publishes the messages as events on the Rendezvous bus. The Central Logger component receives these messages from the Rendezvous bus and stores them in the Central Logger database at appropriate intervals. The Global Throttle Manager also uses the Rendezvous bus to report the throttle usage data to the Central Logger.
The Global Throttle Manager controls the throttle allocation for each Core Engine. The Global Throttle Manager receives throttle reports from the Core Engines over the Rendezvous bus. It also sends the throttle grants back to the Core Engines over the Rendezvous bus.
The Global Throttle Manager treats the throttle usage events as the heartbeat of a Core Engine. If a configurable number of consecutive heartbeats is missed, the Global Throttle Manager treats the Core Engine as dead and redistributes the throttle limits of the dead instance equally among the live Core Engines.
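The following Java sketch illustrates this missed-heartbeat detection and equal redistribution conceptually. The class name, threshold, and interval values are assumptions for illustration and do not represent the internal implementation of the Global Throttle Manager.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conceptual sketch only: missed-heartbeat detection and equal redistribution
// of a dead Core Engine's throttle limit among the surviving engines.
public class ThrottleRedistributionSketch {
    // Assumed configuration values, for illustration only.
    static final int MAX_MISSED_HEARTBEATS = 3;
    static final long HEARTBEAT_INTERVAL_MS = 10_000;

    // Engine ID -> currently granted throttle limit (requests per interval).
    final Map<String, Integer> grantedLimits = new ConcurrentHashMap<>();
    // Engine ID -> time of the last throttle usage report (the "heartbeat").
    final Map<String, Long> lastReport = new ConcurrentHashMap<>();

    // Called whenever a throttle usage report arrives from a Core Engine.
    void onThrottleReport(String engineId, long now) {
        lastReport.put(engineId, now);
    }

    // Called periodically: an engine that misses too many consecutive
    // heartbeats is treated as dead and its limit is redistributed.
    void checkHeartbeats(long now) {
        long deadline = (long) MAX_MISSED_HEARTBEATS * HEARTBEAT_INTERVAL_MS;
        for (Map.Entry<String, Long> entry : lastReport.entrySet()) {
            if (now - entry.getValue() > deadline) {
                redistribute(entry.getKey());
            }
        }
    }

    // Split the dead engine's granted limit equally among the live engines.
    void redistribute(String deadEngine) {
        Integer freed = grantedLimits.remove(deadEngine);
        lastReport.remove(deadEngine);
        if (freed == null || grantedLimits.isEmpty()) {
            return;
        }
        int share = freed / grantedLimits.size();
        grantedLimits.replaceAll((engine, limit) -> limit + share);
    }
}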
The Cache Cleanup Agent does not use the Rendezvous bus to interact with the Core Engine. It connects directly to one of the Cache agents to clear the cache so that it does not grow too large. This cleanup of the cache is called cache flushing.
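As a minimal sketch of cache flushing, the following Java snippet clears an in-memory map once it exceeds a size threshold. The threshold, the interval, and the use of a local map in place of the distributed cache are assumptions for illustration only.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of cache flushing: periodically clear a cache once it
// grows past a threshold. The local map stands in for the distributed cache.
public class CacheFlushSketch {
    static final int MAX_ENTRIES = 100_000; // assumed flush threshold
    static final Map<String, String> associationCache = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            if (associationCache.size() > MAX_ENTRIES) {
                associationCache.clear(); // the "cache flushing" step
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}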
To deploy and start the cluster of runtime components, use one of the following methods:
At the command line, specify the component unit to start and, optionally, a custom CDD file to use.
Both of the deployment methods use two default resources: an EAR file and a cluster deployment descriptor (CDD), which is an XML file.
When the Core Engine (with or without caching agent), Global Throttle Manager, or Cache Cleanup Agent are started, they use the asg_core.ear and asg_core.cdd files in the ASG_HOME/bin directory.
When the Central Logger component is started, it uses the asg_core.ear and asg_cl.cdd files in the ASG_HOME/bin directory.
The Monitoring and Management Server and the GUI Configuration Server can be started at the command line.
Any configuration updates that are made through the GUI Configuration Server are persisted in the configuration files on the shared storage device. These configuration files must be reloaded by the runtime components before the updates take effect.
The optional operational reporting and analytics components provided by TIBCO Spotfire Server interact with the Central Logger through the Central Logger database, using a standard JDBC connection.
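Because the Central Logger database is reached over a standard JDBC connection, any reporting tool can query it directly. The following Java sketch shows such a query; the connection URL, credentials, and the table and column names are assumptions for illustration and depend on the database and Central Logger schema you configure.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative JDBC access to the Central Logger database. The URL, credentials,
// and table/column names are assumptions; the real schema is created by the
// Central Logger setup scripts for your configured database, and the matching
// JDBC driver must be on the classpath.
public class CentralLoggerReportSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://cl-db-host:3306/asgstat"; // assumed URL
        try (Connection con = DriverManager.getConnection(url, "asguser", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT COUNT(*) FROM asg_transactions WHERE status = ?")) {
            ps.setString(1, "ERROR");
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Failed transactions: " + rs.getLong(1));
                }
            }
        }
    }
}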
Distributed Deployment Architecture
TIBCO API Exchange Gateway supports a distributed deployment environment, in which multiple instances of the Core Engine can be deployed. This architecture meets the high availability and scalability requirements of the gateway components and is recommended for a production environment.
Scaling and High Availability
TIBCO API Exchange Gateway provides a default site topology file, which is configured for a deployment with a single instance of each component of the gateway cluster, all deployed on a single server host. Using this configuration, you can quickly deploy TIBCO API Exchange Gateway in a development environment, though it typically does not meet the availability and scalability requirements of a production deployment. See High Availability Deployment Of Runtime Components.
Use the Studio to create site topology configurations for your production environment, including load-balanced and fault-tolerant setups.
Load Balancing
TIBCO API Exchange Gateway can be rapidly scaled up and down through the addition or removal of Core Engine instances in the gateway cluster.
When multiple Core Engine instances are deployed in a gateway cluster, the key management functions, including throttle management, cache management, cache clearing management, and the Central Logger, are coordinated across all the Core Engine instances. The components that provide the management functions do not need to be scaled to support the higher transaction volumes.
However, as transaction levels increase, management activity is likely to increase correspondingly. To avoid any impact of this management activity on the Core Engines, move these management components and the TIBCO Spotfire Servers onto separate servers.
The following diagram illustrates a simplified view of the scaled solution and depicts the deployment of various components in a distributed environment:
Figure 4 Deployment of Multiple Components in Distributed Environment
Increasing the number of Core Engines in a TIBCO API Exchange Gateway deployment provides a near-linear increase in the maximum number of transactions that can be managed. This type of deployment also reduces the impact of the failure of an individual Core Engine. TIBCO API Exchange Gateway uses a shared-nothing model between the active Core Engines to ensure that there is no shared state.
To support a load balanced setup, the transport protocol termination components must be configured appropriately.
When the protocol termination components reach the limits of the scale they can provide, an IP load balancer can be added to the deployment in front of multiple Apache HTTP servers or JMS servers. The load balancer should be configured to make the Apache HTTP servers or JMS servers available on a single IP address.
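Conceptually, the load balancer exposes a single address and rotates incoming requests across the backend servers. The following Java sketch shows a minimal round-robin selection over assumed backend addresses; it illustrates the idea only and is not a replacement for an actual IP load balancer configuration.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin selection sketch: one logical address, several backends.
// The backend host names are assumptions; a real deployment uses an IP load
// balancer rather than application code.
public class RoundRobinSketch {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinSketch(List<String> backends) {
        this.backends = backends;
    }

    String nextBackend() {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(
                List.of("apache-host-1:80", "apache-host-2:80", "apache-host-3:80"));
        for (int i = 0; i < 6; i++) {
            System.out.println("route request " + i + " -> " + lb.nextBackend());
        }
    }
}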
High Availability of TIBCO API Exchange Gateway
For a highly available setup of TIBCO API Exchange Gateway, the configuration of the components in the Gateway Operational Layer differs from that of the components in the Gateway Management Layer.
Gateway Operational Layer
Because the Core Engine and the Apache HTTP server maintain no state, fault tolerance is provided by running multiple engine instances with the same configuration across sites and host servers in a load-balanced configuration. See Load Balancing.
A fault-tolerant setup for JMS endpoints of TIBCO API Exchange Gateway leverages the fault-tolerant setup capabilities of the TIBCO Enterprise Message Service. See TIBCO Enterprise Message Service™ User’s Guide for details.
Fault tolerance of Cache Agents is handled transparently by the object management layer. For the fault tolerance of cache data, the only configuration task is to define the number of backups you want to keep and to provide sufficient storage capacity. The Cache Agents are used only to implement the association cache. The association cache is automatically rebuilt after a complete failure as new transactions are handled by TIBCO API Exchange Gateway. Therefore, the Cache Agents do not require a backing store.
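The following Java sketch illustrates the idea of keeping a configurable number of backup copies for each cache entry. The agent names and the hashing scheme are assumptions for illustration; in the product, the object management layer handles replication transparently.

import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of a configurable backup count for cache entries: each key
// maps to a primary agent plus N backup agents. Agent names and hashing are
// illustrative assumptions, not the product's replication mechanism.
public class CacheBackupSketch {
    static List<String> ownersOf(String key, List<String> agents, int backupCount) {
        int primary = Math.floorMod(key.hashCode(), agents.size());
        List<String> owners = new ArrayList<>();
        for (int i = 0; i <= backupCount && i < agents.size(); i++) {
            owners.add(agents.get((primary + i) % agents.size()));
        }
        return owners;
    }

    public static void main(String[] args) {
        List<String> agents = List.of("cache-agent-1", "cache-agent-2", "cache-agent-3");
        System.out.println(ownersOf("order-42", agents, 1)); // primary plus one backup
    }
}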
Gateway Management Layer
The components of the Gateway Management Layer must be deployed once, in a primary-secondary group configuration. The Central Logger and the Global Throttle Manager must each have a single running instance at all times to ensure that the Core Engine operates without loss of functionality.
Therefore, the Central Logger and the Global Throttle Manager must be deployed in a fault-tolerant configuration with one active instance and one or more standby agents on separate host servers. Such a fault-tolerant setup can be configured in the cluster deployment descriptor (CDD) file by specifying a maximum of one active agent for each of these agent classes and by creating multiple processing unit configurations for both the Global Throttle Manager and the Central Logger agent. Deployed standby agents maintain a passive Rete network. They do not listen to events from channels and they do not update working memory. They take over from an active instance if it fails.
The other components of the Gateway Management Layer have no direct impact on the functionality of an operating Core Engine instance, so they can be deployed in a cold standby configuration. This applies to the Cache Cleanup Agent, the Monitoring and Management Server, and the GUI Configuration Server.
Deploy multiple instances of these components across host servers, with one instance running. If the running instance goes down, start one of the other instances to regain complete gateway functionality.
