Single Server Deployment Architecture
This section explains the deployment of gateway components as a single server instance.
In its simplest, non-highly-available form, TIBCO API Exchange Gateway can run as a single server. The following figure shows the deployment of the software components as a single server instance:
This configuration provides the entire functionality, including the optional operational reporting and analytics provided by the TIBCO Spotfire® Server components. The TIBCO Spotfire Professional client software running on a Windows workstation and the TIBCO Spotfire Web Player Server software running on a Windows server are not shown in the figure.
The protocol termination components, the Module for Apache HTTP Server (optional) and TIBCO Enterprise Message Service for JMS transport (optional), must be deployed and managed with their standard operations management tools.
The Module for Apache HTTP Server that is part of TIBCO API Exchange Gateway is deployed as a normal module for the Apache HTTP server. This module converts Apache HTTP requests into TIBCO Rendezvous messages for communication with the Core Engine. If the Apache HTTP Server is deployed within a DMZ, you should configure the TIBCO Rendezvous Routing Daemon, which forwards the TIBCO Rendezvous messages from the DMZ network through the firewall to the internal network where the TIBCO API Exchange Gateway components are deployed.
The runtime components of TIBCO API Exchange Gateway are deployed as a single application that can span multiple host servers. The runtime components are as follows:
The management layer components, the Central Logger and the Global Throttle Manager, communicate with the Core Engine using Rendezvous messages at run time.
The Core Engine publishes the messages as events on the Rendezvous bus. The Central Logger component receives these messages from the Rendezvous bus and stores them in the Central Logger database at appropriate intervals. The Global Throttle Manager also uses the Rendezvous bus to report the throttle usage data to the Central Logger.
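The interval-based write pattern described above can be sketched as follows. This is an illustrative model only, not product code: the class, its fields, and the flush interval are hypothetical stand-ins for the Central Logger's internal batching behavior.

```python
import time

class CentralLoggerBuffer:
    """Illustrative sketch: buffer gateway events received from the
    Rendezvous bus and write them to the database in batches at a
    fixed interval (all names here are hypothetical)."""

    def __init__(self, flush_interval_secs=5.0, clock=time.monotonic):
        self.flush_interval_secs = flush_interval_secs
        self.clock = clock          # injectable clock, eases testing
        self.pending = []           # events not yet written
        self.flushed_batches = []   # stands in for database writes
        self.last_flush = clock()

    def on_event(self, event):
        # Events are buffered rather than written one row at a time;
        # a flush happens once the interval has elapsed.
        self.pending.append(event)
        if self.clock() - self.last_flush >= self.flush_interval_secs:
            self.flush()

    def flush(self):
        if self.pending:
            # In the real component this would be a batched database write.
            self.flushed_batches.append(list(self.pending))
            self.pending.clear()
        self.last_flush = self.clock()
```

Batching the writes keeps the database load proportional to the flush interval rather than to the raw event rate on the bus.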
The Global Throttle Manager controls the throttle allocation for each Core Engine. The Global Throttle Manager receives throttle reports from the Core Engines over the Rendezvous bus. It also sends the throttle grants back to the Core Engines over the Rendezvous bus.
The Global Throttle Manager treats the throttle usage events as the heartbeat of a Core Engine. If a configurable number of consecutive heartbeats is missed, the Global Throttle Manager considers that Core Engine dead and distributes the throttle limits of the dead instance equally among all the live Core Engines.
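The redistribution rule above can be expressed as a short function. This is a sketch of the described behavior only; the function and engine names are hypothetical and do not reflect the Global Throttle Manager's actual implementation.

```python
def redistribute_throttle(limits, dead_engine):
    """Sketch of the rule described above: when a Core Engine is
    declared dead, divide its throttle limit equally among the
    remaining live Core Engines. `limits` maps engine name to its
    current throttle limit (names are hypothetical)."""
    freed = limits.pop(dead_engine)
    live = list(limits)
    if not live:
        # No live engines remain; the freed capacity is simply dropped.
        return limits
    share = freed / len(live)
    for engine in live:
        limits[engine] += share
    return limits
```

For example, if three engines each hold a limit of 300 requests and one dies, the two survivors each end up with 450.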
The Cache Cleanup Agent does not use the Rendezvous bus to interact with the Core Engine. Instead, it connects directly to one of the cache agents to clear the cache so that the cache does not grow too large. This cleanup of the cache is called cache flushing.
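Conceptually, cache flushing amounts to evicting entries that have outlived a maximum age. The sketch below models a cache as a dictionary of `{key: (value, inserted_at)}`; this is an assumption made for illustration, not the cache agents' real data structure.

```python
import time

def flush_cache(cache, max_age_secs, now=None):
    """Illustrative sketch of cache flushing: remove entries older
    than max_age_secs so the cache does not grow too large. The
    cache shape {key: (value, inserted_at)} is hypothetical."""
    now = time.monotonic() if now is None else now
    expired = [key for key, (_, inserted_at) in cache.items()
               if now - inserted_at > max_age_secs]
    for key in expired:
        del cache[key]
    return len(expired)  # number of entries flushed
```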
To deploy and start the cluster of runtime components, use one of the following methods:
- Command Line
At the command line, specify the component unit to start and, optionally, a custom CDD file to use.
- TIBCO API Exchange Gateway Monitoring and Management Server
Both of the deployment methods use two default resources: an EAR file and a cluster deployment descriptor (CDD), which is an XML file.
When the Core Engine (with or without a caching agent), the Global Throttle Manager, or the Cache Cleanup Agent is started, it uses the asg_core.ear and asg_core.cdd files in the ASG_HOME/bin directory.
When the Central Logger component is started, it uses the asg_core.ear and asg_cl.cdd files in the ASG_HOME/bin directory.
The Monitoring and Management Server and the GUI Configuration Server can be started at the command line.
Any configuration updates made through the GUI Configuration Server are persisted in the configuration files on the shared storage device. The runtime components must reload these configuration files before the changes take effect.
The optional operational reporting and analytics provided by the TIBCO Spotfire Server components interact with the Central Logger through the Central Logger database using a standard JDBC connection.
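Because the reporting components read the Central Logger database directly, analytics reduce to ordinary SQL against the logged transactions. The sketch below uses Python's built-in sqlite3 module purely to keep the example self-contained; a real deployment connects over JDBC to the actual Central Logger schema, and the table and column names here are hypothetical stand-ins.

```python
import sqlite3

# Stand-in, in-memory database; the real Central Logger schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE asg_transactions (
        operation   TEXT,
        status      TEXT,
        duration_ms INTEGER
    )
""")
conn.executemany(
    "INSERT INTO asg_transactions VALUES (?, ?, ?)",
    [("getQuote", "OK", 12), ("getQuote", "ERROR", 40), ("placeOrder", "OK", 25)],
)

# A typical analytics query: request count and average latency per operation.
rows = conn.execute("""
    SELECT operation, COUNT(*) AS requests, AVG(duration_ms) AS avg_ms
    FROM asg_transactions
    GROUP BY operation
    ORDER BY operation
""").fetchall()
print(rows)  # [('getQuote', 2, 26.0), ('placeOrder', 1, 25.0)]
```

Since the interface is plain SQL over a standard driver, any JDBC-capable reporting tool can run the same style of query against the Central Logger database.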