Copyright © TIBCO Software Inc. All Rights Reserved


Chapter 1 Distributed Application Monitoring: TCP Transport for TIBCO Hawk

TCP Transport for TIBCO Hawk
TCP Transport for TIBCO Hawk is a TCP-based transport for Hawk components, built on the Akka cluster design.
For more information on the Akka clustering, refer to the Akka documentation at http://akka.io/docs/.
Also, TCP Transport for TIBCO Hawk removes the dependency on an external server or transport (such as TIBCO Rendezvous or TIBCO Enterprise Message Service) because the TCP communication happens peer to peer. TIBCO Hawk can be easily deployed in the cloud when TCP Transport for TIBCO Hawk is used.
TCP Transport for TIBCO Hawk Architecture
The following are the key components of the TCP Transport for TIBCO Hawk:
Figure 2 The TCP Transport for TIBCO Hawk Architecture
Cluster Manager
The Cluster Manager is the seed node for the TCP Transport Cluster and the initial point of contact in the cluster. To use TCP Transport for TIBCO Hawk, start the Cluster Manager process before starting any Hawk components. For fault tolerance, you can start multiple Cluster Managers.
Console and Hawk Agent Main Cluster
All the Hawk Agents, Hawk Consoles, and Hawk TCP daemons form one TCP Transport Cluster.
To form a cluster, all Hawk components and the daemon process are configured with the daemon process as the seed node. The daemon process connects to itself and forms a cluster; all other Hawk components (Hawk agents and the Hawk Console) join the cluster by connecting to the daemon process. It is a best practice to run an odd number of daemon processes (1, 3, 5, and so on). After the daemon process is started, all other Hawk components can be started in any order. All agents and consoles in the cluster must have the same domain name. The configuration parameter for connecting to the daemon process is:
tcp_session self_ip:port daemon_ip:port
where,
self_ip:port is the socket address of the Hawk component (Hawk agent, Hawk Console, or Cluster Manager) that is joining the cluster.
daemon_ip:port is the socket address of the Cluster Manager acting as the seed node for the TCP Transport Cluster. For the seed node daemon, self_ip:port and daemon_ip:port are the same.
For fault tolerance, multiple seed nodes can be specified as a comma-separated list:
tcp_session self_ip:port daemon1_ip:port1,daemon2_ip:port2,...
Start the first seed node in the comma-separated list before the other seed nodes. The cluster is not ready until the first node is started.
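For example, a Hawk agent could join a cluster that has two Cluster Managers with a parameter such as the following (the IP addresses and ports here are illustrative placeholders, not product defaults):
tcp_session 10.0.0.5:2561 10.0.0.1:2561,10.0.0.2:2561
Here 10.0.0.5:2561 is the agent's own socket address, and the Cluster Manager at 10.0.0.1:2561 is the first seed node, so it must be started before 10.0.0.2:2561.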
The Hawk Console subscribes to TCP Transport Cluster membership events. These events ensure that the Console is notified whenever a new Hawk Agent comes up or an existing one goes down.
Hawk Agent and AMI Subcluster
The Hawk agent and its AMI applications form a separate cluster. In this cluster, the Hawk agent acts as the daemon process: it is the seed node through which all the AMI applications join the cluster. The configuration parameter for the Hawk agent’s AMI component is:
ami_tcp_session self_ip:port
where,
self_ip:port is the socket address of the Hawk agent acting as the Cluster Manager for the cluster
For the AMI application, the configuration parameter for connecting to the Hawk agent in the cluster is:
tcp_session=self_ip:port agent_ip:ami_tcp_session_port
where,
self_ip:port is the socket address for the AMI application
agent_ip:ami_tcp_session_port is the socket address of the Hawk agent's AMI component for the TCP Transport Cluster. This is the same socket address that was used in the self_ip:port attribute of the ami_tcp_session parameter in the Hawk agent of the TCP Transport Cluster.
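As an illustration, suppose the Hawk agent runs at 10.0.0.5 and exposes its AMI component on port 2571, and an AMI application runs at 10.0.0.6 (all addresses and ports here are hypothetical). The agent configuration would contain:
ami_tcp_session 10.0.0.5:2571
and the AMI application configuration would contain:
tcp_session=10.0.0.6:2581 10.0.0.5:2571
Note that the second address in the AMI application's tcp_session matches the agent's ami_tcp_session address.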
TCP Transport and TIBCO Rendezvous Bridge for Hawk Microagent (HMA)
The TCP Transport and TIBCO Rendezvous Bridge is required for using the Hawk Microagent (HMA) with TCP Transport for TIBCO Hawk. Because the Hawk C/C++ AMI API supports only the TIBCO Rendezvous transport, a bridge is required to use HMA in the TCP Transport Cluster. In this bridge, the TIBCO Rendezvous transport is used for HMA, and TCP Transport for TIBCO Hawk is used for all other AMI applications. The AMI-specific transport implementation of TCP Transport for TIBCO Hawk handles both sessions.
Network Partition Strategies
In case of a network failure or an unreachable node, TCP Transport for TIBCO Hawk might create a network partition and mark nodes as down based on predefined partition strategies. In network partitioning, the transport keeps the best partition alive and shuts down the other partition. TCP Transport for TIBCO Hawk handles network partitions based on the following strategies:
Quorum-Based Strategy
In the quorum-based strategy of network partitioning, a network partition must maintain a minimum number of TCP daemons in the TCP Transport Cluster to remain operational. This minimum number of daemons is termed the quorum size. In case of a network failure, the partition with the required quorum size remains operational; the TCP Transport Cluster shuts down the other partition and marks any unreachable node in it as down. For example, suppose there are five Cluster Managers in the cluster and the quorum size is set to three. If a network partition occurs with one partition having three Cluster Managers and the other having two, the partition with two Cluster Managers shuts down.
quorum size = (number of daemons / 2) + 1
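The quorum-size formula above can be sketched as follows (a minimal illustration using integer division, not TIBCO product code):

```python
def quorum_size(num_daemons: int) -> int:
    # Quorum size per the formula above: (number of daemons / 2) + 1,
    # with the division truncated to an integer.
    return num_daemons // 2 + 1

# With five Cluster Managers, the quorum size is 3: a partition holding
# three daemons stays operational, while a partition of two shuts down.
```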
Majority-Based Strategy
In the majority-based strategy, if a network partition occurs, the partition that has the majority of nodes survives and the other partition shuts down. For example, if there are five nodes in the TCP Transport Cluster and a network partition occurs with three and two nodes, the partition having two nodes shuts itself down. The advantage of the majority-based strategy over the quorum-based strategy is that the cluster can gradually downsize without an outage. If the network splits into equal partitions, the partition that contains the oldest node survives.
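The majority-based decision, including the oldest-node tie-breaker, can be sketched as follows (a hypothetical illustration of the rule described above, not the product's implementation):

```python
def partition_survives(partition_size: int, total_nodes: int,
                       has_oldest_node: bool = False) -> bool:
    # A partition survives if it holds a strict majority of the nodes.
    if 2 * partition_size > total_nodes:
        return True
    # On an even split, the partition containing the oldest node wins.
    if 2 * partition_size == total_nodes:
        return has_oldest_node
    return False

# In a five-node cluster split 3/2, the three-node partition survives
# and the two-node partition shuts itself down.
```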
 
