Processes in ActiveSpaces

The following processes are involved in creating, maintaining, and querying the data grid:
  • TIBCO ActiveSpaces Client Applications
  • Proxy
  • Realm Service
  • State Keeper
  • Node
TIBCO ActiveSpaces Client Applications
Client applications are custom applications built with the API libraries shipped with the product. A client application interacts with the data grid through a proxy process.
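To make the client role concrete, the following minimal sketch shows a client putting one row into a table. It assumes the ActiveSpaces Java client library (com.tibco.datagrid); the realm service URL, the grid name ("mygrid"), the table name ("t1"), and the column names are placeholders, and exact class and method names can differ between product versions.

    import com.tibco.datagrid.*;

    public class GridClient {
        public static void main(String[] args) throws Exception {
            // Placeholder realm service URL and grid name (assumptions).
            Connection connection =
                DataGrid.connect("http://localhost:8080", "mygrid", null);
            Session session = connection.createSession(null);
            Table table = session.openTable("t1", null);

            // Put one row; the client library sends the operation to a
            // proxy, which forwards it into the data grid.
            Row row = session.createRow("t1");
            row.setLong("key", 1L);
            row.setString("value", "hello");
            table.put(row);

            table.close();
            session.close();
            connection.close();
        }
    }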
Proxy
A proxy mediates between client requests and the data grid. For each client request, the proxy identifies the primary node of the appropriate copyset and interacts with that node until the request is processed and the result is returned to the client. A data grid can run many proxies.
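Conceptually, the proxy's routing step maps a request's key to a copyset and then to that copyset's current primary node. The sketch below illustrates only that idea; the hash-based placement, the data structures, and the node names are assumptions for illustration, not the product's internal algorithm.

    import java.util.List;

    public class ProxyRouting {
        // Conceptual model of a copyset: one primary plus its secondaries.
        record Copyset(String primaryNode, List<String> secondaryNodes) {}

        private final List<Copyset> copysets;

        public ProxyRouting(List<Copyset> copysets) {
            this.copysets = copysets;
        }

        // Map a key to a copyset (illustrative hash placement) and return
        // the primary node, which handles the request until it completes.
        public String primaryFor(String key) {
            int index = Math.floorMod(key.hashCode(), copysets.size());
            return copysets.get(index).primaryNode();
        }

        public static void main(String[] args) {
            ProxyRouting routing = new ProxyRouting(List.of(
                new Copyset("cs_01.n_1", List.of("cs_01.n_2")),
                new Copyset("cs_02.n_1", List.of("cs_02.n_2"))));
            System.out.println(routing.primaryFor("order-42"));
        }
    }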
Realm Service
A data grid is run inside a TIBCO FTL realm. A TIBCO FTL realm serves as a repository for data grid configuration information and provides communication services that enable all data grid processes to communicate with each other.
A client application accesses the data grid by using the realm service URL. In TIBCO FTL 6.0.0 or later, the realm service URL is the URL of the TIBCO FTL server. The realm service offers the following capabilities:
  • Stores data grid definitions
  • Communicates with the administrative tools to store and retrieve data grid definitions
  • Communicates with all the processes running in the data grid and updates the internal configuration if anything changes
  • Collects monitoring data from all processes
Fault Tolerance in Realm Services Used in TIBCO FTL 5.4.1
In fault-tolerant mode, only the primary realm server accepts configuration updates. If the primary realm server goes down, the secondary realm server takes over as the primary. In a production environment, it is a good practice to run the two realm servers on separate host computers.
Fault Tolerance in Realm Services Used in TIBCO FTL 6.0.0 or Later
TIBCO FTL 6.0.0 or later uses a different fault tolerance mechanism from the paired primary and backup realm servers of TIBCO FTL 5.4.1: fault tolerance is quorum based. You must run a cluster of at least three TIBCO FTL core servers, each on a separate machine. Each core server provides a realm service, and those realm services cooperate to provide fault tolerance for the data grid. Fault tolerance is assured as long as a quorum (a majority) of the servers is running. Clients receive a list of URLs at which they can connect to the TIBCO FTL core servers.
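As an illustration, the sketch below connects a client to such a cluster. It reuses the assumed DataGrid.connect entry point from the earlier sketch; the '|'-separated URL-list format and the host names are assumptions for illustration.

    import com.tibco.datagrid.*;

    public class ClusterClient {
        public static void main(String[] args) throws Exception {
            // Assumed format: the URLs of all three TIBCO FTL core servers,
            // so the client can still reach the cluster when a minority of
            // the servers (here, any one of the three) is down.
            String realmUrls =
                "http://host1:8080|http://host2:8080|http://host3:8080";
            Connection connection = DataGrid.connect(realmUrls, "mygrid", null);
            // ... use the data grid as usual ...
            connection.close();
        }
    }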

State Keeper
A state keeper runs internally in the data grid and tracks the runtime state of the data grid. Each state keeper saves this state information locally on disk. When a state keeper starts, it receives the data grid configuration from the realm service. State keepers are responsible for the following functions:
  • Tracking and managing all the copysets in a data grid
  • Tracking the proxies in a data grid
  • Identifying a primary node in each copyset
  • Promoting one of the secondary nodes to primary if the primary node of a copyset goes down (see the sketch after this list)
  • Ensuring consistency as the data grid scales up
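The promotion function can be pictured with the conceptual model below. It captures only the externally visible rule described above (when the primary goes down, a secondary becomes primary); the class, its fields, and the node names are assumptions for illustration, not the product's internal implementation.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class TrackedCopyset {
        private String primary;
        private final Deque<String> secondaries = new ArrayDeque<>();

        public TrackedCopyset(String primary, String... secondaries) {
            this.primary = primary;
            for (String s : secondaries) this.secondaries.add(s);
        }

        // Called when the state keepers learn that a node went down.
        public void onNodeDown(String node) {
            if (node.equals(primary)) {
                if (secondaries.isEmpty())
                    throw new IllegalStateException("no node left to promote");
                primary = secondaries.poll();   // promote a secondary
            } else {
                secondaries.remove(node);
            }
        }

        public static void main(String[] args) {
            TrackedCopyset cs = new TrackedCopyset("cs_01.n_1", "cs_01.n_2");
            cs.onNodeDown("cs_01.n_1");
            System.out.println(cs.primary);     // cs_01.n_2 is now primary
        }
    }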
Fault Tolerance in State Keepers
In a production environment, it is a good practice to run one to three state keepers. A set of fault-tolerant state keeper processes protects the data grid's runtime state information and ensures nonstop access to it. One state keeper is designated the lead state keeper and supplies this information to the proxies and copyset nodes. If the lead state keeper goes down, one of the secondary state keepers takes over as the lead.
In a fault-tolerant set of three state keepers, a quorum of two state keepers must always be running to ensure data consistency in split-brain scenarios. If a state keeper is restarted while a quorum is running, one of the running state keepers brings the restarted state keeper up to date. If the number of running state keepers falls below the quorum and the state of a copyset changes (for example, a node goes down), operations on the data grid fail. When this happens, the remaining state keepers must be brought down and then all state keepers must be restarted.
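The quorum rule is simple majority arithmetic. The sketch below shows why a set of three state keepers stays available with two running but not with one; it is illustrative arithmetic only, based on the quorum of two described above.

    public class KeeperQuorum {
        // A quorum is a majority of the configured state keepers.
        static int quorumSize(int keepers) {
            return keepers / 2 + 1;
        }

        public static void main(String[] args) {
            int keepers = 3;
            int quorum = quorumSize(keepers);    // 2
            System.out.println(2 >= quorum);     // true: grid operations continue
            System.out.println(1 >= quorum);     // false: if copyset state changes
                                                 // now, operations fail until all
                                                 // state keepers are restarted
        }
    }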
Node
For more information on nodes, see Nodes.
Fault Tolerance in Nodes
To prevent data loss, every node needs at least one backup node that holds an identical copy of its data; you can run up to three nodes per copyset. For production deployments, TIBCO recommends using at least two nodes per copyset.