Distributed Cache OM
Cache object management (OM) is the standard choice for most TIBCO BusinessEvents applications. For background, see Cache Object Management Feature Overview.
Cache-based object management is generally the best choice for a CEP system, and a distributed cache is generally the most appropriate, especially when used with a backing store (database). All the provided caching schemes use a distributed cache and are configured for production as shipped.
Cache OM is required for certain features, such as multi-agent concurrency.
Object management is configured using the Cluster Deployment Descriptor (CDD), an XML file that you edit in TIBCO BusinessEvents Studio using a provided editor. For more information, see TIBCO BusinessEvents Configuration Guide.
Distributed Cache Characteristics
In a distributed cache, cached object data is partitioned across the PUs (JVMs) in the cache cluster for efficient use of memory. By default, one backup copy of each item of data is maintained on a different PU. You can configure additional backups of each object, kept on different PUs, for greater reliability, or you can disable backups entirely.
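To make the partitioning idea concrete, the following minimal Java sketch shows how a distributed cache might choose a primary owner and a configurable number of backup owners for each object. It is a generic illustration under assumed names (PartitionSketch, BACKUP_COPIES), not TIBCO BusinessEvents API.

    import java.util.*;

    // Illustrative sketch only: each object gets a primary owner plus a
    // configurable number of backup owners, each on a different PU.
    public class PartitionSketch {

        // Hypothetical setting, analogous to configuring the number of backup copies.
        static final int BACKUP_COPIES = 1;

        static List<String> owners(String key, List<String> pus) {
            int primary = Math.floorMod(key.hashCode(), pus.size());
            List<String> result = new ArrayList<>();
            // Primary copy plus BACKUP_COPIES backups, each on a different PU.
            for (int i = 0; i <= BACKUP_COPIES && i < pus.size(); i++) {
                result.add(pus.get((primary + i) % pus.size()));
            }
            return result;
        }

        public static void main(String[] args) {
            List<String> pus = List.of("PU-1", "PU-2", "PU-3");
            // Prints the primary PU and one backup PU for this key.
            System.out.println(owners("Order/12345", pus));
        }
    }

Because the data and its backups are spread across the PUs, adding a PU adds both capacity and another place to hold backup copies.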
Distributed caching offers a good balance between memory management, performance, high availability and reliability.
Scaling the System
Scaling is linear. To scale the system’s capacity to process more data, add more inference agents. To scale the cache, add more cache servers.
In addition, each entity can have a different cache mode, to help you balance memory usage and performance.
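The sketch below illustrates the trade-off that per-entity cache modes express. The mode names follow common BusinessEvents terminology (Memory Only, Cache Only, Cache Plus Memory); the class and method names are hypothetical, and the available modes can vary by product version.

    // Illustrative sketch only: where an entity's objects live under each mode.
    enum CacheMode { MEMORY_ONLY, CACHE_ONLY, CACHE_PLUS_MEMORY }

    class EntityPlacementSketch {
        static String describe(CacheMode mode) {
            switch (mode) {
                case MEMORY_ONLY:
                    // Fastest access; uses local JVM memory; not shared in the cache cluster.
                    return "kept only in the agent's JVM memory";
                case CACHE_ONLY:
                    // Smallest local footprint; reads go to the cache cluster.
                    return "kept only in the cache cluster";
                case CACHE_PLUS_MEMORY:
                    // Local copy for fast reads plus a cache copy for sharing and reliability.
                    return "kept in the cache cluster and mirrored in local memory";
                default:
                    return "unknown";
            }
        }
    }

Choosing a lighter mode for large, rarely read entities and a memory-backed mode for hot entities is one way to balance memory usage against performance.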
Reliability of Cache Object Management
When you use Cache object management without a backing store, objects are kept in memory only, and reliability comes from the backup copies of cached objects maintained in the memory-based cache.
To provide increased reliability in the case of a total system failure, add a backing store.
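The sketch below shows the general idea of a backing store: each write goes to the in-memory cache and to a database, so state can be recovered after a total system failure. The class, JDBC statements, and table name are placeholders for illustration, not the schema or API that TIBCO BusinessEvents uses.

    import java.sql.*;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only: write-through from an in-memory cache to a database.
    class BackingStoreSketch {
        private final Map<String, String> cache = new ConcurrentHashMap<>();
        private final Connection db;

        BackingStoreSketch(Connection db) { this.db = db; }

        void put(String key, String value) throws SQLException {
            cache.put(key, value);                       // fast path: memory cache
            try (PreparedStatement ps = db.prepareStatement(
                    "MERGE INTO objects (id, body) KEY (id) VALUES (?, ?)")) {
                ps.setString(1, key);
                ps.setString(2, value);
                ps.executeUpdate();                      // durable path: backing store
            }
        }

        // After a full cluster restart, state can be reloaded from the store.
        String recover(String key) throws SQLException {
            try (PreparedStatement ps = db.prepareStatement(
                    "SELECT body FROM objects WHERE id = ?")) {
                ps.setString(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

With only in-memory backups, losing all PUs loses the data; with a backing store, the durable copy survives and the cache can be repopulated on restart.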
Multi-Agent Concurrency Features
Multiple inference agents can run concurrently in either of two ways. In both cases the agents share the same ontology and same cache cluster:
- Multiple instances of the same inference agent class, each running on different PUs, form an agent group. This provides simple load balancing of messages arriving from queues, as well as fault tolerance. You can also configure content-aware load balancing for “session stickiness.” (See Load Balancer Configuration in TIBCO BusinessEvents Developer’s Guide.)
- Different agents in different PUs work concurrently to distribute the load on the JVM processes. This results in quicker conflict resolution and the ability to handle a heavy incoming message load. For example, Agent X connects to Agents Y and Z to create rule chaining across a set of PUs. Each agent uses a different set of rules, such as rules for fraud, upsell, and cross-sell. All agents operate against the same cluster and share the same ontology. The output from one agent may trigger rules deployed in another agent, causing forward chaining of the workload, as shown in the sketch after this list.
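The following minimal, in-process Java sketch illustrates the chaining pattern in the second item: a fact asserted by one "agent" triggers a rule in another through a shared store. The agent names, rule conditions, and listener mechanism are hypothetical stand-ins for agents that, in a real deployment, run in separate PUs against the shared cache cluster.

    import java.util.*;
    import java.util.function.Consumer;

    // Illustrative sketch only: two "agents" sharing one store, so a fact
    // asserted by one agent triggers a rule in the other (forward chaining).
    class ChainingSketch {
        // Stand-in for the shared cache: asserting a fact notifies all listeners.
        static final List<Consumer<String>> listeners = new ArrayList<>();
        static void assertFact(String fact) { listeners.forEach(l -> l.accept(fact)); }

        public static void main(String[] args) {
            // "Agent Y": upsell rules, fires once fraud screening has passed.
            listeners.add(fact -> {
                if (fact.startsWith("FraudCleared:")) {
                    System.out.println("upsell rules evaluate " + fact);
                }
            });
            // "Agent X": fraud rules, derives a new fact from the incoming event.
            listeners.add(fact -> {
                if (fact.startsWith("Order:")) {
                    assertFact("FraudCleared:" + fact.substring("Order:".length()));
                }
            });
            assertFact("Order:12345");   // output of "Agent X" triggers "Agent Y"
        }
    }

In BusinessEvents itself, the shared cache cluster plays the role of the store, and each agent's rules react to the objects and events that other agents produce.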