Copyright © TIBCO Software Inc. All Rights Reserved


Chapter 7 Distributed Cache OM

Cache Object Management Feature Overview
Cache-based object management is generally the best choice for a CEP system, and a distributed cache is usually the most appropriate type, especially when used with a backing store (database). All the provided caching schemes use a distributed cache and are configured for production as shipped.
Cache OM is a requirement for certain features such as multi-agent concurrency (as explained in Chapter 9, Concurrency and Project Design).
Object management is configured using the Cluster Deployment Descriptor (CDD), an XML file that you edit in TIBCO BusinessEvents Studio using a provided editor. See Chapter 23, Cluster Deployment Descriptor (CDD) in TIBCO BusinessEvents Developer’s Guide for details.
Distributed Cache Characteristics
In a distributed cache, cached object data is partitioned between the PUs (JVMs) in the cache cluster for efficient use of memory. By default, one backup of each item of data is maintained on a different PU. To increase reliability, you can configure additional backups of each object to be kept on different PUs; you can also disable backup maintenance entirely.
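The partitioning and backup placement described above can be sketched as follows. This is an illustrative model only, not TIBCO's actual partitioning algorithm; the PU names (`pu0`, `pu1`, `pu2`) and the round-robin assignment are assumptions made for the example. The key invariant it demonstrates is that a partition's backup always lives on a different PU than its primary, so losing a single PU loses no data.

```java
import java.util.*;

// Sketch only (not the product's algorithm): assign each cache partition
// a primary PU and one backup PU, guaranteeing the backup is on a
// different JVM so a single PU failure is survivable.
public class PartitionMap {
    // Returns partition -> { primaryPU, backupPU }.
    static Map<Integer, String[]> assign(int partitions, List<String> pus) {
        Map<Integer, String[]> map = new HashMap<>();
        for (int p = 0; p < partitions; p++) {
            String primary = pus.get(p % pus.size());
            // Next PU in the ring: always different from the primary
            // as long as there are at least two PUs.
            String backup = pus.get((p + 1) % pus.size());
            map.put(p, new String[] { primary, backup });
        }
        return map;
    }

    public static void main(String[] args) {
        Map<Integer, String[]> map = assign(8, List.of("pu0", "pu1", "pu2"));
        for (String[] v : map.values()) {
            // Invariant: primary and backup never coincide.
            if (v[0].equals(v[1])) throw new AssertionError("backup on same PU");
        }
        System.out.println("partitions=" + map.size());
    }
}
```

Adding a cache server to the list simply gives the same partitions more PUs to spread across, which is why cache capacity scales by adding cache agents.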
Distributed caching offers a good balance between memory management, performance, high availability, and reliability.
See Characteristics of a Distributed Caching Scheme for more details.
Scaling the System
Scaling is linear. To scale the system’s capacity to process more data, add more inference agents (see Inference Agents). To scale the cache, add more cache servers (see Cache Agents).
In addition, each entity can have a different cache mode, to help you balance memory usage and performance (see Between Cache and Rete Network: Cache Modes).
Reliability of Cache Object Management
When you use Cache object management without a backing store, objects are maintained in memory only, and reliability comes from keeping backup copies of cached objects in the memory caches of other PUs.
To provide increased reliability in the case of a total system failure, add a backing store.
See Characteristics of a Distributed Caching Scheme for more details.
Multi-Agent Concurrency Features
Multiple inference agents can run concurrently in either of two ways. In both cases the agents share the same ontology and same cache cluster:
Agent groups  Multiple instances of the same inference agent class, each running on a different PU, form an agent group. This provides simple load balancing of messages arriving from queues, as well as fault tolerance. You can also configure content-aware load balancing for “session stickiness.” (See Chapter 25, Load Balancer Configuration in TIBCO BusinessEvents Developer’s Guide.)
Concurrent RTC  You can also enable concurrency within a single agent using the multi-threaded Rete feature, known as concurrent RTC (and in prior releases as concurrentwm). Within one agent, multiple RTC cycles take place concurrently.
Concurrency and Locking
When you use agent groups or concurrent RTC, you must use locking: in both cases multiple RTCs are processed at the same time, and shared data must be protected, as in any concurrent system. See Designing for Concurrency for more details.
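The need for locking can be illustrated with a generic JVM example; this is not the BusinessEvents locking API, just a minimal sketch of why concurrent read-modify-write cycles on shared state require a lock. The two threads stand in for two concurrent RTCs touching the same concept instance.

```java
import java.util.concurrent.locks.ReentrantLock;

// Generic illustration (not the BusinessEvents API): when two concurrent
// RTC cycles update the same shared value, the read-modify-write must be
// done under a lock or updates are silently lost.
public class LockedUpdate {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int balance = 0;

    static void credit(int amount) {
        lock.lock();            // acquire before touching shared state
        try {
            balance += amount;  // read-modify-write is now atomic
        } finally {
            lock.unlock();      // always release, even on exception
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> { for (int i = 0; i < 10_000; i++) credit(1); };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join();
        System.out.println(balance); // 20000 with the lock; typically less without
    }
}
```

Removing the lock/unlock pair makes the result nondeterministic, which is the same class of failure that unprotected concurrent RTCs would cause against shared cached objects.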
