Cache-based object management is generally the best choice for a CEP system. Among the various caching schemes available (such as replicated cache and near cache), distributed cache is generally the most appropriate, especially when used with a backing store (database).
All the provided caching schemes use a distributed cache and are configured for production as shipped. For configuration of other caching schemes, and for advanced configuration of the provided schemes, consult the
TIBCO BusinessEvents Cache Configuration Guide online reference.
In a distributed cache, cached object data is partitioned between the nodes (JVMs) in the cache cluster for efficient use of memory: no two nodes are responsible for the same item of data. By default, one backup of each item of data is maintained on a different node. You can configure additional backups of each object, each kept on a different node, for greater reliability (you can also disable maintenance of backups).
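The partition-and-backup placement described above can be sketched as follows. This is a generic illustration, not TIBCO's actual partitioning algorithm; the node names, the CRC32 hash choice, and the `place` helper are assumptions for the example.

```python
import zlib

def place(key, nodes, backup_count=1):
    """Return the nodes holding `key`: the primary first, then its backups.

    Each backup lands on a different node than the primary, so no two
    nodes are responsible for the same copy of the data.
    """
    primary = zlib.crc32(key.encode()) % len(nodes)
    placement = [nodes[primary]]
    for i in range(1, backup_count + 1):
        placement.append(nodes[(primary + i) % len(nodes)])
    return placement

nodes = ["jvm-1", "jvm-2", "jvm-3", "jvm-4"]
owners = place("Order-42", nodes)
assert owners[0] != owners[1]   # primary and backup are distinct nodes
```

With the default backup count of one, each key maps to exactly two distinct nodes; raising `backup_count` adds further distinct nodes, trading memory for reliability.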
Distributed caching offers a good balance between memory management, performance, high availability and reliability. It also offers excellent system scaling as data needs grow. See
Characteristics of Distributed Caching Schemes for more details.
When you use Cache object management without adding a backing store, objects are persisted in memory only, and reliability comes from maintaining backup copies of cached objects in memory caches.
Cache OM uses cache clustering to provide data failover and failback. The default backup count is one, meaning each object is backed up on one cache server. You can configure a larger backup count to specify how many backups of cache data the object manager keeps, each on a different node.
Because they store data in memory, cache-based systems are reliable only to the extent that enough cache servers with sufficient memory are available to hold the objects. If a cache server fails, its objects are redistributed to the remaining cache servers, provided they have enough memory. With a backup count of one, one cache server can fail without risk of data loss. In the case of a total system failure, however, the cache is lost. To provide increased reliability in the case of a total system failure, add a backing store.
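The redistribution step can be sketched as follows. This is a simplified illustration, not the product's implementation; the node names and the `fail_node` helper are hypothetical. When a node fails, each object whose primary copy was there is served from its backup, and a replacement backup is created on a surviving node so the configured backup count is restored.

```python
def fail_node(placements, failed, surviving):
    """placements maps key -> [primary, backup]; returns the new placements
    after `failed` goes down, using only `surviving` nodes."""
    new = {}
    for key, (primary, backup) in placements.items():
        if primary == failed:
            # Promote the backup to primary; make a fresh backup elsewhere.
            new_backup = next(n for n in surviving if n != backup)
            new[key] = [backup, new_backup]
        elif backup == failed:
            # Primary is intact; only the backup needs to be re-created.
            new_backup = next(n for n in surviving if n != primary)
            new[key] = [primary, new_backup]
        else:
            new[key] = [primary, backup]
    return new

placements = {"A": ["jvm-1", "jvm-2"], "B": ["jvm-2", "jvm-3"]}
after = fail_node(placements, "jvm-1", ["jvm-2", "jvm-3"])
assert after["A"] == ["jvm-2", "jvm-3"]   # backup promoted, new backup made
assert after["B"] == ["jvm-2", "jvm-3"]   # unaffected by the failure
```

Note that the sketch assumes the surviving nodes have enough memory to absorb the re-created backups, matching the caveat in the text.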
You can implement a backing store for use with Cache OM to provide data persistence. During regular operation, cache data is written to the backing store. On system restart, data in the backing store is restored to the cache cluster. For Cache Only objects, you can specify what gets preloaded at startup: all objects, only specified objects, or all objects except those specified.
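The three preload choices can be modeled as a simple filter over the keys in the backing store. The policy names `all`, `only`, and `all-except` here are illustrative, not the actual configuration values.

```python
def preload_keys(store_keys, policy, listed=()):
    """Select which Cache Only objects to preload from the backing store
    at startup: everything, only a listed set, or everything except it."""
    if policy == "all":
        return set(store_keys)
    if policy == "only":
        return set(store_keys) & set(listed)
    if policy == "all-except":
        return set(store_keys) - set(listed)
    raise ValueError("unknown preload policy: %s" % policy)

keys = ["orders", "audit", "lookup"]
assert preload_keys(keys, "only", ["lookup"]) == {"lookup"}
assert preload_keys(keys, "all-except", ["audit"]) == {"orders", "lookup"}
```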
In addition, when you use a backing store, you can limit the cache size, which is helpful when the volume of data exceeds available memory. When the number of objects in the cache reaches the limit, some objects are automatically evicted from the cache (they remain in the backing store). When the system requests an object that exists in the backing store but not in the cache, the object is automatically loaded from the backing store into the cache.
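The evict-and-reload behavior can be sketched with a size-limited cache in front of a backing store. This is a minimal illustration assuming a least-recently-used eviction policy; the class and method names are not the product's API.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Size-limited in-memory cache backed by a persistent store."""

    def __init__(self, store, max_size):
        self.store = store            # backing store: key -> object
        self.max_size = max_size
        self.cache = OrderedDict()    # in-memory cache in LRU order

    def put(self, key, value):
        self.store[key] = value       # written through to the backing store
        self._cache_put(key, value)

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)   # mark as recently used
            return self.cache[key]
        value = self.store[key]           # miss: load from the backing store
        self._cache_put(key, value)
        return value

    def _cache_put(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.max_size:
            self.cache.popitem(last=False)   # evict least recently used

store = {}
c = ReadThroughCache(store, max_size=2)
c.put("a", 1); c.put("b", 2); c.put("c", 3)  # "a" is evicted from memory
assert "a" not in c.cache and "a" in store   # but remains in the store
assert c.get("a") == 1                       # reloaded into the cache on demand
```

The key property mirrored here is that eviction only frees memory: the object survives in the backing store and is transparently reloaded on the next access.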
To implement a backing store, you need a supported database product. You perform database setup using provided scripts, and perform various configuration tasks to enable writes to the database and to handle system startup behavior.
When you use Cache object management, you can fine-tune performance and memory management using various cache modes, which define how the instances of each object type are managed. For example, very large, infrequently used concept instances can be kept in the cache and retrieved into JVM memory on demand, freeing the BusinessEvents engine’s JVM memory. Small, frequently used, stateless entities can be kept in JVM memory only, for improved performance. See
Working With Cache Modes for details.
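The per-type trade-off can be sketched as a mode lookup that routes each access to cache or JVM memory. The mode names, entity names, and `fetch` helper below are hypothetical illustrations, not TIBCO configuration syntax.

```python
CACHE_ONLY = "cache-only"     # kept in the cache, fetched on demand
MEMORY_ONLY = "memory-only"   # kept in JVM memory for fast access

# Hypothetical per-entity-type mode assignments.
modes = {
    "LargeAuditConcept": CACHE_ONLY,    # large, infrequently used
    "LookupTableEntry": MEMORY_ONLY,    # small, frequent, stateless
}

def fetch(entity_type, key, jvm_memory, cache):
    """Route an access according to the entity type's cache mode."""
    if modes[entity_type] == MEMORY_ONLY:
        return jvm_memory[(entity_type, key)]
    # Cache-only: retrieve into JVM memory on demand.
    return cache[(entity_type, key)]

jvm_memory = {("LookupTableEntry", "US"): "dollar"}
cache = {("LargeAuditConcept", "2024-01"): "audit-blob"}
assert fetch("LookupTableEntry", "US", jvm_memory, cache) == "dollar"
assert fetch("LargeAuditConcept", "2024-01", jvm_memory, cache) == "audit-blob"
```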