Distributed Cache OM: Managing Storage and Retrieval of Entity Objects

When you use Cache OM and a backing store, various options help you manage where entity objects are stored, and how they are retrieved from the backing store at startup, so that you can optimize system performance and memory management. With Cache OM, objects created by a running BusinessEvents application can be kept in any of these locations:
- In the Rete network (in memory)
- In the cache
- In the backing store
You can manage where the object data is kept at the level of the entity type. The best choice depends on how often the object changes and how often it is accessed. The options balance the memory and performance characteristics of the system. Different applications have different priorities, so choose the options that suit your needs.
Between Backing Store and Cache: Preloading Options and Limited Cache Size
Although best performance is obtained when all objects are in the cache, in practice there are often more objects than you can, or want to, keep in the cache.
When the system demands an object that exists in the backing store but not in the cache, the object is automatically loaded from the backing store into the cache, and then into the Rete network. This takes time, but it reduces the amount of data that must be held in the cache, which conserves memory.
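The load-on-demand behavior described above follows the familiar read-through pattern. The following is a minimal sketch of that pattern; the class and method names are illustrative, not BusinessEvents APIs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal read-through sketch: on a cache miss, the object is loaded
// from the backing store into the cache, then handed to the caller
// (analogous to loading it into the Rete network for an RTC).
class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> backingStoreLoader; // e.g. a database lookup

    ReadThroughCache(Function<K, V> backingStoreLoader) {
        this.backingStoreLoader = backingStoreLoader;
    }

    V get(K key) {
        // Loads from the backing store only when the key is absent;
        // subsequent reads are served from the cache.
        return cache.computeIfAbsent(key, backingStoreLoader);
    }
}
```

A second read of the same key is served from the cache without touching the backing store, which is what makes the first, slower load worthwhile.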
You can configure which objects to preload into the cache on startup, and which objects to evict from the cache when they are not needed. You can preload all, none, or selected types.
You can also configure which object handles to preload into the object table. Again, you can preload handles for all, none, or selected types. The first RTC does not occur until the object table has been preloaded (with the object handles configured for preloading). For more details, see The Role of the Object Table.
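The all/none/selected preload choice can be pictured as a simple policy applied per entity type. The sketch below is an assumption-laden illustration of that idea; the class, enum, and method names are invented for this example and are not part of the BusinessEvents configuration schema.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative preload policy: all, none, or selected entity types.
class PreloadPolicy {
    enum Mode { ALL, NONE, SELECTED }

    private final Mode mode;
    private final Set<String> selectedTypes;

    PreloadPolicy(Mode mode, Set<String> selectedTypes) {
        this.mode = mode;
        this.selectedTypes = selectedTypes;
    }

    boolean shouldPreload(String entityType) {
        switch (mode) {
            case ALL:  return true;
            case NONE: return false;
            default:   return selectedTypes.contains(entityType);
        }
    }

    // Fill the object table with handles for the configured types only;
    // the first RTC would wait until this preload completes.
    Map<String, List<Long>> preloadHandles(Map<String, List<Long>> backingStore) {
        Map<String, List<Long>> objectTable = new HashMap<>();
        backingStore.forEach((type, handles) -> {
            if (shouldPreload(type)) objectTable.put(type, handles);
        });
        return objectTable;
    }
}
```

With Mode.SELECTED, only handles for the listed types land in the object table at startup; everything else stays in the backing store until demanded.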
It is also important to start enough cache servers to handle the work. See Starting a Minimum Number (Quorum) of Cache Servers.
Limiting Cache Size
When you use a backing store, you can limit the size of the cache by specifying a cache size. This is helpful when the volume of data exceeds what the available memory can hold. When the number of objects in the cache reaches the cache size limit, some of the objects are automatically evicted from the cache (they remain in the backing store).
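A size-limited cache of this kind is commonly built on a least-recently-used (LRU) eviction rule. The sketch below shows one standard way to do that in Java; it is an illustration of the concept, not the engine's actual eviction implementation, and the eviction policy BusinessEvents uses may differ.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Size-limited cache sketch: once the configured limit is exceeded,
// the least-recently-used entry is evicted from memory (in the
// scenario above, it would still exist in the backing store).
class SizeLimitedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    SizeLimitedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put; returning true evicts the eldest entry.
        return size() > maxEntries;
    }
}
```

Reading an entry refreshes its position, so frequently accessed objects tend to stay in the cache while cold ones are evicted first.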
See Chapter 3, CDD Configuration Procedures in TIBCO BusinessEvents Administration.
Between Cache and Rete Network: Cache Modes
As explained above, you can keep less frequently used objects in the backing store only, and retrieve them into the cache as needed. In a similar way, you can define how the instances of each entity type are managed, using one of the following cache modes:
Cache Plus Memory: Objects are kept in the Rete network as well as in the cache. Use this mode for constants and for concepts that change infrequently. In a concurrent system, Cache Plus Memory mode is inappropriate for most objects, because of the difficulty of keeping all the concurrent processes synchronized.
Cache Only: Objects are kept only in the cache. This is the mode most commonly used in concurrent systems. Objects are retrieved into memory (the Rete network) only when needed for an RTC. Locking is still required, as in any concurrent system.
Memory Only: Objects are kept only in the Rete network. Use this mode for objects whose persistence is not important: small, frequently used, stateless entities, for improved performance.
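The three modes differ only in where an instance lives, so they can be summarized as a small lookup. The enum below is a labeling sketch for this chapter's modes, not an engine API; the field names are invented for the example.

```java
// Where an instance lives under each cache mode described above.
// CACHE_ONLY objects enter the Rete network only transiently, per RTC,
// so they are marked as not resident in the Rete network here.
enum CacheMode {
    CACHE_PLUS_MEMORY(true, true),   // cache and Rete network
    CACHE_ONLY(true, false),         // cache; loaded into Rete per RTC
    MEMORY_ONLY(false, true);        // Rete network only; not persisted

    final boolean residentInCache;
    final boolean residentInRete;

    CacheMode(boolean residentInCache, boolean residentInRete) {
        this.residentInCache = residentInCache;
        this.residentInRete = residentInRete;
    }
}
```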
See Chapter 8, Cache Modes and Project Design for more details.
See Entity-Level Configuration for Cache and Backing Store in TIBCO BusinessEvents Administration.