
Working With Cache Modes
This section describes the available cache modes and explains how to use each one appropriately. You set these modes when you configure an inference agent that operates within a Cache object management configuration.
The Rete network consumes a large amount of the available memory in the JVM. You can use cache modes to tune the performance of your application and reduce its memory footprint.
See Design Constraints With Cache Only Cache Mode for important information about how use of the cache only mode affects your project design.
Cache Modes are Set on Individual Entities to Tune Performance
You set cache modes at the level of individual entity types in your project. This fine granularity allows you to tune performance and memory usage based on the size and usage of the concepts, scorecards, and events in your project ontology.
For example, you can use the In Memory Only cache mode so that frequently used stateless entities are kept in memory (and are not cached). Objects kept in memory are highly available to the application.
On the other hand, using Cache Only mode for large and infrequently used entities reduces the memory footprint. However, you must explicitly load them (in rules or rule functions) so they are available to the Rete network.
All related concepts must use the same cache mode. Concepts can be related to each other through inheritance, containment, or reference. If, for example, a parent concept uses Cache Plus Memory and a child concept uses In Memory Only, inconsistent object graphs result on recovery. Ensure that all related concepts use the same cache mode.
See Working With Concepts and Concept Relationships for more details about the relationships between concepts.
Cache Plus Memory—The Default Cache Mode
When you use Cache object management, by default all persistent object types use the Cache Plus Memory (Cache + Memory) setting.
With this mode, the entity objects are serialized and are always available in the cache. There is one object in the cache (in a logical sense), and any number of references (handles) to that object in each JVM. References to the entities are tracked in working memory so that the objects can be retrieved from the cache and deserialized when the engine needs them.
The agent’s working memory is used to store the Rete network for the loaded rule sets. The inference agent does not store the actual object. It relies on the object management layer to load the objects on demand from the backing store. For example, take a rule of the form:
Declaration (Scope): Customer
Condition: Customer.age > 20
If the cache mode is Cache Plus Memory (Cache + Memory), then working memory stores handles to all customers, whether or not they satisfy the condition. The customer objects themselves are retrieved from the cache and deserialized when the rule condition is evaluated, in order to process the rule.
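In rule-language form, the rule described above might look something like the following sketch. The rule name, project paths, and action are assumptions made for illustration only.

rule Rules.AdultCustomer {
    attribute {
        priority = 5;
        forwardChain = true;
    }
    declare {
        Concepts.Customer customer;
    }
    when {
        customer.age > 20;
    }
    then {
        // With Cache Plus Memory, working memory already holds a handle to
        // every Customer instance; the full object is deserialized from the
        // cache only when the condition above is evaluated.
        System.debugOut("Matched customer: " + customer@extId);
    }
}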
Because the Rete network holds handles to so many instances, the references themselves can consume a large amount of memory. If you want to reduce the memory footprint, you can use the Cache Only mode for selected entity types.
In Memory Only—Useful for Stateless Entities
When you select In Memory Only mode for an entity type, instances of that entity are available only in the engine’s local JVM memory. These entities and their properties are not recoverable, clustered, or shared. For this reason, it is recommended that you use this mode for stateless entities only.
This mode is typically used for static reference data that can be created in the rule functions on startup.
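For example, a startup rule function that creates such reference data might look like the following sketch. The function name, concept type, and constructor arguments are hypothetical; concept constructors are assumed to take the external ID followed by the property values.

/**
 * Hypothetical startup rule function that creates In Memory Only reference data.
 */
void rulefunction RuleFunctions.LoadReferenceData {
    attribute {
        validity = ACTION;
    }
    scope {
    }
    body {
        // Created once at startup. Because the concept type uses In Memory Only,
        // these instances live only in the local JVM and are neither cached
        // nor recovered.
        Concepts.CountryCode us = Concepts.CountryCode.CountryCode("country.US", "US", "United States");
        Concepts.CountryCode de = Concepts.CountryCode.CountryCode("country.DE", "DE", "Germany");
    }
}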
In Memory Only mode can also be used for transient utility entities that are created and deleted within a single run to completion (RTC) cycle and are not needed across RTC cycles.
Entities configured with this mode are not persisted to the cluster, and correspondingly the objects are not recovered from the cluster.
This cache mode works the same as the In Memory object management option. See Chapter 15, Configuring In Memory Object Management for more details.
Cache Only Mode
As with the default cache mode (Cache Plus Memory), when you choose the Cache Only mode for selected entities, the entity objects are serialized and are always available in the cache.
However, with the Cache Only mode, the references (handles) for the Rete network, as well as the deserialized objects themselves, must be loaded into memory whenever the objects are needed for rule processing. Therefore, when a cache-only object is required for rule processing, it must be explicitly loaded in a rule or rule function (see Design Constraints With Cache Only Cache Mode). For example, you can put such a rule function in an event preprocessor.
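As a sketch, an event preprocessor rule function that explicitly loads a Cache Only concept might look like the following. The event and concept names are assumptions, and the catalog function shown (Cluster.DataGrid.CacheLoadConceptByExtId) should be checked against the function catalog for your product version.

/**
 * Hypothetical event preprocessor that loads a Cache Only Customer concept
 * before the rules run.
 */
void rulefunction Preprocessors.LoadCustomer {
    attribute {
        validity = ACTION;
    }
    scope {
        Events.CustomerRequest evt;
    }
    body {
        // Load the concept from the cache by its external ID so that it is
        // available to the Rete network when the rules for this event run.
        Concepts.Customer cust = Concepts.Customer(
            Cluster.DataGrid.CacheLoadConceptByExtId(evt.customerId, true));
    }
}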
When the rules have run to completion and the entities are no longer needed, the objects and references are retracted, that is, removed from working memory, to free memory. The entity instances are written to the cache or deleted (as needed).
The Cache Only mode uses less memory but adds CPU overhead. Because an active Rete network is not maintained for the cached entities, the entities must be explicitly asserted when needed. The engine therefore re-evaluates all rules relating to the instance, as it does for any newly asserted instance (for example, one arriving through a destination). However, the portion of working memory that would be used by the object and its references is reduced significantly. It is up to you to balance the benefit of a reduced memory footprint against the cost of the increased CPU load in any given situation.
If you use a backing store you can configure preloading options for Cache Only objects. See Configuring How Backing Store Data is Loaded at Startup.