Cache-based object management is generally the best choice for a CEP system, and a distributed cache is generally the most appropriate, especially when used with a backing store (database). All the provided caching schemes use a distributed cache and are configured for production as shipped. For configuration of other caching schemes, and for advanced configuration of the provided schemes, consult the TIBCO BusinessEvents Cache Configuration Guide online reference.

Cache OM is a requirement for other features such as the multi-agent and concurrent RTC features (as explained in Chapter 9, Concurrency and Project Design).

In a distributed cache, cached object data is partitioned between the PUs (JVMs) in the cache cluster for efficient use of memory. By default, one backup of each item of data is maintained on a different PU. You can configure additional backups of each object to be kept on different PUs for greater reliability, or you can disable maintenance of backups.

Distributed caching offers a good balance between memory management, performance, high availability, and reliability. It also offers excellent system scaling as data needs grow.
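The placement rule described above (each object has a primary copy on one PU and a backup on a different PU) can be sketched with a simplified hash-based scheme. This is a generic illustration of the concept only, not TIBCO's actual partitioning algorithm; the class and method names are hypothetical.

```java
import java.util.List;

// Simplified illustration (not TIBCO's actual algorithm) of how a
// distributed cache might place a primary copy and one backup copy
// of each cached object on different processing units (PUs).
public class PartitionSketch {

    // Spread keys across PUs by hash.
    static int primaryPu(Object key, int puCount) {
        return Math.floorMod(key.hashCode(), puCount);
    }

    // Keep the backup on a different PU than the primary,
    // so losing one PU never loses both copies.
    static int backupPu(Object key, int puCount) {
        return Math.floorMod(primaryPu(key, puCount) + 1, puCount);
    }

    public static void main(String[] args) {
        int pus = 4;
        for (String key : List.of("order-1", "order-2", "order-3")) {
            int p = primaryPu(key, pus);
            int b = backupPu(key, pus);
            System.out.println(key + " -> primary PU " + p + ", backup PU " + b);
        }
    }
}
```

Adding cache agents in this model simply increases `puCount`, which is why capacity grows as more storage nodes join the cluster.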
See Characteristics of Distributed Caching Schemes for more details.

To scale the system's capacity to store more data, add more cache agents, which are PUs specialized to handle cache data only (see Cache Agents (Storage Nodes)). To scale the system's capacity to process more data, add more inference agents (see Inference Agents). In addition, each entity can have a different cache mode, to help you balance memory usage and performance (see Between Cache and Rete Network: Cache Modes).

When you use Cache object management without a backing store, objects are persisted in memory only, and reliability comes from maintaining backup copies of cached objects in memory caches.

When you use Cache object management (generally with a backing store), you can also use multi-agent features. Multiple inference agents can run concurrently in either of two ways. In both cases the agents share the same ontology and the same cache cluster:
• Multiple instances of the same inference agent, each running on a different PU, form an agent group. This provides load balancing of messages arriving from queues, as well as fault tolerance.
• Different agents in different PUs work concurrently to distribute the load across the JVM processes. This results in quicker conflict resolution and the ability to handle a heavy incoming message load. For example, Agent X connects to Agents Y and Z to create rule chaining across a set of PUs. Each agent uses a different set of rules, such as rules for fraud, upsell, and cross-sell. All agents operate against the same cluster and share the same ontology. The output from one agent may trigger rules deployed in another agent, causing forward chaining of the workload.

Another way to achieve concurrency is to use the multi-threaded Rete feature, which also requires Cache OM. Within one agent, multiple RTCs take place concurrently.

With agent or RTC concurrency, you must use locking: in both cases multiple RTCs are processed at the same time, and data must be protected as in any concurrent system. See Designing for Concurrency for more details.

Object management is configured using the Cluster Deployment Descriptor, an XML file that you edit in BusinessEvents Studio using a provided editor. See Chapter 3, CDD Configuration Procedures, and Chapter 4, Cluster Deployment Descriptor Reference, in TIBCO BusinessEvents Administration for details.
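The locking requirement for concurrent RTCs can be illustrated with a plain JVM sketch. This uses the standard `java.util.concurrent.locks.ReentrantLock`, not TIBCO BusinessEvents' own locking API; the account scenario and class names are hypothetical.

```java
import java.util.concurrent.locks.ReentrantLock;

// Generic illustration of the locking principle: when two concurrent
// RTC-like tasks may update the same shared object, the read-modify-write
// must be guarded by a lock so that updates are not lost or duplicated.
public class LockingSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 100;

    // Without the lock, both threads could pass the balance check
    // and both debits could succeed, leaving an invalid balance.
    public void debit(int amount) {
        lock.lock();
        try {
            if (balance >= amount) {
                balance -= amount;
            }
        } finally {
            lock.unlock();
        }
    }

    public int getBalance() { return balance; }

    public static void main(String[] args) throws InterruptedException {
        LockingSketch account = new LockingSketch();
        Thread t1 = new Thread(() -> account.debit(60));
        Thread t2 = new Thread(() -> account.debit(60));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // With locking, exactly one debit succeeds: balance is 40.
        System.out.println("balance = " + account.getBalance());
    }
}
```

The same reasoning applies whether the concurrent RTCs run in different agents or in multiple Rete threads within one agent: shared data must be locked for the duration of the update.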
Copyright © TIBCO Software Inc. All Rights Reserved.