Read this section to grasp the basic ideas you need to understand and work with cache object management. For greater depth, see TIBCO BusinessEvents Architect's Guide.

What is object management?

Object management (OM for short) refers to the various ways that BusinessEvents can manage the objects created by the actions of an inference agent's rules and rule functions. You define the OM options in the project's CDD file. The objects created are instances of the types defined in the ontology: concepts, scorecards, and events. In the Project Design Tutorial, you used In Memory OM. With In Memory OM, when an engine shuts down, all its data is lost. This option is useful for a project where objects are not created or do not have to be persisted, for example, a project that routes incoming events to different BusinessEvents projects for handling.

What is a cache cluster?

When you use Cache OM, objects are persisted redundantly in memory caches. Each BusinessEvents engine (JVM) participates as a node in the cache cluster. The cache manager manages the objects across the JVMs.

What is a distributed cache?

BusinessEvents ships with a preconfigured distributed cache scheme. In a distributed cache, the cached object data is partitioned among the cluster nodes (BusinessEvents engines). Each object is stored redundantly in two or more nodes for reliability and high availability: if one engine fails, the objects are repartitioned across the remaining engines so that the configured number of backups is maintained.

What is a backing store?

With a distributed cache, one or more engines can fail without loss of data (depending on the number of backups of each object). However, if all engines fail, the data is lost. For this reason it is common to write the objects to a database, known as the backing store. BusinessEvents provides tools to create the backing store for a project. When you use a backing store, you can also use a limited cache and populate it on demand from the backing store, balancing performance and memory requirements.

What are cache agents?

Cache agents store and serve the cache data for the cluster. They participate in the distribution, partitioning, and storage of the objects in the cluster. An engine (processing unit) that runs a cache agent cannot run agents of any other type. (For testing, you can configure an inference agent to also act as a cache agent, but this is not recommended for production systems.)

What other types of agents are there?

Besides cache agents and inference agents, two other types of agents can join the cluster. Query agents are used in the TIBCO BusinessEvents Event Stream Processing add-on. Dashboard agents are used in TIBCO BusinessEvents Views. (Add-ons are available separately from BusinessEvents.) A single engine can run multiple agents of these types.

How do agents use cached data?

Inference agents can use objects in the cache as well as those they create from data coming in through channels (or create internally using rules and rule functions). Cached concept objects are shared by all agents in the cluster, scorecards are kept local to an agent, and events are clustered among the agents.

How does data get from the Rete network to the cache?

At the end of each RTC (run-to-completion cycle), data is flushed from the Rete network to the cache. In this way the Rete network does not become clogged with data that is no longer relevant.
(This behavior occurs when the entities are configured as cache-only, which is the default and strongly recommended setting.)

How does data get from the cache into the Rete network?

Before each RTC, you must load the relevant data into the Rete network from the cache. If you do not, the cached data needed by the rules triggered by incoming events is not present. For example, suppose a request to create a new account arrives through a channel. It is important to check that no matching account already exists in the cache before you create the new account. To do so, you use an event preprocessor rule function to load any matching accounts into the Rete network. Then, in the RTC, rules can check for existing accounts and avoid creating duplicates. (A sketch of such a preprocessor appears at the end of this section.)

How should I use Cache + Memory mode?

With Cache + Memory mode, data is not flushed from the Rete network automatically. Use this option only for constants or for entities that change very seldom. Concurrency management is problematic with this mode because of the difficulty of keeping the Rete networks of all agents in all engines synchronized. Instead, use cache-only mode for all or most of your needs.

How do I change the number of backups kept for each object?

By default, one backup of each cache object is maintained. Backups are kept in different cache agents. You can increase the number of backups by adding this property to the Cluster tab properties sheet: tangosol.coherence.distributed.backupcount

How does BusinessEvents deal with concurrency and locking?

Multiple inference agents can run concurrently, sharing the same ontology and the same cache cluster. Another way to achieve concurrency is to use the multi-threaded Rete feature, which also requires Cache OM. With agent or RTC concurrency, you must use locking: in both cases multiple RTCs are processed at the same time, and data must be protected as in any concurrent system. The tutorial does not demonstrate concurrency features. See Chapter 9, Concurrency and Project Design, in TIBCO BusinessEvents Architect's Guide for more details.
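To make the account example above concrete, the following is a minimal sketch of what such an event preprocessor might look like in rule function source form. The rule function name, the event type Events.NewAccountRequest and its accountId property, and the catalog function Cluster.DataGrid.CacheLoadConceptByExtId are assumptions used for illustration only; check the catalog functions reference for your BusinessEvents version for the exact names and signatures.

    /* Sketch only: the names below are illustrative assumptions,
       not taken from the tutorial project. */
    void rulefunction RuleFunctions.preprocessNewAccountRequest {
        attribute {
            validity = ACTION;
        }
        scope {
            Events.NewAccountRequest evt; /* assumed event type with an accountId property */
        }
        body {
            /* Before the RTC starts, load any existing account with the same
               external id from the cache into the Rete network (assumed
               catalog function; verify its name and signature). */
            Cluster.DataGrid.CacheLoadConceptByExtId(evt.accountId, true);

            /* If a matching account exists, it is now present in the Rete
               network, so rules in the RTC can detect it and avoid creating
               a duplicate account. */
        }
    }

An event preprocessor of this kind is configured on the input destination in the CDD so that it runs before the RTC for each incoming event.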
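For the backup-count property mentioned above, the entry added on the Cluster tab is a simple name and value pair. For example, to keep two backups of each cache object instead of one, you would add an entry along these lines (the value 2 is illustrative; only the property name comes from this section):

    tangosol.coherence.distributed.backupcount = 2

Keep in mind that each additional backup increases the memory used by the cache agents, so weigh reliability against memory requirements.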
Copyright © TIBCO Software Inc. All Rights Reserved.