Chapter 3 Cache Object Management Tutorial

Caching and Multi-Engine Overview
This tutorial builds on the Project Design Tutorial to explore common object management and deploy-time options.
Object management (OM)  Refers to the various ways in which BusinessEvents can manage the state of ontology object instances created by each inference agent.
Cache-based OM  When you use cache-based object management, object data is kept in memory caches, with (optional but recommended) redundant storage of each object for reliability and high availability. Within a cache cluster, nodes deployed as cache servers manage the data and handle recovery. Cache data is shared by all agents in the cluster.
All the provided caching schemes use a distributed cache and are configured for production as shipped. In a distributed cache, cached object data is partitioned between the nodes (JVMs) in the cache cluster for efficient use of memory. No two nodes are responsible for the same item of data. You can configure one or more backups of each object to be kept on different nodes to provide reliability.
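To make the partitioning idea concrete, here is a small, purely conceptual Java sketch of how a distributed cache might assign each object key to one primary node and keep a backup on a different node. The class and method names are hypothetical; this is not the BusinessEvents cache implementation, only an illustration of the partition-plus-backup idea described above.

```java
import java.util.List;

// Conceptual sketch of distributed-cache partitioning (hypothetical names,
// not the BusinessEvents implementation). Each key hashes to exactly one
// primary node; a backup copy is kept on a different node.
public class PartitionedCacheSketch {

    private final List<String> nodes;   // e.g., ["cache-1", "cache-2"]

    public PartitionedCacheSketch(List<String> nodes) {
        this.nodes = nodes;
    }

    /** Primary owner of the object: no two nodes own the same key. */
    public String primaryNodeFor(String objectKey) {
        int primary = Math.floorMod(objectKey.hashCode(), nodes.size());
        return nodes.get(primary);
    }

    /** Backup owner: the next node in the ring, so the backup copy
        never lands on the same JVM as the primary. */
    public String backupNodeFor(String objectKey) {
        int primary = Math.floorMod(objectKey.hashCode(), nodes.size());
        return nodes.get((primary + 1) % nodes.size());
    }

    public static void main(String[] args) {
        PartitionedCacheSketch cache =
                new PartitionedCacheSketch(List.of("cache-1", "cache-2"));
        String key = "Account:12345";
        System.out.println("primary = " + cache.primaryNodeFor(key)
                + ", backup = " + cache.backupNodeFor(key));
    }
}
```

With two cache servers, as deployed later in this tutorial, each object has its primary copy on one server and its backup on the other, so the loss of a single cache server does not lose data.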
Multi-engine features  With cache OM you can use multi-engine features. Multiple inference agents deployed in the same cache cluster share the cache data and can run concurrently. They can be differently configured agents, or instances of the same agent (known as an agent group).
Load balancing and fault tolerance  Deploying an agent group enables you to use load balancing, fault tolerance, or both. When you deploy more inference agent instances than the specified number of active agents, the extra (inactive) agents are automatically used for fault tolerance. Inactive agents maintain a passive Rete network and do not listen to events from channels (see the sketch below).
Load balancing is used with point-to-point messaging such as JMS queues. (Broadcast messages are received by all active agents in the cluster.) See Load Balancing and Fault Tolerance Between Inference Agents in TIBCO BusinessEvents User's Guide.
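To make the active/standby behavior concrete, the following plain-Java sketch models an agent group of three instances with two configured as active: when an active instance fails, the standby takes over. All names here are hypothetical; in BusinessEvents this coordination happens through the cache cluster and your deployment configuration, not through code you write.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of an agent group with a fixed number of active agents
// (hypothetical names; BusinessEvents handles this coordination itself).
public class AgentGroupSketch {

    private final List<String> agents;   // all deployed instances
    private final int maxActive;         // configured number of active agents

    public AgentGroupSketch(List<String> agents, int maxActive) {
        this.agents = new ArrayList<>(agents);
        this.maxActive = maxActive;
    }

    /** The first maxActive instances listen to channels and process events. */
    public List<String> activeAgents() {
        return agents.subList(0, Math.min(maxActive, agents.size()));
    }

    /** Remaining instances keep a passive Rete network and wait as standbys. */
    public List<String> standbyAgents() {
        return agents.subList(Math.min(maxActive, agents.size()), agents.size());
    }

    /** On failure the failed instance drops out and a standby is promoted. */
    public void onAgentFailure(String failedAgent) {
        agents.remove(failedAgent);
    }

    public static void main(String[] args) {
        AgentGroupSketch group = new AgentGroupSketch(
                List.of("inference-1", "inference-2", "inference-3"), 2);
        System.out.println("active:  " + group.activeAgents());
        System.out.println("standby: " + group.standbyAgents());

        group.onAgentFailure("inference-1");    // simulate a crash
        System.out.println("active after failure: " + group.activeAgents());
    }
}
```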
To explore these features you will make some minor modifications to the fraud detection project and then deploy five nodes: three inference agents, which provide load balancing and fault tolerance, and two cache servers, which provide fault-tolerant data storage. You will then exercise the project to see the effects of the cache, load balancing, and fault tolerance.
JMS Server Required
Load balancing requires point-to-point messaging using a queue. The tutorial provides step-by-step instructions for connecting to the TIBCO Enterprise Message Service™ server running on the default port. If you use a different JMS provider, adapt the instructions accordingly.
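If you want to check EMS connectivity and queue behavior outside of BusinessEvents, a minimal JMS client such as the one below can help. It is a sketch only: it assumes a local EMS server on the standard default port (7222), a server without authorization enabled, and an illustrative queue name that is not taken from the tutorial project. TibjmsConnectionFactory is supplied by the EMS client library (tibjms.jar). Running two copies of this consumer demonstrates the point-to-point behavior described above: the server delivers each queued message to only one of them.

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import com.tibco.tibjms.TibjmsConnectionFactory;

// Minimal JMS consumer for a local EMS server on the default port (7222).
// The queue name is illustrative. Pass credentials to
// createConnection(user, password) if your server has authorization enabled.
public class EmsQueueConsumerSketch {

    public static void main(String[] args) throws JMSException {
        TibjmsConnectionFactory factory =
                new TibjmsConnectionFactory("tcp://localhost:7222");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Point-to-point destination: each message goes to exactly one consumer,
        // which is what allows load balancing across active agents.
        Queue queue = session.createQueue("example.fraud.queue");
        MessageConsumer consumer = session.createConsumer(queue);

        consumer.setMessageListener(message -> {
            try {
                if (message instanceof TextMessage) {
                    System.out.println("Received: " + ((TextMessage) message).getText());
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start();
        System.out.println("Listening on example.fraud.queue for 60 seconds ...");
        try {
            Thread.sleep(60_000);   // keep the process alive while messages arrive
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        connection.close();
    }
}
```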