


Cluster Tab — Berkeley DB OM Settings and Properties
For general settings, see Cluster Tab — General Settings.
A checkpoint is the point in time at which working memory data is written to disk. The checkpoint interval is the time, in seconds, between writes to disk.
Note: No changes can occur in the Rete network during a checkpoint.
Database operations include object creations, updates, and deletions. An outstanding database operation is one that is held in working memory only (it has not yet been written to disk). When the number of outstanding database operations exceeds the Checkpoint Ops Limit value, a checkpoint occurs.
When the persistence layer performs cleanup, the least recently used (LRU) properties are moved to the persistence store, to reduce the number of properties in memory to the specified number.
It is recommended that you delete retracted objects to avoid accumulating large numbers of retracted objects in the database. However, you may want to keep retracted objects in the database, for example for reporting or data mining purposes.
When recovery features are disabled, performance improves because the processing required to support the recovery features is not done.
Persistence files for the agent are stored under the Database Environment directory on the target machine when the agent is deployed. By default (if you do not specify a directory), persistence files are located under the TIBCO BusinessEvents engine’s working directory, in directories named db/session_name.
Note: Each agent must have its own database environment directory. If you will deploy more than one agent, define each agent's directory at the agent level. See Defining the Database Directories for Each Inference Agent for details.
Tip: If you cannot determine the location of a deployed application’s persistence files, search for their filenames. The persistence file directory contains one file called je.lck and one or more files called 00000000.jdb, 00000001.jdb, and so on.
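For example (the names and paths shown here are hypothetical), if the engine's working directory is /opt/tibco/be/bin and the session is named InferenceSession, the default persistence files would be found under /opt/tibco/be/bin/db/InferenceSession, as je.lck, 00000000.jdb, and so on.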
Percentage of JVM memory to set aside for use by the persistence layer’s internal cache. This memory is set aside when the engine starts up.
For projects with multiple agents, you can also set be.engine.om.berkeleydb.cacheweight.agent to provide a weight for one or more agents. The weight enables the system to calculate what percentage of the memory set aside using be.engine.om.berkeleydb.internalcachepercent to allocate to each agent:
Session cache percent = internalcachepercent * (cacheweight.agent / total of all session cacheweight values).
If you want to give a certain agent a larger share of the allocated memory, add an entry for that agent with a higher weight value. The remaining agents are assigned the default weight.
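For example (the numbers here are illustrative only): if internalcachepercent is 40 and a project has three agents with cache weight values of 3, 1, and 1, the first agent is allocated 40 * (3 / 5) = 24 percent of JVM memory for its cache, and each of the other two agents is allocated 40 * (1 / 5) = 8 percent.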
Sets the default maximum event cache size. This default is used if be.engine.om.eventcache.maxsize.agent is not specified for an agent.
If you do not set either of the event cache maximum size properties (be.engine.om.eventcache.defaultmaxsize or be.engine.om.eventcache.maxsize.agent) then the value of the Object Management tab setting is used (see Caches Used for Persistence-Based Object Management).
Note, however, that the property cache size applies to the number of concept properties. Events store their properties inside the event. The event cache maximum size settings refer to the entire event, not its individual properties.
The default value is provided by the be.engine.om.eventcache.defaultmaxsize property.
When the persistence layer performs cleanup, the least recently used (LRU) events are moved to the persistence store, to reduce the number of events in memory to the specified number.
If your system has sufficient memory, you can improve performance by increasing the number of events kept in memory. When determining how many events to keep in memory, consider the size of the events — some may be quite large. Also consider your other requirements for memory.
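For example (the values and agent name are illustrative, and the agent part of the second property name is assumed to be replaced by the actual agent name):
be.engine.om.eventcache.defaultmaxsize=10000
be.engine.om.eventcache.maxsize.FraudDetectionAgent=50000
With these settings, most agents keep at most 10000 events in memory, while FraudDetectionAgent can keep up to 50000 events before the persistence layer's cleanup moves the least recently used events to the persistence store.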
Additional Configuration Notes
Some items in Table 3, Cluster Tab — General Settings, require additional explanation, provided below.
Checkpoint Interval and Outstanding Database Operations
You can schedule checkpoints based on the Checkpoint Interval only, or on the Max Outstanding Database Operations only, or on both settings. It is recommended that you use both settings. When you do so, data is written to disk as follows:
When the Checkpoint Interval passes (even if fewer than the Max Outstanding Database Operations have occurred).
When the Max Outstanding Database Operations value is exceeded within the Checkpoint Interval. TIBCO BusinessEvents then resets the Checkpoint Interval timer.
For example, assume the checkpoint interval is thirty seconds and the number of outstanding database operations is defined as five. Thirty seconds passes with only three outstanding database operations, so TIBCO BusinessEvents performs a checkpoint. Then ten seconds passes and six database operations occur, so again, TIBCO BusinessEvents performs a checkpoint. TIBCO BusinessEvents also resets the checkpoint interval timer.
Caches Used for Persistence-Based Object Management
Two caches are used with the Persistence option: a concept property cache and an event cache. The property cache size controls how many concept properties are kept in JVM memory. You define similar settings for the event cache in the CDD file.
If your system has sufficient memory, you can improve performance by increasing the number of concept properties kept in memory for each agent. When determining how many concept properties to keep in memory, consider the size of the properties (some may be quite large). Also consider your other requirements for memory.
Additional memory management settings also are available. They enable you to control the percentage of JVM memory that is set aside for use by the persistence layer’s internal cache. See Cluster Tab — Berkeley DB OM Settings and Properties.
Defining the Database Directories for Each Inference Agent
If your project has multiple agents (or if you will deploy the same application multiple times on the same machine), you must ensure that each agent has its own database environment directory. To do so, add the following property at the agent level:
be.engine.om.berkeleydb.dbenv
Specify the location of the database directory relative to the deployed agent.
At deploy time, directories are automatically created for each agent under the directory you specify.
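For example (the path is illustrative only, shown here in name=value form):
be.engine.om.berkeleydb.dbenv=./dbenv/inference-agent-1
TIBCO BusinessEvents then creates the required directories under that location at deploy time, and the agent's persistence files (je.lck and the numbered .jdb files) appear there.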
