Table of Contents

Operational Configuration

Operational Configuration Elements
Parameter Settings
Element Attributes
Command Line Setting Override Feature
Operational Configuration Elements Reference

Cache Configuration

Cache Configuration Elements
Parameter Macros
Sample Cache Configurations
Cache Configuration Elements Reference

Excerpts from Oracle Coherence documentation are included with permission from Oracle and/or its affiliates. Copyright © 2000, 2007 Oracle and/or its affiliates. All rights reserved.

Operational Configuration

Operational Configuration Elements


Operational Configuration Deployment Descriptor Elements

Description

The following sections describe the elements that control the operational and runtime settings used by Tangosol Coherence to create, configure and maintain its clustering, communication, and data management services. These elements may be specified in either the tangosol-coherence.xml operational descriptor, or the tangosol-coherence-override.xml override file. For information on configuring caches see the cache configuration descriptor section.

Document Location

When deploying Coherence, it is important to make sure that the tangosol-coherence.xml descriptor is present in the application classpath (as with any other resource, Coherence uses the first one it finds in the classpath). By default (as Tangosol ships the software), tangosol-coherence.xml is packaged in coherence.jar.

Document Root

The root element of the operational descriptor is coherence; this is where you begin configuring your cluster and services.

Document Format

The Coherence Operational Configuration deployment descriptor should begin with the following DOCTYPE declaration:

<!DOCTYPE coherence PUBLIC "-//Tangosol, Inc.//DTD Tangosol Coherence 3.0//EN" "http://www.tangosol.com/dtd/coherence_3_0.dtd">
When deploying Coherence into environments where the default character set is EBCDIC rather than ASCII, please make sure that this descriptor file is in ASCII format and is deployed into its runtime environment in binary format.

Operational Override File (tangosol-coherence-override.xml)

Though it is acceptable to supply an alternate definition of the default tangosol-coherence.xml file, the preferred approach to operational configuration is to specify an override file. The override file contains only the subset of the operational descriptor which you wish to adjust. The default name for the override file is tangosol-coherence-override.xml, and the first instance found in the classpath will be used. The format of the override file is the same as for the operational descriptor, except that all elements are optional; any missing element will simply be loaded from the operational descriptor.
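For example, a minimal override file that only raises the logging verbosity might look as follows (the severity-level value of 9 is illustrative; all other settings continue to be loaded from the operational descriptor):

```xml
<?xml version="1.0"?>
<coherence>
  <logging-config>
    <!-- only this setting differs from the operational descriptor -->
    <severity-level system-property="tangosol.coherence.log.level">9</severity-level>
  </logging-config>
</coherence>
```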

Multiple levels of override files may also be configured, allowing for additional fine tuning between similar deployment environments such as staging and production. For example, Coherence 3.2 and above utilize this feature to provide alternate configurations such as the logging verbosity based on the deployment type (evaluation, development, production). See the tangosol-coherence-override-eval.xml, tangosol-coherence-override-dev.xml, and tangosol-coherence-override-prod.xml, within coherence.jar for the specific customizations.

It is recommended that you supply an override file rather than a custom operational descriptor, thus specifying only the settings you wish to adjust.

Command Line Override

Tangosol Coherence provides a very powerful Command Line Setting Override Feature, which allows for any element defined in this descriptor to be overridden from the Java command line if it has a system-property attribute defined in the descriptor. This feature allows you to use the same operational descriptor (and override file) across all cluster nodes, and provide per-node customizations as system properties.

Element Index

The following table lists all non-terminal elements which may be used from within the operational configuration.

Element Used In:
access-controller security-config
authorized-hosts cluster-config
burst-mode packet-publisher
callback-handler security-config
cluster-config coherence
coherence root element
configurable-cache-factory-config coherence
filters cluster-config
flow-control packet-delivery
host-range authorized-hosts
incoming-message-handler cluster-config
init-param init-params
init-params filters, services, configurable-cache-factory-config, access-controller, callback-handler
license-config coherence
logging-config coherence
management-config coherence
member-identity cluster-config
multicast-listener cluster-config
notification-queueing packet-publisher
outgoing-message-handler cluster-config
outstanding-packets flow-control
packet-buffer unicast-listener, multicast-listener, packet-publisher
packet-delivery packet-publisher
packet-pool packet-publisher, incoming-message-handler
packet-publisher cluster-config
packet-size packet-publisher
packet-speaker cluster-config
pause-detection flow-control
security-config coherence
services cluster-config
shutdown-listener cluster-config
socket-address well-known-addresses
tcp-ring-listener cluster-config
traffic-jam packet-publisher
unicast-listener cluster-config
well-known-addresses unicast-listener

Parameter Settings

Parameter Settings for the Coherence Operational Configuration deployment descriptor init-param Element

This section describes the possible predefined parameter settings for the init-param element in a number of elements where parameters may be specified.

In the following tables, the Parameter Name column refers to the value of the param-name element and the Value Description column refers to the possible values for the corresponding param-value element.

For example, when you see:

Parameter Name Value Description
local-storage Specifies whether or not this member of the DistributedCache service enables the local storage.

Legal values are true or false.

Default value is true.

Preconfigured override is tangosol.coherence.distributed.localstorage

it means that the init-params element may look as follows:

<init-params>
<init-param>
<param-name>local-storage</param-name>
<param-value>false</param-value>
</init-param>
</init-params>

or as follows:

<init-params>
<init-param>
<param-name>local-storage</param-name>
<param-value>true</param-value>
</init-param>
</init-params>

Parameters

Used in: init-param.

The following table describes the specific parameter <param-name> - <param-value> pairs that can be configured for various elements.

ReplicatedCache Service Parameters

Description

ReplicatedCache service elements support the following parameters:

These settings may also be specified as part of the replicated-scheme element in the cache configuration descriptor.

Parameters

Parameter Name Value Description
standard-lease-milliseconds Specifies the duration of the standard lease in milliseconds. Once a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (e.g. higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value.

Legal values are positive long numbers or zero.

Default value is 0.
lease-granularity Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it.

Default value is thread.

mobile-issues Specifies whether or not the lease issues should be transferred to the most recent lock holders.

Legal values are true or false.

Default value is false.
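As a sketch, these parameters might appear within a replicated-scheme element in the cache configuration descriptor as follows (the scheme and service names, and the one-minute lease, are illustrative):

```xml
<replicated-scheme>
  <scheme-name>example-replicated</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <!-- release stuck locks automatically after one minute -->
  <standard-lease-milliseconds>60000</standard-lease-milliseconds>
  <!-- any thread on the owning node may release a lock -->
  <lease-granularity>member</lease-granularity>
</replicated-scheme>
```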

DistributedCache Service Parameters

Description

DistributedCache service elements support the following parameters:

These settings may also be specified as part of the distributed-scheme element in the cache configuration descriptor.

Parameters

Parameter Name Value Description
thread-count Specifies the number of daemon threads used by the distributed cache service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is 0.

Preconfigured override is tangosol.coherence.distributed.threads
standard-lease-milliseconds Specifies the duration of the standard lease in milliseconds. Once a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (e.g. higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value.

Legal values are positive long numbers or zero.

Default value is 0.
lease-granularity Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it.

Default value is thread.

transfer-threshold Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the distributed cache service or when a member of the service leaves, the remaining nodes redistribute bucket ownership. During this process, the existing data gets rebalanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower will make the distribution process take longer, but will reduce network bandwidth utilization during this activity.

Legal values are integers greater than zero.

Default value is 512 (0.5MB).
partition-count Specifies the number of partitions that a distributed cache will be "chopped up" into. Each member running the distributed cache service that has the local-storage option set to true will manage a "fair" (balanced) number of partitions. The number of partitions should be larger than the square of the number of cluster members to achieve a good balance, and it is suggested that the number be prime. Good defaults include 257 and 1021 and prime numbers in-between, depending on the expected cluster size. A list of the first 1,000 primes can be found at http://www.utm.edu/research/primes/lists/small/1000.txt

Legal values are prime numbers.

Default value is 257.
local-storage Specifies whether or not this member of the DistributedCache service enables the local storage.
Normally this value should be left unspecified within the configuration file, and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor.

Legal values are true or false.

Default value is true.

Preconfigured override is tangosol.coherence.distributed.localstorage

backup-count Specifies the number of members of the DistributedCache service that hold the backup data for each unit of storage in the cache.

A value of 0 means that in the case of abnormal termination, some portion of the data in the cache will be lost. A value of N means that if up to N cluster nodes terminate at once, the cache data will be preserved.

For a distributed cache of size M, the total memory usage in the cluster does not depend on the number of cluster nodes and will be on the order of M*(N+1).

Recommended values are 0, 1 or 2.

Default value is 1.
backup-storage/type Specifies the type of the storage used to hold the backup data.

Legal values are:

  • on-heap - The corresponding implementation class is java.util.HashMap.
  • off-heap - The corresponding implementation class is com.tangosol.util.nio.BinaryMap using com.tangosol.util.nio.DirectBufferManager. Only available with JDK 1.4 and later.
  • file-mapped - The corresponding implementation class is com.tangosol.util.nio.BinaryMap using com.tangosol.util.nio.MappedBufferManager. Only available with JDK 1.4 and later.
  • custom - The corresponding implementation class is the class specified by the backup-storage/class-name element.
  • scheme - The corresponding implementation class is the map returned by the ConfigurableCacheFactory for the scheme referred to by the backup-storage/scheme-name element.

Default value is on-heap.

Preconfigured override is tangosol.coherence.distributed.backup

backup-storage/initial-size Only applicable with the off-heap and file-mapped types.

Specifies the initial buffer size in bytes.

The value of this element must be in the following format:

[\d]+[[.][\d]]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1MB.

backup-storage/maximum-size Only applicable with the off-heap and file-mapped types.

Specifies the maximum buffer size in bytes.

The value of this element must be in the following format:

[\d]+[[.][\d]]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1024MB.

backup-storage/directory Only applicable with the file-mapped type.

Specifies the pathname for the directory that the disk persistence manager (com.tangosol.util.nio.MappedBufferManager) will use as "root" to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location is used.

Default value is the default temporary directory designated by the Java runtime.
backup-storage/class-name Only applicable with the custom type.

Specifies a class name for the custom storage implementation. If the class implements the com.tangosol.run.xml.XmlConfigurable interface, then upon construction its setConfig method is called, passing the entire backup-storage element.
backup-storage/scheme-name Only applicable with the scheme type.

Specifies a scheme name for the ConfigurableCacheFactory.
key-associator/class-name Specifies the name of a class that implements the com.tangosol.net.partition.KeyAssociator interface. This implementation must have a zero-parameter public constructor.
key-partitioning/class-name Specifies the name of a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface. This implementation must have a zero-parameter public constructor.
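Taken together, a distributed-scheme element exercising several of the parameters above might be sketched as follows (the scheme name, thread count, sizes, and directory path are illustrative; backup-storage is shown with the file-mapped type):

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- small worker thread pool; 0 would run tasks on the service thread -->
  <thread-count>4</thread-count>
  <partition-count>257</partition-count>
  <!-- one backup copy per unit of storage -->
  <backup-count>1</backup-count>
  <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/tmp/coherence-backup</directory>
  </backup-storage>
  <local-storage system-property="tangosol.coherence.distributed.localstorage">true</local-storage>
</distributed-scheme>
```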

InvocationService Parameters

Description

InvocationService service elements support the following parameters:

These settings may also be specified as part of the invocation-scheme element in the cache configuration descriptor.

Parameters

Parameter Name Value Description
thread-count Specifies the number of daemon threads to be used by the invocation service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is 0.

Preconfigured override is tangosol.coherence.invocation.threads
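As a sketch, this parameter might appear within an invocation-scheme element as follows (the scheme name and thread count are illustrative):

```xml
<invocation-scheme>
  <scheme-name>example-invocation</scheme-name>
  <service-name>InvocationService</service-name>
  <!-- run invocable tasks on two daemon threads -->
  <thread-count system-property="tangosol.coherence.invocation.threads">2</thread-count>
</invocation-scheme>
```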

ProxyService Parameters

Description

ProxyService service elements support the following parameters:

These settings may also be specified as part of the proxy-scheme element in the cache configuration descriptor.

Parameters

Parameter Name Value Description
thread-count Specifies the number of daemon threads to be used by the proxy service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is 0.

Preconfigured override is tangosol.coherence.proxy.threads

Compression Filter Parameters

The compression filter, com.tangosol.net.CompressionFilter, supports the following parameters (see java.util.zip.Deflater for details):

Parameters

Parameter Name Value Description
buffer-length Specifies the compression buffer length in bytes.

Legal values are positive integers or zero.

Default value is 0.
strategy Specifies the compression strategy.

Legal values are:

  • gzip
  • huffman-only
  • filtered
  • default

Default value is gzip.

level Specifies the compression level.

Legal values are:

  • default
  • compression
  • speed
  • none

Default value is default.

Element Attributes

Element Attributes

The following table describes the attributes that can be used with some of the elements described above.

Used in: coherence, cluster-config, logging-config, configurable-cache-factory-config, unicast-listener, multicast-listener, tcp-ring-listener, shutdown-listener, packet-publisher, incoming-message-handler, authorized-hosts, host-range, services, filters, filter-name, init-param (operational).

Attribute Required/Optional Description
xml-override Optional Allows the content of this element to be fully or partially overridden with XML documents that are external to the base document.

The legal value of this attribute is the resource name of such an override document, which should be accessible using ClassLoader.getResourceAsStream(String name) by the classes contained in the coherence.jar library. In general this means that the resource name should be prefixed with '/' and located in the classpath.

The override XML document referred to by this attribute does not have to exist. However, if it does exist, then its root element must have the same name as the element it overrides.

In cases where there are multiple elements with the same name (e.g. <services>) the id attribute should be used to identify the base element that will be overridden as well as the override element itself. The elements of the override document that do not have a match in the base document are just appended to the base.
id Optional Used in conjunction with the xml-override attribute in cases where there are multiple elements with the same name (e.g. <services>) to identify the base element that will be overridden as well as the override element itself. The elements of the override document that do not have a match in the base document are just appended to the base.

Command Line Setting Override Feature

Command Line Setting Override Feature

Both the Coherence Operational Configuration deployment descriptor and the Coherence Cache Configuration deployment descriptor support the ability to assign a Java command-line option name to any element defined in the descriptor. Some of the elements already have these Command Line Setting Overrides defined. You can create your own or change the ones that are already defined.

This feature is very useful when you need to change the settings for just a single JVM, or to start different applications with different settings without making them use different descriptors. The most common application is passing a different multicast address and/or port to allow different applications to create separate clusters.

To create a Command Line Setting Override, add a system-property attribute to the element you want to override, specifying the string you would like to use as the name of the Java command-line option. Then specify that option on the Java command line, prepended with "-D".

For example:

Let's say that we want to create an override for the IP address of a multi-homed server to avoid using the default localhost, and instead specify the IP address of the interface we want Coherence to use (say it is 192.168.0.30). We would like to call this override tangosol.coherence.localhost.

In order to do that we first add a system-property to the cluster-config/unicast-listener/address element:

<address>localhost</address>

which will look as follows with the property we added:

<address system-property="tangosol.coherence.localhost">localhost</address>

Then we use it by modifying our java command line:

java -jar coherence.jar

to specify our address 192.168.0.30 (instead of the default localhost specified in the configuration) as follows:

java -Dtangosol.coherence.localhost=192.168.0.30 -jar coherence.jar

The following table details all the preconfigured overrides:

Override Option Setting
tangosol.coherence.cluster Cluster name
tangosol.coherence.site Site name
tangosol.coherence.rack Rack name
tangosol.coherence.machine Machine name
tangosol.coherence.process Process name
tangosol.coherence.member Member name
tangosol.coherence.role Role name
tangosol.coherence.priority Priority
tangosol.coherence.localhost Unicast IP address
tangosol.coherence.localport Unicast IP port
tangosol.coherence.localport.adjust Unicast IP port auto assignment
tangosol.coherence.clusteraddress Cluster (multicast) IP address
tangosol.coherence.clusterport Cluster (multicast) IP port
tangosol.coherence.wka Well known IP address
tangosol.coherence.wka.port Well known IP port
tangosol.coherence.ttl Multicast packet time to live (TTL)
tangosol.coherence.tcpring TCP Ring enabled flag
tangosol.coherence.shutdownhook Shutdown listener action
tangosol.coherence.log Logging destination
tangosol.coherence.log.level Logging level
tangosol.coherence.log.limit Log output character limit
tangosol.coherence.cacheconfig Cache configuration descriptor filename
tangosol.coherence.management JMX management mode
tangosol.coherence.management.remote Remote JMX management enabled flag
tangosol.coherence.management.readonly JMX management read-only flag
tangosol.coherence.security Cache access security enabled flag
tangosol.coherence.security.keystore Security access controller keystore file name
tangosol.coherence.security.password Keystore or cluster encryption password
tangosol.coherence.security.permissions Security access controller permissions file name
tangosol.coherence.licensepath License repository location
tangosol.coherence.edition Product edition
tangosol.coherence.mode License mode
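Several of these overrides can be combined on a single command line; for example, to start a node that joins a separately named cluster on its own multicast port with reduced logging (the cluster name, port, and log level shown are illustrative):

```shell
java -Dtangosol.coherence.cluster=staging \
     -Dtangosol.coherence.clusterport=32321 \
     -Dtangosol.coherence.log.level=5 \
     -jar coherence.jar
```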

Operational Configuration Elements Reference

access-controller

Used in: security-config.

The following table describes the elements you can define within the access-controller element.

Element Required/Optional Description
<class-name> Required Specifies the name of a Java class that implements the com.tangosol.net.security.AccessController interface, which will be used by the Coherence Security Framework to check access rights for clustered resources and encrypt/decrypt node-to-node communications regarding those rights.

Default value is com.tangosol.net.security.DefaultController.
<init-params> Optional Contains one or more initialization parameter(s) for a class that implements the AccessController interface.

For the default AccessController implementation the parameters are the paths to the key store file and permissions description file, specified as follows:
<init-params>
<init-param id="1">
<param-type>java.io.File</param-type>
<param-value system-property="tangosol.coherence.security.keystore"></param-value>
</init-param>
<init-param id="2">
<param-type>java.io.File</param-type>
<param-value system-property="tangosol.coherence.security.permissions"></param-value>
</init-param>
</init-params>



Preconfigured overrides based on the default AccessController implementation and the default parameters as specified above are tangosol.coherence.security.keystore and tangosol.coherence.security.permissions.

For more information on the elements you can define within the init-param element, refer to init-param.
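Putting it together, a security-config element using the default AccessController implementation might be sketched as follows (the enabled element and the key store and permissions file paths are illustrative assumptions):

```xml
<security-config>
  <enabled system-property="tangosol.coherence.security">true</enabled>
  <access-controller>
    <class-name>com.tangosol.net.security.DefaultController</class-name>
    <init-params>
      <!-- path to the key store file -->
      <init-param id="1">
        <param-type>java.io.File</param-type>
        <param-value system-property="tangosol.coherence.security.keystore">./keystore.jks</param-value>
      </init-param>
      <!-- path to the permissions description file -->
      <init-param id="2">
        <param-type>java.io.File</param-type>
        <param-value system-property="tangosol.coherence.security.permissions">./permissions.xml</param-value>
      </init-param>
    </init-params>
  </access-controller>
</security-config>
```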

authorized-hosts

Used in: cluster-config.

Description

If specified, restricts cluster membership to the cluster nodes specified in the collection of unicast addresses, or address range. The unicast address is the address value from the authorized cluster nodes' unicast-listener element. Any number of host-address and host-range elements may be specified.

Elements

The following table describes the elements you can define within the authorized-hosts element.

Element Required/Optional Description
<host-address> Optional Specifies an IP address or hostname. If any are specified, only hosts with specified host-addresses or within the specified host-ranges will be allowed to join the cluster.

The content override attribute id can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
<host-range> Optional Specifies a range of IP addresses. If any are specified, only hosts with specified host-addresses or within the specified host-ranges will be allowed to join the cluster.

The content override attribute id can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.

The content override attributes xml-override and id can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
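For example, to restrict membership to one named host plus a small range of addresses (the addresses, and the from-address/to-address child elements of host-range, are illustrative assumptions):

```xml
<authorized-hosts>
  <host-address>192.168.0.10</host-address>
  <host-range>
    <from-address>192.168.0.51</from-address>
    <to-address>192.168.0.60</to-address>
  </host-range>
</authorized-hosts>
```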

burst-mode

Used in: packet-publisher.

Description

The burst-mode element is used to control the rate at which packets are transmitted on the network, by specifying the maximum number of packets to transmit without pausing. By default this feature is disabled and is typically only needed when flow-control is disabled, or when operating with heavy loads on a half-duplex network link. This setting only affects packets which are sent by the packet-speaker.

Elements

The following table describes the elements you can define within the burst-mode element.

Element Required/Optional Description
<maximum-packets> Required Specifies the maximum number of packets that will be sent in a row without pausing. Zero indicates no limit. By setting this value relatively low, Coherence is forced to hold back when sending a large number of packets, which may reduce collisions in some instances or allow incoming traffic to be more quickly processed.

Default value is 0.
<pause-milliseconds> Required Specifies the minimum number of milliseconds to delay between long bursts of packets. By increasing this value, Coherence is forced to hold back when sending a large number of packets, which may reduce collisions in some instances or allow incoming traffic to be more quickly processed.

Default value is 10.
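For example, to limit the packet-speaker to bursts of eight packets separated by 10 ms pauses (values illustrative):

```xml
<packet-publisher>
  <burst-mode>
    <!-- pause after every 8 packets -->
    <maximum-packets>8</maximum-packets>
    <pause-milliseconds>10</pause-milliseconds>
  </burst-mode>
</packet-publisher>
```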

callback-handler

Used in: security-config.

The following table describes the elements you can define within the callback-handler element.

Element Required/Optional Description
<class-name> Required Specifies the name of a Java class that provides the implementation for the javax.security.auth.callback.CallbackHandler interface.
<init-params> Optional Contains one or more initialization parameter(s) for a CallbackHandler implementation.

For more information on the elements you can define within the init-param element, refer to init-param.

cluster-config

Used in: coherence.

Description

Contains the cluster configuration information, including communication and service parameters.

Elements

The following table describes the elements you can define within the cluster-config element.

Element Required/Optional Description
<member-identity> Optional Specifies detailed identity information that is useful for defining the location and role of the cluster member.
<unicast-listener> Required Specifies the configuration information for the Unicast listener, used for receiving point-to-point network communications.
<multicast-listener> Required Specifies the configuration information for the Multicast listener, used for receiving point-to-multipoint network communications.
<shutdown-listener> Required Specifies the action to take upon receiving an external shutdown request.
<tcp-ring-listener> Required Specifies configuration information for the TCP Ring listener, used for death detection.
<packet-publisher> Required Specifies configuration information for the Packet publisher, used for managing network data transmission.
<packet-speaker> Required Specifies configuration information for the Packet speaker, used for network data transmission.
<incoming-message-handler> Required Specifies configuration information for the Incoming message handler, used for dispatching incoming cluster communications.
<outgoing-message-handler> Required Specifies configuration information for the Outgoing message handler, used for dispatching outgoing cluster communications.
<authorized-hosts> Optional Specifies the hosts which are allowed to join the cluster.
<services> Required Specifies the declarative data for all available Coherence services.
<filters> Optional Specifies data transformation filters, which can be used to perform custom transformations on data being transferred between cluster nodes.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.

coherence

Description

The coherence element is the root element of the operational deployment descriptor.

Elements

The following table describes the elements you can define within the coherence element.

Element Required/Optional Description
<cluster-config> Required Contains the cluster configuration information. This element is where most communication and service parameters are defined.
<logging-config> Required Contains the configuration information for the logging facility.
<configurable-cache-factory-config> Required Contains configuration information for the configurable cache factory. It controls where, from, and how the cache configuration settings are loaded.
<management-config> Required Contains the configuration information for the Coherence Management Framework.
<security-config> Optional Contains the configuration information for the Coherence Security Framework.
<license-config> Optional Contains the location of the license repository and the details of the license that this member will utilize.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
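A skeleton of the root element, showing the required and optional children in place (content elided for brevity; element order follows the table above):

```xml
<coherence>
  <cluster-config/>
  <logging-config/>
  <configurable-cache-factory-config/>
  <management-config/>
  <!-- optional elements -->
  <security-config/>
  <license-config/>
</coherence>
```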

configurable-cache-factory-config

Used in: coherence.

Elements

The following table describes the elements you can define within the configurable-cache-factory-config element.

Element Required/Optional Description
<class-name> Required Specifies the name of a Java class that provides the cache configuration factory.

Default value is com.tangosol.net.DefaultConfigurableCacheFactory.
<init-params> Optional Contains one or more initialization parameter(s) for a cache configuration factory class which implements the com.tangosol.run.xml.XmlConfigurable interface.

For the default cache configuration factory class (DefaultConfigurableCacheFactory) the parameters are specified as follows:
<init-param>
<param-type>java.lang.String</param-type>
<param-value system-property="tangosol.coherence.cacheconfig">
coherence-cache-config.xml
</param-value>
</init-param>

Preconfigured override is tangosol.coherence.cacheconfig.

Unless an absolute or relative path is specified, such as with ./path/to/config.xml, the application's classpath will be used to find the specified descriptor.

The content override attribute xml-override can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
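Putting the elements above together, a complete configurable-cache-factory-config entry using the documented defaults (the DefaultConfigurableCacheFactory class and the coherence-cache-config.xml descriptor) would look like the following:

```xml
<configurable-cache-factory-config>
  <class-name>com.tangosol.net.DefaultConfigurableCacheFactory</class-name>
  <init-params>
    <init-param>
      <param-type>java.lang.String</param-type>
      <param-value system-property="tangosol.coherence.cacheconfig">
        coherence-cache-config.xml
      </param-value>
    </init-param>
  </init-params>
</configurable-cache-factory-config>
```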

filters

Used in: cluster-config.

Description

Data transformation filters can be used by services to apply a custom transformation on data being transferred between cluster nodes. This can be used, for instance, to compress or encrypt Coherence network traffic.

Implementation

Data transformation filters are implementations of the com.tangosol.util.WrapperStreamFactory interface.

Data transformation filters are not related to com.tangosol.util.Filter, which is part of the Coherence API for querying caches.

Elements

The following table describes the elements you can define within each filter element.

Element Required/Optional Description
<filter-name> Required Specifies the canonical name of the filter. This name is unique within the cluster.

For example: gzip.

The content override attribute id can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
<filter-class> Required Specifies the class name of the filter implementation. This class must have a zero-parameter public constructor and must implement the com.tangosol.util.WrapperStreamFactory interface.
<init-params> Optional Specifies initialization parameters, for configuring filters which implement the com.tangosol.run.xml.XmlConfigurable interface.

For example when using a com.tangosol.net.CompressionFilter the parameters are specified as follows:
<init-param>
<param-name>strategy</param-name>
<param-value>gzip</param-value>
</init-param>
<init-param>
<param-name>level</param-name>
<param-value>default</param-value>
</init-param>

For more information on the parameter values for the standard filters, refer to Compression Filter Parameters, Symmetric Encryption Filter Parameters and PKCS Encryption Filter Parameters.

The content override attributes xml-override and id can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
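Assembling the pieces above, a complete filter declaration for the gzip compression example described in this section would be written as follows:

```xml
<filters>
  <filter>
    <filter-name>gzip</filter-name>
    <filter-class>com.tangosol.net.CompressionFilter</filter-class>
    <init-params>
      <init-param>
        <param-name>strategy</param-name>
        <param-value>gzip</param-value>
      </init-param>
      <init-param>
        <param-name>level</param-name>
        <param-value>default</param-value>
      </init-param>
    </init-params>
  </filter>
</filters>
```

A service can then reference this filter by its canonical name (gzip), for example via the use-filters element described under outgoing-message-handler.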

flow-control

Used in: packet-delivery.

Description

The flow-control element contains configuration information related to packet throttling and remote GC detection.

Remote GC Detection

Remote Pause detection allows Coherence to detect and react to a cluster node becoming unresponsive (likely due to a long GC). Once a node is marked as paused, packets addressed to it will be sent at a lower rate until the node resumes responding. This remote GC detection is used to avoid flooding a node while it is incapable of responding.

Packet Throttling

Flow control allows Coherence to dynamically adjust the rate at which packets are transmitted to a given cluster node based on point to point transmission statistics.

Elements

The following table describes the elements you can define within the flow-control element.

Element Required/Optional Description
<enabled> Optional Specifies if flow control is enabled. Default is true
<pause-detection> Optional Defines the number of packets that will be resent to an unresponsive cluster node before assuming that the node is paused.
<outstanding-packets> Optional Defines the number of unconfirmed packets that will be sent to a cluster node before packets addressed to that node will be deferred.
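As a sketch, a flow-control block that enables throttling might look like the following. The outstanding-packets values are the defaults documented later in this chapter; the pause-detection packet count is an illustrative placeholder, not a recommended setting:

```xml
<flow-control>
  <enabled>true</enabled>
  <pause-detection>
    <!-- packets resent before the node is assumed paused (illustrative value) -->
    <maximum-packets>16</maximum-packets>
  </pause-detection>
  <outstanding-packets>
    <maximum-packets>4096</maximum-packets>
    <minimum-packets>512</minimum-packets>
  </outstanding-packets>
</flow-control>
```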

host-range

Used in: authorized-hosts.

Description

Specifies a range of unicast addresses of nodes which are allowed to join the cluster.

Elements

The following table describes the elements you can define within each host-range element.

Element Required/Optional Description
<from-address> Required Specifies the starting IP address for a range of host addresses.

For example: 198.168.1.1.
<to-address> Required Specifies the ending IP address (inclusive) for a range of hosts.

For example: 198.168.2.255.

The content override attribute id can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
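Using the example addresses above, a host-range entry within authorized-hosts would be written as:

```xml
<authorized-hosts>
  <host-range>
    <from-address>198.168.1.1</from-address>
    <to-address>198.168.2.255</to-address>
  </host-range>
</authorized-hosts>
```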

incoming-message-handler

Used in: cluster-config.

Description

The incoming-message-handler assembles UDP packets into logical messages and dispatches them to the appropriate Coherence service for processing.

Elements

The following table describes the elements you can define within the incoming-message-handler element.

Element Required/Optional Description
<maximum-time-variance> Required Specifies the maximum time variance between sending and receiving broadcast Messages when trying to determine the difference between a new cluster Member's system time and the cluster time.

The smaller the variance, the more certain one can be that the cluster time will be closer between multiple systems running in the cluster; however, the process of joining the cluster will be extended until an exchange of Messages can occur within the specified variance.

Normally, a value as small as 20 milliseconds is sufficient, but with heavily loaded clusters and multiple network hops it is possible that a larger value would be necessary.

Default value is 16.
<use-nack-packets> Required Specifies whether the packet receiver will use negative acknowledgments (packet requests) to pro-actively respond to known missing packets. See notification-queueing for additional details and configuration.

Legal values are true or false.

Default value is true.
<priority> Required Specifies a priority of the incoming message handler execution thread.

Legal values are from 1 to 10.

Default value is 7.
<packet-pool> Required Specifies how many incoming packets Coherence will buffer before blocking.

The content override attribute xml-override can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
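Restating the documented defaults, an explicit incoming-message-handler configuration might look like the following sketch (the packet-pool limit of 2048 is the default given later under the packet-pool section):

```xml
<incoming-message-handler>
  <maximum-time-variance>16</maximum-time-variance>
  <use-nack-packets>true</use-nack-packets>
  <priority>7</priority>
  <packet-pool>
    <maximum-packets>2048</maximum-packets>
  </packet-pool>
</incoming-message-handler>
```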

init-param

Used in: init-params.

Description

Defines an individual initialization parameter.

Elements

The following table describes the elements you can define within the init-param element.

Element Required/Optional Description
<param-name> Optional Specifies the name of the parameter passed to the class. The param-type or param-name must be specified.

For example: thread-count.

For more information on the pre-defined parameter values available for the specific elements, refer to Parameters.
<param-type> Optional Specifies the data type of the parameter passed to the class. The param-type or param-name must be specified.

For example: int
<param-value> Required Specifies the value passed in the parameter.

For example: 8.

For more information on the pre-defined parameter values available for the specific elements, refer to Parameters.

The content override attribute id can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.

init-params

Used in: filters, services, configurable-cache-factory-config, access-controller and callback-handler.

Description

Defines a series of initialization parameters.

Elements

The following table describes the elements you can define within the init-params element.

Element Required/Optional Description
<init-param> Optional Defines an individual initialization parameter.

license-config

Used in: coherence.

The following table describes the elements you can define within the license-config element.

Element Required/Optional Description
<license-path> Optional Specifies the location of the license repository.
<edition-name> Optional Specifies the product edition that the member will utilize. This allows multiple product editions to be used within the same cluster, with each member specifying the edition that it will be using.

Valid values are: "DGE" (DataGrid Edition), "AE" (Application Edition), "CE" (Caching Edition), "CC" (Compute Client), "RTC" (Real-Time Client), "DC" (Data Client).
<license-mode> Optional Specifies whether the product is being used in an evaluation, development or production mode.

Valid values are "prod" (Production), "dev" (Development) and "eval" (Evaluation).
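For illustration, a license-config for a production member running the Data Client edition might be sketched as follows; the repository path shown is a hypothetical placeholder:

```xml
<license-config>
  <!-- hypothetical repository location -->
  <license-path>/opt/coherence/licenses</license-path>
  <edition-name>DC</edition-name>
  <license-mode>prod</license-mode>
</license-config>
```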

logging-config

Used in: coherence.

Elements

The following table describes the elements you can define within the logging-config element.

Element Required/Optional Description
<destination> Required Specifies the output device used by the logging system.

Legal values are:

stdout
stderr
jdk
log4j
a file name

Default value is stderr.

If jdk is specified as the destination, Coherence must be run using JDK 1.4 or later; likewise, if log4j is specified, the Log4j libraries must be in the classpath. In both cases, the appropriate logging configuration mechanism (system properties, property files, etc.) should be used to configure the JDK/Log4j logging libraries.

Preconfigured override is tangosol.coherence.log
<severity-level> Required Specifies which logged messages will be output to the log destination.

Legal values are:

0     - only output without a logging severity level specified will be logged
1     - all the above plus errors
2     - all the above plus warnings
3     - all the above plus informational messages
4-9 - all the above plus internal debugging messages (the higher the number, the more the messages)
-1   - no messages

Default value is 3.

Preconfigured override is tangosol.coherence.log.level
<message-format> Required Specifies how messages that have a logging level specified will be formatted before passing them to the log destination.

The value of the message-format element is static text with the following replaceable parameters:

{date}      - the date/time format (to a millisecond) at which the message was logged
{version} - the Tangosol Coherence exact version and build details
{level}    - the logging severity level of the message
{thread}   - the thread name that logged the message
{member}   - the cluster member id (if the cluster is currently running)
{text}      - the text of the message

Default value is:

{date} Tangosol Coherence {version} <{level}> (thread={thread}, member={member}): {text}
<character-limit> Required Specifies the maximum number of characters that the logger daemon will process from the message queue before discarding all remaining messages in the queue. Note that the message that caused the total number of characters to exceed the maximum will NOT be truncated, and all messages that are discarded will be summarized by the logging system with a single log entry detailing the number of messages that were discarded and their total size. The truncation of the logging is only temporary, since once the queue is processed (emptied), the logger is reset so that subsequent messages will be logged.

The purpose of this setting is to avoid a situation where logging can itself prevent recovery from a failing condition. For example, with tight timings, logging can actually change the timings, causing more failures and probably more logging, which becomes a vicious cycle. A limit on the logging being done at any one point in time is a "pressure valve" that prevents such a vicious cycle from occurring. Note that logging occurs on a dedicated low-priority thread to even further reduce its impact on the critical portions of the system.

Legal values are positive integers or zero. Zero implies no limit.

Default value is 4096.

Preconfigured override is tangosol.coherence.log.limit

The content override attribute xml-override can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
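Putting the documented defaults together, an explicit logging-config would look like this. Note that the angle brackets around {level} in the default message format must be XML-escaped inside the descriptor:

```xml
<logging-config>
  <destination>stderr</destination>
  <severity-level>3</severity-level>
  <message-format>{date} Tangosol Coherence {version} &lt;{level}&gt; (thread={thread}, member={member}): {text}</message-format>
  <character-limit>4096</character-limit>
</logging-config>
```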

management-config

Used in: coherence.

Elements

The following table describes the elements you can define within the management-config element.

Element Required/Optional Description
<domain-name> Required Specifies the name of the JMX domain used to register MBeans exposed by the Coherence Management Framework.
<managed-nodes> Required Specifies whether or not a cluster node's JVM has an [in-process] MBeanServer and if so, whether or not this node allows management of other nodes' managed objects.

Legal values are:
  • none - No MBeanServer is instantiated.
  • local-only - Manage only MBeans which are local to the cluster node (i.e. within the same JVM).
  • remote-only - Manage MBeans on other remotely manageable cluster nodes. Requires Coherence Application Edition or higher
  • all - Manage both local and remotely manageable cluster nodes. Requires Coherence Application Edition or higher

Default value is none.

Preconfigured override is tangosol.coherence.management

<allow-remote-management> Required Specifies whether or not this cluster node exposes its managed objects to remote MBeanServer(s).

Legal values are: true or false.

Default value is false.

Preconfigured override is tangosol.coherence.management.remote
<read-only> Required Specifies whether or not the managed objects exposed by this cluster node allow operations that modify run-time attributes.

Legal values are: true or false.

Default value is false.

Preconfigured override is tangosol.coherence.management.readonly
<service-name> Required Specifies the name of the Invocation Service used for remote management.

This element is used only if allow-remote-management is set to true.

The content override attribute xml-override can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
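As a sketch, a management-config that enables a local-only MBeanServer while leaving the other documented defaults in place might look like the following; the domain-name and service-name values shown are illustrative assumptions, not documented defaults:

```xml
<management-config>
  <!-- illustrative JMX domain name -->
  <domain-name>Coherence</domain-name>
  <managed-nodes>local-only</managed-nodes>
  <allow-remote-management>false</allow-remote-management>
  <read-only>false</read-only>
  <!-- only used when allow-remote-management is true -->
  <service-name>Management</service-name>
</management-config>
```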

member-identity

Used in: cluster-config.

The member-identity element contains detailed identity information that is useful for defining the location and role of the cluster member.

Elements

The following table describes the elements you can define within the member-identity element.

Element Required/Optional Description
<cluster-name> Optional The cluster-name element contains the name of the cluster. In order to join the cluster all members must specify the same cluster name.

It is strongly suggested that cluster-name be specified for production systems, thus preventing accidental cluster discovery among applications.

Preconfigured override is tangosol.coherence.cluster
<site-name> Optional The site-name element contains the name of the geographic site that the member is hosted at. For WAN clustering, this value identifies the datacenter within which the member is located, and can be used as the basis for intelligent routing, load balancing and disaster recovery planning (i.e. the explicit backing up of data on separate geographic sites). The name is also useful for displaying management information (e.g. JMX) and interpreting log entries.

It is optional to provide a value for this element. Deployments that spread across more than one geographic site should specify a site-name value.

Preconfigured override is tangosol.coherence.site.
<rack-name> Optional The rack-name element contains the name of the location within a geographic site that the member is hosted at. This is often a cage, rack or bladeframe identifier, and can be used as the basis for intelligent routing, load balancing and disaster recovery planning (i.e. the explicit backing up of data on separate bladeframes). The name is also useful for displaying management information (e.g. JMX) and interpreting log entries.

It is optional to provide a value for this element. Large scale deployments should always specify a rack-name value.

Preconfigured override is tangosol.coherence.rack.
<machine-name> Optional The machine-name element contains the name of the physical server that the member is hosted on. This is often the same name as the server identifies itself as (e.g. its HOSTNAME, or its name as it appears in a DNS entry). If provided, the machine-name is used as the basis for creating a machine-id, which in turn is used to guarantee that data are backed up on different physical machines to prevent single points of failure (SPOFs). The name is also useful for displaying management information (e.g. JMX) and interpreting log entries.

It is optional to provide a value for this element. However, it is strongly encouraged that a name always be provided.

Preconfigured override is tangosol.coherence.machine.
<process-name> Optional The process-name element contains the name of the process (JVM) that the member is hosted on. This name makes it possible to easily differentiate among multiple JVMs running on the same machine. The name is also useful for displaying management information (e.g. JMX) and interpreting log entries.

It is optional to provide a value for this element. Often, a single member will exist per JVM, and in that situation this name would be redundant.

Preconfigured override is tangosol.coherence.process.
<member-name> Optional The member-name element contains the name of the member itself. This name makes it possible to easily differentiate among members, such as when multiple members run on the same machine (or even within the same JVM). The name is also useful for displaying management information (e.g. JMX) and interpreting log entries.

It is optional to provide a value for this element. However, it is strongly encouraged that a name always be provided.

Preconfigured override is tangosol.coherence.member.
<role-name> Optional The role-name element contains the name of the member role. This name allows an application to organize members into specialized roles, such as cache servers and cache clients. The name is also useful for displaying management information (e.g. JMX) and interpreting log entries.

It is optional to provide a value for this element. However, it is strongly encouraged that a name always be provided.

Preconfigured override is tangosol.coherence.role.
<priority> Optional The priority element specifies a priority of the corresponding member.

The priority is used as the basis for determining tie-breakers between members. If a condition occurs in which one of two members will be ejected from the cluster, and in the rare case that it is not possible to objectively determine which of the two is at fault and should be ejected, then the member with the lower priority will be ejected.

Valid values are from 1 to 10.

Preconfigured override is tangosol.coherence.priority.
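A member-identity block for a hypothetical deployment might be sketched as follows; every name shown is a placeholder chosen for illustration:

```xml
<member-identity>
  <cluster-name>ProdCluster</cluster-name>
  <site-name>SiteA</site-name>
  <rack-name>Rack07</rack-name>
  <machine-name>server-01</machine-name>
  <member-name>server-01-a</member-name>
  <role-name>CacheServer</role-name>
</member-identity>
```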

multicast-listener

Used in: cluster-config.

Description

Specifies the configuration information for the Multicast listener. This element is used to specify the address and port that a cluster will use for cluster wide and point-to-multipoint communications. All nodes in a cluster must use the same multicast address and port, whereas distinct clusters on the same network should use different multicast addresses.

Multicast-Free Clustering

By default Coherence uses a multicast protocol to discover other nodes when forming a cluster. If multicast networking is undesirable, or unavailable in your environment, the well-known-addresses feature may be used to eliminate the need for multicast traffic. If you are having difficulties in establishing a cluster via multicast, see the Multicast Test.

Elements

The following table describes the elements you can define within the multicast-listener element.

Element Required/Optional Description
<address> Required Specifies the multicast IP address that a Socket will listen or publish on.

Legal values are from 224.0.0.0 to 239.255.255.255.

Default value depends on the release and build level and typically follows the convention of {build}.{major version}.{minor version}.{patch}. For example, for Coherence Release 2.2 build 255 it is 225.2.2.0.

Preconfigured override is tangosol.coherence.clusteraddress
<port> Required Specifies the port that the Socket will listen or publish on.

Legal values are from 1 to 65535.

Default value depends on the release and build level and typically follows the convention of {version}{build}. For example, for Coherence Release 2.2 build 255 it is 22255.

Preconfigured override is tangosol.coherence.clusterport
<time-to-live> Required Specifies the time-to-live setting for the multicast. This determines the maximum number of "hops" a packet may traverse, where a hop is measured as a traversal from one network segment to another via a router.

Legal values are from from 0 to 255.
For production use, this value should be set to the lowest integer value that works. On a single server cluster, it should work at 0; on a simple switched backbone, it should work at 1; on an advanced backbone with intelligent switching, it may require a value of 2 or more. Setting the value too high can use unnecessary bandwidth on other LAN segments and can even cause the OS or network devices to disable multicast traffic. While a value of 0 is meant to keep packets from leaving the originating machine, some OSs do not implement this correctly, and the packets may in fact be transmitted on the network.

Default value is 4.

Preconfigured override is tangosol.coherence.ttl

<packet-buffer> Required Specifies how many incoming packets the OS will be requested to buffer.
<priority> Required Specifies a priority of the multicast listener execution thread.

Legal values are from 1 to 10.

Default value is 8.
<join-timeout-milliseconds> Required Specifies the number of milliseconds that a new member will wait without finding any evidence of a cluster before starting its own cluster and electing itself as the senior cluster member.

Legal values are from 1 to 1000000.

Note: For production use, the recommended value is 30000.

Default value is 6000.
<multicast-threshold-percent> Required Specifies the threshold percentage value used to determine whether a packet will be sent via unicast or multicast. It is a percentage value and is in the range of 1% to 100%. In a cluster of "n" nodes, a particular node sending a packet to a set of other (i.e. not counting self) destination nodes of size "d" (in the range of 0 to n-1), the packet will be sent multicast if and only if the following both hold true:

  1. The packet is being sent over the network to more than one other node, i.e. (d > 1).
  2. The number of nodes is greater than the threshold, i.e. (d > (n-1) * (threshold/100)).

Setting this value to 1 will allow the implementation to use multicast for essentially all multi-point traffic. Setting it to 100 will force the implementation to use unicast for all multi-point traffic except for explicit broadcast traffic (e.g. cluster heartbeat and discovery), because the 100% threshold will never be exceeded. With a setting of 25, the implementation will send the packet using unicast if it is destined for less than one-fourth of all nodes, and using multicast if it is destined for one-fourth or more of all nodes.

Note: This element is only used if the well-known-addresses element is empty.

Legal values are from 1 to 100.

Default value is 25.

The content override attribute xml-override can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.
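Restating the documented values, a multicast-listener configuration might be sketched as follows. The address and port are the defaults given above for Coherence Release 2.2 build 255, the join timeout uses the recommended production value, and the packet-buffer size is the multicast-listening default from the packet-buffer section:

```xml
<multicast-listener>
  <address>225.2.2.0</address>
  <port>22255</port>
  <time-to-live>4</time-to-live>
  <packet-buffer>
    <maximum-packets>64</maximum-packets>
  </packet-buffer>
  <priority>8</priority>
  <!-- recommended production value; the default is 6000 -->
  <join-timeout-milliseconds>30000</join-timeout-milliseconds>
  <multicast-threshold-percent>25</multicast-threshold-percent>
</multicast-listener>
```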

notification-queueing

Used in: packet-publisher.

Description

The notification-queueing element is used to specify the timing of notification packets sent to other cluster nodes. Notification packets are used to acknowledge the receipt of packets which require confirmation.

Batched Acknowledgments

Rather than sending an individual ACK for each received packet which requires confirmation, Coherence will batch a series of acknowledgments for a given sender into a single ACK. The ack-delay-milliseconds element specifies the maximum amount of time that an acknowledgment will be delayed before an ACK notification is sent. By batching the acknowledgments, Coherence avoids wasting network bandwidth with many small ACK packets.

Negative Acknowledgments

When enabled, cluster nodes will utilize packet ordering to perform early packet loss detection. This allows Coherence to identify a packet as likely being lost and retransmit it well before the packet's scheduled resend time.

Elements

The following table describes the elements you can define within the notification-queueing element.

Element Required/Optional Description
<ack-delay-milliseconds> Required Specifies the maximum number of milliseconds that the packet publisher will delay before sending an ACK packet. The ACK packet may be transmitted earlier if the number of batched acknowledgments fills the ACK packet.

This value should be substantially lower than the remote node's packet-delivery resend timeout, to allow ample time for the ACK to be received and processed by the remote node before the resend timeout expires.

Default value is 64.
<nack-delay-milliseconds> Required Specifies the number of milliseconds that the packet publisher will delay before sending a NACK packet.

Default value is 1.
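The documented defaults correspond to the following configuration:

```xml
<notification-queueing>
  <ack-delay-milliseconds>64</ack-delay-milliseconds>
  <nack-delay-milliseconds>1</nack-delay-milliseconds>
</notification-queueing>
```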

outgoing-message-handler

Used in: cluster-config.

Description

The outgoing-message-handler splits logical messages into packets for transmission on the network, and enqueues them on the packet-publisher.

Elements

The following table describes the elements you can define within the outgoing-message-handler element.

Element Required/Optional Description
<use-daemon> Optional Specifies whether or not a daemon thread will be created to perform the outgoing message handling.

Deprecated as of Coherence 3.2; splitting messages into packets is always performed on the message originator's thread.
<use-filters> Optional Contains the list of filter names to be used by this outgoing message handler.

For example, specifying use-filters as follows
<use-filters>
<filter-name>gzip</filter-name>
</use-filters>

will activate gzip compression for all network messages, which can help substantially with WAN and low-bandwidth networks.

<priority> Required Specifies a priority of the outgoing message handler execution thread.

Deprecated as of Coherence 3.2.

The content override attribute xml-override can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.

outstanding-packets

Used in: flow-control.

Description

Defines the number of unconfirmed packets that will be sent to a cluster node before packets addressed to that node will be deferred. This helps to prevent the sender from flooding the recipient's network buffers.

Auto Tuning

The value may be specified as either an explicit number via the maximum-packets element, or as a range by using both the maximum-packets and minimum-packets elements. When a range is specified, this setting will be dynamically adjusted based on network statistics.

Elements

The following table describes the elements you can define within the outstanding-packets element.

Element Required/Optional Description
<maximum-packets> Optional The maximum number of unconfirmed packets that will be sent to a cluster node before packets addressed to that node will be deferred. It is recommended that this value not be set below 256. Default is 4096.
<minimum-packets> Optional The lower bound on the range for the number of unconfirmed packets that will be sent to a cluster node before packets addressed to that node will be deferred. It is recommended that this value not be set below 16. Default is 512.
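For example, specifying the documented default range, within which Coherence will adjust the limit dynamically based on network statistics:

```xml
<outstanding-packets>
  <maximum-packets>4096</maximum-packets>
  <minimum-packets>512</minimum-packets>
</outstanding-packets>
```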

packet-buffer

Used in: unicast-listener, multicast-listener, packet-publisher.

Description

Specifies the size of the OS buffer for datagram sockets.

Performance Impact

Large inbound buffers help insulate the Coherence network layer from JVM pauses caused by the Java Garbage Collector. While the JVM is paused, Coherence is unable to dequeue packets from any inbound socket. If the pause is long enough to cause the packet buffer to overflow, packet reception will be delayed, as the originating node will need to detect the packet loss and retransmit the packet(s).

It's just a hint

The OS will only treat the specified value as a hint, and is not required to allocate the specified amount. In the event that less space is allocated than requested, Coherence will issue a warning and continue to operate with the constrained buffer, which may degrade performance.

Elements

The following table describes the elements you can define within the packet-buffer element.

Element Required/Optional Description
<maximum-packets> Required For unicast-listener, multicast-listener and packet-publisher: Specifies the number of packets of maximum size that the datagram socket will be asked to size itself to buffer. See SO_SNDBUF and SO_RCVBUF. Actual buffer sizes may be smaller if the underlying socket implementation cannot support more than a certain size. Defaults are 32 for publishing, 64 for multicast listening, and 1428 for unicast listening.
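For instance, a unicast listener buffer sized at the documented default of 1428 maximum-size packets would be written as the following fragment (a full unicast-listener element has additional children not covered in this section):

```xml
<unicast-listener>
  <packet-buffer>
    <maximum-packets>1428</maximum-packets>
  </packet-buffer>
</unicast-listener>
```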

packet-delivery

Used in: packet-publisher.

Description

Specifies timing and transmission rate parameters related to packet delivery.

Death Detection

The timeout-milliseconds and heartbeat-milliseconds are used in detecting the death of other cluster nodes.

Elements

The following table describes the elements you can define within the packet-delivery element.

Element Required/Optional Description
<resend-milliseconds> Required For packets which require confirmation, specifies the minimum amount of time in milliseconds to wait for a corresponding ACK packet, before resending a packet.

Default value is 200.
<timeout-milliseconds> Required For packets which require confirmation, specifies the maximum amount of time, in milliseconds, that a packet will be resent. After this timeout expires Coherence will make a determination if the recipient is to be considered "dead". This determination takes additional data into account, such as if other nodes are still able to communicate with the recipient.

Default value is 60000.

Note: For production use, the recommended value is the greater of 60000 and two times the maximum expected full GC duration.
<heartbeat-milliseconds> Required Specifies the interval between heartbeats. Each member issues a unicast heartbeat, and the most senior member issues the cluster heartbeat, which is a broadcast message. The heartbeat is used by the tcp-ring-listener as part of fast death detection.

Default value is 1000.
<flow-control> Optional Configures per-node packet throttling and remote GC detection.
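Combining the defaults above with the production recommendation for the timeout, a packet-delivery element might be sketched as:

```xml
<packet-delivery>
  <resend-milliseconds>200</resend-milliseconds>
  <!-- recommended for production: the greater of 60000 and 2x the maximum expected full GC duration -->
  <timeout-milliseconds>60000</timeout-milliseconds>
  <heartbeat-milliseconds>1000</heartbeat-milliseconds>
  <flow-control>
    <enabled>true</enabled>
  </flow-control>
</packet-delivery>
```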

packet-pool

Used in: incoming-message-handler, packet-publisher.

Description

Specifies the number of packets which Coherence will internally maintain for use in transmitting and receiving UDP packets. Unlike the packet-buffers, these buffers are managed by Coherence rather than the OS, and are allocated on the JVM's heap.

Performance Impact

The packet pools are used as a reusable buffer between Coherence network services. For packet transmission, this defines the maximum number of packets which can be queued on the packet-speaker before the packet-publisher must block. For packet reception, this defines the number of packets which can be queued on the incoming-message-handler before the unicast and multicast listeners must block.

Elements

The following table describes the elements you can define within the packet-pool element.

Element Required/Optional Description
<maximum-packets> Required The maximum number of reusable packets to be utilized by the services responsible for publishing and receiving. The pools are initially small, and will grow on demand up to the specified limits. Defaults are 2048 for transmitting and receiving.

packet-publisher

Used in: cluster-config.

Description

Specifies configuration information for the Packet publisher, which manages network data transmission.

Reliable packet delivery

The Packet publisher is responsible for ensuring that transmitted packets reach the destination cluster node. The publisher maintains a set of packets which are waiting to be acknowledged, and if the ACK does not arrive by the packet-delivery resend timeout, the packet will be retransmitted. The recipient node will delay the ACK, in order to batch a series of ACKs into a single response.

Throttling

The rate at which the publisher will accept and transmit packets may be controlled via the traffic-jam and packet-delivery/flow-control settings. Throttling may be necessary when dealing with slow networks, or small packet-buffers.

Elements

The following table describes the elements you can define within the packet-publisher element.

Element Required/Optional Description
<packet-size> Required Specifies the UDP packet sizes to utilize.
<packet-delivery> Required Specifies timing parameters related to reliable packet delivery.
<notification-queueing> Required Contains the notification queue related configuration info.
<burst-mode> Required Specifies the maximum number of packets the publisher may transmit without pausing.
<traffic-jam> Required Specifies the maximum number of packets which can be enqueued on the publisher before client threads block.
<packet-buffer> Required Specifies how many outgoing packets the OS will be requested to buffer.
<packet-pool> Required Specifies how many outgoing packets Coherence will buffer before blocking.
<priority> Required Specifies a priority of the packet publisher execution thread.

Legal values are from 1 to 10.

Default value is 6.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.

packet-size

Used in: packet-publisher.

Description

The packet-size element specifies the maximum and preferred UDP packet sizes. All cluster nodes must use identical maximum packet sizes. For optimal network utilization this value should be 32 bytes less than the network MTU.

When specifying a UDP packet size larger than 1024 bytes on Microsoft Windows, a registry setting must be adjusted to allow for optimal transmission rates.

Elements

The following table describes the elements you can define within the packet-size element.

Element Required/Optional Description
<maximum-length> Required Specifies the maximum size, in bytes, of the UDP packets that will be sent and received on the unicast and multicast sockets.

This value should be at least 512; the recommended value is 1468 for both 100Mb and 1Gb Ethernet. This value must be identical on all cluster nodes.

Note: Some network equipment cannot handle packets larger than 1472 bytes (IPv4) or 1468 bytes (IPv6), particularly under heavy load. If you encounter this situation on your network, this value should be set to 1472 or 1468 respectively.

Default value is 1468.
<preferred-length> Required Specifies the preferred size, in bytes, of UDP packets that will be sent and received on the unicast and multicast sockets.

This value should be at least 512 and cannot be greater than the maximum-length value; it is recommended to set the value to the same as the maximum-length value.

Default value is 1468.
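For example, on a network with a standard 1500 byte Ethernet MTU, the defaults may be stated explicitly as follows:

<packet-size>
<maximum-length>1468</maximum-length>
<preferred-length>1468</preferred-length>
</packet-size>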

packet-speaker

Used in: cluster-config.

Description

Specifies configuration information for the Packet speaker, used for network data transmission.

Offloaded Transmission

The Packet speaker is responsible for sending packets on the network. The speaker is utilized when the Packet publisher detects that a network send operation is likely to block. This allows the Packet publisher to avoid blocking on IO and continue to prepare outgoing packets. The Publisher will dynamically choose whether or not to utilize the speaker as the packet load changes.

Elements

The following table describes the elements you can define within the packet-speaker element.

Element Required/Optional Description
<volume-threshold> Optional Specifies the packet load which must be present for the speaker to be activated.
<priority> Required Specifies a priority of the packet speaker execution thread.

Legal values are from 1 to 10.

Default value is 8.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.

pause-detection

Used in: flow-control.

Description

Remote Pause detection allows Coherence to detect and react to a cluster node becoming unresponsive (likely due to a long GC). Once a node is marked as paused, packets addressed to it will be sent at a lower rate until the node resumes responding. This remote GC detection is used to avoid flooding a node while it is incapable of responding.

Elements

The following table describes the elements you can define within the pause-detection element.

Element Required/Optional Description
<maximum-packets> Optional The maximum number of packets that will be resent to an unresponsive cluster node before assuming that the node is paused. Specifying a value of 0 will disable pause detection. Default is 16.
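For example, to disable pause detection entirely:

<pause-detection>
<maximum-packets>0</maximum-packets>
</pause-detection>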

security-config

Used in: coherence.

Elements

The following table describes the elements you can define within the security-config element.

Element Required/Optional Description
<enabled> Required Specifies whether the security features are enabled. All other configuration elements in the security-config group will be verified for validity and used if and only if the value of this element is true.

Legal values are true or false.

Default value is false.

Preconfigured override is tangosol.coherence.security
<login-module-name> Required Specifies the name of the JAAS LoginModule that should be used to authenticate the caller. This name should match a module in the configuration file used by JAAS (for example, one specified via the -Djava.security.auth.login.config Java command line attribute).

For details please refer to the Sun Login Module Developer's Guide.
<access-controller> Required Contains the configuration information for the class that implements the com.tangosol.net.security.AccessController interface, which will be used by the Coherence Security Framework to check access rights for clustered resources and encrypt/decrypt node-to-node communications regarding those rights.
<callback-handler> Optional Contains the configuration information for the class that implements the javax.security.auth.callback.CallbackHandler interface, which will be called if an attempt is made to access a protected clustered resource when there is no identity associated with the caller.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
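As a sketch, a minimal security-config might look as follows; the login module name and controller class shown are illustrative assumptions and must match your JAAS configuration:

<security-config>
<enabled system-property="tangosol.coherence.security">true</enabled>
<login-module-name>Coherence</login-module-name>
<access-controller>
<class-name>com.tangosol.net.security.DefaultController</class-name>
</access-controller>
</security-config>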

services

Used in: cluster-config.

Description

Specifies the configuration for Coherence services.

Service Components

The types of services which can be configured include:

  • ReplicatedCache - A cache service which maintains copies of all cache entries on all cluster nodes which run the service.
  • ReplicatedCache.Optimistic - A version of the ReplicatedCache which uses optimistic locking.
  • DistributedCache - A cache service which evenly partitions cache entries across the cluster nodes which run the service.
  • SimpleCache - A version of the ReplicatedCache which lacks concurrency control.
  • LocalCache - A cache service for caches where all cache entries reside in a single cluster node.
  • InvocationService - A service used for performing custom operations on remote cluster nodes.

Elements

The following table describes the elements you can define for each service element.

Element Required/Optional Description
<service-type> Required Specifies the canonical name for a service, allowing the service to be referenced from the service-name element in cache configuration caching-schemes.
<service-component> Required Specifies either the fully qualified class name of the service or the relocatable component name relative to the base Service component.

Legal values are:
  • ReplicatedCache
  • ReplicatedCache.Optimistic
  • DistributedCache
  • SimpleCache
  • LocalCache
  • InvocationService
<use-filters> Optional Contains the list of filter names to be used by this service.

For example, specifying use-filter as follows
<use-filters>
<filter-name>gzip</filter-name>
</use-filters>

will activate gzip compression for the network messages used by this service, which can help substantially with WAN and low-bandwidth networks.

<init-params> Optional Specifies the initialization parameters that are specific to each service-component.

For more service-specific parameter information, see the documentation for the corresponding service type.

The content override attributes xml-override and id can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
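As an illustrative sketch, enabling the gzip filter for the DistributedCache service in an override file might look as follows; the id value is an assumption and must match the corresponding service declaration in tangosol-coherence.xml:

<services>
<service id="3">
<service-type>DistributedCache</service-type>
<service-component>DistributedCache</service-component>
<use-filters>
<filter-name>gzip</filter-name>
</use-filters>
</service>
</services>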

shutdown-listener

Used in: cluster-config.

Description

Specifies the action a cluster node should take upon receiving an external shutdown request. External shutdown includes the "kill" command on Unix and "Ctrl-C" on Windows and Unix.

Elements

The following table describes the elements you can define within the shutdown-listener element.

Element Required/Optional Description
<enabled> Required Specifies the type of action to take upon an external JVM shutdown.

Legal Values:

  • none - perform no explicit shutdown actions
  • force - perform "hard-stop" the node by calling Cluster.stop()
  • graceful - perform a "normal" shutdown by calling Cluster.shutdown()
  • true - same as force
  • false - same as none

Note: For production use, the suggested value is none unless testing has verified that the behavior on external shutdown is exactly what is desired.

Default value is force.

Preconfigured override is tangosol.coherence.shutdownhook

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
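For example, to follow the production recommendation above, either override the element or set the preconfigured system property:

<shutdown-listener>
<enabled system-property="tangosol.coherence.shutdownhook">none</enabled>
</shutdown-listener>

Equivalently, specify -Dtangosol.coherence.shutdownhook=none on the Java command line.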

socket-address

Used in: well-known-addresses.

Elements

The following table describes the elements you can define within the socket-address element.

Element Required/Optional Description
<address> Required Specifies the IP address that a Socket will listen or publish on.

Note: The localhost setting may not work on systems that define localhost as the loopback address; in that case, specify the machine name or the specific IP address.
<port> Required Specifies the port that the Socket will listen or publish on.

Legal values are from 1 to 65535.

tcp-ring-listener

Used in: cluster-config.

Description

The TCP-ring provides a means for fast death detection of another node within the cluster. When enabled, the cluster nodes form a single "ring" of TCP connections spanning the entire cluster. A cluster node is able to utilize the TCP connection to detect the death of another node within a heartbeat interval (default one second). If disabled, the cluster node must rely on detecting that another node has stopped responding to UDP packets, which takes a considerably longer interval. Once the death has been detected it is communicated to all other cluster nodes.

Elements

The following table describes the elements you can define within the tcp-ring-listener element.

Element Required/Optional Description
<enabled> Required Specifies whether the tcp ring listener should be enabled to detect node failures faster.

Legal values are true and false.

Default value is true.

Preconfigured override is tangosol.coherence.tcpring
<maximum-socket-closed-exceptions> Required Specifies the maximum number of tcp ring listener exceptions that will be tolerated before a particular member is considered really gone and is removed from the cluster.

This value is used only if the value of tcp-ring-listener/enabled is true.

Legal values are integers greater than zero.

Default value is 2.
<priority> Required Specifies a priority of the tcp ring listener execution thread.

Legal values are from 1 to 10.

Default value is 6.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
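For example, to disable TCP-ring death detection in an override file (falling back to the slower UDP-based detection):

<tcp-ring-listener>
<enabled system-property="tangosol.coherence.tcpring">false</enabled>
</tcp-ring-listener>

Equivalently, specify -Dtangosol.coherence.tcpring=false on the Java command line.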

traffic-jam

Used in: packet-publisher.

Description

The traffic-jam element is used to control the rate at which client threads enqueue packets for the Packet publisher to transmit on the network. Once the limit is exceeded, any client thread will be forced to pause until the number of outstanding packets drops below the specified limit. To limit the rate at which the Publisher transmits packets, see the flow-control and burst-mode elements.

Tuning

Specifying a limit which is too low, or a pause which is too long, may result in the publisher transmitting all pending packets and being left without packets to send. An ideal value will ensure that the publisher is never left without work to do, but at the same time prevent the queue from growing uncontrollably. It is therefore recommended that the pause remain quite short (10ms or under), and that the limit on the number of packets be kept high (e.g. > 5000). As of Coherence 3.2 a warning will be periodically logged if this condition is detected.

Traffic Jam and Flow Control

When flow-control is enabled the traffic-jam operates in a point-to-point mode, only blocking a send if the recipient has too many packets outstanding. It is recommended that the traffic-jam/maximum-packets value be greater than the flow-control/outstanding-packets/maximum-packets value. When flow-control is disabled the traffic-jam will take all outstanding packets into account.

Elements

The following table describes the elements you can define within the traffic-jam element.

Element Required/Optional Description
<maximum-packets> Required Specifies the maximum number of pending packets that the Publisher will tolerate before determining that it is clogged and must slow down client requests (requests from local non-system threads). Zero means no limit. This property prevents most unexpected out-of-memory conditions by limiting the size of the resend queue.

Default value is 8192.
<pause-milliseconds> Required Number of milliseconds that the Publisher will pause a client thread that is trying to send a message when the Publisher is clogged. The Publisher will not allow the message to go through until the clog is gone, and will repeatedly sleep the thread for the duration specified by this property.

Default value is 10.
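For example, restating the defaults explicitly:

<traffic-jam>
<maximum-packets>8192</maximum-packets>
<pause-milliseconds>10</pause-milliseconds>
</traffic-jam>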

unicast-listener

Used in: cluster-config.

Description

Specifies the configuration information for the Unicast listener. This element is used to specify the address and port that a cluster node will bind to, in order to listen for point-to-point cluster communications.

Automatic Address Settings

By default Coherence will attempt to obtain the IP address to bind to using the java.net.InetAddress.getLocalHost() call. On machines with multiple IPs or NICs you may need to explicitly specify the address. Additionally, if the specified port is already in use, Coherence will by default auto-increment the port number until the binding succeeds.

Multicast-Free Clustering

By default Coherence uses a multicast protocol to discover other nodes when forming a cluster. If multicast networking is undesirable, or unavailable in your environment, the well-known-addresses feature may be used to eliminate the need for multicast traffic. If you are having difficulties in establishing a cluster via multicast, see the Multicast Test.

Elements

The following table describes the elements you can define within the unicast-listener element.

Element Required/Optional Description
<well-known-addresses> Optional Contains a list of "well known" addresses (WKA) that are used by the cluster discovery protocol in place of multicast broadcast.
<machine-id> Required Specifies an identifier that should uniquely identify each server machine. If not specified, a default value is generated from the address of the default network interface.

The machine id for each machine in the cluster can be used by cluster services to plan for failover by making sure that each member is backed up by a member running on a different machine.
<address> Required Specifies the IP address that a Socket will listen or publish on.

Note: The localhost setting may not work on systems that define localhost as the loopback address; in that case, specify the machine name or the specific IP address.

Default value is localhost.

Preconfigured override is tangosol.coherence.localhost
<port> Required Specifies the port that the Socket will listen or publish on.

Legal values are from 1 to 65535.

Default value is 8088.

Preconfigured override is tangosol.coherence.localport
<port-auto-adjust> Required Specifies whether or not the unicast port will be automatically incremented if the specified port cannot be bound to because it is already in use.

Legal values are true or false.

It is recommended that this value be configured to false for production environments.

Default value is true.

Preconfigured override is tangosol.coherence.localport.adjust
<packet-buffer> Required Specifies how many incoming packets the OS will be requested to buffer.
<priority> Required Specifies a priority of the unicast listener execution thread.

Legal values are from 1 to 10.

Default value is 8.
<ignore-socket-closed> Optional Specifies whether or not the unicast listener will ignore socket exceptions that indicate that a Member is unreachable.

Deprecated as of Coherence 3.2.
<maximum-socket-closed-exceptions> Optional Specifies the maximum number of unicast listener exceptions that will be tolerated before a particular member is considered really gone and is removed from the cluster.

Deprecated as of Coherence 3.2.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.
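As a sketch, a node may be pinned to a specific address and port in an override file; the address shown is illustrative:

<unicast-listener>
<address system-property="tangosol.coherence.localhost">192.168.0.100</address>
<port system-property="tangosol.coherence.localport">8088</port>
<port-auto-adjust system-property="tangosol.coherence.localport.adjust">false</port-auto-adjust>
</unicast-listener>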

volume-threshold

Used in: packet-speaker

Description

Specifies the minimum outgoing packet volume which must exist for the speaker daemon to be activated.

Performance Impact

When the packet load is relatively low it may be more efficient for the speaker's operations to be performed on the publisher's thread. When the packet load is high, utilizing the speaker allows the publisher to continue preparing packets while the speaker transmits them on the network.

Elements

The following table describes the elements you can define within the volume-threshold element.

Element Required/Optional Description
<minimum-packets> Required Specifies the minimum number of packets which must be ready to be sent for the speaker daemon to be activated. A value of 0 will force the speaker to always be used, while a very high value will cause it to never be used. If unspecified, the value will default to match the socket send buffer size.
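For example, to require a small backlog before the speaker daemon engages (the value is illustrative):

<volume-threshold>
<minimum-packets>32</minimum-packets>
</volume-threshold>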

well-known-addresses

Used in: unicast-listener.

Use of the Well Known Addresses (WKA) feature is not supported by Caching Edition. If you are having difficulties in establishing a cluster via multicast, see the Multicast Test.

Description

By default Coherence uses a multicast protocol to discover other nodes when forming a cluster. If multicast networking is undesirable, or unavailable in your environment, the Well Known Addresses feature may be used to eliminate the need for multicast traffic. When in use the cluster is configured with a relatively small list of nodes which are allowed to start the cluster, and which are likely to remain available over the cluster lifetime. There is no requirement for all WKA nodes to be simultaneously active at any point in time. This list is used by all other nodes to find their way into the cluster without the use of multicast, thus at least one well known node must be running for other nodes to be able to join.

This is not a security-related feature, and does not limit the addresses which are allowed to join the cluster. See the authorized-hosts element for details on limiting cluster membership.

Example

The following configuration specifies two well-known-addresses, with the default port.

<well-known-addresses>
<socket-address id="1">
<address>192.168.0.100</address>
</socket-address>
<socket-address id="2">
<address>192.168.0.101</address>
</socket-address>
</well-known-addresses>

Elements

The following table describes the elements you can define within the well-known-addresses element.

Element Required/Optional Description
<socket-address> Required Specifies a list of "well known" addresses (WKA) that are used by the cluster discovery protocol in place of multicast broadcast. If one or more WKA is specified, for a member to join the cluster it will either have to be a WKA or there will have to be at least one WKA member running. Additionally, all cluster communication will be performed using unicast. If empty or unspecified multicast communications will be used.

Preconfigured overrides are tangosol.coherence.wka and tangosol.coherence.wka.port.

The content override attribute xml-override can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.

Cache Configuration

Cache Configuration Elements

Cache Configuration Deployment Descriptor Elements

Description

The cache configuration deployment descriptor is used to specify the various types of caches which can be used within a cluster. For information on configuring cluster communication and services, see the operational descriptor section.

Document Location

The name and location of the descriptor is specified in the operational deployment descriptor and defaults to coherence-cache-config.xml. The default configuration descriptor (packaged in coherence.jar) will be used unless a custom one is found within the application's classpath. It is recommended that all nodes within a cluster use identical cache configuration descriptors.

Document Root

The root element of the configuration descriptor is cache-config; this is where you may begin configuring your caches.

Document Format

The Cache Configuration descriptor should begin with the following DOCTYPE declaration:

<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
Note:
When deploying Coherence into environments where the default character set is EBCDIC rather than ASCII, please make sure that this descriptor file is in ASCII format and is deployed into its runtime environment in the binary format.

Command Line Override

Tangosol Coherence provides a very powerful Command Line Setting Override Feature, which allows for any element defined in this descriptor to be overridden from the Java command line if it has a system-property attribute defined in the descriptor.
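For example, given an element declared with a system-property attribute (the property name shown here is illustrative):

<high-units system-property="example.cache.highunits">1000</high-units>

the value can then be overridden at launch via the Java command line:

java -Dexample.cache.highunits=2000 ...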

Examples

See the various sample cache configurations for usage examples.

Element Index

The following table lists all non-terminal elements which may be used from within a cache configuration.

Element Used In:
acceptor-config proxy-scheme
async-store-manager external-scheme, paged-external-scheme
backup-storage distributed-scheme
bdb-store-manager external-scheme, paged-external-scheme, async-store-manager
cache-config root element
cache-mapping caching-scheme-mapping
cache-service-proxy proxy-config
caching-scheme-mapping cache-config
caching-schemes cache-config
class-scheme caching-schemes, local-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme, cachestore-scheme, listener
cachestore-scheme local-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
custom-store-manager external-scheme, paged-external-scheme, async-store-manager
disk-scheme caching-schemes
distributed-scheme caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme
external-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
init-param init-params
init-params class-scheme
initiator-config remote-cache-scheme, remote-invocation-scheme
invocation-scheme caching-schemes
jms-acceptor acceptor-config
jms-initiator initiator-config
key-associator distributed-scheme
key-partitioning distributed-scheme
lh-file-manager external-scheme, paged-external-scheme, async-store-manager
listener local-scheme, disk-scheme, external-scheme, paged-external-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
local-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
near-scheme caching-schemes
nio-file-manager external-scheme, paged-external-scheme, async-store-manager
nio-memory-manager external-scheme, paged-external-scheme, async-store-manager
outgoing-message-handler acceptor-config, initiator-config
optimistic-scheme caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme
overflow-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
paged-external-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
proxy-config proxy-scheme
proxy-scheme caching-schemes
read-write-backing-map-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme
remote-cache-scheme cachestore-scheme, caching-schemes, near-scheme
remote-invocation-scheme caching-schemes
replicated-scheme caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme
tcp-acceptor acceptor-config
tcp-initiator initiator-config
versioned-backing-map-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme
versioned-near-scheme caching-schemes
version-transient-scheme versioned-near-scheme, versioned-backing-map-scheme
version-persistent-scheme versioned-backing-map-scheme

Parameter Macros

The cache configuration deployment descriptor supports parameter 'macros' to minimize custom coding and enable specification of commonly used attributes when configuring class constructor parameters. The macros should be entered enclosed in curly braces, as shown below, without any quotes or spaces.

The following parameter macros may be specified:

<param-type> <param-value> Description
java.lang.String {cache-name} Used to pass the current cache name as a constructor parameter

For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
    <init-param>
        <param-type>java.lang.String</param-type>
        <param-value>{cache-name}</param-value>
    </init-param>
</init-params>

java.lang.ClassLoader {class-loader} Used to pass the current classloader as a constructor parameter.

For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
    <init-param>
        <param-type>java.lang.ClassLoader</param-type>
        <param-value>{class-loader}</param-value>
    </init-param>
</init-params>

com.tangosol.net.BackingMapManagerContext {manager-context} Used to pass the current BackingMapManagerContext object as a constructor parameter.

For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
    <init-param>
        <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
        <param-value>{manager-context}</param-value>
    </init-param>
</init-params>

{scheme-ref} scheme name Instantiates an object defined by the <class-scheme>, <local-scheme> or <file-scheme> with the specified <scheme-name> value and uses it as a constructor parameter.

For example:

<class-scheme>
    <scheme-name>dbconnection</scheme-name>
    <class-name>com.mycompany.dbConnection</class-name>
    <init-params>
        <init-param>
            <param-name>driver</param-name>
            <param-type>String</param-type>
            <param-value>org.gjt.mm.mysql.Driver</param-value>
        </init-param>
        <init-param>
            <param-name>url</param-name>
            <param-type>String</param-type>
            <param-value>jdbc:mysql://dbserver:3306/companydb</param-value>
        </init-param>
        <init-param>
            <param-name>user</param-name>
            <param-type>String</param-type>
            <param-value>default</param-value>
        </init-param>
        <init-param>
            <param-name>password</param-name>
            <param-type>String</param-type>
            <param-value>default</param-value>
        </init-param>
    </init-params>
</class-scheme>
...
<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
    <init-param>
        <param-type>{scheme-ref}</param-type>
        <param-value>dbconnection</param-value>
    </init-param>
</init-params>

Sample Cache Configurations

This page provides a series of simple cache scheme configurations. The samples build upon one another and will often utilize a scheme-ref element to reuse other samples as nested schemes. Cache schemes are specified in the caching-schemes element of the cache configuration descriptor. These samples specify only a minimum number of settings; follow the embedded links to each scheme's documentation to see the full set of options.

Contents

Local caches (accessible from a single JVM)
  In-memory cache
  NIO in-memory cache
  In-memory cache with expiring entries
  Size limited in-memory cache
  Cache on disk
  Size limited cache on disk
  Persistent cache on disk
  In-memory cache with disk based overflow
  Cache of a database

Clustered caches (accessible from multiple JVMs)
  Replicated cache
  Replicated cache with overflow
  Partitioned cache
  Partitioned cache with overflow
  Partitioned cache of a database
  Local cache of a partitioned cache (Near cache)

Local caches

This section defines a series of local cache schemes. In this context "local" means that the cache is only directly accessible by a single JVM. Later in this document local caches will be used as building blocks for clustered caches.

In-memory cache

This sample uses a local-scheme to define an in-memory cache. The cache will store as much as the JVM heap will allow.

<local-scheme>
<scheme-name>SampleMemoryScheme</scheme-name>
</local-scheme>

NIO in-memory cache

This sample uses an external-scheme to define an in-memory cache using an nio-memory-manager. The advantage of an NIO memory based cache is that it allows for large in-memory cache storage while not negatively impacting the JVM's GC times. The size of the cache is limited by the maximum size of the NIO memory region.

<external-scheme>
<scheme-name>SampleNioMemoryScheme</scheme-name>
<nio-memory-manager/>
</external-scheme>

Size limited in-memory cache

Adding a high-units element to the local-scheme limits the size of the cache. Here the cache is size limited to one thousand entries. Once the limit is exceeded, the scheme's eviction policy will determine which elements to evict from the cache.

<local-scheme>
<scheme-name>SampleMemoryLimitedScheme</scheme-name>
<high-units>1000</high-units>
</local-scheme>

In-memory cache with expiring entries

Adding an expiry-delay element to the local-scheme will cause cache entries to automatically expire if they are not updated for a given time interval. Once expired, the cache will invalidate the entry and remove it from the cache.

<local-scheme>
<scheme-name>SampleMemoryExpirationScheme</scheme-name>
<expiry-delay>5m</expiry-delay>
</local-scheme>

Cache on disk

This sample uses an external-scheme to define an on-disk cache. The cache will store as much as the file system will allow.

This example uses the lh-file-manager for its on-disk storage implementation. See the external-scheme documentation for additional external storage options.
<external-scheme>
  <scheme-name>SampleDiskScheme</scheme-name>
  <lh-file-manager/>
</external-scheme>

Size limited cache on disk

Adding a high-units element to the external-scheme limits the size of the cache. Here the cache is size limited to one million entries. Once the limit is exceeded, LRU eviction is used to determine which elements to evict from the cache. Refer to the paged-external-scheme for an alternate size limited external caching approach.

<external-scheme>
  <scheme-name>SampleDiskLimitedScheme</scheme-name>
  <lh-file-manager/>
  <high-units>1000000</high-units>
</external-scheme>

Persistent cache on disk

This sample uses an external-scheme to implement a cache suitable for use as long-term storage for a single JVM.

External caches are generally used for temporary storage of large data sets, and are automatically deleted on JVM shutdown. An external-cache can be used for long-term storage in non-clustered caches when using either the lh-file-manager or bdb-store-manager storage managers. For clustered persistence see the "Partitioned cache of a database" sample.

The {cache-name} macro is used to specify the name of the file the data will be stored in.

<external-scheme>
  <scheme-name>SampleDiskPersistentScheme</scheme-name>
  <lh-file-manager>
    <directory>/my/storage/directory</directory>
    <file-name>{cache-name}.store</file-name>
  </lh-file-manager>
</external-scheme>

Or, to use Berkeley DB rather than LH:

<external-scheme>
  <scheme-name>SampleDiskPersistentScheme</scheme-name>
  <bdb-store-manager>
    <directory>/my/storage/directory</directory>
    <store-name>{cache-name}.store</store-name>
  </bdb-store-manager>
</external-scheme>

In-memory cache with disk based overflow

This sample uses an overflow-scheme to define a size limited in-memory cache; once the in-memory (front-scheme) size limit is reached, a portion of the cache contents will be moved to on-disk (back-scheme) storage. The front-scheme's eviction policy determines which elements to move from the front to the back.

Note this sample reuses the "Size limited in-memory cache" and "Cache on disk" samples to implement the front and back of the cache.

<overflow-scheme>
  <scheme-name>SampleOverflowScheme</scheme-name>
  <front-scheme>
    <local-scheme>
      <scheme-ref>SampleMemoryLimitedScheme</scheme-ref>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <external-scheme>
      <scheme-ref>SampleDiskScheme</scheme-ref>
    </external-scheme>
  </back-scheme>
</overflow-scheme>

Cache of a database

This sample uses a read-write-backing-map-scheme to define a cache of a database. This scheme maintains a local cache of a portion of the database contents. Cache misses will read-through to the database, and cache writes will be written back to the database.

The cachestore-scheme element is configured with a custom class implementing either the com.tangosol.net.cache.CacheLoader or com.tangosol.net.cache.CacheStore interface. This class is responsible for all operations against the database, such as reading and writing cache entries.

The {cache-name} macro is used to inform the cache store implementation of the name of the cache it will back.

<read-write-backing-map-scheme>
  <scheme-name>SampleDatabaseScheme</scheme-name>
  <internal-cache-scheme>
    <local-scheme>
      <scheme-ref>SampleMemoryScheme</scheme-ref>
    </local-scheme>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.tangosol.examples.coherence.DBCacheStore</class-name>
      <init-params>
        <init-param>
          <param-type>java.lang.String</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
      </init-params>
    </class-scheme>
  </cachestore-scheme>
</read-write-backing-map-scheme>

Clustered caches

This section defines a series of clustered cache examples. Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The internal cache storage (backing-map) on each cluster node is defined using local caches. The cache service provides the capability to access local caches from other cluster nodes.

Replicated cache

This sample uses the replicated-scheme to define a clustered cache in which a copy of each cache entry will be stored on all cluster nodes.

The "In-memory cache" sample is used to define the cache storage on each cluster node. The size of the cache is only limited by the cluster node with the smallest JVM heap.

<replicated-scheme>
  <scheme-name>SampleReplicatedScheme</scheme-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>SampleMemoryScheme</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</replicated-scheme>

Replicated cache with overflow

The backing-map-scheme element could just as easily specify any of the other local cache samples. For instance if it had used the "In-memory cache with disk based overflow" each cluster node would have a local overflow cache allowing for much greater storage capacity.

<replicated-scheme>
  <scheme-name>SampleReplicatedOverflowScheme</scheme-name>
  <backing-map-scheme>
    <overflow-scheme>
      <scheme-ref>SampleOverflowScheme</scheme-ref>
    </overflow-scheme>
  </backing-map-scheme>
</replicated-scheme>

Partitioned cache

This sample uses the distributed-scheme to define a clustered cache in which cache storage is partitioned across all cluster nodes.

The "In-memory cache" sample is used to define the cache storage on each cluster node. The total storage capacity of the cache is the sum of all storage-enabled cluster nodes running the partitioned cache service.

<distributed-scheme>
  <scheme-name>SamplePartitionedScheme</scheme-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>SampleMemoryScheme</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

Partitioned cache with overflow

The backing-map-scheme element could just as easily specify any of the other local cache samples. For instance if it had used the "In-memory cache with disk based overflow" each storage-enabled cluster node would have a local overflow cache allowing for much greater storage capacity. Note that the cache's backup storage also uses the same overflow scheme which allows for backup data to be overflowed to disk.

<distributed-scheme>
  <scheme-name>SamplePartitionedOverflowScheme</scheme-name>
  <backing-map-scheme>
    <overflow-scheme>
      <scheme-ref>SampleOverflowScheme</scheme-ref>
    </overflow-scheme>
  </backing-map-scheme>
  <backup-storage>
    <type>scheme</type>
    <scheme-name>SampleOverflowScheme</scheme-name>
  </backup-storage>
</distributed-scheme>

Partitioned cache of a database

Switching the backing-map-scheme element to use a read-write-backing-map-scheme allows the cache to load and store entries against an external source such as a database.

This sample reuses the "Cache of a database" sample to define the database access.

<distributed-scheme>
  <scheme-name>SamplePartitionedDatabaseScheme</scheme-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <scheme-ref>SampleDatabaseScheme</scheme-ref>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
</distributed-scheme>

Local cache of a partitioned cache (Near cache)

This sample uses the near-scheme to define a local in-memory cache of a subset of a partitioned cache. The result is that any cluster node accessing the partitioned cache will maintain a local copy of the elements it frequently accesses. This offers read performance close to the replicated-scheme based caches, while offering the high scalability of a distributed-scheme based cache.

The "Size limited in-memory cache" sample is reused to define the "near" front-scheme cache, while the "Partitioned cache" sample is reused to define the back-scheme.

Note that the size limited configuration of the front-scheme specifies the limit on how much of the back-scheme cache is locally cached.

<near-scheme>
  <scheme-name>SampleNearScheme</scheme-name>
  <front-scheme>
    <local-scheme>
      <scheme-ref>SampleMemoryLimitedScheme</scheme-ref>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>SamplePartitionedScheme</scheme-ref>
    </distributed-scheme>
  </back-scheme>
</near-scheme>

Cache Configuration Elements Reference

acceptor-config

Used in: proxy-scheme.

Description

The acceptor-config element specifies the configuration info for a protocol-specific connection acceptor used by a proxy service to enable Coherence*Extend clients to connect to the cluster and use the services offered by the cluster without having to join the cluster.

The acceptor-config element must contain exactly one protocol-specific connection acceptor configuration element (either jms-acceptor or tcp-acceptor).
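As a minimal sketch, a proxy service using a TCP/IP connection acceptor might be configured as follows. The scheme name, service name, listen address, and port shown here are illustrative placeholders, not defaults:

<proxy-scheme>
  <scheme-name>SampleProxyScheme</scheme-name>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <!-- illustrative address and port on which to accept Coherence*Extend client connections -->
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
</proxy-scheme>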

Elements

The following table describes the elements you can define within the acceptor-config element.

Element Required/Optional Description
<jms-acceptor> Optional Specifies the configuration info for a connection acceptor that enables Coherence*Extend clients to connect to the cluster over JMS.
<tcp-acceptor> Optional Specifies the configuration info for a connection acceptor that enables Coherence*Extend clients to connect to the cluster over TCP/IP.
<outgoing-message-handler> Optional Specifies the configuration info used by the connection acceptor to detect dropped client-to-cluster connections.
<use-filters> Optional Contains the list of filter names to be used by this connection acceptor.

For example, specifying use-filters as follows
<use-filters>
  <filter-name>gzip</filter-name>
</use-filters>

will activate gzip compression for all network messages, which can help substantially with WAN and low-bandwidth networks.

<serializer> Optional Specifies the class configuration info for a Serializer implementation used by the connection acceptor to serialize and deserialize user types.

For example, the following configures a ConfigurablePofContext that uses the my-pof-types.xml POF type configuration file to deserialize user types to and from a POF stream:
<serializer>
  <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
  <init-params>
    <init-param>
      <param-type>string</param-type>
      <param-value>my-pof-types.xml</param-value>
    </init-param>
  </init-params>
</serializer>

async-store-manager

Used in: external-scheme, paged-external-scheme.

Description

The async-store-manager element adds asynchronous write capabilities to other store manager implementations.

Supported store managers include the bdb-store-manager, lh-file-manager, nio-file-manager, nio-memory-manager, and custom-store-manager.

Implementation

This store manager is implemented by the com.tangosol.io.AsyncBinaryStoreManager class.

Elements

The following table describes the elements you can define within the async-store-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the async-store-manager.

Any custom implementation must extend the com.tangosol.io.AsyncBinaryStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom async-store-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<custom-store-manager> Optional Configures the external cache to use a custom storage manager implementation.
<bdb-store-manager> Optional Configures the external cache to use Berkeley Database JE on-disk databases for cache storage.
<lh-file-manager> Optional Configures the external cache to use a Tangosol LH on-disk database for cache storage.
<nio-file-manager> Optional Configures the external cache to use a memory-mapped file for cache storage.
<nio-memory-manager> Optional Configures the external cache to use a memory region outside of the JVM heap for cache storage.
<async-limit> Optional Specifies the maximum number of bytes that will be queued to be written asynchronously. Setting the value to zero does not disable the asynchronous writes; instead, it indicates that the implementation default for the maximum number of bytes should be used.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)

If the value does not contain a factor, a factor of one is assumed.

Valid values are any positive memory sizes and zero.

Default value is 4MB.
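For example, asynchronous writes can be layered on top of an LH file manager. This is a sketch; the scheme name and the 512 KB queue limit below are illustrative, not defaults:

<external-scheme>
  <scheme-name>SampleAsyncDiskScheme</scheme-name>
  <async-store-manager>
    <lh-file-manager/>
    <async-limit>512K</async-limit>
  </async-store-manager>
</external-scheme>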

backup-storage

Used in: distributed-scheme.

Description

The backup-storage element specifies the type and configuration of backup storage for a partitioned cache.

Elements

The following table describes the elements you can define within the backup-storage element.

Element Required/Optional Description
<type> Required Specifies the type of the storage used to hold the backup data.

Legal values are:

  • on-heap - The corresponding implementation class is java.util.HashMap.
  • off-heap - The corresponding implementation class is com.tangosol.util.nio.BinaryMap using the com.tangosol.util.nio.DirectBufferManager.
  • file-mapped - The corresponding implementation class is com.tangosol.util.nio.BinaryMap using the com.tangosol.util.nio.MappedBufferManager.
  • custom - The corresponding implementation class is the class specified by the class-name element.
  • scheme - The corresponding implementation is specified as a caching-scheme by the scheme-name element.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<initial-size> Optional Only applicable with the off-heap and file-mapped types.
Specifies the initial buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<maximum-size> Optional Only applicable with the off-heap and file-mapped types.
Specifies the maximum buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<directory> Optional Only applicable with the file-mapped type.

Specifies the pathname for the directory that the disk persistence manager ( com.tangosol.util.nio.MappedBufferManager) will use as "root" to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location is used.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<class-name> Optional Only applicable with the custom type.

Specifies a class name for the custom storage implementation. If the class implements com.tangosol.run.xml.XmlConfigurable interface then upon construction the setConfig method is called passing the entire backup-storage element.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<scheme-name> Optional Only applicable with the scheme type.

Specifies a scheme name for the ConfigurableCacheFactory.

Default value is the value specified in the tangosol-coherence.xml descriptor.
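As an illustrative sketch, file-mapped backup storage for a partitioned cache could be configured as follows. The scheme name, sizes, and directory are examples, not defaults:

<distributed-scheme>
  <scheme-name>SamplePartitionedFileBackupScheme</scheme-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <backup-storage>
    <!-- keep backup copies in a memory-mapped file rather than on the JVM heap -->
    <type>file-mapped</type>
    <initial-size>10MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/my/backup/directory</directory>
  </backup-storage>
</distributed-scheme>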

bdb-store-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Berkeley Database JE Java class libraries are required to utilize a bdb-store-manager.

Description

The BDB store manager is used to define external caches which will use Berkeley Database JE on-disk embedded databases for storage. See the persistent disk cache and overflow cache samples for examples of Berkeley based store configurations.

Implementation

This store manager is implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class, and produces BinaryStore objects implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStore class.

Elements

The following table describes the elements you can define within the bdb-store-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the Berkeley Database BinaryStoreManager.

Any custom implementation must extend the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies additional Berkeley DB configuration settings. See Berkeley DB Configuration.

Also used to specify initialization parameters, for use in custom implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<directory> Optional Specifies the pathname for the root directory that the Berkeley Database JE store manager will use to store files in. If not specified or specifies a non-existent directory, a temporary directory in the default location will be used.
<store-name> Optional Specifies the name for a database table that the Berkeley Database JE store manager will use to store data in. Specifying this parameter will cause the bdb-store-manager to use non-temporary (persistent) database instances. This is intended only for local caches that are backed by a cache loader from a non-temporary store, so that the local cache can be pre-populated from the disk on startup. When specified it is recommended that it utilize the {cache-name} macro.

Normally this parameter should be left unspecified, indicating that temporary storage is to be used.

cache-config

Description

The cache-config element is the root element of the cache configuration descriptor.

At a high level a cache configuration consists of cache schemes and cache scheme mappings. Cache schemes describe a type of cache, for instance a database backed, distributed cache. Cache mappings define what scheme to use for a given cache name.

Elements

The following table describes the elements you can define within the cache-config element.

Element Required/Optional Description
<caching-scheme-mapping> Required Specifies the caching scheme that will be used for each cache, based on the cache's name.
<caching-schemes> Required Defines the available caching-schemes for use in the cluster.
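Putting the two required elements together, a cache configuration descriptor has the following overall shape (contents elided):

<cache-config>
  <caching-scheme-mapping>
    <!-- cache-mapping elements binding cache names to scheme names -->
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- scheme definitions, e.g. local-scheme, distributed-scheme -->
  </caching-schemes>
</cache-config>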

cache-mapping

Used in: caching-scheme-mapping

Description

Each cache-mapping element specifies the caching scheme which is to be used for a given cache name or cache name pattern.

Elements

The following table describes the elements you can define within the cache-mapping element.

Element Required/Optional Description
<cache-name> Required Specifies a cache name or name pattern. The name is unique within a cache factory.

The following cache name patterns are supported:

  • exact match, i.e. "MyCache"
  • prefix match, i.e. "My*" that matches to any cache name starting with "My"
  • any match "*", that matches to any cache name

The patterns get matched in the order of specificity (more specific definition is selected whenever possible). For example, if both "MyCache" and "My*" mappings are specified, the scheme from the "MyCache" mapping will be used to configure a cache named "MyCache".

<scheme-name> Required Contains the caching scheme name. The name is unique within a configuration file.

Caching schemes are configured in the caching-schemes section.
<init-params> Optional Allows specifying replaceable cache scheme parameters.

During cache scheme parsing, any occurrence of any replaceable parameter in format "{parameter-name}" is replaced with the corresponding parameter value.

Consider the following cache mapping example:
<cache-mapping>
  <cache-name>My*</cache-name>
  <scheme-name>my-scheme</scheme-name>
  <init-params>
    <init-param>
      <param-name>cache-loader</param-name>
      <param-value>com.acme.MyCacheLoader</param-value>
    </init-param>
    <init-param>
      <param-name>size-limit</param-name>
      <param-value>1000</param-value>
    </init-param>
  </init-params>
</cache-mapping>

For any cache name match "My*", any occurrence of the literal "{cache-loader}" in any part of the corresponding cache-scheme element will be replaced with the string "com.acme.MyCacheLoader" and any occurrence of the literal "{size-limit}" will be replaced with the value of "1000".

Since Coherence 3.0

caching-scheme-mapping

Used in: cache-config

Description

Defines mappings between cache names, or name patterns, and caching-schemes. For instance you may define that caches whose names start with "accounts-" will use a distributed caching scheme, while caches starting with the name "rates-" will use a replicated caching scheme.
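The "accounts-"/"rates-" example above might be expressed as follows; the scheme names are placeholders for schemes that would be defined in the caching-schemes section:

<caching-scheme-mapping>
  <cache-mapping>
    <!-- all caches named "accounts-..." use a partitioned scheme -->
    <cache-name>accounts-*</cache-name>
    <scheme-name>accounts-distributed-scheme</scheme-name>
  </cache-mapping>
  <cache-mapping>
    <!-- all caches named "rates-..." use a replicated scheme -->
    <cache-name>rates-*</cache-name>
    <scheme-name>rates-replicated-scheme</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>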

Elements

The following table describes the elements you can define within the caching-scheme-mapping element.

Element Required/Optional Description
<cache-mapping> Optional Contains a single binding between a cache name and the caching scheme this cache will use.

caching-schemes

Used in: cache-config

Description

The caching-schemes element defines a series of cache scheme elements. Each cache scheme defines a type of cache, for instance a database backed partitioned cache, or a local cache with an LRU eviction policy. Scheme types are bound to actual caches using cache-scheme-mappings.

Scheme Types and Names

Each of the cache scheme element types is used to describe a different type of cache, for instance distributed, versus replicated. Multiple instances of the same type may be defined so long as each has a unique scheme-name.

For example the following defines two different distributed schemes:

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>DistributedOnDiskCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <external-scheme>
      <nio-file-manager>
        <initial-size>8MB</initial-size>
        <maximum-size>512MB</maximum-size>
        <directory></directory>
      </nio-file-manager>
    </external-scheme>
  </backing-map-scheme>
</distributed-scheme>

Nested Schemes

Some caching scheme types contain nested scheme definitions. For instance in the above example the distributed schemes include a nested scheme definition describing their backing map.

Scheme Inheritance

Caching schemes can be defined by specifying all the elements required for a given scheme type, or by inheriting from another named scheme of the same type, and selectively overriding specific values. Scheme inheritance is accomplished by including a <scheme-ref> element in the inheriting scheme containing the scheme-name of the scheme to inherit from.

For example:

The following two configurations will produce equivalent "DistributedInMemoryCache" scheme definitions:

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <eviction-policy>LRU</eviction-policy>
      <high-units>1000</high-units>
      <expiry-delay>1h</expiry-delay>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<local-scheme>
  <scheme-name>LocalSizeLimited</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>1000</high-units>
  <expiry-delay>1h</expiry-delay>
</local-scheme>

Please note that while the first is somewhat more compact, the second offers the ability to easily reuse the "LocalSizeLimited" scheme within multiple schemes. The following example demonstrates multiple schemes reusing the same "LocalSizeLimited" base definition, with the second overriding its expiry-delay.

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<replicated-scheme>
  <scheme-name>ReplicatedInMemoryCache</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
      <expiry-delay>10m</expiry-delay>
    </local-scheme>
  </backing-map-scheme>
</replicated-scheme>

<local-scheme>
  <scheme-name>LocalSizeLimited</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>1000</high-units>
  <expiry-delay>1h</expiry-delay>
</local-scheme>

Elements

The following table describes the different types of schemes you can define within the caching-schemes element.

Element Required/Optional Description
<local-scheme> Optional Defines a cache scheme which provides on-heap cache storage.
<external-scheme> Optional Defines a cache scheme which provides off-heap cache storage, for instance on disk.
<paged-external-scheme> Optional Defines a cache scheme which provides off-heap cache storage that is size-limited via time-based paging.
<distributed-scheme> Optional Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes.
<replicated-scheme> Optional Defines a cache scheme where each cache entry is stored on all cluster nodes.
<optimistic-scheme> Optional Defines a replicated cache scheme which uses optimistic rather than pessimistic locking.
<near-scheme> Optional Defines a two-tier cache scheme which consists of a fast local front-tier cache in front of a much larger back-tier cache.
<versioned-near-scheme> Optional Defines a near-scheme which uses object versioning to ensure coherence between the front and back tiers.
<overflow-scheme> Optional Defines a two tier cache scheme where entries evicted from a size-limited front-tier overflow and are stored in a much larger back-tier cache.
<invocation-scheme> Optional Defines an invocation service which can be used for performing custom operations in parallel across cluster nodes.
<read-write-backing-map-scheme> Optional Defines a backing map scheme which provides a cache of a persistent store.
<versioned-backing-map-scheme> Optional Defines a backing map scheme which utilizes object versioning to determine what updates need to be written to the persistent store.
<remote-cache-scheme> Optional Defines a cache scheme that enables caches to be accessed from outside a Coherence cluster via Coherence*Extend.
<class-scheme> Optional Defines a cache scheme using a custom cache implementation.

Any custom implementation must implement the java.util.Map interface, and include a zero-parameter public constructor.

Additionally if the contents of the Map can be modified by anything other than the CacheService itself (e.g. if the Map automatically expires its entries periodically or size-limits its contents), then the returned object must implement the com.tangosol.util.ObservableMap interface.
<disk-scheme> Optional Note: As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements.

class-scheme

Used in: caching-schemes, local-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme, cachestore-scheme, listener

Description

Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element will dictate what class or interface(s) must be extended. See the database cache sample for an example of using a class-scheme.

The class-scheme may be configured to instantiate objects either directly via a class-name, or indirectly via a class-factory-name and method-name; one of these two forms must be specified.
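For example, the two instantiation forms might be sketched as follows (the factory class and method names below are hypothetical illustrations, not part of the product):

```xml
<!-- Direct instantiation via class-name -->
<class-scheme>
  <class-name>com.mycompany.cache.CustomCacheLoader</class-name>
</class-scheme>

<!-- Indirect instantiation via a static factory method
     (MyCacheFactory and createLoader are hypothetical names) -->
<class-scheme>
  <class-factory-name>com.mycompany.cache.MyCacheFactory</class-factory-name>
  <method-name>createLoader</method-name>
</class-scheme>
```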

Elements

The following table describes the elements you can define within the class-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Contains the fully qualified name of the Java class to instantiate.

This class must extend an appropriate implementation class as dictated by the containing scheme and must declare the exact same set of public constructors as the superclass.
<class-factory-name> Optional Specifies the fully qualified name of a Java class that will be used as a factory for object instantiation.
<method-name> Optional Specifies the name of a static factory method on the factory class which will perform object instantiation.
<init-params> Optional Specifies initialization parameters which are accessible by implementations which support the com.tangosol.run.xml.XmlConfigurable interface, or which include a public constructor with a matching signature.

cachestore-scheme

Used in: local-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.

Description

Cache store schemes define a mechanism for connecting a cache to a backend data store. The cache store scheme may use any class implementing either the com.tangosol.net.cache.CacheStore or com.tangosol.net.cache.CacheLoader interface, where the former offers read-write capabilities and the latter is read-only. Custom implementations of these interfaces may be produced to connect Coherence to various data stores. See the database cache sample for an example of using a cachestore-scheme.
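As a sketch, a cachestore-scheme might plug a custom implementation into a local cache as follows (the implementation class name is hypothetical; the class must implement CacheStore or CacheLoader):

```xml
<local-scheme>
  <scheme-name>ExampleCacheWithStore</scheme-name>
  <cachestore-scheme>
    <class-scheme>
      <!-- hypothetical class; must implement CacheStore or CacheLoader -->
      <class-name>com.mycompany.cache.CustomCacheLoader</class-name>
    </class-scheme>
  </cachestore-scheme>
</local-scheme>
```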

Elements

The following table describes the elements you can define within the cachestore-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-scheme> Optional Specifies the implementation of the cache store.

The specified class must implement one of the following two interfaces.

  • com.tangosol.net.cache.CacheStore - for read-write support
  • com.tangosol.net.cache.CacheLoader - for read-only support
<remote-cache-scheme> Optional Configures the cachestore-scheme to use Coherence*Extend as its cache store implementation.

custom-store-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Used to create and configure custom implementations of a store manager for use in external caches.

Elements

The following table describes the elements you can define within the custom-store-manager element.

Element Required/Optional Description
<class-name> Required Specifies the implementation of the store manager.

The specified class must implement the com.tangosol.io.BinaryStoreManager interface.
<init-params> Optional Specifies initialization parameters, for use in custom store manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.

disk-scheme

As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements.

distributed-scheme

Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme

Description

The distributed-scheme defines caches where the storage for entries is partitioned across cluster nodes. See the service overview for a more detailed description of partitioned caches. See the partitioned cache samples for examples of various distributed-scheme configurations.

Clustered Concurrency Control

Partitioned caches support cluster wide key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic but there is no guarantee that the value stored in the cache does not change between atomic operations.

Cache Clients

The partitioned cache service supports cluster nodes which do not contribute to the overall storage of the cluster. Nodes which are not storage-enabled are considered "cache clients".

Cache Partitions

The cache entries are evenly segmented into a number of logical partitions, and each storage-enabled cluster node running the specified partitioned service is responsible for maintaining a fair share of these partitions.

Key Association

By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries; the partitioned cache service will ensure that associated entries reside in the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map-scheme. For instance a partitioned cache which uses a local cache for its backing map will result in cache entries being stored in-memory on the storage enabled cluster nodes.

Failover

For the purposes of failover a configurable number of backups of the cache may be maintained in backup-storage across the cluster nodes. Each backup is also divided into partitions, and when possible a backup partition will not reside on the same physical machine as the primary partition. If a cluster node abruptly leaves the cluster, responsibility for its partitions will automatically be reassigned to the existing backups, and new backups of those partitions will be created (on remote nodes) in order to maintain the configured backup count.

Partition Redistribution

When a node joins or leaves the cluster, a background redistribution of partitions occurs to ensure that all cluster nodes manage a fair-share of the total number of partitions. The amount of bandwidth consumed by the background transfer of partitions is governed by the transfer-threshold.
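Drawing these settings together, a distributed scheme might be sketched as follows (all values shown are illustrative, not recommendations):

```xml
<distributed-scheme>
  <scheme-name>ExampleDistributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <partition-count>1021</partition-count>  <!-- prime, per the partition-count guidance -->
  <backup-count>1</backup-count>           <!-- one backup copy of each partition -->
  <backing-map-scheme>
    <local-scheme/>                        <!-- in-memory storage on each storage-enabled node -->
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```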

Elements

The following table describes the elements you can define within the distributed-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are:

  • local-scheme
  • external-scheme
  • paged-external-scheme
  • overflow-scheme
  • class-scheme
  • read-write-backing-map-scheme
  • versioned-backing-map-scheme

When using an overflow-based backing map it is important that the corresponding backup-storage be configured for overflow (potentially using the same scheme as the backing-map). See the partitioned cache with overflow sample for an example configuration.
<partition-count> Optional Specifies the number of partitions that a partitioned cache will be "chopped up" into. Each node running the partitioned cache service that has the local-storage option set to true will manage a "fair" (balanced) number of partitions. The number of partitions should be larger than the square of the number of cluster members to achieve a good balance, and it is suggested that the number be prime. Good defaults include 257 and 1021 and prime numbers in-between, depending on the expected cluster size. A list of the first 1,000 primes can be found at http://www.utm.edu/research/primes/lists/small/1000.txt

Legal values are prime numbers.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<key-associator> Optional Specifies a class that will be responsible for providing associations between keys, allowing associated keys to reside on the same partition.
<key-partitioning> Optional Specifies the class which will be responsible for assigning keys to partitions.

If unspecified the default key partitioning algorithm will be used, which ensures that keys are evenly segmented across partitions.
<backup-count> Optional Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache.

A value of 0 means that, in the case of abnormal termination, some portion of the data in the cache will be lost. A value of N means that if up to N cluster nodes terminate at once, the cache data will be preserved.

For a partitioned cache of size M, the total memory usage in the cluster does not depend on the number of cluster nodes; it will be on the order of M*(N+1).

Recommended values are 0, 1 or 2.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<backup-storage> Optional Specifies the type and configuration for the partitioned cache backup storage.
<thread-count> Optional Specifies the number of daemon threads used by the partitioned cache service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<standard-lease-milliseconds> Optional Specifies the duration of the standard lease in milliseconds. Once a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (e.g. higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value.

Legal values are positive long numbers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<lease-granularity> Optional Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node, and any thread running on the node that obtained the lock can release it.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<transfer-threshold> Optional Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service, or when a member of the service leaves, the remaining nodes perform a re-distribution of bucket ownership. During this process, the existing data is re-balanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower will make the distribution process take longer, but will reduce network bandwidth utilization during this activity.

Legal values are integers greater than zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<local-storage> Optional Specifies whether or not a cluster node will contribute storage to the cluster, i.e. maintain partitions. When disabled the node is considered a cache client.
Normally this value should be left unspecified within the configuration file, and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor.

Legal values are true or false.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.

external-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme

Description

External schemes define caches which are not JVM heap based, allowing for greater storage capacity. See the local cache samples for examples of various external cache configurations.

Implementation

This scheme is implemented by:

  • com.tangosol.net.cache.SerializationMap - for unlimited size caches
  • com.tangosol.net.cache.SerializationCache - for size limited caches

The implementation type is chosen based on the following rule:

  • if the high-units element is specified and not zero then SerializationCache is used;
  • otherwise SerializationMap is used.

Pluggable Storage Manager

External schemes use a pluggable store manager to store and retrieve binary key/value pairs. Supported store managers include:

  • async-store-manager - an asynchronous wrapper for any other store manager
  • bdb-store-manager - Berkeley Database JE on-disk databases
  • custom-store-manager - a custom implementation of com.tangosol.io.BinaryStoreManager
  • lh-file-manager - a Tangosol LH on-disk database
  • nio-file-manager - a memory-mapped file
  • nio-memory-manager - an off-heap memory region

Size Limited Cache

The cache may be configured as size-limited, which means that once it reaches its maximum allowable size it prunes itself.

Eviction against disk-based caches can be expensive; consider using a paged-external-scheme for such cases.

Entry Expiration

External schemes support automatic expiration of entries based on the age of the value, as configured by the expiry-delay.

Persistence (long-term storage)

External caches are generally used for temporary storage of large data sets, for example as the back-tier of an overflow-scheme. Certain implementations do however support persistence for non-clustered caches, see the bdb-store-manager and lh-file-manager for details. Clustered persistence should be configured via a read-write-backing-map-scheme on a distributed-scheme.
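For example, a temporary off-heap external scheme might be sketched as follows (the size limit is illustrative; because high-units is non-zero, the SerializationCache implementation is chosen per the rule above):

```xml
<external-scheme>
  <scheme-name>ExampleOffHeap</scheme-name>
  <nio-memory-manager/>           <!-- off-heap memory region for storage -->
  <high-units>10000</high-units>  <!-- non-zero: size-limited SerializationCache -->
</external-scheme>
```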

Elements

The following table describes the elements you can define within the external-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the external cache.

Any custom implementation must extend one of the following classes:

  • com.tangosol.net.cache.SerializationCache - for size limited caches
  • com.tangosol.net.cache.SerializationMap - for unlimited size caches
  • com.tangosol.net.cache.SimpleSerializationMap - for unlimited size caches

and declare the exact same set of public constructors as the superclass.

<init-params> Optional Specifies initialization parameters, for use in custom external cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<high-units> Optional Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement. Once this limit is exceeded, the cache will begin the pruning process, evicting the least recently used entries until the number of units is brought below this limit. The scheme's class-name element may be used to provide custom extensions to SerializationCache, which implement alternative eviction policies.

Legal values are positive integers or zero. Zero implies no limit.

Default value is zero.
<expiry-delay> Optional Specifies the amount of time from last update that entries will be kept by the cache before being expired. Entries that are expired will not be accessible and will be evicted.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

A value of zero implies no expiry.

Default value is zero.
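For example, under this format the following expiry-delay values are all legal:

```xml
<expiry-delay>30</expiry-delay>    <!-- no unit given, so 30 seconds -->
<expiry-delay>150ms</expiry-delay> <!-- 150 milliseconds -->
<expiry-delay>1.5h</expiry-delay>  <!-- one and a half hours -->
<expiry-delay>0</expiry-delay>     <!-- zero: entries never expire -->
```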

<async-store-manager> Optional Configures the external cache to use an asynchronous storage manager wrapper for any other storage manager.
<custom-store-manager> Optional Configures the external cache to use a custom storage manager implementation.
<bdb-store-manager> Optional Configures the external cache to use Berkeley Database JE on-disk databases for cache storage.
<lh-file-manager> Optional Configures the external cache to use a Tangosol LH on-disk database for cache storage.
<nio-file-manager> Optional Configures the external cache to use a memory-mapped file for cache storage.
<nio-memory-manager> Optional Configures the external cache to use an off-heap (outside the JVM heap) memory region for cache storage.

init-param

Used in: init-params.

Defines an individual initialization parameter.

Elements

The following table describes the elements you can define within the init-param element.

Element Required/Optional Description
<param-name> Optional Contains the name of the initialization parameter.

For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
  <init-param>
    <param-name>sTableName</param-name>
    <param-value>EmployeeTable</param-value>
  </init-param>
  <init-param>
    <param-name>iMaxSize</param-name>
    <param-value>2000</param-value>
  </init-param>
</init-params>
<param-type> Optional Contains the Java type of the initialization parameter.

The following standard types are supported:

  • java.lang.String (a.k.a. string)
  • java.lang.Boolean (a.k.a. boolean)
  • java.lang.Integer (a.k.a. int)
  • java.lang.Long (a.k.a. long)
  • java.lang.Double (a.k.a. double)
  • java.math.BigDecimal
  • java.io.File
  • java.sql.Date
  • java.sql.Time
  • java.sql.Timestamp

For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
  <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>EmployeeTable</param-value>
  </init-param>
  <init-param>
    <param-type>int</param-type>
    <param-value>2000</param-value>
  </init-param>
</init-params>

Please refer to the list of available Parameter Macros.

<param-value> Optional Contains the value of the initialization parameter.

The value is in the format specific to the Java type of the parameter.

Please refer to the list of available Parameter Macros.

init-params

Used in: class-scheme, cache-mapping.

Description

Defines a series of initialization parameters as name/value pairs. See the database cache sample for an example of using init-params.

Elements

The following table describes the elements you can define within the init-params element.

Element Required/Optional Description
<init-param> Optional Defines an individual initialization parameter.

invocation-scheme

Used in: caching-schemes.

Description

Defines an Invocation Service. The invocation service may be used to perform custom operations in parallel on any number of cluster nodes. See the com.tangosol.net.InvocationService API for additional details.
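As a minimal sketch, an invocation scheme might be declared as follows (the scheme name and thread count are illustrative):

```xml
<invocation-scheme>
  <scheme-name>ExampleInvocation</scheme-name>
  <thread-count>5</thread-count>  <!-- zero would run tasks on the service thread -->
  <autostart>true</autostart>     <!-- start automatically on cache servers -->
</invocation-scheme>
```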

Elements

The following table describes the elements you can define within the invocation-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<thread-count> Optional Specifies the number of daemon threads used by the invocation service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not this service should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.

key-associator

Used in: distributed-scheme

Description

Specifies an implementation of a com.tangosol.net.partition.KeyAssociator which will be used to determine associations between keys, allowing related keys to reside on the same partition.

Alternatively the cache's keys may manage the association by implementing the com.tangosol.net.cache.KeyAssociation interface.

Elements

The following table describes the elements you can define within the key-associator element.

Element Required/Optional Description
<class-name> Required The name of a class that implements the com.tangosol.net.partition.KeyAssociator interface. This implementation must have a zero-parameter public constructor.

Default value is the value specified in the tangosol-coherence.xml descriptor.
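A sketch of wiring a key-associator into a distributed scheme follows (the implementation class name is hypothetical):

```xml
<distributed-scheme>
  <scheme-name>AssociatedKeysDistributed</scheme-name>
  <key-associator>
    <!-- hypothetical class; must implement
         com.tangosol.net.partition.KeyAssociator and
         have a zero-parameter public constructor -->
    <class-name>com.mycompany.cache.OrderLineKeyAssociator</class-name>
  </key-associator>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>
```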

key-partitioning

Used in: distributed-scheme

Description

Specifies an implementation of a com.tangosol.net.partition.KeyPartitioningStrategy which will be used to determine the partition in which a key will reside.

Elements

The following table describes the elements you can define within the key-partitioning element.

Element Required/Optional Description
<class-name> Required The name of a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface. This implementation must have a zero-parameter public constructor.

Default value is the value specified in the tangosol-coherence.xml descriptor.

lh-file-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Configures a store manager which will use a Tangosol LH on-disk embedded database for storage. See the persistent disk cache and overflow cache samples for examples of LH based store configurations.

Implementation

Implemented by the com.tangosol.io.lh.LHBinaryStoreManager class. The BinaryStore objects created by this class are instances of com.tangosol.io.lh.LHBinaryStore.

Elements

The following table describes the elements you can define within the lh-file-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the LH BinaryStoreManager.

Any custom implementation must extend the com.tangosol.io.lh.LHBinaryStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom LH file manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<directory> Optional Specifies the pathname of the root directory in which the LH file manager will store its files. If it is unspecified, or specifies a non-existent directory, a temporary file in the default location is used.
<file-name> Optional Specifies the name for a non-temporary (persistent) file that the LH file manager will use to store data in. Specifying this parameter will cause the lh-file-manager to use non-temporary database instances. This is intended only for local caches that are backed by a cache loader from a non-temporary file, so that the local cache can be pre-populated from the disk file on startup. When specified it is recommended that it utilize the {cache-name} macro.

Normally this parameter should be left unspecified, indicating that temporary storage is to be used.
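Putting these elements together, a persistent LH-backed external scheme might be sketched as follows (the directory path is hypothetical; the {cache-name} macro yields a per-cache file name as recommended above):

```xml
<external-scheme>
  <scheme-name>ExamplePersistentDisk</scheme-name>
  <lh-file-manager>
    <directory>/data/coherence</directory>     <!-- hypothetical path -->
    <file-name>{cache-name}.store</file-name>  <!-- persistent, per-cache file -->
  </lh-file-manager>
</external-scheme>
```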

listener

Used in: local-scheme, external-scheme, paged-external-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.

Description

The Listener element specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on a cache.

Elements

The following table describes the elements you can define within the listener element.

Element Required/Optional Description
<class-scheme> Required Specifies the listener implementation to use, defined as a class-scheme.

The specified class must implement the com.tangosol.util.MapListener interface.
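As a sketch, a listener might be attached to a local cache as follows (the listener class name is hypothetical):

```xml
<local-scheme>
  <scheme-name>ExampleListeningCache</scheme-name>
  <listener>
    <class-scheme>
      <!-- hypothetical class; must implement com.tangosol.util.MapListener -->
      <class-name>com.mycompany.cache.LoggingMapListener</class-name>
    </class-scheme>
  </listener>
</local-scheme>
```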

local-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme

Description

Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near-scheme. See the local cache samples for examples of various local cache configurations.

Implementation

Local caches are implemented by the com.tangosol.net.cache.LocalCache class.

Cache of an External Store

A local cache may be backed by an external cache store; cache misses will read through to the back-end store to retrieve the data. If a writable store is provided, cache writes will propagate to the cache store as well. For optimizing read/write access against a cache store see the read-write-backing-map-scheme.

Size Limited Cache

The cache may be configured as size-limited, which means that once it reaches its maximum allowable size it prunes itself back to a specified smaller size, choosing which entries to evict according to its eviction-policy. The entries and size limitations are measured in terms of units as calculated by the scheme's unit-calculator.

Entry Expiration

The local cache supports automatic expiration of entries based on the age of the value, as configured by the expiry-delay.
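Combining the size-limit and expiration settings, a size-limited local scheme might be sketched as follows (all values are illustrative):

```xml
<local-scheme>
  <scheme-name>ExampleSizeLimitedLocal</scheme-name>
  <eviction-policy>LFU</eviction-policy>  <!-- evict least frequently used entries -->
  <high-units>5000</high-units>           <!-- prune once 5000 units are exceeded -->
  <low-units>4000</low-units>             <!-- prune back down to 4000 units -->
  <expiry-delay>15m</expiry-delay>        <!-- entries expire 15 minutes after update -->
  <flush-delay>1m</flush-delay>           <!-- discard expired entries every minute -->
</local-scheme>
```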

Elements

The following table describes the elements you can define within the local-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<class-name> Optional Specifies a custom implementation of the local cache.

Any custom implementation must extend the com.tangosol.net.cache.LocalCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom local cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<cachestore-scheme> Optional Specifies the store which is being cached. If unspecified the cached data will only reside in memory, and only reflect operations performed on the cache itself.
<eviction-policy> Optional Specifies the type of eviction policy to use.

Legal values are:

  • LRU - Least Recently Used eviction policy chooses which entries to evict based on how recently they were last accessed, evicting those that were not accessed for the longest period first.
  • LFU - Least Frequently Used eviction policy chooses which entries to evict based on how often they are being accessed, evicting those that are accessed least frequently first.
  • HYBRID - Hybrid eviction policy chooses which entries to evict based on the combination (weighted score) of how often and how recently they were accessed, evicting those that are accessed least frequently and were not accessed for the longest period first.
  • <class-scheme> - A custom eviction policy, specified as a class-scheme. The class specified within this scheme must implement the com.tangosol.net.cache.LocalCache.EvictionPolicy interface.

Default value is HYBRID.

<high-units> Optional Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator. Once this limit is exceeded, the cache will begin the pruning process, evicting entries according to the eviction policy until the low-units size is reached.

Legal values are positive integers or zero. Zero implies no limit.

Default value is 0.
<low-units> Optional Contains the number of units that the cache will be pruned down to when pruning takes place. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator. When pruning occurs, entries will continue to be evicted according to the eviction policy until this size is reached.

Legal values are positive integers or zero. Zero implies the default.

Default value is 75% of the high-units setting (i.e. for a high-units setting of 1000 the default low-units will be 750).
<unit-calculator> Optional Specifies the type of unit calculator to use.

A unit calculator is used to determine the cost (in "units") of a given object.

Legal values are:

  • FIXED - A unit calculator that assigns an equal weight of 1 to all cached objects.
  • BINARY - A unit calculator that assigns an object a weight equal to the number of bytes of memory required to cache the object. This requires that the objects are Binary instances, as in a Partitioned cache. See com.tangosol.net.cache.BinaryMemoryCalculator for additional details.
  • <class-scheme> - A custom unit calculator, specified as a class-scheme. The class specified within this scheme must implement the com.tangosol.net.cache.LocalCache.UnitCalculator interface.

Default value is FIXED.

<expiry-delay> Optional Specifies the amount of time from last update that entries will be kept by the cache before being marked as expired. Any attempt to read an expired entry will result in a reloading of the entry from the configured cache store. Expired values are periodically discarded from the cache based on the flush-delay.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

A value of zero implies no expiry.

Default value is zero.

<flush-delay> Optional Specifies the time interval between periodic cache flushes, which will discard expired entries from the cache, thus freeing resources.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

If expiry is enabled, the default flush-delay is 1m, otherwise a default of zero is used and automatic flushes are disabled.
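As an illustration, several of the elements described above might be combined within a local-scheme as follows; the scheme name sample-local and the specific values are assumptions for this sketch:

```xml
<local-scheme>
  <scheme-name>sample-local</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <!-- prune from 1000 entries down to the default low-units of 75% (750) -->
  <high-units>1000</high-units>
  <unit-calculator>FIXED</unit-calculator>
  <!-- entries expire one hour after last update; expired entries
       are discarded by a periodic flush every five minutes -->
  <expiry-delay>1h</expiry-delay>
  <flush-delay>5m</flush-delay>
</local-scheme>
```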

near-scheme

Used in: caching-schemes.

Description

The near-scheme defines a two tier cache consisting of a front-tier which caches a subset of a back-tier cache. The front-tier is generally a fast, size limited cache, while the back-tier is slower, but much higher capacity. A typical deployment might use a local-scheme for the front-tier, and a distributed-scheme for the back-tier. The result is that a portion of a large partitioned cache will be cached locally in-memory allowing for very fast read access. See the near cache sample for an example of a near cache configuration.

Implementation

The near scheme is implemented by the com.tangosol.net.cache.NearCache class.

Front-tier Invalidation

Specifying an invalidation-strategy defines a strategy that is used to keep the front tier of the near cache in sync with the back tier. Depending on that strategy a near cache is configured to listen to certain events occurring on the back tier and automatically update (or invalidate) the front portion of the near cache.

Elements

The following table describes the elements you can define within the near-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the near cache.

Any custom implementation must extend the com.tangosol.net.cache.NearCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom near cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<front-scheme> Required Specifies the cache-scheme to use in creating the front-tier cache.

Legal values are:

The eviction policy of the front-scheme defines which entries will be cached locally.
For example:

<front-scheme>
  <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>1000</high-units>
  </local-scheme>
</front-scheme>
<back-scheme> Required Specifies the cache-scheme to use in creating the back-tier cache.

Legal values are:

For example:

<back-scheme>
  <distributed-scheme>
    <scheme-ref>default-distributed</scheme-ref>
  </distributed-scheme>
</back-scheme>
<invalidation-strategy> Optional Specifies the strategy used to keep the front-tier in-sync with the back-tier.

Please see com.tangosol.net.cache.NearCache for more details.

Legal values are:

  • none - instructs the cache not to listen for invalidation events at all. This is the best choice for raw performance and scalability when business requirements permit the use of data which might not be absolutely current. Freshness of data can be guaranteed by use of a sufficiently brief eviction policy. The worst case performance is identical to a standard Distributed cache.
  • present - instructs the near cache to listen to the back map events related only to the items currently present in the front map.
    This strategy works best when cluster nodes have sticky data access patterns (for example, HTTP session management with a sticky load balancer).
  • all - instructs the near cache to listen to all back map events.
    This strategy is optimal for read-heavy access patterns where there is significant overlap between the front caches on each cluster member.
  • auto - instructs the near cache to switch between present and all strategies automatically based on the cache statistics.


Default value is auto.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
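Putting these elements together, a complete near-scheme might look like the following sketch; the scheme name sample-near and the referenced default-distributed scheme are assumptions:

```xml
<near-scheme>
  <scheme-name>sample-near</scheme-name>
  <!-- fast, size-limited front tier held locally in-memory -->
  <front-scheme>
    <local-scheme>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <!-- high-capacity partitioned back tier -->
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>default-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <!-- listen only for events on keys currently in the front map -->
  <invalidation-strategy>present</invalidation-strategy>
  <autostart>true</autostart>
</near-scheme>
```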

nio-file-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Configures an external store which uses a memory-mapped file for storage.

Implementation

This store manager is implemented by the com.tangosol.io.nio.MappedStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.

Elements

The following table describes the elements you can define within the nio-file-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the store manager.

Any custom implementation must extend the com.tangosol.io.nio.MappedStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom nio-file-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<initial-size> Optional Specifies the initial buffer size in megabytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1MB.

<maximum-size> Optional Specifies the maximum buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1024MB.

<directory> Optional Specifies the pathname for the root directory that the manager will use to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location will be used.
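For instance, an external-scheme backed by a memory-mapped file store might be sketched as follows; the directory path and sizes here are assumptions:

```xml
<external-scheme>
  <nio-file-manager>
    <initial-size>10M</initial-size>
    <maximum-size>100M</maximum-size>
    <!-- if this directory does not exist, a temporary file
         in the default location is used instead -->
    <directory>/tmp/coherence-cache</directory>
  </nio-file-manager>
</external-scheme>
```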

nio-memory-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Configures a store-manager which uses a memory region outside of the JVM heap for storage. This means that it does not affect the Java heap size or the related JVM garbage-collection performance that can be responsible for application pauses. See the NIO cache sample for an example of an NIO cache configuration.

Some JVMs (starting with 1.4) require the use of a command line parameter if the total NIO buffers will be greater than 64MB. For example: -XX:MaxDirectMemorySize=512M

Implementation

Implemented by the com.tangosol.io.nio.DirectStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.

Elements

The following table describes the elements you can define within the nio-memory-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the store manager.

Any custom implementation must extend the com.tangosol.io.nio.DirectStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom nio-memory-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<initial-size> Optional Specifies the initial buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1MB.

<maximum-size> Optional Specifies the maximum buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1024MB.
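As a sketch, an external-scheme using off-heap storage might be configured as follows; the sizes are assumptions (and the -XX:MaxDirectMemorySize JVM setting noted above applies if the total exceeds 64MB):

```xml
<external-scheme>
  <nio-memory-manager>
    <!-- grow the direct (off-heap) buffer from 10MB up to 256MB -->
    <initial-size>10M</initial-size>
    <maximum-size>256M</maximum-size>
  </nio-memory-manager>
</external-scheme>
```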

optimistic-scheme

Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme

The optimistic scheme defines a cache which fully replicates all of its data to all cluster nodes that are running the service.

Optimistic Locking

Unlike the replicated and partitioned caches, optimistic caches do not support concurrency control (locking). Individual operations against entries are atomic but there is no guarantee that the value stored in the cache does not change between atomic operations. The lack of concurrency control allows optimistic caches to support very fast write operations.

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map-scheme. For instance, an optimistic cache which uses a local cache for its backing map will result in cache entries being stored in-memory.

Elements

The following table describes the elements you can define within the optimistic-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are:

In order to ensure cache coherence, the backing-map of an optimistic cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
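A minimal optimistic-scheme sketch, assuming an in-memory local-scheme backing map; the scheme and service names are assumptions:

```xml
<optimistic-scheme>
  <scheme-name>sample-optimistic</scheme-name>
  <service-name>OptimisticCache</service-name>
  <!-- entries are stored in-memory on every node running the service -->
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</optimistic-scheme>
```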

overflow-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.

Description

The overflow-scheme defines a two tier cache consisting of a fast, size limited front-tier, and a slower but much higher capacity back-tier cache. When the size limited front tier fills up, evicted entries are transparently moved to the back tier. In the event of a cache miss, entries may move from the back to the front. A typical deployment might use a local-scheme for the front-tier, and an external-scheme for the back-tier, allowing for fast local caches with capacities larger than the JVM heap would allow.

Implementation

Implemented by either com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap, see expiry-enabled for details.

Entry Expiration

Overflow supports automatic expiration of entries based on the age of the value, as configured by the expiry-delay.

Elements

The following table describes the elements you can define within the overflow-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the overflow cache.

Any custom implementation must extend either the com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap class, and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom overflow cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<front-scheme> Required Specifies the cache-scheme to use in creating the front-tier cache.

Legal values are:

The eviction policy of the front-scheme determines which items are maintained in the front tier versus the back tier.
For Example:

<front-scheme>
  <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>1000</high-units>
  </local-scheme>
</front-scheme>
<back-scheme> Required Specifies the cache-scheme to use in creating the back-tier cache.

Legal values are:

For Example:

<back-scheme>
  <external-scheme>
    <lh-file-manager/>
  </external-scheme>
</back-scheme>
<miss-cache-scheme> Optional Specifies a cache-scheme for maintaining information on cache misses. For caches which are not expiry-enabled, the miss-cache is used to track keys which resulted in both a front and back tier cache miss. The knowledge that a key is not in either tier allows some operations to perform faster, as they can avoid querying the potentially slow back-tier. A size limited scheme may be used to control how many misses are tracked. If unspecified no cache-miss data will be maintained.

Legal values are:

<expiry-enabled> Optional Turns on support for automatically-expiring data, as provided by the com.tangosol.net.cache.CacheMap API.

When enabled the overflow-scheme will be implemented using com.tangosol.net.cache.OverflowMap, rather than com.tangosol.net.cache.SimpleOverflowMap.

Legal values are true or false.

Default value is false.
<expiry-delay> Optional Specifies the amount of time from last update that entries will be kept by the cache before being expired. Entries that are expired will not be accessible and will be evicted.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

A value of zero implies no expiry.

Default value is zero.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
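Combining these elements, an overflow-scheme might be sketched as follows; the scheme name and expiry values are assumptions:

```xml
<overflow-scheme>
  <scheme-name>sample-overflow</scheme-name>
  <!-- in-memory front tier limited to 1000 entries -->
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <!-- on-disk back tier for entries evicted from the front -->
  <back-scheme>
    <external-scheme>
      <lh-file-manager/>
    </external-scheme>
  </back-scheme>
  <!-- use OverflowMap so entries expire one hour after last update -->
  <expiry-enabled>true</expiry-enabled>
  <expiry-delay>1h</expiry-delay>
</overflow-scheme>
```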

paged-external-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme

Description

As with external-schemes, paged-external-schemes define caches which are not JVM heap based, allowing for greater storage capacity. The paged-external-scheme optimizes LRU eviction by using a paging approach.

Implementation

This scheme is implemented by the com.tangosol.net.cache.SerializationPagedCache class.

Paging

Cache entries are maintained over a series of pages, where each page is a separate com.tangosol.io.BinaryStore, obtained from the configured storage manager. When a page is created it is considered to be the "current" page, and all write operations are performed against this page. On a configurable interval the current page is closed and a new current page is created. Read operations for a given key are performed against the last page in which the key was stored. When the number of pages exceeds a configured maximum, the oldest page is destroyed and those items which were not updated since the page was closed are evicted. For example, configuring a cache with a duration of ten minutes per page, and a maximum of six pages, will result in entries being cached for at most an hour.

Paging improves performance by avoiding individual delete operations against the storage manager as cache entries are removed or evicted. Instead the cache simply releases its references to those entries, and relies on the eventual destruction of an entire page to free the associated storage of all page entries in a single stroke.

Pluggable Storage Manager

External schemes use a pluggable store manager to create and destroy pages, as well as to access entries within those pages. Supported store-managers include:

Persistence (long-term storage)

Paged external caches are used for temporary storage of large data sets, for example as the back-tier of an overflow-scheme. These caches are not usable for long-term storage (persistence), and will not survive beyond the life of the JVM. Clustered persistence should be configured via a read-write-backing-map-scheme on a distributed-scheme. If a non-clustered persistent cache is what is needed, refer to the external-scheme.

Elements

The following table describes the elements you can define within the paged-external-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the external paged cache.

Any custom implementation must extend the com.tangosol.net.cache.SerializationPagedCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom external paged cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<page-limit> Required Specifies the maximum number of active pages for the paged cache.

Legal values are positive integers between 2 and 3600.
<page-duration> Optional Specifies the length of time, in seconds, that a page in the paged cache is current.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

Legal values are between 5 and 604800 seconds (one week) and zero (no expiry).

Default value is zero.

<async-store-manager> Optional Configures the paged external cache to use an asynchronous storage manager wrapper for any other storage manager.
<custom-store-manager> Optional Configures the paged external cache to use a custom storage manager implementation.
<bdb-store-manager> Optional Configures the paged external cache to use Berkeley Database JE on-disk databases for cache storage.
<lh-file-manager> Optional Configures the paged external cache to use a Tangosol LH on-disk database for cache storage.
<nio-file-manager> Optional Configures the paged external cache to use a memory-mapped file for cache storage.
<nio-memory-manager> Optional Configures the paged external cache to use an off JVM heap, memory region for cache storage.
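The hour-long retention example from the Paging section above could be sketched as follows; the choice of nio-file-manager storage is an assumption:

```xml
<paged-external-scheme>
  <!-- six active pages of ten minutes each: entries are
       cached for at most one hour before their page is destroyed -->
  <page-limit>6</page-limit>
  <page-duration>10m</page-duration>
  <nio-file-manager/>
</paged-external-scheme>
```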

read-write-backing-map-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme.

Description

The read-write-backing-map-scheme defines a backing map which provides a size limited cache of a persistent store.

Implementation

The read-write-backing-map-scheme is implemented by the com.tangosol.net.cache.ReadWriteBackingMap class.

Cache of an External Store

A read write backing map maintains a cache backed by an external persistent cache store; cache misses will read-through to the backend store to retrieve the data. If a writable store is provided, cache writes will propagate to the cache store as well.

Refresh-Ahead Caching

When enabled the cache will watch for recently accessed entries which are about to expire, and asynchronously reload them from the cache store. This insulates the application from potentially slow reads against the cache store, as items periodically expire.

Write-Behind Caching

When enabled the cache will delay writes to the backend cache store. This allows for the writes to be batched into more efficient update blocks, which occur asynchronously from the client thread.

Elements

The following table describes the elements you can define within the read-write-backing-map-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the read write backing map.

Any custom implementation must extend the com.tangosol.net.cache.ReadWriteBackingMap class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom read write backing map implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<cachestore-scheme> Optional Specifies the store to cache. If unspecified the cached data will only reside within the internal cache, and only reflect operations performed on the cache itself.
<internal-cache-scheme> Required Specifies a cache-scheme which will be used to cache entries.

Legal values are:

<miss-cache-scheme> Optional Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified no cache-miss data will be maintained.

Legal values are:

<read-only> Optional Specifies whether the cache is read-only. If true, the cache will load data from the cachestore for read operations but will not perform any writes to the cachestore when the cache is updated.

Legal values are true or false.

Default value is false.
<write-delay> Optional Specifies the time interval for a write-behind queue to defer asynchronous writes to the cachestore by.

The value of this element must be in the following format:

[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

If zero, synchronous writes to the cachestore (without queuing) will take place, otherwise the writes will be asynchronous and deferred by the number of seconds after the last update to the value in the cache.

Default is zero.

<write-batch-factor> Optional The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries.

A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time prior to the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other "ripe" and "soft-ripe" entries).

This element is only applicable if asynchronous writes are enabled (i.e. the value of the write-delay element is greater than zero) and the CacheStore implements the storeAll() method.

The value of the element is expressed as a percentage of the write-delay interval. For example, if the value is zero, only "ripe" entries from the write-behind queue will be batched. On the other hand, if the value is 1.0, all currently queued entries will be batched and the value of the write-delay element will be effectively ignored.

Legal values are non-negative doubles less than or equal to 1.0.

Default is zero.
<write-requeue-threshold> Optional Specifies the maximum size of the write-behind queue for which failed cachestore write operations are requeued.

The purpose of this setting is to prevent flooding of the write-behind queue with failed cachestore operations. This can happen in situations where a large number of successive write operations fail.

If zero, write-behind requeueing is disabled.

Legal values are positive integers or zero.

Default is zero.
<refresh-ahead-factor> Optional The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries.

Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry.

This attribute is only applicable if the internal cache is a LocalCache, configured with automatic expiration.

The value is expressed as a percentage of the internal LocalCache expiration interval. If zero, refresh-ahead scheduling will be disabled. If 1.0, then any get operation will immediately trigger an asynchronous reload.

Legal values are non-negative doubles less than or equal to 1.0.

Default value is zero.
<rollback-cachestore-failures> Optional Specifies whether or not exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member).

If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated.

If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction.

Legal values are true or false.

Default value is false.
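A write-behind configuration might be sketched as follows; the scheme name and the com.example.DatabaseCacheStore class are assumptions for illustration:

```xml
<read-write-backing-map-scheme>
  <scheme-name>sample-rwbm</scheme-name>
  <!-- size-limited in-memory cache of the persistent store -->
  <internal-cache-scheme>
    <local-scheme>
      <high-units>10000</high-units>
      <expiry-delay>1h</expiry-delay>
    </local-scheme>
  </internal-cache-scheme>
  <!-- hypothetical cache store implementation -->
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.DatabaseCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- batch writes to the store, deferred by ten seconds -->
  <write-delay>10s</write-delay>
  <!-- reload recently accessed entries in the last 25% of their lifetime -->
  <refresh-ahead-factor>0.25</refresh-ahead-factor>
</read-write-backing-map-scheme>
```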

replicated-scheme

Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme

Description

The replicated scheme defines caches which fully replicate all their cache entries on each cluster node running the specified service. See the service overview for a more detailed description of replicated caches.

Clustered Concurrency Control

Replicated caches support cluster wide key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic but there is no guarantee that the value stored in the cache does not change between atomic operations.

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map scheme. For instance a replicated cache which uses a local cache for its backing map will result in cache entries being stored in-memory.

Elements

The following table describes the elements you can define within the replicated-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are:

In order to ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.

<standard-lease-milliseconds> Optional Specifies the duration of the standard lease in milliseconds. Once a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (e.g. higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value.

Legal values are positive long numbers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<lease-granularity> Optional Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<mobile-issues> Optional Specifies whether or not the lease issues should be transferred to the most recent lock holders.

Legal values are true or false.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.

versioned-backing-map-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme.

Description

The versioned-backing-map-scheme is an extension of a read-write-backing-map-scheme, defining a size limited cache of a persistent store. It utilizes object versioning to determine what updates need to be written to the persistent store.

Implementation

The versioned-backing-map-scheme scheme is implemented by the com.tangosol.net.cache.VersionedBackingMap class.

Cache of an External Store

As with the read-write-backing-map-scheme, a versioned backing map maintains a cache backed by an external persistent cache store; cache misses will read-through to the backend store to retrieve the data. Cache stores may also support updates to the backend data store.

Refresh-Ahead and Write-Behind Caching

As with the read-write-backing-map-scheme both the refresh-ahead and write-behind caching optimizations are supported.

Versioning

For entries whose values implement the com.tangosol.util.Versionable interface, the versioned backing map will utilize the version identifier to determine if an update needs to be written to the persistent store. The primary benefit of this feature is that in the event of cluster node failover, the backup node can determine if the most recent version of an entry has already been written to the persistent store, and if so it can avoid an extraneous write.

Elements

The following table describes the elements you can define within the versioned-backing-map-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the versioned backing map.

Any custom implementation must extend the com.tangosol.net.cache.VersionedBackingMap class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom versioned backing map implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<cachestore-scheme> Optional Specifies the store to cache. If unspecified the cached data will only reside within the internal cache, and only reflect operations performed on the cache itself.
<internal-cache-scheme> Required Specifies a cache-scheme which will be used to cache entries.

Legal values are:

<miss-cache-scheme> Optional Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified no cache-miss data will be maintained.

Legal values are:

<read-only> Optional Specifies if the cache is read only. If true the cache will load data from cachestore for read operations and will not perform any writing to the cachestore when the cache is updated.

Legal values are true or false.

Default value is false.
<write-delay> Optional Specifies the time interval by which a write-behind queue defers asynchronous writes to the cachestore.

The value of this element must be in the following format:

[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

If zero, synchronous writes to the cachestore (without queuing) will take place; otherwise the writes will be asynchronous, deferred by the specified interval after the last update to the value in the cache.

Default is zero.

<write-batch-factor> Optional The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries.

A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time prior to the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other "ripe" and "soft-ripe" entries).

This element is only applicable if asynchronous writes are enabled (i.e. the value of the write-delay element is greater than zero) and the CacheStore implements the storeAll() method.

The value of the element is expressed as a percentage of the write-delay interval. For example, if the value is zero, only "ripe" entries from the write-behind queue will be batched. On the other hand, if the value is 1.0, all currently queued entries will be batched and the value of the write-delay element will be effectively ignored.

Legal values are non-negative doubles less than or equal to 1.0.

Default is zero.
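
Taken together, write-delay and write-batch-factor determine which queued entries join a batched write. As an illustration (the values below are chosen for this sketch only, not recommendations):

```xml
<!-- illustrative values only: with a 10s write-delay, entries queued for
     at least 10s are "ripe"; with a write-batch-factor of 0.5, entries
     queued for at least 5s are "soft-ripe" and are swept into the same
     batched storeAll() operation as the ripe entries -->
<write-delay>10s</write-delay>
<write-batch-factor>0.5</write-batch-factor>
```
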
<write-requeue-threshold> Optional Specifies the maximum size of the write-behind queue for which failed cachestore write operations are requeued.

The purpose of this setting is to prevent flooding of the write-behind queue with failed cachestore operations. This can happen in situations where a large number of successive write operations fail.

If zero, write-behind requeueing is disabled.

Legal values are positive integers or zero.

Default is zero.
<refresh-ahead-factor> Optional The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries.

Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry.

This element is only applicable if the internal cache is a LocalCache, configured with automatic expiration.

The value is expressed as a percentage of the internal LocalCache expiration interval. If zero, refresh-ahead scheduling will be disabled. If 1.0, then any get operation will immediately trigger an asynchronous reload.

Legal values are non-negative doubles less than or equal to 1.0.

Default value is zero.
<rollback-cachestore-failures> Optional Specifies whether or not exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member).

If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated.

If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction.

Legal values are true or false.

Default value is false.
<version-persistent-scheme> Optional Specifies a cache-scheme for tracking the version identifier for entries in the persistent cachestore.
<version-transient-scheme> Optional Specifies a cache-scheme for tracking the version identifier for entries in the transient internal cache.
<manage-transient> Optional Specifies if the backing map is responsible for keeping the transient version cache up to date.

If disabled the backing map manages the transient version cache only for operations for which no other party is aware (such as entry expiry). This is used when there is already a transient version cache of the same name being maintained at a higher level, for instance within a versioned-near-scheme.

Legal values are true or false.

Default value is false.
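
Putting the elements above together, a versioned-backing-map-scheme configuration might look like the following sketch. The scheme names my-versioned-scheme, default-distributed and default-replicated, and the class com.acme.MyCacheStore, are placeholders assumed for this example, not names defined by Coherence:

```xml
<versioned-backing-map-scheme>
  <scheme-name>my-versioned-scheme</scheme-name>
  <!-- size-limited in-memory cache of the persistent store -->
  <internal-cache-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </internal-cache-scheme>
  <!-- hypothetical CacheStore implementation backing the map -->
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.acme.MyCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- track version identifiers for entries in the persistent store -->
  <version-persistent-scheme>
    <distributed-scheme>
      <scheme-ref>default-distributed</scheme-ref>
    </distributed-scheme>
  </version-persistent-scheme>
  <!-- track version identifiers for entries in the internal cache -->
  <version-transient-scheme>
    <replicated-scheme>
      <scheme-ref>default-replicated</scheme-ref>
    </replicated-scheme>
  </version-transient-scheme>
  <!-- enable write-behind with a 5 second delay -->
  <write-delay>5s</write-delay>
</versioned-backing-map-scheme>
```
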

versioned-near-scheme

Used in: caching-schemes.

As of Coherence release 2.3, it is suggested that a near-scheme be used instead of versioned-near-scheme. Legacy Coherence applications use versioned-near-scheme to ensure coherence through object versioning. As of Coherence 2.3 the near-scheme includes a better alternative, in the form of reliable and efficient front cache invalidation.

Description

As with the near-scheme, the versioned-near-scheme defines a two tier cache consisting of a small and fast front-end cache and a higher-capacity but slower back-end cache. The front-end and back-end are expressed as normal cache-schemes. A typical deployment might use a local-scheme for the front-end, and a distributed-scheme for the back-end.

Implementation

The versioned near scheme is implemented by the com.tangosol.net.cache.VersionedNearCache class.

Versioning

Object versioning is used to ensure coherence between the front and back tiers.

Elements

The following table describes the elements you can define within the versioned-near-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the versioned near cache.

The specified class must extend the com.tangosol.net.cache.VersionedNearCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom versioned near cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<front-scheme> Required Specifies the cache-scheme to use in creating the front-tier cache.

Legal values are:

For Example:

<front-scheme>
  <local-scheme>
    <scheme-ref>default-eviction</scheme-ref>
  </local-scheme>
</front-scheme>

or

<front-scheme>
  <class-scheme>
    <class-name>com.tangosol.util.SafeHashMap</class-name>
    <init-params></init-params>
  </class-scheme>
</front-scheme>
<back-scheme> Required Specifies the cache-scheme to use in creating the back-tier cache.

Legal values are:


For Example:

<back-scheme>
  <distributed-scheme>
    <scheme-ref>default-distributed</scheme-ref>
  </distributed-scheme>
</back-scheme>

<version-transient-scheme> Optional Specifies a scheme for versioning cache entries, which ensures coherence between the front and back tiers.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.

version-transient-scheme

Used in: versioned-near-scheme, versioned-backing-map-scheme.

Description

The version-transient-scheme defines a cache for storing object versioning information for use in versioned near-caches. Specifying a size limit on the specified scheme's backing-map allows control over how many version identifiers are tracked.

Elements

The following table describes the elements you can define within the version-transient-scheme element.

Element Required/Optional Description
<cache-name-suffix> Optional Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name.

Legal value is a string.

Default value is "-version".

For example, if the base cache is named "Sessions" and this name modifier is set to "-version", the associated version cache will be named "Sessions-version".
<replicated-scheme>
or
<distributed-scheme>
Required Specifies the scheme for the cache used to maintain the versioning information.

Legal values are:
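
As a sketch of the element in context (the scheme name default-replicated is a placeholder assumed for this example):

```xml
<!-- for a cache named "Sessions", version identifiers would be kept
     in a replicated cache named "Sessions-version" -->
<version-transient-scheme>
  <cache-name-suffix>-version</cache-name-suffix>
  <replicated-scheme>
    <scheme-ref>default-replicated</scheme-ref>
  </replicated-scheme>
</version-transient-scheme>
```
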

version-persistent-scheme

Used in: versioned-backing-map-scheme.

Description

The version-persistent-scheme defines a cache for storing object versioning information in a clustered cache. Specifying a size limit on the specified scheme's backing-map allows control over how many version identifiers are tracked.

Elements

The following table describes the elements you can define within the version-persistent-scheme element.

Element Required/Optional Description
<cache-name-suffix> Optional Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name.

Legal value is a string.

Default value is "-persist".

For example, if the base cache is named "Sessions" and this name modifier is set to "-persist", the associated version cache will be named "Sessions-persist".
<replicated-scheme>
or
<distributed-scheme>
Required Specifies the scheme for the cache used to maintain the versioning information.

Legal values are:

cache-config

Description

The cache-config element is the root element of the cache configuration descriptor.

At a high level a cache configuration consists of cache schemes and cache scheme mappings. Cache schemes describe a type of cache, for instance a database backed, distributed cache. Cache mappings define what scheme to use for a given cache name.

Elements

The following table describes the elements you can define within the cache-config element.

Element Required/Optional Description
<caching-scheme-mapping> Required Specifies the caching scheme that will be used for caches, based on the cache's name.
<caching-schemes> Required Defines the available caching-schemes for use in the cluster.

cache-mapping

Used in: caching-scheme-mapping

Description

Each cache-mapping element specifies the cache-scheme which is to be used for a given cache name or cache name pattern.

Elements

The following table describes the elements you can define within the cache-mapping element.

Element Required/Optional Description
<cache-name> Required Specifies a cache name or name pattern. The name is unique within a cache factory.

The following cache name patterns are supported:

  • exact match, i.e. "MyCache"
  • prefix match, i.e. "My*" that matches to any cache name starting with "My"
  • any match "*", that matches to any cache name

The patterns get matched in the order of specificity (more specific definition is selected whenever possible). For example, if both "MyCache" and "My*" mappings are specified, the scheme from the "MyCache" mapping will be used to configure a cache named "MyCache".

<scheme-name> Required Contains the caching scheme name. The name is unique within a configuration file.

Caching schemes are configured in the caching-schemes section.
<init-params> Optional Allows specifying replaceable cache scheme parameters.

During cache scheme parsing, any occurrence of any replaceable parameter in format "{parameter-name}" is replaced with the corresponding parameter value.

Consider the following cache mapping example:
<cache-mapping>
  <cache-name>My*</cache-name>
  <scheme-name>my-scheme</scheme-name>
  <init-params>
    <init-param>
      <param-name>cache-loader</param-name>
      <param-value>com.acme.MyCacheLoader</param-value>
    </init-param>
    <init-param>
      <param-name>size-limit</param-name>
      <param-value>1000</param-value>
    </init-param>
  </init-params>
</cache-mapping>

For any cache name match "My*", any occurrence of the literal "{cache-loader}" in any part of the corresponding cache-scheme element will be replaced with the string "com.acme.MyCacheLoader" and any occurrence of the literal "{size-limit}" will be replaced with the value of "1000".
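
A scheme referencing these macros then behaves as if written with the literal values. The my-scheme definition below is a hypothetical sketch assumed for this example:

```xml
<!-- hypothetical scheme definition using the replaceable parameters;
     after substitution, {size-limit} reads as 1000 and
     {cache-loader} as com.acme.MyCacheLoader -->
<local-scheme>
  <scheme-name>my-scheme</scheme-name>
  <high-units>{size-limit}</high-units>
  <cachestore-scheme>
    <class-scheme>
      <class-name>{cache-loader}</class-name>
    </class-scheme>
  </cachestore-scheme>
</local-scheme>
```
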

Since Coherence 3.0

cachestore-scheme

Used in: local-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.

Description

Cache store schemes define a mechanism for connecting a cache to a backend data store. The cache store scheme may use any class implementing either the com.tangosol.net.cache.CacheStore or com.tangosol.net.cache.CacheLoader interfaces, where the former offers read-write capabilities, while the latter is read-only. Custom implementations of these interfaces may be produced to connect Coherence to various data stores. See the database cache sample for an example of using a cachestore-scheme.

Elements

The following table describes the elements you can define within the cachestore-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-scheme> Optional Specifies the implementation of the cache store.

The specified class must implement one of the following two interfaces.

  • com.tangosol.net.cache.CacheStore - for read-write support
  • com.tangosol.net.cache.CacheLoader - for read-only support
<remote-cache-scheme> Optional Configures the cachestore-scheme to use Coherence*Extend as its cache store implementation.
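
For example, a cachestore-scheme backed by a custom CacheStore implementation might be declared as follows; com.acme.DatabaseCacheStore is a placeholder class name for this sketch:

```xml
<cachestore-scheme>
  <class-scheme>
    <!-- hypothetical class implementing com.tangosol.net.cache.CacheStore -->
    <class-name>com.acme.DatabaseCacheStore</class-name>
  </class-scheme>
</cachestore-scheme>
```
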

caching-scheme-mapping

Used in: cache-config

Description

Defines mappings between cache names, or cache name patterns, and caching-schemes. For instance you may define that caches whose names start with "accounts-" will use a distributed caching scheme, while caches whose names start with "rates-" will use a replicated caching scheme.
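
That example could be expressed as the following sketch, assuming schemes named default-distributed and default-replicated are defined in the caching-schemes section:

```xml
<caching-scheme-mapping>
  <!-- caches named "accounts-*" use the distributed scheme -->
  <cache-mapping>
    <cache-name>accounts-*</cache-name>
    <scheme-name>default-distributed</scheme-name>
  </cache-mapping>
  <!-- caches named "rates-*" use the replicated scheme -->
  <cache-mapping>
    <cache-name>rates-*</cache-name>
    <scheme-name>default-replicated</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>
```
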

Elements

The following table describes the elements you can define within the caching-scheme-mapping element.

Element Required/Optional Description
<cache-mapping> Optional Contains a single binding between a cache name and the caching scheme this cache will use.

caching-schemes

Used in: cache-config

Description

The caching-schemes element defines a series of cache scheme elements. Each cache scheme defines a type of cache, for instance a database backed partitioned cache, or a local cache with an LRU eviction policy. Scheme types are bound to actual caches using cache-scheme-mappings.

Scheme Types and Names

Each of the cache scheme element types is used to describe a different type of cache, for instance distributed, versus replicated. Multiple instances of the same type may be defined so long as each has a unique scheme-name.

For example the following defines two different distributed schemes:

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>DistributedOnDiskCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <external-scheme>
      <nio-file-manager>
        <initial-size>8MB</initial-size>
        <maximum-size>512MB</maximum-size>
        <directory></directory>
      </nio-file-manager>
    </external-scheme>
  </backing-map-scheme>
</distributed-scheme>

Nested Schemes

Some caching scheme types contain nested scheme definitions. For instance in the above example the distributed schemes include a nested scheme definition describing their backing map.

Scheme Inheritance

Caching schemes can be defined by specifying all the elements required for a given scheme type, or by inheriting from another named scheme of the same type, and selectively overriding specific values. Scheme inheritance is accomplished by including a <scheme-ref> element in the inheriting scheme containing the scheme-name of the scheme to inherit from.

For example:

The following two configurations will produce equivalent "DistributedInMemoryCache" scheme definitions:

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <eviction-policy>LRU</eviction-policy>
      <high-units>1000</high-units>
      <expiry-delay>1h</expiry-delay>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<local-scheme>
  <scheme-name>LocalSizeLimited</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>1000</high-units>
  <expiry-delay>1h</expiry-delay>
</local-scheme>

Please note that while the first is somewhat more compact, the second offers the ability to easily reuse the "LocalSizeLimited" scheme within multiple schemes. The following example demonstrates multiple schemes reusing the same "LocalSizeLimited" base definition, with the second scheme imposing a different expiry-delay.

<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<replicated-scheme>
  <scheme-name>ReplicatedInMemoryCache</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
      <expiry-delay>10m</expiry-delay>
    </local-scheme>
  </backing-map-scheme>
</replicated-scheme>

<local-scheme>
  <scheme-name>LocalSizeLimited</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>1000</high-units>
  <expiry-delay>1h</expiry-delay>
</local-scheme>

Elements

The following table describes the different types of schemes you can define within the caching-schemes element.

Element Required/Optional Description
<local-scheme> Optional Defines a cache scheme which provides on-heap cache storage.
<external-scheme> Optional Defines a cache scheme which provides off-heap cache storage, for instance on disk.
<paged-external-scheme> Optional Defines a cache scheme which provides off-heap cache storage, that is size-limited via time based paging.
<distributed-scheme> Optional Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes.
<replicated-scheme> Optional Defines a cache scheme where each cache entry is stored on all cluster nodes.
<optimistic-scheme> Optional Defines a replicated cache scheme which uses optimistic rather than pessimistic locking.
<near-scheme> Optional Defines a two tier cache scheme which consists of a fast local front-tier cache of a much larger back-tier cache.
<versioned-near-scheme> Optional Defines a near-scheme which uses object versioning to ensure coherence between the front and back tiers.
<overflow-scheme> Optional Defines a two tier cache scheme where entries evicted from a size-limited front-tier overflow and are stored in a much larger back-tier cache.
<invocation-scheme> Optional Defines an invocation service which can be used for performing custom operations in parallel across cluster nodes.
<read-write-backing-map-scheme> Optional Defines a backing map scheme which provides a cache of a persistent store.
<versioned-backing-map-scheme> Optional Defines a backing map scheme which utilizes object versioning to determine what updates need to be written to the persistent store.
<remote-cache-scheme> Optional Defines a cache scheme that enables caches to be accessed from outside a Coherence cluster via Coherence*Extend.
<class-scheme> Optional Defines a cache scheme using a custom cache implementation.

Any custom implementation must implement the java.util.Map interface, and include a zero-parameter public constructor.

Additionally if the contents of the Map can be modified by anything other than the CacheService itself (e.g. if the Map automatically expires its entries periodically or size-limits its contents), then the returned object must implement the com.tangosol.util.ObservableMap interface.
<disk-scheme> Optional Note: As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements.

class-scheme

Used in: caching-schemes, local-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme, cachestore-scheme, listener

Description

Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element will dictate what class or interface(s) must be extended. See the database cache sample for an example of using a class-scheme.

The class-scheme may be configured to either instantiate objects directly via their class-name, or indirectly via a class-factory-name and method-name. The class-scheme must be configured with either a class-name or class-factory-name and method-name.
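
The two instantiation styles can be sketched as follows. The factory class com.acme.MapFactory and its createMap method are hypothetical names assumed for this example:

```xml
<!-- direct instantiation via class-name -->
<class-scheme>
  <class-name>com.tangosol.util.SafeHashMap</class-name>
</class-scheme>

<!-- indirect instantiation via a static factory method
     (com.acme.MapFactory and createMap are hypothetical) -->
<class-scheme>
  <class-factory-name>com.acme.MapFactory</class-factory-name>
  <method-name>createMap</method-name>
</class-scheme>
```
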

Elements

The following table describes the elements you can define within the class-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Contains a fully specified Java class name to instantiate.

This class must extend an appropriate implementation class as dictated by the containing scheme and must declare the exact same set of public constructors as the superclass.
<class-factory-name> Optional Specifies a fully specified name of a Java class that will be used as a factory for object instantiation.
<method-name> Optional Specifies the name of a static factory method on the factory class which will perform object instantiation.
<init-params> Optional Specifies initialization parameters which are accessible by implementations which support the com.tangosol.run.xml.XmlConfigurable interface, or which include a public constructor with a matching signature.

custom-store-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Used to create and configure custom implementations of a store manager for use in external caches.

Elements

The following table describes the elements you can define within the custom-store-manager element.

Element Required/Optional Description
<class-name> Required Specifies the implementation of the store manager.

The specified class must implement the com.tangosol.io.BinaryStoreManager interface.
<init-params> Optional Specifies initialization parameters, for use in custom store manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.

disk-scheme

As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements.

distributed-scheme

Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme

Description

The distributed-scheme defines caches where the storage for entries is partitioned across cluster nodes. See the service overview for a more detailed description of partitioned caches. See the partitioned cache samples for examples of various distributed-scheme configurations.

Clustered Concurrency Control

Partitioned caches support cluster wide key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic but there is no guarantee that the value stored in the cache does not change between atomic operations.

Cache Clients

The partitioned cache service supports the concept of cluster nodes which do not contribute to the overall storage of the cluster. Nodes which are not storage enabled are considered "cache clients".

Cache Partitions

The cache entries are evenly segmented into a number of logical partitions, and each storage enabled cluster node running the specified partitioned service will be responsible for maintaining a fair-share of these partitions.

Key Association

By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries, the partitioned cache service will ensure that associated entries reside on the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
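
A key-associator might be declared within a distributed-scheme as in the following sketch; com.acme.OrderKeyAssociator is a hypothetical implementation class assumed for this example:

```xml
<!-- com.acme.OrderKeyAssociator is a hypothetical class that reports
     which keys are related, so they land in the same partition -->
<key-associator>
  <class-name>com.acme.OrderKeyAssociator</class-name>
</key-associator>
```
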

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map-scheme. For instance a partitioned cache which uses a local cache for its backing map will result in cache entries being stored in-memory on the storage enabled cluster nodes.

Failover

For the purposes of failover a configurable number of backups of the cache may be maintained in backup-storage across the cluster nodes. Each backup is also divided into partitions, and when possible a backup partition will not reside on the same physical machine as the primary partition. If a cluster node abruptly leaves the cluster, responsibility for its partitions will automatically be reassigned to the existing backups, and new backups of those partitions will be created (on remote nodes) in order to maintain the configured backup count.

Partition Redistribution

When a node joins or leaves the cluster, a background redistribution of partitions occurs to ensure that all cluster nodes manage a fair-share of the total number of partitions. The amount of bandwidth consumed by the background transfer of partitions is governed by the transfer-threshold.
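
The partitioning, failover, and threading settings described above come together in a scheme such as the following sketch (the values are illustrative, not recommendations):

```xml
<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <partition-count>257</partition-count>  <!-- prime; suited to a small cluster -->
  <backup-count>1</backup-count>          <!-- one backup copy of each partition -->
  <thread-count>4</thread-count>          <!-- daemon threads for the service -->
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```
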

Elements

The following table describes the elements you can define within the distributed-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are:

When using an overflow-based backing map it is important that the corresponding backup-storage be configured for overflow (potentially using the same scheme as the backing-map). See the partitioned cache with overflow sample for an example configuration.
<partition-count> Optional Specifies the number of partitions that a partitioned cache will be "chopped up" into. Each node running the partitioned cache service that has the local-storage option set to true will manage a "fair" (balanced) number of partitions. The number of partitions should be larger than the square of the number of cluster members to achieve a good balance, and it is suggested that the number be prime. Good defaults include 257 and 1021 and prime numbers in-between, depending on the expected cluster size. A list of the first 1,000 primes can be found at http://www.utm.edu/research/primes/lists/small/1000.txt

Legal values are prime numbers.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<key-associator> Optional Specifies a class that will be responsible for providing associations between keys, allowing associated keys to reside on the same partition.
<key-partitioning> Optional Specifies the class which will be responsible for assigning keys to partitions.

If unspecified the default key partitioning algorithm will be used, which ensures that keys are evenly segmented across partitions.
<backup-count> Optional Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache.

A value of 0 means that in the case of abnormal termination, some portion of the data in the cache will be lost. A value of N means that if up to N cluster nodes terminate at once, the cache data will be preserved.

For a partitioned cache of size M, the total memory usage in the cluster does not depend on the number of cluster nodes and is on the order of M*(N+1).

Recommended values are 0, 1 or 2.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<backup-storage> Optional Specifies the type and configuration for the partitioned cache backup storage.
<thread-count> Optional Specifies the number of daemon threads used by the partitioned cache service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<standard-lease-milliseconds> Optional Specifies the duration of the standard lease in milliseconds. Once a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (e.g. higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value.

Legal values are positive long numbers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<lease-granularity> Optional Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node, and any thread running on the cluster node that obtained the lock can release it.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<transfer-threshold> Optional Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service, or when a member of the service leaves, the remaining nodes redistribute bucket ownership. During this process, the existing data is re-balanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower will make the distribution process take longer, but will reduce network bandwidth utilization during this activity.

Legal values are integers greater than zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<local-storage> Optional Specifies whether or not a cluster node will contribute storage to the cluster, i.e. maintain partitions. When disabled the node is considered a cache client.
Normally this value should be left unspecified within the configuration file, and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor.

Legal values are true or false.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
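The elements above can be combined as follows. This is a minimal sketch of a distributed scheme, not a definitive configuration; the scheme and service names are illustrative:

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- store primary copies of entries in an in-memory local cache -->
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <!-- keep one backup copy of each entry on another member -->
  <backup-count>1</backup-count>
  <!-- start the cache service automatically on cache servers -->
  <autostart>true</autostart>
</distributed-scheme>
```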

external-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme

Description

External schemes define caches which are not JVM heap based, allowing for greater storage capacity. See the local cache samples for examples of various external cache configurations.

Implementation

This scheme is implemented by:

  • com.tangosol.net.cache.SerializationMap - for unlimited size caches
  • com.tangosol.net.cache.SerializationCache - for size limited caches

The implementation type is chosen based on the following rule:

  • if the high-units element is specified and not zero then SerializationCache is used;
  • otherwise SerializationMap is used.
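For example, the following sketch would select SerializationCache, because high-units is non-zero (the scheme name, store manager choice, and limit are illustrative):

```xml
<external-scheme>
  <scheme-name>example-external-limited</scheme-name>
  <!-- off-heap NIO storage for the serialized entries -->
  <nio-memory-manager/>
  <!-- a non-zero high-units selects the size-limited SerializationCache -->
  <high-units>10000</high-units>
</external-scheme>
```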

Pluggable Storage Manager

External schemes use a pluggable store manager to store and retrieve binary key value pairs. Supported store managers include:

Size Limited Cache

The cache may be configured as size-limited, which means that once it reaches its maximum allowable size it prunes itself.

Eviction against disk-based caches can be expensive; consider using a paged-external-scheme for such cases.

Entry Expiration

External schemes support automatic expiration of entries based on the age of the value, as configured by the expiry-delay.

Persistence (long-term storage)

External caches are generally used for temporary storage of large data sets, for example as the back-tier of an overflow-scheme. Certain implementations do, however, support persistence for non-clustered caches; see the bdb-store-manager and lh-file-manager for details. Clustered persistence should be configured via a read-write-backing-map-scheme on a distributed-scheme.

Elements

The following table describes the elements you can define within the external-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the external cache.

Any custom implementation must extend one of the following classes:

  • com.tangosol.net.cache.SerializationCache - for size limited caches
  • com.tangosol.net.cache.SerializationMap - for unlimited size caches
  • com.tangosol.net.cache.SimpleSerializationMap - for unlimited size caches

and declare the exact same set of public constructors as the superclass.

<init-params> Optional Specifies initialization parameters, for use in custom external cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<high-units> Optional Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement. Once this limit is exceeded, the cache will begin the pruning process, evicting the least recently used entries until the number of units is brought below this limit. The scheme's class-name element may be used to provide custom extensions to SerializationCache, which implement alternative eviction policies.

Legal values are positive integers or zero. Zero implies no limit.

Default value is zero.
<expiry-delay> Optional Specifies the amount of time from last update that entries will be kept by the cache before being expired. Entries that are expired will not be accessible and will be evicted.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

A value of zero implies no expiry.

Default value is zero.

<async-store-manager> Optional Configures the external cache to use an asynchronous storage manager wrapper for any other storage manager.
<custom-store-manager> Optional Configures the external cache to use a custom storage manager implementation.
<bdb-store-manager> Optional Configures the external cache to use Berkeley Database JE on-disk databases for cache storage.
<lh-file-manager> Optional Configures the external cache to use a Tangosol LH on-disk database for cache storage.
<nio-file-manager> Optional Configures the external cache to use a memory-mapped file for cache storage.
<nio-memory-manager> Optional Configures the external cache to use an off-heap (outside the JVM heap) memory region for cache storage.

init-param

Used in: init-params.

Defines an individual initialization parameter.

Elements

The following table describes the elements you can define within the init-param element.

Element Required/Optional Description
<param-name> Optional Contains the name of the initialization parameter.

For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
<init-param>
<param-name>sTableName</param-name>
<param-value>EmployeeTable</param-value>
</init-param>
<init-param>
<param-name>iMaxSize</param-name>
<param-value>2000</param-value>
</init-param>
</init-params>
<param-type> Optional Contains the Java type of the initialization parameter.

The following standard types are supported:

  • java.lang.String (a.k.a. string)
  • java.lang.Boolean (a.k.a. boolean)
  • java.lang.Integer (a.k.a. int)
  • java.lang.Long (a.k.a. long)
  • java.lang.Double (a.k.a. double)
  • java.math.BigDecimal
  • java.io.File
  • java.sql.Date
  • java.sql.Time
  • java.sql.Timestamp

For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>EmployeeTable</param-value>
</init-param>
<init-param>
<param-type>int</param-type>
<param-value>2000</param-value>
</init-param>
</init-params>

Please refer to the list of available Parameter Macros.

<param-value> Optional Contains the value of the initialization parameter.

The value is in the format specific to the Java type of the parameter.

Please refer to the list of available Parameter Macros.

invocation-scheme

Used in: caching-schemes.

Description

Defines an Invocation Service. The invocation service may be used to perform custom operations in parallel on any number of cluster nodes. See the com.tangosol.net.InvocationService API for additional details.

Elements

The following table describes the elements you can define within the invocation-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<thread-count> Optional Specifies the number of daemon threads used by the invocation service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not this service should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
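A minimal invocation scheme sketch using the elements above (the scheme name and thread count are illustrative):

```xml
<invocation-scheme>
  <scheme-name>example-invocation</scheme-name>
  <!-- five daemon threads execute invocations off the service thread -->
  <thread-count>5</thread-count>
  <autostart>true</autostart>
</invocation-scheme>
```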

jms-acceptor

Used in: acceptor-config.

Description

The jms-acceptor element specifies the configuration info for a connection acceptor that accepts connections from Coherence*Extend clients over JMS.

For additional details and example configurations see Configuring and Using Coherence*Extend.

Elements

The following table describes the elements you can define within the jms-acceptor element.

Element Required/Optional Description
<queue-connection-factory-name> Required Specifies the JNDI name of the JMS QueueConnectionFactory used by the connection acceptor.
<queue-name> Required Specifies the JNDI name of the JMS Queue used by the connection acceptor.
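A sketch of a jms-acceptor within its enclosing acceptor-config; the JNDI names are illustrative and must match the resources configured in the JMS provider:

```xml
<acceptor-config>
  <jms-acceptor>
    <!-- JNDI names of the JMS resources (example values) -->
    <queue-connection-factory-name>jms/ConnectionFactory</queue-connection-factory-name>
    <queue-name>jms/CoherenceQueue</queue-name>
  </jms-acceptor>
</acceptor-config>
```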

jms-initiator

Used in: initiator-config.

Description

The jms-initiator element specifies the configuration info for a connection initiator that enables Coherence*Extend clients to connect to a remote cluster via JMS.

For additional details and example configurations see Configuring and Using Coherence*Extend.

Elements

The following table describes the elements you can define within the jms-initiator element.

Element Required/Optional Description
<queue-connection-factory-name> Required Specifies the JNDI name of the JMS QueueConnectionFactory used by the connection initiator.
<queue-name> Required Specifies the JNDI name of the JMS Queue used by the connection initiator.
<connect-timeout> Optional Specifies the maximum amount of time to wait while establishing a connection with a connection acceptor.

The value of this element must be in the following format:

[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of milliseconds is assumed.

Default value is an infinite timeout.
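A sketch of a jms-initiator within its enclosing initiator-config; the JNDI names and timeout are illustrative:

```xml
<initiator-config>
  <jms-initiator>
    <!-- JNDI names of the JMS resources (example values) -->
    <queue-connection-factory-name>jms/ConnectionFactory</queue-connection-factory-name>
    <queue-name>jms/CoherenceQueue</queue-name>
    <!-- give up if no connection acceptor responds within 30 seconds -->
    <connect-timeout>30s</connect-timeout>
  </jms-initiator>
</initiator-config>
```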

key-associator

Used in: distributed-scheme

Description

Specifies an implementation of a com.tangosol.net.partition.KeyAssociator which will be used to determine associations between keys, allowing related keys to reside on the same partition.

Alternatively the cache's keys may manage the association by implementing the com.tangosol.net.cache.KeyAssociation interface.

Elements

The following table describes the elements you can define within the key-associator element.

Element Required/Optional Description
<class-name> Required The name of a class that implements the com.tangosol.net.partition.KeyAssociator interface. This implementation must have a zero-parameter public constructor.

Default value is the value specified in the tangosol-coherence.xml descriptor.
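A sketch of a key-associator within a distributed scheme; the associator class is hypothetical and stands in for any implementation of com.tangosol.net.partition.KeyAssociator with a zero-parameter public constructor:

```xml
<distributed-scheme>
  <scheme-name>example-associated</scheme-name>
  <key-associator>
    <!-- hypothetical associator that co-locates, e.g., orders and their line items -->
    <class-name>com.mycompany.cache.OrderLineAssociator</class-name>
  </key-associator>
</distributed-scheme>
```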

key-partitioning

Used in: distributed-scheme

Description

Specifies an implementation of a com.tangosol.net.partition.KeyPartitioningStrategy which will be used to determine the partition in which a key will reside.

Elements

The following table describes the elements you can define within the key-partitioning element.

Element Required/Optional Description
<class-name> Required The name of a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface. This implementation must have a zero-parameter public constructor.

Default value is the value specified in the tangosol-coherence.xml descriptor.

lh-file-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Configures a store manager which will use a Tangosol LH on-disk embedded database for storage. See the persistent disk cache and overflow cache samples for examples of LH based store configurations.

Implementation

Implemented by the com.tangosol.io.lh.LHBinaryStoreManager class. The BinaryStore objects created by this class are instances of com.tangosol.io.lh.LHBinaryStore.

Elements

The following table describes the elements you can define within the lh-file-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the LH BinaryStoreManager.

Any custom implementation must extend the com.tangosol.io.lh.LHBinaryStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom LH file manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<directory> Optional Specifies the pathname for the root directory that the LH file manager will use to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location will be used.
<file-name> Optional Specifies the name for a non-temporary (persistent) file that the LH file manager will use to store data in. Specifying this parameter will cause the lh-file-manager to use non-temporary database instances. This is intended only for local caches that are backed by a cache loader from a non-temporary file, so that the local cache can be pre-populated from the disk file on startup. When specified it is recommended that it utilize the {cache-name} macro.

Normally this parameter should be left unspecified, indicating that temporary storage is to be used.
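A sketch of a persistent LH configuration using the directory and file-name elements above; the directory path is illustrative, and the file name uses the {cache-name} macro as recommended:

```xml
<external-scheme>
  <scheme-name>example-lh-persistent</scheme-name>
  <lh-file-manager>
    <directory>/data/coherence</directory>
    <!-- a persistent (non-temporary) database file per cache -->
    <file-name>{cache-name}.store</file-name>
  </lh-file-manager>
</external-scheme>
```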

listener

Used in: local-scheme, external-scheme, paged-external-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.

Description

The Listener element specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on a cache.

Elements

The following table describes the elements you can define within the listener element.

Element Required/Optional Description
<class-scheme> Required Specifies the full class name of the listener implementation to use.

The specified class must implement the com.tangosol.util.MapListener interface.
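A sketch of a listener attached to a local scheme; the listener class is hypothetical and stands in for any implementation of com.tangosol.util.MapListener:

```xml
<local-scheme>
  <scheme-name>example-listened</scheme-name>
  <listener>
    <class-scheme>
      <!-- hypothetical listener that, e.g., logs insert/update/delete events -->
      <class-name>com.mycompany.cache.LoggingMapListener</class-name>
    </class-scheme>
  </listener>
</local-scheme>
```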

local-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme

Description

Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near-scheme. See the local cache samples for examples of various local cache configurations.

Implementation

Local caches are implemented by the com.tangosol.net.cache.LocalCache class.

Cache of an External Store

A local cache may be backed by an external cache store; cache misses will read through to the back-end store to retrieve the data. If a writable store is provided, cache writes will also propagate to the cache store. For optimized read/write access against a cache store, see the read-write-backing-map-scheme.

Size Limited Cache

The cache may be configured as size-limited, which means that once it reaches its maximum allowable size it prunes itself back to a specified smaller size, choosing which entries to evict according to its eviction-policy. The entries and size limitations are measured in terms of units as calculated by the scheme's unit-calculator.

Entry Expiration

The local cache supports automatic expiration of entries based on the age of the value, as configured by the expiry-delay.

Elements

The following table describes the elements you can define within the local-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<class-name> Optional Specifies a custom implementation of the local cache.

Any custom implementation must extend the com.tangosol.net.cache.LocalCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom local cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<cachestore-scheme> Optional Specifies the store which is being cached. If unspecified the cached data will only reside in memory, and only reflect operations performed on the cache itself.
<eviction-policy> Optional Specifies the type of eviction policy to use.

Legal values are:

  • LRU - Least Recently Used eviction policy chooses which entries to evict based on how recently they were last accessed, evicting those that were not accessed for the longest period first.
  • LFU - Least Frequently Used eviction policy chooses which entries to evict based on how often they are being accessed, evicting those that are accessed least frequently first.
  • HYBRID - Hybrid eviction policy chooses which entries to evict based on the combination (weighted score) of how often and how recently they were accessed, evicting those that are accessed least frequently and were not accessed for the longest period first.
  • <class-scheme> - A custom eviction policy, specified as a class-scheme. The class specified within this scheme must implement the com.tangosol.net.cache.LocalCache.EvictionPolicy interface.

Default value is HYBRID.

<high-units> Optional Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator. Once this limit is exceeded, the cache will begin the pruning process, evicting entries according to the eviction policy until the low-units size is reached.

Legal values are positive integers or zero. Zero implies no limit.

Default value is 0.
<low-units> Optional Contains the number of units that the cache will be pruned down to when pruning takes place. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator. When pruning occurs, entries will continue to be evicted according to the eviction policy until this size is reached.

Legal values are positive integers or zero. Zero implies the default.

Default value is 75% of the high-units setting (i.e. for a high-units setting of 1000 the default low-units will be 750).
<unit-calculator> Optional Specifies the type of unit calculator to use.

A unit calculator is used to determine the cost (in "units") of a given object.

Legal values are:

  • FIXED - A unit calculator that assigns an equal weight of 1 to all cached objects.
  • BINARY - A unit calculator that assigns an object a weight equal to the number of bytes of memory required to cache the object. This requires that the objects are Binary instances, as in a Partitioned cache. See com.tangosol.net.cache.BinaryMemoryCalculator for additional details.
  • <class-scheme> - A custom unit calculator, specified as a class-scheme. The class specified within this scheme must implement the com.tangosol.net.cache.LocalCache.UnitCalculator interface.

Default value is FIXED.

<expiry-delay> Optional Specifies the amount of time from last update that entries will be kept by the cache before being marked as expired. Any attempt to read an expired entry will result in a reloading of the entry from the configured cache store. Expired values are periodically discarded from the cache based on the flush-delay.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

A value of zero implies no expiry.

Default value is zero.

<flush-delay> Optional Specifies the time interval between periodic cache flushes, which will discard expired entries from the cache, thus freeing resources.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

If expiry is enabled, the default flush-delay is 1m, otherwise a default of zero is used and automatic flushes are disabled.
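Combining the elements above, a size-limited, expiring local cache might be sketched as follows (the scheme name and values are illustrative):

```xml
<local-scheme>
  <scheme-name>example-local</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <!-- prune at 1000 units; low-units defaults to 75%, i.e. 750 -->
  <high-units>1000</high-units>
  <!-- expire entries one hour after the last update -->
  <expiry-delay>1h</expiry-delay>
  <!-- discard expired entries once per minute -->
  <flush-delay>1m</flush-delay>
</local-scheme>
```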

near-scheme

Used in: caching-schemes.

Description

The near-scheme defines a two tier cache consisting of a front-tier which caches a subset of a back-tier cache. The front-tier is generally a fast, size limited cache, while the back-tier is slower, but much higher capacity. A typical deployment might use a local-scheme for the front-tier, and a distributed-scheme for the back-tier. The result is that a portion of a large partitioned cache will be cached locally in-memory allowing for very fast read access. See the near cache sample for an example of a near cache configuration.

Implementation

The near scheme is implemented by the com.tangosol.net.cache.NearCache class.

Front-tier Invalidation

Specifying an invalidation-strategy defines a strategy that is used to keep the front tier of the near cache in sync with the back tier. Depending on that strategy a near cache is configured to listen to certain events occurring on the back tier and automatically update (or invalidate) the front portion of the near cache.

Elements

The following table describes the elements you can define within the near-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the near cache.

Any custom implementation must extend the com.tangosol.net.cache.NearCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom near cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<front-scheme> Required Specifies the cache-scheme to use in creating the front-tier cache.

Legal values are:

The eviction policy of the front-scheme defines which entries will be cached locally.
For example:

<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>1000</high-units>
</local-scheme>
</front-scheme>
<back-scheme> Required Specifies the cache-scheme to use in creating the back-tier cache.

Legal values are:

For example:

<back-scheme>
<distributed-scheme>
<scheme-ref>default-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy> Optional Specifies the strategy used to keep the front-tier in sync with the back-tier.

Please see com.tangosol.net.cache.NearCache for more details.

Legal values are:

  • none - instructs the cache not to listen for invalidation events at all. This is the best choice for raw performance and scalability when business requirements permit the use of data which might not be absolutely current. Freshness of data can be guaranteed by use of a sufficiently brief eviction policy. The worst case performance is identical to a standard Distributed cache.
  • present - instructs the near cache to listen to the back map events related only to the items currently present in the front map.
    This strategy works best when cluster nodes have sticky data access patterns (for example, HTTP session management with a sticky load balancer).
  • all - instructs the near cache to listen to all back map events.
    This strategy is optimal for read-heavy access patterns where there is significant overlap between the front caches on each cluster member.
  • auto - instructs the near cache to switch between present and all strategies automatically based on the cache statistics.


Default value is auto.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
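Putting the front-scheme and back-scheme examples above together, a complete near scheme might be sketched as follows; the names and size are illustrative, and the example-distributed scheme is assumed to be defined elsewhere in the configuration file:

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <!-- small, fast in-memory front-tier -->
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <!-- high-capacity partitioned back-tier, assumed defined elsewhere -->
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <!-- invalidate only entries currently held in the front map -->
  <invalidation-strategy>present</invalidation-strategy>
  <autostart>true</autostart>
</near-scheme>
```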

nio-file-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Configures an external store which uses a memory-mapped file for storage.

Implementation

This store manager is implemented by the com.tangosol.io.nio.MappedStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.

Elements

The following table describes the elements you can define within the nio-file-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the store manager.

Any custom implementation must extend the com.tangosol.io.nio.MappedStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom nio-file-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<initial-size> Optional Specifies the initial buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1MB.

<maximum-size> Optional Specifies the maximum buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1024MB.

<directory> Optional Specifies the pathname for the root directory that the manager will use to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location will be used.
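A sketch of an nio-file-manager within an external scheme; the scheme name, sizes, and directory path are illustrative:

```xml
<external-scheme>
  <scheme-name>example-nio-file</scheme-name>
  <nio-file-manager>
    <!-- memory-mapped file grows from 10MB up to 100MB -->
    <initial-size>10M</initial-size>
    <maximum-size>100M</maximum-size>
    <directory>/data/coherence</directory>
  </nio-file-manager>
</external-scheme>
```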

nio-memory-manager

Used in: external-scheme, paged-external-scheme, async-store-manager.

Description

Configures a store manager which uses an off-heap memory region (outside the JVM heap) for storage, which means that it does not affect the Java heap size or the related JVM garbage-collection performance that can be responsible for application pauses. See the NIO cache sample for an example of an NIO cache configuration.

Some JVMs (starting with 1.4) require the use of a command line parameter if the total NIO buffers will be greater than 64MB. For example: -XX:MaxDirectMemorySize=512M

Implementation

Implemented by the com.tangosol.io.nio.DirectStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.

Elements

The following table describes the elements you can define within the nio-memory-manager element.

Element Required/Optional Description
<class-name> Optional Specifies a custom implementation of the store manager.

Any custom implementation must extend the com.tangosol.io.nio.DirectStoreManager class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom nio-memory-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<initial-size> Optional Specifies the initial buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1MB.

<maximum-size> Optional Specifies the maximum buffer size in bytes.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?

where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:

  • K or k (kilo, 2^10)
  • M or m (mega, 2^20)
  • G or g (giga, 2^30)
  • T or t (tera, 2^40)

If the value does not contain a factor, a factor of mega is assumed.

Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023.

Default value is 1024MB.
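For example, the following sketch (the scheme name is illustrative) configures an external-scheme backed by an nio-memory-manager with an initial buffer of 1MB growing to at most 128MB:

<external-scheme>
  <scheme-name>example-nio-memory</scheme-name>
  <nio-memory-manager>
    <initial-size>1M</initial-size>
    <maximum-size>128M</maximum-size>
  </nio-memory-manager>
</external-scheme>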

optimistic-scheme

Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme

Description

The optimistic scheme defines a cache which fully replicates all of its data to all cluster nodes that are running the service.

Optimistic Locking

Unlike the replicated and partitioned caches, optimistic caches do not support concurrency control (locking). Individual operations against entries are atomic but there is no guarantee that the value stored in the cache does not change between atomic operations. The lack of concurrency control allows optimistic caches to support very fast write operations.

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map-scheme. For instance, an optimistic cache which uses a local cache for its backing map will result in cache entries being stored in-memory.

Elements

The following table describes the elements you can define within the optimistic-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are:

In order to ensure cache coherence, the backing-map of an optimistic cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
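Putting the above elements together, a minimal optimistic-scheme might look as follows (scheme and service names are illustrative):

<optimistic-scheme>
  <scheme-name>example-optimistic</scheme-name>
  <service-name>OptimisticCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</optimistic-scheme>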

overflow-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.

Description

The overflow-scheme defines a two-tier cache consisting of a fast, size-limited front-tier and a slower but much higher capacity back-tier cache. When the size-limited front tier fills up, evicted entries are transparently moved to the back tier. In the event of a cache miss, entries may move from the back tier to the front tier. A typical deployment might use a local-scheme for the front-tier, and an external-scheme for the back-tier, allowing for fast local caches with capacities larger than the JVM heap would allow.

Implementation

Implemented by either com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap, see expiry-enabled for details.

Entry Expiration

Overflow supports automatic expiration of entries based on the age of the value, as configured by the expiry-delay.

Elements

The following table describes the elements you can define within the overflow-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the overflow cache.

Any custom implementation must extend either the com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap class, and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom overflow cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<front-scheme> Required Specifies the cache-scheme to use in creating the front-tier cache.

Legal values are:

The eviction policy of the front-scheme determines which items are kept in the front tier versus the back tier.
For Example:

<front-scheme>
  <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>1000</high-units>
  </local-scheme>
</front-scheme>
<back-scheme> Required Specifies the cache-scheme to use in creating the back-tier cache.

Legal values are:

For Example:

<back-scheme>
  <external-scheme>
    <lh-file-manager/>
  </external-scheme>
</back-scheme>
<miss-cache-scheme> Optional Specifies a cache-scheme for maintaining information on cache misses. For caches which are not expiry-enabled, the miss-cache is used to track keys which resulted in both a front and back tier cache miss. The knowledge that a key is not in either tier allows some operations to perform faster, as they can avoid querying the potentially slow back-tier. A size-limited scheme may be used to control how many misses are tracked. If unspecified, no cache-miss data will be maintained.

Legal values are:

<expiry-enabled> Optional Turns on support for automatically-expiring data, as provided by the com.tangosol.net.cache.CacheMap API.

When enabled the overflow-scheme will be implemented using com.tangosol.net.cache.OverflowMap, rather than com.tangosol.net.cache.SimpleOverflowMap.

Legal values are true or false.

Default value is false.
<expiry-delay> Optional Specifies the amount of time from last update that entries will be kept by the cache before being expired. Entries that are expired will not be accessible and will be evicted.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

A value of zero implies no expiry.

Default value is zero.

<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
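For example, the following sketch (the scheme name is illustrative) combines a size-limited local-scheme front-tier with an external-scheme back-tier:

<overflow-scheme>
  <scheme-name>example-overflow</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <external-scheme>
      <lh-file-manager/>
    </external-scheme>
  </back-scheme>
</overflow-scheme>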

paged-external-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme

Description

As with external-schemes, paged-external-schemes define caches which are not JVM heap based, allowing for greater storage capacity. The paged-external-scheme optimizes LRU eviction by using a paging approach.

Implementation

This scheme is implemented by the com.tangosol.net.cache.SerializationPagedCache class.

Paging

Cache entries are maintained over a series of pages, where each page is a separate com.tangosol.io.BinaryStore, obtained from the configured storage manager. When a page is created it is considered to be the "current" page, and all write operations are performed against this page. On a configurable interval the current page is closed and a new current page is created. Read operations for a given key are performed against the last page in which the key was stored. When the number of pages exceeds a configured maximum, the oldest page is destroyed, and those items which were not updated since the page was closed are evicted. For example, configuring a cache with a duration of ten minutes per page and a maximum of six pages will result in entries being cached for at most an hour.

Paging improves performance by avoiding individual delete operations against the storage manager as cache entries are removed or evicted. Instead the cache simply releases its references to those entries, and relies on the eventual destruction of an entire page to free the associated storage of all page entries in a single stroke.

Pluggable Storage Manager

External schemes use a pluggable store manager to create and destroy pages, as well as to access entries within those pages. Supported store-managers include:

Persistence (long-term storage)

Paged external caches are used for temporary storage of large data sets, for example as the back-tier of an overflow-scheme. These caches are not usable for long-term storage (persistence), and will not survive beyond the life of the JVM. Clustered persistence should be configured via a read-write-backing-map-scheme on a distributed-scheme. If a non-clustered persistent cache is what is needed, refer to the external-scheme.

Elements

The following table describes the elements you can define within the paged-external-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the external paged cache.

Any custom implementation must extend the com.tangosol.net.cache.SerializationPagedCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom external paged cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<page-limit> Required Specifies the maximum number of active pages for the paged cache.

Legal values are positive integers between 2 and 3600.
<page-duration> Optional Specifies the length of time, in seconds, that a page in the paged cache is current.

The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

Legal values are between 5 and 604800 seconds (one week) and zero (no expiry).

Default value is zero.

<async-store-manager> Optional Configures the paged external cache to use an asynchronous storage manager wrapper for any other storage manager.
<custom-store-manager> Optional Configures the paged external cache to use a custom storage manager implementation.
<bdb-store-manager> Optional Configures the paged external cache to use Berkeley Database JE on-disk databases for cache storage.
<lh-file-manager> Optional Configures the paged external cache to use a Tangosol LH on-disk database for cache storage.
<nio-file-manager> Optional Configures the paged external cache to use a memory-mapped file for cache storage.
<nio-memory-manager> Optional Configures the paged external cache to use an off JVM heap, memory region for cache storage.
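For example, the following sketch (the scheme name is illustrative) configures a paged cache matching the one-hour scenario described above, with six pages of ten minutes each, stored via memory-mapped files:

<paged-external-scheme>
  <scheme-name>example-paged-external</scheme-name>
  <page-limit>6</page-limit>
  <page-duration>10m</page-duration>
  <nio-file-manager/>
</paged-external-scheme>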

proxy-config

Used in: proxy-scheme.

Description

The proxy-config element specifies the configuration info for the clustered service proxies managed by a proxy service. A service proxy is an intermediary between a remote client (connected to the cluster via a connection acceptor) and a clustered service used by the remote client.

Elements

The following table describes the elements you can define within the proxy-config element.

Element Required/Optional Description
<cache-service-proxy> Optional Specifies the configuration info for a clustered cache service proxy managed by the proxy service.

proxy-scheme

Used in: caching-schemes.

Description

The proxy-scheme element contains the configuration info for a clustered service that allows Coherence*Extend clients to connect to the cluster and use clustered services without having to join the cluster.

Elements

The following table describes the elements you can define within the proxy-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service.
<thread-count> Optional Specifies the number of daemon threads used by the service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<acceptor-config> Required Contains the configuration of the connection acceptor used by the service to accept connections from Coherence*Extend clients and to allow them to use the services offered by the cluster without having to join the cluster.
<proxy-config> Optional Contains the configuration of the clustered service proxies managed by this service.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not this service should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
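For example, the following sketch (names and address are illustrative) configures a proxy service that accepts Coherence*Extend client connections on 192.168.0.2:9099:

<proxy-scheme>
  <scheme-name>example-proxy</scheme-name>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>5</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>192.168.0.2</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>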

read-write-backing-map-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme.

Description

The read-write-backing-map-scheme defines a backing map which provides a size limited cache of a persistent store.

Implementation

The read-write-backing-map-scheme is implemented by the com.tangosol.net.cache.ReadWriteBackingMap class.

Cache of an External Store

A read-write backing map maintains a cache backed by an external persistent cache store; cache misses will read-through to the backend store to retrieve the data. If a writable store is provided, cache writes will propagate to the cache store as well.

Refresh-Ahead Caching

When enabled the cache will watch for recently accessed entries which are about to expire, and asynchronously reload them from the cache store. This insulates the application from potentially slow reads against the cache store, as items periodically expire.

Write-Behind Caching

When enabled the cache will delay writes to the backend cache store. This allows for the writes to be batched into more efficient update blocks, which occur asynchronously from the client thread.

Elements

The following table describes the elements you can define within the read-write-backing-map-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the read write backing map.

Any custom implementation must extend the com.tangosol.net.cache.ReadWriteBackingMap class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom read write backing map implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<cachestore-scheme> Optional Specifies the store to cache. If unspecified the cached data will only reside within the internal cache, and only reflect operations performed on the cache itself.
<internal-cache-scheme> Required Specifies a cache-scheme which will be used to cache entries.

Legal values are:

<miss-cache-scheme> Optional Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified no cache-miss data will be maintained.

Legal values are:

<read-only> Optional Specifies if the cache is read-only. If true, the cache will load data from the cachestore for read operations and will not perform any writes to the cachestore when the cache is updated.

Legal values are true or false.

Default value is false.
<write-delay> Optional Specifies the time interval for a write-behind queue to defer asynchronous writes to the cachestore by.

The value of this element must be in the following format:

[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

If zero, synchronous writes to the cachestore (without queuing) will take place; otherwise, writes will be asynchronous and deferred by the specified number of seconds after the last update to the value in the cache.

Default is zero.

<write-batch-factor> Optional The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries.

A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time prior to the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other "ripe" and "soft-ripe" entries).

This element is only applicable if asynchronous writes are enabled (i.e. the value of the write-delay element is greater than zero) and the CacheStore implements the storeAll() method.

The value of the element is expressed as a percentage of the write-delay interval. For example, if the value is zero, only "ripe" entries from the write-behind queue will be batched. On the other hand, if the value is 1.0, all currently queued entries will be batched and the value of the write-delay element will be effectively ignored.

Legal values are non-negative doubles less than or equal to 1.0.

Default is zero.
<write-requeue-threshold> Optional Specifies the maximum size of the write-behind queue for which failed cachestore write operations are requeued.

The purpose of this setting is to prevent flooding of the write-behind queue with failed cachestore operations. This can happen in situations where a large number of successive write operations fail.

If zero, write-behind requeueing is disabled.

Legal values are positive integers or zero.

Default is zero.
<refresh-ahead-factor> Optional The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries.

Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry.

This attribute is only applicable if the internal cache is a LocalCache, configured with automatic expiration.

The value is expressed as a percentage of the internal LocalCache expiration interval. If zero, refresh-ahead scheduling will be disabled. If 1.0, then any get operation will immediately trigger an asynchronous reload.

Legal values are non-negative doubles less than or equal to 1.0.

Default value is zero.
<rollback-cachestore-failures> Optional Specifies whether or not exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member).

If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated.

If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction.

Legal values are true or false.

Default value is false.
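For example, the following sketch configures a write-behind backing map with a ten-second write delay; the cache store class name is a hypothetical placeholder for an application-supplied com.tangosol.net.cache.CacheStore implementation:

<read-write-backing-map-scheme>
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.DatabaseCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <write-delay>10s</write-delay>
</read-write-backing-map-scheme>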

remote-cache-scheme

Used in: cachestore-scheme, caching-schemes, near-scheme.

Description

The remote-cache-scheme element contains the configuration info necessary to use a clustered cache from outside the cluster via Coherence*Extend.

Elements

The following table describes the elements you can define within the remote-cache-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.
<initiator-config> Required Contains the configuration of the connection initiator used by the service to establish a connection with the cluster.
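For example, the following sketch (names and address are illustrative) defines a remote cache that connects to a proxy service listening on 192.168.0.2:9099:

<remote-cache-scheme>
  <scheme-name>example-remote</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>192.168.0.2</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>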

remote-invocation-scheme

Used in: caching-schemes

Description

The remote-invocation-scheme element contains the configuration info necessary to execute tasks within the context of a cluster without having to first join the cluster. This scheme uses Coherence*Extend to connect to the cluster.

Elements

The following table describes the elements you can define within the remote-invocation-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service.
<initiator-config> Required Contains the configuration of the connection initiator used by the service to establish a connection with the cluster.

replicated-scheme

Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme

Description

The replicated scheme defines caches which fully replicate all their cache entries on each cluster node running the specified service. See the service overview for a more detailed description of replicated caches.

Clustered Concurrency Control

Replicated caches support cluster wide key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic but there is no guarantee that the value stored in the cache does not change between atomic operations.

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map scheme. For instance, a replicated cache which uses a local cache for its backing map will result in cache entries being stored in-memory.

Elements

The following table describes the elements you can define within the replicated-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured from within the operational descriptor.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are:

In order to ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.

<standard-lease-milliseconds> Optional Specifies the duration of the standard lease in milliseconds. Once a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (e.g. higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value.

Legal values are positive long numbers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<lease-granularity> Optional Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<mobile-issues> Optional Specifies whether or not the lease issues should be transferred to the most recent lock holders.

Legal values are true or false.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
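Putting these elements together, a minimal replicated-scheme with thread-level lock ownership might look as follows (scheme and service names are illustrative):

<replicated-scheme>
  <scheme-name>example-replicated</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <lease-granularity>thread</lease-granularity>
  <autostart>true</autostart>
</replicated-scheme>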

tcp-acceptor

Used in: acceptor-config.

Description

The tcp-acceptor element specifies the configuration info for a connection acceptor that accepts connections from Coherence*Extend clients over TCP/IP.

For additional details and example configurations see Configuring and Using Coherence*Extend.

Elements

The following table describes the elements you can define within the tcp-acceptor element.

Element Required/Optional Description
<local-address> Required Specifies the local address (IP or DNS name) and port that the TCP/IP ServerSocket opened by the connection acceptor will listen on.

For example, the following will instruct the connection acceptor to bind the TCP/IP ServerSocket to 192.168.0.2:9099:

<local-address>
  <address>192.168.0.2</address>
  <port>9099</port>
</local-address>

tcp-initiator

Used in: initiator-config.

Description

The tcp-initiator element specifies the configuration info for a connection initiator that enables Coherence*Extend clients to connect to a remote cluster via TCP/IP.

For additional details and example configurations see Configuring and Using Coherence*Extend.

Elements

The following table describes the elements you can define within the tcp-initiator element.

Element Required/Optional Description
<local-address> Optional Specifies the local address (IP or DNS name) that the TCP/IP Socket opened by the connection initiator will be bound to.

For example, the following will instruct the connection initiator to bind the TCP/IP Socket to the IP address 192.168.0.1:

<local-address>
  <address>192.168.0.1</address>
</local-address>
<remote-addresses> Required Contains the <socket-address> of one or more TCP/IP connection acceptors. The TCP/IP connection initiator uses this information to establish a TCP/IP connection with a remote cluster. The TCP/IP connection initiator will attempt to connect to the addresses in a random order, until either the list is exhausted or a TCP/IP connection is established.

For example, the following will instruct the connection initiator to attempt to connect to 192.168.0.2:9099 and 192.168.0.3:9099 in a random order:

<remote-addresses>
  <socket-address>
    <address>192.168.0.2</address>
    <port>9099</port>
  </socket-address>
  <socket-address>
    <address>192.168.0.3</address>
    <port>9099</port>
  </socket-address>
</remote-addresses>
<connect-timeout> Optional Specifies the maximum amount of time to wait while establishing a connection with a connection acceptor.

The value of this element must be in the following format:

[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of milliseconds is assumed.

Default value is an infinite timeout.

version-persistent-scheme

Used in: versioned-backing-map-scheme.

Description

The version-persistent-scheme defines a cache for storing object versioning information in a clustered cache. Specifying a size limit on the specified scheme's backing-map allows control over how many version identifiers are tracked.

Elements

The following table describes the elements you can define within the version-persistent-scheme element.

Element Required/Optional Description
<cache-name-suffix> Optional Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name.

Legal value is a string.

Default value is "-persist".

For example, if the base cache is named "Sessions" and this name modifier is set to "-persist", the associated version cache will be named "Sessions-persist".
<replicated-scheme>
or
<distributed-scheme>
Required Specifies the scheme for the cache used to maintain the versioning information.

Legal values are:

version-transient-scheme

Used in: versioned-near-scheme, versioned-backing-map-scheme.

Description

The version-transient-scheme defines a cache for storing object versioning information for use in versioned near-caches. Specifying a size limit on the specified scheme's backing-map allows control over how many version identifiers are tracked.

Elements

The following table describes the elements you can define within the version-transient-scheme element.

Element Required/Optional Description
<cache-name-suffix> Optional Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name.

Legal value is a string.

Default value is "-version".

For example, if the base cache is named "Sessions" and this name modifier is set to "-version", the associated version cache will be named "Sessions-version".
<replicated-scheme>
or
<distributed-scheme>
Required Specifies the scheme for the cache used to maintain the versioning information.

Legal values are:
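For example, the following sketch keeps version identifiers in a distributed cache whose name is derived by appending the default "-version" suffix; the referenced scheme name is an illustrative placeholder for a distributed-scheme defined elsewhere in the configuration:

<version-transient-scheme>
  <cache-name-suffix>-version</cache-name-suffix>
  <distributed-scheme>
    <scheme-ref>example-distributed</scheme-ref>
  </distributed-scheme>
</version-transient-scheme>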

versioned-backing-map-scheme

Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme.

Description

The versioned-backing-map-scheme is an extension of a read-write-backing-map-scheme, defining a size limited cache of a persistent store. It utilizes object versioning to determine what updates need to be written to the persistent store.

Implementation

The versioned-backing-map-scheme scheme is implemented by the com.tangosol.net.cache.VersionedBackingMap class.

Cache of an External Store

As with the read-write-backing-map-scheme, a versioned backing map maintains a cache backed by an external persistent cache store; cache misses will read-through to the backend store to retrieve the data. Cache stores may also support updates to the backend data store.

Refresh-Ahead and Write-Behind Caching

As with the read-write-backing-map-scheme, both the refresh-ahead and write-behind caching optimizations are supported.

Versioning

For entries whose values implement the com.tangosol.util.Versionable interface, the versioned backing map will utilize the version identifier to determine if an update needs to be written to the persistent store. The primary benefit of this feature is that in the event of cluster node failover, the backup node can determine if the most recent version of an entry has already been written to the persistent store, and if so it can avoid an extraneous write.

Elements

The following table describes the elements you can define within the versioned-backing-map-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the versioned backing map.

Any custom implementation must extend the com.tangosol.net.cache.VersionedBackingMap class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom versioned backing map implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<cachestore-scheme> Optional Specifies the store to cache. If unspecified, the cached data will only reside within the internal cache, and only reflect operations performed on the cache itself.
<internal-cache-scheme> Required Specifies a cache-scheme which will be used to cache entries.

Legal values are:

<miss-cache-scheme> Optional Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. Knowing that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified, no cache-miss data will be maintained.

Legal values are:

<read-only> Optional Specifies whether the cache is read-only. If true, the cache will load data from the cachestore for read operations and will not perform any writes to the cachestore when the cache is updated.

Legal values are true or false.

Default value is false.
<write-delay> Optional Specifies the time interval by which a write-behind queue defers asynchronous writes to the cachestore.

The value of this element must be in the following format:

[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?

where the first non-digits (from left to right) indicate the unit of time duration:

  • MS or ms (milliseconds)
  • S or s (seconds)
  • M or m (minutes)
  • H or h (hours)
  • D or d (days)

If the value does not contain a unit, a unit of seconds is assumed.

If zero, synchronous writes to the cachestore (without queuing) will take place; otherwise writes will be asynchronous, deferred by the specified interval after the last update to the value in the cache.

Default is zero.
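For example, to defer writes by ten seconds (the value shown is purely illustrative):

```xml
<!-- writes are queued and flushed asynchronously 10 seconds after the last update -->
<write-delay>10s</write-delay>
```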

<write-batch-factor> Optional The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries.

A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time prior to the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other "ripe" and "soft-ripe" entries).

This element is only applicable if asynchronous writes are enabled (i.e. the value of the write-delay element is greater than zero) and the CacheStore implements the storeAll() method.

The value of the element is expressed as a percentage of the write-delay interval. For example, if the value is zero, only "ripe" entries from the write-behind queue will be batched. On the other hand, if the value is 1.0, all currently queued entries will be batched and the value of the write-delay element will be effectively ignored.

Legal values are non-negative doubles less than or equal to 1.0.

Default is zero.
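As an illustration of the soft-ripe calculation (the values are hypothetical):

```text
write-delay        = 10s
write-batch-factor = 0.25
soft-ripe time     = ripe time - (0.25 * 10s) = ripe time - 2.5s
```

Under these values, an entry that has been queued for at least 7.5 seconds becomes "soft-ripe" and is eligible for inclusion in a batched storeAll() write along with fully "ripe" entries.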
<write-requeue-threshold> Optional Specifies the maximum size of the write-behind queue for which failed cachestore write operations are requeued.

The purpose of this setting is to prevent flooding of the write-behind queue with failed cachestore operations. This can happen in situations where a large number of successive write operations fail.

If zero, write-behind requeueing is disabled.

Legal values are positive integers or zero.

Default is zero.
<refresh-ahead-factor> Optional The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries.

Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry.

This element is only applicable if the internal cache is a LocalCache, configured with automatic expiration.

The value is expressed as a percentage of the internal LocalCache expiration interval. If zero, refresh-ahead scheduling will be disabled. If 1.0, then any get operation will immediately trigger an asynchronous reload.

Legal values are non-negative doubles less than or equal to 1.0.

Default value is zero.
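For example, pairing a factor of 0.5 with an internal LocalCache that expires entries after one minute (both values are illustrative) causes any access during the last 30 seconds of an entry's lifetime to schedule an asynchronous reload:

```xml
<internal-cache-scheme>
  <local-scheme>
    <!-- entries expire one minute after insertion/update -->
    <expiry-delay>1m</expiry-delay>
  </local-scheme>
</internal-cache-scheme>
<!-- soft-expiration at 50% of the expiry interval, i.e. 30 seconds -->
<refresh-ahead-factor>0.5</refresh-ahead-factor>
```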
<rollback-cachestore-failures> Optional Specifies whether or not exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member).

If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated.

If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction.

Legal values are true or false.

Default value is false.
<version-persistent-scheme> Optional Specifies a cache-scheme for tracking the version identifier for entries in the persistent cachestore.
<version-transient-scheme> Optional Specifies a cache-scheme for tracking the version identifier for entries in the transient internal cache.
<manage-transient> Optional Specifies if the backing map is responsible for keeping the transient version cache up to date.

If disabled, the backing map manages the transient version cache only for operations of which no other party is aware (such as entry expiry). This is used when a transient version cache of the same name is already being maintained at a higher level, for instance within a versioned-near-scheme.

Legal values are true or false.

Default value is false.
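Putting the elements together, a versioned-backing-map-scheme might be sketched as follows (the scheme names and the cachestore class are hypothetical, and the referenced schemes are assumed to be defined elsewhere in the file):

```xml
<versioned-backing-map-scheme>
  <scheme-name>example-versioned</scheme-name>
  <!-- in-memory cache of the persistent store -->
  <internal-cache-scheme>
    <local-scheme>
      <scheme-ref>default-eviction</scheme-ref>
    </local-scheme>
  </internal-cache-scheme>
  <!-- hypothetical CacheStore implementation -->
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.ExampleCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- write-behind: defer writes by five seconds -->
  <write-delay>5s</write-delay>
  <!-- version caches used to avoid extraneous writes on failover -->
  <version-persistent-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </version-persistent-scheme>
  <version-transient-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </version-transient-scheme>
</versioned-backing-map-scheme>
```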

versioned-near-scheme

Used in: caching-schemes.

As of Coherence release 2.3, it is suggested that a near-scheme be used instead of a versioned-near-scheme. Legacy Coherence applications use the versioned-near-scheme to ensure coherence through object versioning. As of Coherence 2.3, the near-scheme provides a better alternative, in the form of reliable and efficient front cache invalidation.

Description

As with the near-scheme, the versioned-near-scheme defines a two-tier cache consisting of a small and fast front-end and a higher-capacity but slower back-end cache. The front-end and back-end are expressed as normal cache-schemes. A typical deployment might use a local-scheme for the front-end and a distributed-scheme for the back-end.

Implementation

The versioned near scheme is implemented by the com.tangosol.net.cache.VersionedNearCache class.

Versioning

Object versioning is used to ensure coherence between the front and back tiers.

Elements

The following table describes the elements you can define within the versioned-near-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<class-name> Optional Specifies a custom implementation of the versioned near cache.

The specified class must extend the com.tangosol.net.cache.VersionedNearCache class and declare the exact same set of public constructors.
<init-params> Optional Specifies initialization parameters, for use in custom versioned near cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<front-scheme> Required Specifies the cache-scheme to use in creating the front-tier cache.

Legal values are local-scheme or class-scheme.

For Example:

<front-scheme>
  <local-scheme>
    <scheme-ref>default-eviction</scheme-ref>
  </local-scheme>
</front-scheme>

or

<front-scheme>
  <class-scheme>
    <class-name>com.tangosol.util.SafeHashMap</class-name>
    <init-params></init-params>
  </class-scheme>
</front-scheme>
<back-scheme> Required Specifies the cache-scheme to use in creating the back-tier cache.

Legal values are:


For Example:

<back-scheme>
  <distributed-scheme>
    <scheme-ref>default-distributed</scheme-ref>
  </distributed-scheme>
</back-scheme>

<version-transient-scheme> Optional Specifies a scheme for versioning cache entries, which ensures coherence between the front and back tiers.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
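A complete versioned-near-scheme tying these elements together might be sketched as follows (the scheme name example-versioned-near is hypothetical, and the referenced schemes are assumed to be defined elsewhere in the file):

```xml
<versioned-near-scheme>
  <scheme-name>example-versioned-near</scheme-name>
  <!-- small, fast front-tier cache -->
  <front-scheme>
    <local-scheme>
      <scheme-ref>default-eviction</scheme-ref>
    </local-scheme>
  </front-scheme>
  <!-- higher-capacity back-tier cache -->
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>default-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <!-- version cache used to keep the front and back tiers coherent -->
  <version-transient-scheme>
    <cache-name-suffix>-version</cache-name-suffix>
    <distributed-scheme>
      <scheme-ref>default-distributed</scheme-ref>
    </distributed-scheme>
  </version-transient-scheme>
  <autostart>true</autostart>
</versioned-near-scheme>
```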