Characteristics of a Distributed Caching Scheme

The characteristics of a cache are defined by its caching scheme.

TIBCO BusinessEvents uses a distributed caching scheme, in which the cached object data is partitioned among the storage processing units (PUs) in the cache cluster for efficient use of memory. This means that no two storage PUs are responsible for the same item of data.
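
The following Java sketch is a loose illustration of this partitioning idea, not of TIBCO BusinessEvents internals; the class and method names are assumptions. It shows how hashing each key to a single owning node guarantees that only one node is responsible for any given item:

```java
import java.util.List;

// Illustrative sketch only; names are assumptions and do not reflect
// TIBCO BusinessEvents internals.
public class PartitionedCacheSketch {

    // Stands in for one storage PU (JVM) in the cache cluster.
    record StorageNode(String name) {}

    private final List<StorageNode> storageNodes;

    PartitionedCacheSketch(List<StorageNode> storageNodes) {
        this.storageNodes = storageNodes;
    }

    // Every key hashes to exactly one owning node, so no two nodes hold
    // primary responsibility for the same item of data.
    StorageNode ownerOf(Object key) {
        int index = Math.floorMod(key.hashCode(), storageNodes.size());
        return storageNodes.get(index);
    }

    public static void main(String[] args) {
        PartitionedCacheSketch cache = new PartitionedCacheSketch(List.of(
                new StorageNode("storage-1"),
                new StorageNode("storage-2"),
                new StorageNode("storage-3")));
        // The owner is known from the key alone, so a client can reach the
        // data in a single network hop.
        System.out.println(cache.ownerOf("Order-42"));
    }
}
```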

A distributed caching scheme has the following characteristics:

  • Data is written to the cache and to one backup on a different JVM (the replication count can be configured as none, one, or more backup copies). Therefore, memory usage and write performance are better than in a replicated caching scheme. There is a slight performance penalty because a modification to the cache is not considered complete until all backups have acknowledged receipt of it; the benefit is that data consistency is assured. (A minimal sketch of this write path follows this list.)
    Tip: Each piece of data is managed by only one cluster node, so data access over the network is a "single-hop" operation. This type of access is extremely scalable, because it can use point-to-point communication and take advantage of a switched network.
  • Read access is slightly affected because the data is not always local; the cache is distributed across the cache agent nodes.
  • Data is distributed evenly across the JVMs, so the responsibility for managing the data is automatically load-balanced across the cluster. The physical location of each cache is transparent to services (so, for example, API developers do not need to be concerned about cache location).
  • You can add more cache agents as needed for easy scaling.
  • The system can scale linearly. No two servers (JVMs) are responsible for the same piece of cached data, so the size of the cache, and the processing power associated with managing it, grow linearly as the cluster grows.
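
As referenced in the first item above, the following Java sketch is a loose illustration of the backed-up write behavior; all class and method names are assumptions, not the product's API. A put is applied to the primary node and to every configured backup, and is treated as complete only once all backups have acknowledged it:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only; names are assumptions, not the product's API.
public class BackedUpWriteSketch {

    // Stands in for the in-memory store of one node (JVM).
    static class Node {
        private final Map<String, Object> data = new ConcurrentHashMap<>();

        // Applies the write and returns true as its acknowledgment.
        boolean store(String key, Object value) {
            data.put(key, value);
            return true;
        }
    }

    private final Node primary;
    private final List<Node> backups; // size = configured backup count (0, 1, or more)

    BackedUpWriteSketch(Node primary, List<Node> backups) {
        this.primary = primary;
        this.backups = backups;
    }

    // The write is considered complete only after the primary and every
    // configured backup have acknowledged the modification.
    void put(String key, Object value) {
        primary.store(key, value);
        for (Node backup : backups) {
            if (!backup.store(key, value)) {
                throw new IllegalStateException("Backup did not acknowledge " + key);
            }
        }
    }

    public static void main(String[] args) {
        // One backup copy on a different node, matching the single-backup case in the text.
        BackedUpWriteSketch cache = new BackedUpWriteSketch(new Node(), List.of(new Node()));
        cache.put("Order-42", "NEW");
    }
}
```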

Overall, the distributed cache system is the best option for systems with a large data footprint in memory.