The cache characteristics are defined by a caching scheme. The provided caching schemes are all distributed caching schemes (see Provided Caching Schemes). This section explains why a distributed scheme is the best option.

In a distributed cache, cached object data is partitioned between the storage nodes in the cache cluster for efficient use of memory. This means that no two storage nodes are responsible for the same item of data. A distributed caching scheme has the following characteristics:
● Data is written to the cache and to one backup on a different JVM (or to more than one backup copy, depending on configuration). Therefore, memory usage and write performance are better than in a replicated cache. There is a slight performance penalty because modifications to the cache are not considered complete until all backups have acknowledged receipt of the modification. The benefit is that data consistency is assured.
● Each piece of data is managed by only one cluster node, so data access over the network is a "single-hop" operation. This type of access is extremely scalable, because it can use point-to-point communication and take advantage of a switched network (see the single-hop lookup sketch later in this section).
● Read access is slightly slower than with a replicated cache because data is not local; the cache is distributed across the nodes.
● Data is distributed evenly across the JVMs, so the responsibility for managing the data is automatically load-balanced across the cluster. The physical location of each cache is transparent to services (so, for example, API developers don’t need to be concerned about cache location).
● The system can scale in a linear manner. No two servers (nodes) are responsible for the same piece of cached data, so the size of the cache and the processing power associated with managing the cache can grow linearly as the cluster grows.

Overall, the distributed cache system is the best option for systems with a large data footprint in memory.

The object manager handles failover of the cache data on a failed storage node, and it handles failback when the node recovers. When a storage node (that is, a node hosting a data cache) fails, the object manager redistributes objects among the remaining storage nodes using backup copies, provided that the remaining cache nodes are sufficient in number to provide the configured number of backups and have sufficient memory to handle the additional load. However, note that if one node fails, and then another cache node fails before the data can be redistributed, data loss may occur. If redistribution is successful, the complete cache of all objects, plus the specified number of backups, is restored. When the failed node starts again, the object management layer again redistributes cache data.

Specifically, when a node fails, the node that maintains the backup of the failed node's cache data takes over primary responsibility for that data. If two backup copies are specified, then the node responsible for the second backup copy is promoted to primary backup node. Additional backup copies are made according to the configuration requirements. When a new cache node comes up, data is again redistributed across the cluster to make use of the new node.

The size of a cache can be unlimited or limited. The limit is specified as a number of cache entries. By default, the cache is of unlimited size, but it can be limited using configuration options.

Performance is best when all the data is in cache. However, if the amount of data exceeds the amount of memory available on the cache machines, you must limit the cache size and use a backing store to hold the additional data. With a limited cache, objects are evicted from the cache when the number of entries exceeds the limit. The evicted objects are transparently loaded from the backing store when needed by agents.
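The following Java sketch is a simplified illustration of the partitioned ownership and single-hop access described above. The class and method names are hypothetical, not the product API, and a real distributed cache uses a partition map or consistent hashing rather than a plain modulo so that adding a node moves only part of the data. The point is that each key has exactly one primary owner, the backup copy lives on a different node (JVM), and a write completes only after both copies are stored.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionedCacheSketch {

    // One storage node (JVM) holds a primary map plus backup copies of other nodes' data.
    static class StorageNode {
        final String name;
        final Map<String, String> primary = new HashMap<>();
        final Map<String, String> backup = new HashMap<>();
        StorageNode(String name) { this.name = name; }
    }

    private final List<StorageNode> nodes;

    PartitionedCacheSketch(List<StorageNode> nodes) { this.nodes = nodes; }

    // No two nodes own the same key: the primary owner is derived from the key's hash.
    private int primaryIndex(String key) {
        return Math.floorMod(key.hashCode(), nodes.size());
    }

    // The backup copy is kept on a different node.
    private int backupIndex(String key) {
        return (primaryIndex(key) + 1) % nodes.size();
    }

    // A write is complete only after the backup copy is also stored:
    // the slight write penalty that buys data consistency.
    public void put(String key, String value) {
        nodes.get(primaryIndex(key)).primary.put(key, value);
        nodes.get(backupIndex(key)).backup.put(key, value);
    }

    // A read is a single hop straight to the one node that owns the key.
    public String get(String key) {
        return nodes.get(primaryIndex(key)).primary.get(key);
    }

    public static void main(String[] args) {
        List<StorageNode> cluster = List.of(
                new StorageNode("node-1"), new StorageNode("node-2"), new StorageNode("node-3"));
        PartitionedCacheSketch cache = new PartitionedCacheSketch(cluster);
        cache.put("order-42", "state=ACTIVE");
        System.out.println(cache.get("order-42")); // prints: state=ACTIVE
    }
}
```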
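To make the failover behavior concrete, the sketch below (again with hypothetical names, not the product's object manager) shows what promoting a backup means: when a storage node fails, the surviving node that holds the backup copy of its data becomes the new primary owner, and a fresh backup copy is written to another surviving node so that the configured number of backups is restored. It assumes at least two surviving nodes with enough memory for the additional load.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FailoverSketch {

    static class StorageNode {
        final String name;
        final Map<String, String> primary = new HashMap<>();
        final Map<String, String> backup = new HashMap<>();
        StorageNode(String name) { this.name = name; }
    }

    // Conceptually what the object-management layer does when 'failed' leaves the cluster:
    // promote each backup copy of the failed node's data to primary on the node that holds it,
    // then write a fresh backup elsewhere so the configured backup count is restored.
    static void failOver(StorageNode failed, List<StorageNode> survivors) {
        Set<String> promoted = new HashSet<>();
        for (StorageNode survivor : survivors) {
            for (Map.Entry<String, String> e : new HashMap<>(survivor.backup).entrySet()) {
                String key = e.getKey();
                if (failed.primary.containsKey(key) && promoted.add(key)) {
                    survivor.primary.put(key, e.getValue());  // the backup copy becomes the primary
                    survivor.backup.remove(key);
                    StorageNode next = survivors.get((survivors.indexOf(survivor) + 1) % survivors.size());
                    next.backup.put(key, e.getValue());       // new backup on another surviving node
                }
            }
        }
    }

    public static void main(String[] args) {
        StorageNode a = new StorageNode("node-a");
        StorageNode b = new StorageNode("node-b");
        StorageNode c = new StorageNode("node-c");

        a.primary.put("case-7", "state=SUSPENDED");  // node-a owns the entry...
        b.backup.put("case-7", "state=SUSPENDED");   // ...node-b holds its backup copy

        failOver(a, List.of(b, c));                  // node-a fails
        System.out.println(b.primary.get("case-7")); // state=SUSPENDED (promoted from backup)
        System.out.println(c.backup.get("case-7"));  // state=SUSPENDED (backup count restored)
    }
}
```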
Only use an unlimited caching scheme if you deploy enough cache servers to handle the data; otherwise, out-of-memory errors may occur. Only use a limited caching scheme if you have configured a backing store; evicted objects are lost if there is no backing store.
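As a rough illustration of how a limited cache works with a backing store, the following sketch (hypothetical names, not the product configuration) keeps at most a fixed number of entries, evicts the least recently used entry when the limit is exceeded, and transparently reloads an evicted entry from the backing store on the next read. Without the backing store, the evicted value would simply be lost.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class LimitedCacheSketch {

    private final Map<String, String> entries;
    private final Function<String, String> backingStore; // e.g. a database lookup

    LimitedCacheSketch(int maxEntries, Function<String, String> backingStore) {
        this.backingStore = backingStore;
        // Access-ordered LinkedHashMap gives a simple least-recently-used eviction policy.
        this.entries = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxEntries;  // evict once the configured entry limit is exceeded
            }
        };
    }

    public void put(String key, String value) {
        entries.put(key, value);
    }

    // Read-through: a miss (for example, after eviction) is loaded from the backing store.
    public String get(String key) {
        return entries.computeIfAbsent(key, backingStore);
    }

    public static void main(String[] args) {
        // Stand-in for a real backing store; without one, evicted entries are simply lost.
        Map<String, String> store = new HashMap<>();
        store.put("item-1", "loaded-from-store");

        LimitedCacheSketch cache = new LimitedCacheSketch(2, store::get);
        cache.put("item-1", "cached");
        cache.put("item-2", "cached");
        cache.put("item-3", "cached");            // exceeds the limit of 2, so "item-1" is evicted
        System.out.println(cache.get("item-3"));  // cached
        System.out.println(cache.get("item-1"));  // loaded-from-store (reloaded transparently)
    }
}
```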