public static final class Partition.Properties extends java.lang.Object
| Constructor and Description |
|---|
| Partition.Properties() - Default constructor. |
| Modifier and Type | Method and Description |
|---|---|
| void | forceReplication(boolean enabled) - Determine how objects are replicated during a migrate of an object partition. |
| boolean | getForceReplication() - Get the current forceReplication property value. |
| long | getObjectsLockedPerTransaction() - Get the current objectsLockedPerTransaction property value. |
| java.lang.String | getRestoreFromNode() - Get the current restoreFromNode property value. |
| void | restoreFromNode(java.lang.String nodeName) - Define the node that the partition should be restored from. |
| void | setObjectsLockedPerTransaction(long objectsLockedPerTransaction) - Define the number of objects locked in a transaction when performing a migrate() or update(). |
public Partition.Properties()
public void restoreFromNode(java.lang.String nodeName) throws java.lang.IllegalArgumentException
When this property is set, the partition defined on the given remote node is loaded to the local node. This should be done when restoring a node from a split-brain situation, where nodeName is the node in the cluster where all objects should be preserved, and the local node is the node being restored. Any conflicts during restore will preserve the objects on nodeName, and remove the conflicting objects on the local node.
A restore is needed when multiple nodes are currently the active node for a partition in a cluster due to a split-brain scenario. In this case, the application needs to decide which active node will be the node where the objects are preserved during a restore. Note that the nodeName does not necessarily have to be the node which becomes the partition's active node after the restore completes.
The actual restore of the partition is done in the enablePartitions() method when the JOIN_CLUSTER_RESTORE EnableAction is used. If any other EnableAction is used, object data isn't preserved, and no restoration of partition objects is done.
If restoreFromNode isn't set after a split-brain scenario, the runtime performs a cluster-wide broadcast to find the current active node, and uses that node to restore instances in the partition. If multiple active nodes are found, the first responder is chosen.
Parameters:
nodeName - The remote node to use when restoring the partition's objects.
Throws:
java.lang.IllegalArgumentException - The nodeName was empty.
See Also:
PartitionManager.definePartition(String, Partition.Properties, String, ReplicaNode[]), PartitionManager.EnableAction

public final java.lang.String getRestoreFromNode()
Get the current restoreFromNode property value.
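The conflict rule described above (objects on nodeName are preserved; conflicting objects on the local node are removed) can be illustrated with a small self-contained sketch. This is a hypothetical model of the merge semantics only; the class, method, and map-of-object-state representation are invented for illustration and do not use the real PartitionManager API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RestoreMergeSketch {
    // Hypothetical model: object id -> object state held on a node.
    // During a restore from 'remote' (the nodeName side), any id present on
    // both nodes keeps the remote state; local-only ids survive unchanged.
    static Map<String, String> restoreFrom(Map<String, String> local,
                                           Map<String, String> remote) {
        Map<String, String> result = new LinkedHashMap<>(local);
        // Remote entries overwrite local ones, mirroring the rule that
        // conflicts preserve the objects on nodeName.
        result.putAll(remote);
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> local = new LinkedHashMap<>();
        local.put("order-1", "local-v1");
        local.put("order-2", "local-v2");

        Map<String, String> remote = new LinkedHashMap<>();
        remote.put("order-1", "remote-v9"); // conflicting instance
        remote.put("order-3", "remote-v3");

        Map<String, String> merged = restoreFrom(local, remote);
        System.out.println(merged.get("order-1")); // remote copy wins
        System.out.println(merged.size());
    }
}
```

In the real runtime this merge happens inside enablePartitions() with the JOIN_CLUSTER_RESTORE EnableAction; the sketch only captures the "nodeName wins" resolution rule.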
public void forceReplication(boolean enabled)
When set to true, a migrate() or definePartition()/enablePartition() will force the copy of partitioned objects to all pre-existing replica nodes. The default value for this property is false: objects are copied only to new replicas as they are added, since the objects should already exist on the pre-existing replica nodes.
Normally, a migrate will skip the replication of objects to pre-existing nodes in the partition's replica node list. This allows applications to incrementally add replica nodes without having to copy the objects to replicas that already exist in the partition. However, if one or more replicas have gone offline, or were not discovered when the partition was first enabled, this property can be set to ensure that objects are pushed to all replicas in the cluster.
Warning: This is performance-hostile, and should only be done if the replica can't be manually taken offline and restored.
The value passed into definePartition() is stored and used in failover. The value passed to migrate() overrides the value passed to definePartition().
Parameters:
enabled - If true, force the copy of objects to all replicas when a migrate() or enablePartition() is executed.

public final boolean getForceReplication()
Get the current forceReplication property value.
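The copy-target decision above (skip pre-existing replicas by default, copy to every replica when forceReplication is enabled) can be modeled in a self-contained sketch. The class, method, and node-name strings are hypothetical; this illustrates the selection rule, not the library's internal replication code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class ReplicationTargetSketch {
    // Hypothetical model of the decision described above: with
    // forceReplication == false, a migrate() skips replicas that already hold
    // the partition's objects; with true, every replica receives a fresh copy.
    static List<String> copyTargets(List<String> replicas,
                                    Set<String> preExisting,
                                    boolean forceReplication) {
        List<String> targets = new ArrayList<>();
        for (String replica : replicas) {
            if (forceReplication || !preExisting.contains(replica)) {
                targets.add(replica);
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("B", "C", "D"); // partition's replica node list
        Set<String> preExisting = Set.of("B", "C");     // already hold the objects

        // Default behavior: only the newly added replica D gets a copy.
        System.out.println(copyTargets(replicas, preExisting, false)); // [D]
        // Forced replication: all replicas get a copy (performance-hostile).
        System.out.println(copyTargets(replicas, preExisting, true));  // [B, C, D]
    }
}
```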
public void setObjectsLockedPerTransaction(long objectsLockedPerTransaction)
When distribution performs a migrate() or update(), or when it performs a migrate() as part of failover, the work is done in units of work defined by objectsLockedPerTransaction. This allows applications to run concurrently while the work is in progress; otherwise, all partitioned instances would be locked in the calling transaction, preventing application code from establishing a write lock on instances on the active node, or establishing a read lock on instances on replica nodes.
The default is defined by Partition.DefaultObjectsLockedPerTransaction.
If objectsLockedPerTransaction is set to zero, all instances will be processed in the caller's transaction.
If the calling transaction has one or more partitioned instances locked in the transaction, objectsLockedPerTransaction is ignored, and all work is done in the caller's transaction. To ensure that migrate() or update() minimizes the number of locks taken, they should be run in separate transactions that have no partitioned instances locked.
The value passed into definePartition() is stored and used in failover. The value passed to migrate() and update() overrides the value passed to definePartition().
Parameters:
objectsLockedPerTransaction - Number of objects locked per transaction when sending data to remote nodes.
Throws:
java.lang.IllegalArgumentException - The objectsLockedPerTransaction value was negative.

public final long getObjectsLockedPerTransaction()
Get the current objectsLockedPerTransaction property value.
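The unit-of-work rule above can be sketched in a self-contained way: given a total object count and a batch size, compute how many transactions a migrate would use, with zero meaning everything runs in the caller's transaction and a negative value rejected. The class and method are hypothetical illustrations of the batching arithmetic, not the library's implementation.

```java
public class MigrateBatchSketch {
    // Hypothetical model of the unit-of-work rule described above:
    // 0 means all instances are processed in the caller's single transaction;
    // otherwise work is split into ceil(total / batch) transactions, and a
    // negative batch size is rejected, matching the documented exception.
    static int transactionsNeeded(int totalObjects, long objectsLockedPerTransaction) {
        if (objectsLockedPerTransaction < 0) {
            throw new IllegalArgumentException(
                "The objectsLockedPerTransaction value was negative.");
        }
        if (totalObjects == 0) {
            return 0; // nothing to migrate
        }
        if (objectsLockedPerTransaction == 0) {
            return 1; // everything in the caller's transaction
        }
        return (int) ((totalObjects + objectsLockedPerTransaction - 1)
                / objectsLockedPerTransaction);
    }

    public static void main(String[] args) {
        System.out.println(transactionsNeeded(10, 3)); // 4 units of work
        System.out.println(transactionsNeeded(10, 0)); // 1: caller's transaction
    }
}
```

Smaller batch sizes improve concurrency for application transactions but increase the number of unit-of-work transactions the migrate must run.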