Cluster Details Panel

The Cluster details panel presents the details of a persistence cluster definition. In edit mode, you can modify the parameter values.

From the left menu, select Clusters, select the three dots above a cluster, then select View Details.

Heartbeats and Timeouts

You can adjust the persistence heartbeat and timeout parameters, which detect persistence service availability. For best results, use the default values, except to resolve specific issues or as directed by TIBCO personnel.

GUI Parameter JSON Attribute Description

Persistence Heartbeat

(Server to Client)

client_pserver_heartbeat

The leader persistence service sends heartbeats to its clients at this interval, in seconds. The default is 2 seconds.

Timeout Interval

(Server to Client)

client_timeout_pserver

When the leader’s heartbeat is silent for this interval, in seconds, its clients seek to connect to a new leader from among the other services in the cluster. The default is 5 seconds.

When a client's heartbeat is silent for this interval, the persistence service clears all state associated with the client.

Persistence Heartbeat

(Server to Server)

pserver_pserver_heartbeat

Persistence services in the cluster exchange heartbeats at this interval, in seconds. The default is 0.5 seconds.

Timeout Interval

(Server to Server)

pserver_timeout_pserver

When the heartbeat of any service in the cluster is silent for this interval, in seconds, the remaining services attempt to form a new quorum. The default is 3 seconds.

Persistence Heartbeat

(Cluster to Cluster)

inter_cluster_heartbeat

The leader persistence services of different persistence clusters in a forwarding zone exchange heartbeats at this interval, in seconds. The default is 2 seconds.

Timeout Interval

(Cluster to Cluster)

inter_cluster_timeout

When the heartbeat from the leader persistence service of another cluster is silent for this interval, in seconds, the leader of the first cluster attempts to reconnect. The default is 5 seconds.
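Taken together, the heartbeat and timeout attributes above might appear in a cluster definition as follows. This is a sketch showing only these six attributes at their default values; the surrounding structure of the full cluster definition is not shown here.

```json
{
  "client_pserver_heartbeat": 2.0,
  "client_timeout_pserver": 5.0,
  "pserver_pserver_heartbeat": 0.5,
  "pserver_timeout_pserver": 3.0,
  "inter_cluster_heartbeat": 2.0,
  "inter_cluster_timeout": 5.0
}
```

Notice that each timeout comfortably exceeds its corresponding heartbeat interval, so a single delayed heartbeat does not trigger failover or quorum re-formation. Preserve that relationship if you change these values.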

Disk Settings

You can enable or disable disk-based persistence for this cluster's stores, and the use of disk swap space when needed.

You can enable automatic disk persistence compaction, so that the persistence services in the cluster automatically start a compaction when certain conditions are met. For details, see Compact Disk Persistence Files with Persistence Service Online.

GUI Parameter JSON Attribute Description
Message Swapping

disk_swap

Toggle switch to enable or disable message swapping for this cluster. The default setting is Disabled.

When message swapping is enabled, the server can swap messages from process memory to disk. Swapping allows the server to free process memory for more incoming messages, and to process message volume in excess of the memory limit. When the server swaps a message to disk, a small record of the swapped message remains in memory.

This can be enabled with or without disk persistence. If enabled without disk persistence, data is written to a temporary swap file. This option can apply to replicated or non-replicated stores.

Note: When using message swapping, if all FTL servers in the cluster are restarted simultaneously, messages in the store prior to the restart are not retrievable after the restart unless disk persistence is also enabled.

Disk Persistence Mode

disk_persistence

Select the disk persistence mode.

none
Disable disk persistence.
sync
The client returns from a send-message call only after the message has been written to a majority of disks. This mode generally provides consistent data and robustness, at the cost of increased latency and lower throughput. If the cluster restarts, no data is lost; latency and throughput are limited by disk performance.
async
The client may return from a send-message call before the message has been written to disk by a majority of the FTL servers. This mode generally provides lower latency and higher throughput, but messages could be lost if a majority of servers restart shortly after the API call.
Automatic compaction

disk_compact

Set to enable or disable automatic disk compaction. See the "Compact when in-use ratio below" setting, which follows in this table.

Changing the setting does not require a restart.

Compact when in-use ratio below

disk_compact_settings.min_disk_inuse_ratio

Set a value between 0 and 1, inclusive. The persistence service attempts to keep the ratio of disk in-use size to disk allocated size at or above the specified value. If the ratio of in-use to allocated size falls below this value, the persistence service can start a compaction. Automatic compaction does not occur if the disk in-use size is extremely small.

To use disk_compact_settings.min_disk_inuse_ratio, automatic compaction must be enabled via disk_compact.
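Assembling the disk-related attributes described in this table, a cluster definition fragment might look like the following. This is a sketch with illustrative values, showing only these attributes; the rest of the cluster definition is omitted.

```json
{
  "disk_swap": true,
  "disk_persistence": "sync",
  "disk_compact": true,
  "disk_compact_settings": {
    "min_disk_inuse_ratio": 0.5
  }
}
```

With these values, the service compacts its disk persistence files when less than half of the allocated disk space is in use (subject to the minimum-size condition noted above), and message swapping is backed by durable disk persistence rather than a temporary swap file.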

Cluster Limits and Force Quorum Settings

GUI Parameter JSON Attribute Description
Force Quorum Delay

force_quorum_delay

Automatically force a quorum after this delay, in seconds. Zero is a special value indicating that automatic quorum forcing is disabled.

This setting can be used to ensure that the quorum always forms without administrator intervention. A majority of non-empty servers is usually enough to form a quorum; however, in some situations (such as DR activation or recovery from a backup file), the quorum cannot form unless all members are present. If some servers are still missing, this option forces the quorum to form automatically after the specified delay.

Administrators may still manually force a quorum if this setting is configured. See Quorum Behaviors and Before Forcing a Quorum.

Byte Limit

bytelimit

This parameter limits the volume of message data in the cluster. If publishing another message would exceed this limit, the cluster rejects that message.

Zero is a special value, indicating no limit. The store may grow until it exhausts the available memory or disk of any one of its persistence services.

Even when messages are stored on disk, message metadata is still stored in memory. Use this setting to constrain memory use for message metadata.

See Size Units Reference.

Message Limit

message_limit

This parameter limits the number of messages that can be held by the cluster. When this limit is reached, the client receives an exception on tibPublisher_Send or tibPublisher_SendMessages (when cluster_confirm_send mode is set on the cluster). However, the client library retries the sends. To have the exception presented to the user, the client application must specify a finite persistence retry duration.

Note that even when messages are stored on disk, message metadata is still stored in memory. Hence, use this setting to constrain memory use.

Zero is the default value indicating no limit. The cluster may grow until it exhausts available memory or disk of any one of its persistence services.
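The limit and forced-quorum attributes in this table could be combined in a cluster definition fragment like the following. The values are illustrative only; as described above, zero disables each behavior.

```json
{
  "force_quorum_delay": 60,
  "bytelimit": 1073741824,
  "message_limit": 0
}
```

Here a missing quorum is forced automatically after 60 seconds, message data is capped at 1 GiB (1073741824 bytes; see Size Units Reference), and the message count is unlimited.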

Externally Reachable Addresses

When using persistence with zones of clusters that use an automatic inter-cluster transport to address the cluster (or to address a load balancer's interface), use this field to identify one or more host and port pairs for the core servers of this cluster (or the host and port of the load balancer).

GUI Parameter Description

Host
Enter either an IP address or a resolvable hostname.

Port
Enter the port number to use to access this host.