Performance Measurement (Persistence)
FTL has sample applications to measure:
- Latency: The time delay between message generation and delivery
- Throughput: The number of messages processed in a given time
The samples can be run with any persistence store. For simplicity, the sample realm configuration, tibrealm.json, contains a definition of a persistence cluster with one persistence store and one persistence service, using DTCP transports. Since there is only one persistence service, this configuration does not provide fault tolerance or replication, and is not recommended for production use. See the samples/yaml/persistence directory for additional examples.
Persistence clusters require TCP-based transports. The DTCP transport provides maximum performance for persistence. For minimum latency it is possible to go a step further and set a receive spin limit for all persistence transports. However this requires dedicating significant CPU resources to the persistence service and all clients. Since this is not best for all systems, the sample realm configuration, tibrealm.json does not set a receive spin limit.
When configuring a persistence store, the strongest delivery assurance can be obtained by making the store replicated and using publisher mode store_confirm_send. The sample realm configuration, tibrealm.json, sets both these options. Configuring a store to be non-replicated, and/or to use publisher mode store_send_noconfirm, can offer better performance at the cost of weaker delivery guarantees. For more details, see the Administration guide, Persistence: Stores and Durables.
Latency Test
The tiblatsend1to1 and tiblatrecv1to1 applications work together to provide a latency value calculated by averaging the time needed to send 50 thousand messages from the sender to the receiver. If using a persistence store with publisher mode store_confirm_send, this includes the time required for the publisher to obtain a confirmation from the persistence service that the message is stored.
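The arithmetic behind the reported figure can be sketched in a few lines of plain Python (this is an illustration of the averaging, not FTL code; the 5.48-second total is a hypothetical number chosen to match the sample output's ballpark):

```python
# Illustration of the latency arithmetic (not FTL code): the reported
# per-message figure is the total elapsed time divided by the message count.

def per_message_latency(total_seconds: float, message_count: int) -> float:
    """Average time per message, in seconds."""
    return total_seconds / message_count

# Hypothetical numbers: 50,000 messages taking 5.48 seconds overall
# works out to 109.6 microseconds per message.
avg = per_message_latency(5.48, 50_000)
print(f"{avg * 1e6:.2f} microseconds per message")
```

Averaging over many messages smooths out scheduling jitter on any single round trip, which is why the test sends 50 thousand messages rather than timing one.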
Perform these steps:
- Open two terminals. Make sure you have run the setup command and started the FTL server. See the Command Reference. Note that the ftlstart script automatically loads the realm configuration needed for this test.
- From samples\bin\advanced in one terminal run:
  - Linux and macOS:
    $ ./tiblatrecv1to1 localhost:8080
  - Windows:
    > tiblatrecv1to1 localhost:8080
- From samples\bin\advanced in the other terminal run:
  - Linux and macOS:
    $ ./tiblatsend1to1 localhost:8080
  - Windows:
    > tiblatsend1to1 localhost:8080
The tiblatsend1to1 terminal displays a summary of the number of messages sent and the average per-message latency.
#
# tiblatsend1to1
#
# TIBCO FTL Version <n.n.n>
Invoked as: ./tiblatsend1to1 http://localhost:8080
Calibrating tsc timer... done.
CPU speed estimated to be 3.19E+09 Hz
Sending 50000 messages with payload size 16.
Round-trip time: 109.60E-06 seconds.
You can use these options with syntax for your platform:
- --count to control the number of messages sent.
- --size to change the size of the messages sent.
- --help to see the complete set of command line options.
Try different values to see the impact on latency using your hardware.
Throughput Test
Throughput measures the number of messages processed between applications in a given time. Sending larger and fewer messages typically reduces the impact of per-message overhead on messaging throughput.
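The effect of message size on throughput can be modeled with a simple fixed-overhead formula. The sketch below is an assumption for illustration only (the overhead and bandwidth constants are invented, not measured FTL behavior): each message pays a fixed per-message cost plus a size-proportional transfer cost, so byte throughput rises as payloads grow.

```python
# Toy cost model (illustrative assumption, not measured FTL behavior):
# time per message = fixed per-message overhead + payload / bandwidth.
# Byte throughput therefore improves as the payload grows, because the
# fixed overhead is amortized over more bytes.

def byte_throughput(payload_bytes: int,
                    per_message_overhead_s: float = 4e-6,
                    bandwidth_bytes_per_s: float = 1e9) -> float:
    """Payload bytes per second under the fixed-overhead model."""
    time_per_message = per_message_overhead_s + payload_bytes / bandwidth_bytes_per_s
    return payload_bytes / time_per_message

for size in (16, 1024, 65536):
    print(f"{size:>6} B payload -> {byte_throughput(size) / 1e6:8.1f} MB/s")
```

Under this model the 16-byte default payload is dominated by per-message overhead, which is why the text above recommends larger, fewer messages for throughput.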
The tibthrurecv1to1 and tibthrusend1to1 applications work together to provide a throughput value by measuring the time needed to send 500 thousand messages from the sender to the receiver.
tibrealm.json defines a persistence store with publisher mode store_confirm_send, which allows the persistence service to regulate the send rate of the publisher. If using a persistence store with publisher mode store_send_noconfirm, the application must regulate the send rate. For this purpose both tibthrurecv1to1 and tibthrusend1to1 have a "--flow-control" command line option.
Perform these steps:
- Open two terminals and make sure you have run the setup command and started the FTL server. See the Command Reference, if necessary. Note that the ftlstart script automatically loads the realm configuration needed for this test.
- From samples\bin\advanced in one terminal run:
  - Linux and macOS:
    $ ./tibthrurecv1to1 localhost:8080
  - Windows:
    > tibthrurecv1to1 localhost:8080
- From samples\bin\advanced in the other terminal run:
  - Linux and macOS:
    $ ./tibthrusend1to1 localhost:8080
  - Windows:
    > tibthrusend1to1 localhost:8080
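The application-side send-rate regulation mentioned above (the "--flow-control" option, needed with store_send_noconfirm) can be sketched as a simple paced send loop. This is an illustration of the idea in Python, not the samples' actual implementation:

```python
import time

# Sleep-based pacing loop: an illustration of application-side send-rate
# regulation, not the samples' actual --flow-control implementation.

def paced_send(send_fn, message_count: int, target_rate_per_s: float) -> None:
    """Call send_fn message_count times at roughly target_rate_per_s."""
    interval = 1.0 / target_rate_per_s
    next_deadline = time.monotonic()
    for i in range(message_count):
        send_fn(i)
        next_deadline += interval
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

sent = []
paced_send(sent.append, 50, 10_000)  # 50 messages at roughly 10k msgs/sec
print(f"sent {len(sent)} messages")
```

With store_confirm_send this pacing is unnecessary, because waiting for store confirmations naturally throttles the publisher.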
The tibthrusend1to1 terminal displays a summary of:
- The number of messages sent (together with the number of batches and the batch size, which can be set using the --batchsize command line option)
- The aggregate message send rate
Typical output from tibthrusend1to1 follows.
#
# tibthrusend1to1
#
# TIBCO FTL Version <n.n.n>
#
Invoked as: ./tibthrusend1to1 http://localhost:8080
Calibrating tsc timer... done.
CPU speed estimated to be 3.19E+09 Hz
Sender Report:
Requested 500000 messages.
Sending 500000 messages (5000 batches of 100) with payload size 16
Sent 500.00E+03 messages in 2.16E+00 seconds. ( 231.57E+03 messages per second)
Sent 8.00E+06 bytes in 2.16E+00 seconds. ( 3.71E+06 bytes per second)
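The rates in the report follow directly from the raw counts. Recomputing them from the rounded 2.16-second figure reproduces the report's numbers to within rounding (the report itself uses the unrounded elapsed time):

```python
# Reproducing the sender report's rates from its raw numbers:
# 500,000 messages of 16-byte payload sent in about 2.16 seconds.
messages = 500_000
payload_bytes = 16
elapsed_s = 2.16

msg_rate = messages / elapsed_s                    # messages per second
byte_rate = messages * payload_bytes / elapsed_s   # payload bytes per second

print(f"{msg_rate:.2E} messages per second")   # about 2.31E+05
print(f"{byte_rate:.2E} bytes per second")     # about 3.70E+06
```

Note that the byte rate counts payload bytes only; wire-level framing and protocol overhead are not included.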
Results are affected by the capabilities and load of the system you run on.
You can use these options with syntax for your platform:
- --count to control the number of messages sent using tibthrusend1to1. Keep these values relatively large in order to get representative results.
- --size to specify the size of the messages in bytes. The default size of 16 bytes is a relatively small message and might not be representative of what your applications would be using.
- --help to see the complete set of command line options for both tibthrusend1to1 and tibthrurecv1to1.
Additional Tests to Try
Try different values for --size to see its impact on throughput using your hardware.
If your system has the capacity, try setting a receive spin limit of 10 milliseconds for all persistence transports.
By default tibthrusend1to1 uses the vectored send call, tibPublisher_SendMessages, which offers maximum performance. However if an application wishes to use the simple send call, tibPublisher_Send, the application may still obtain good performance by using the non-inline send policy, which allows the FTL library to send messages and receive confirmations from the persistence service in the background. The tibthrusend1to1 application can demonstrate this with the --single command line option. For more details, see the Development guide, Publisher Mode and Send Policy.
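The advantage of the vectored call comes from amortizing fixed per-call cost across a batch. The sketch below models that idea with invented constants; it is a generic illustration, not the FTL API or its measured costs:

```python
# Generic sketch of why a vectored ("send many") call beats per-message
# calls: the fixed per-call cost is paid once per batch instead of once
# per message. The constants are assumptions for illustration only.

PER_CALL_OVERHEAD_S = 5e-6   # assumed fixed cost of one send call
PER_MESSAGE_COST_S = 0.5e-6  # assumed marginal cost per message

def total_send_time(message_count: int, batch_size: int) -> float:
    """Modeled total time to send message_count messages in batches."""
    calls = -(-message_count // batch_size)  # ceiling division
    return calls * PER_CALL_OVERHEAD_S + message_count * PER_MESSAGE_COST_S

single = total_send_time(500_000, 1)      # one message per call
vectored = total_send_time(500_000, 100)  # 100 messages per call
print(f"single-message calls: {single:.3f}s, vectored calls: {vectored:.3f}s")
```

The non-inline send policy attacks the same fixed cost differently, by overlapping sends and store confirmations with the application thread rather than batching them.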