Performance Measurement (Persistence)

FTL has sample applications to measure:

  • Latency: The time delay between message generation and delivery

  • Throughput: The number of messages processed in a given time

The samples can be run with any persistence store. For simplicity, the sample realm configuration, tibrealm.json, contains a definition of a persistence cluster with one persistence store and one persistence service, using DTCP transports. Since there is only one persistence service, this configuration does not provide fault tolerance or replication, and is not recommended for production use. See the samples/yaml/persistence directory for additional examples.

Persistence clusters require TCP-based transports. The DTCP transport provides maximum performance for persistence. For minimum latency, it is possible to go a step further and set a receive spin limit for all persistence transports. However, this requires dedicating significant CPU resources to the persistence service and to all clients. Because this trade-off does not suit every system, the sample realm configuration, tibrealm.json, does not set a receive spin limit.

When configuring a persistence store, the strongest delivery assurance can be obtained by making the store replicated and using publisher mode store_confirm_send. The sample realm configuration, tibrealm.json, sets both these options. Configuring a store to be non-replicated, and/or to use publisher mode store_send_noconfirm, can offer better performance at the cost of weaker delivery guarantees. For more details, see the Administration guide, Persistence: Stores and Durables.

Note: When an application sends and receives messages on the same persistence store, by default the application will receive its own messages. (This is not true for peer-to-peer transports.) As a result, the sample applications used to measure performance in this section create content matchers to filter out locally generated messages.

Latency Test

The tiblatsend1to1 and tiblatrecv1to1 applications work together to provide a latency value calculated by averaging the time needed to send 50 thousand messages from the sender to the receiver. If using a persistence store with publisher mode store_confirm_send, this includes the time required for the publisher to obtain a confirmation from the persistence service that the message is stored.

Note: Unlike the samples used to measure peer-to-peer latency in the previous section, these samples report the round-trip time (sender to receiver and back again). The one-way latency can be obtained by dividing the round-trip time by 2.
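Halving the round-trip figure can be done with a quick shell calculation. The value below, 109.60E-06 seconds, is taken from the typical output shown later in this section and is illustrative only, not a benchmark target:

```shell
# Halve a reported round-trip time (in seconds) to estimate one-way latency,
# converting to microseconds for readability.
rtt_seconds="109.60E-06"   # value from a typical tiblatsend1to1 report
one_way_us=$(awk -v t="$rtt_seconds" 'BEGIN { printf "%.2f", (t / 2) * 1e6 }')
echo "Estimated one-way latency: ${one_way_us} microseconds"
```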

Perform these steps:

  1. Open two terminals. Make sure you have run the setup command and started the FTL server. See the Command Reference. Note that the ftlstart script automatically loads the realm configuration needed for this test.

  2. In one terminal, change to samples/bin/advanced (samples\bin\advanced on Windows) and run:

    • Linux and macOS:

      $ ./tiblatrecv1to1 localhost:8080

    • Windows:

      > tiblatrecv1to1 localhost:8080
  3. In the other terminal, change to samples/bin/advanced (samples\bin\advanced on Windows) and run:

    • Linux and macOS:

      $ ./tiblatsend1to1 localhost:8080

    • Windows:

      > tiblatsend1to1 localhost:8080

The tiblatsend1to1 terminal displays a summary of the number of messages sent and the average per-message latency. Typical output from tiblatsend1to1 follows.

#
# tiblatsend1to1
#
# TIBCO FTL Version <n.n.n>
Invoked as: ./tiblatsend1to1 http://localhost:8080
Calibrating tsc timer... done.
CPU speed estimated to be 3.19E+09 Hz
Sending 50000 messages with payload size 16.
Round-trip time: 109.60E-06 seconds.

You can use these command line options, with the appropriate syntax for your platform:

  • --count to control the number of messages sent.

  • --size to change the size of the messages sent.

  • --help to see the complete set of command line options.

Try different values to see the impact on latency using your hardware.

Throughput Test

Throughput measures the number of messages processed between applications in a given time. Sending fewer, larger messages typically reduces the impact of per-message overhead on messaging throughput.
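The effect of per-message overhead can be illustrated with a toy model: effective payload throughput = size / (per-message overhead + size / link rate). The 2-microsecond overhead and 1.25E+09 bytes-per-second link rate below are assumed values chosen only for illustration; they are not FTL measurements:

```shell
# Toy amortization model, NOT an FTL measurement:
#   effective_throughput(size) = size / (per_msg_overhead + size / link_rate)
# Assumed: 2 us fixed cost per message, 1.25e9 B/s link payload rate.
small=$(awk 'BEGIN { printf "%.1f", (16   / (2e-6 + 16   / 1.25e9)) / 1e6 }')
large=$(awk 'BEGIN { printf "%.1f", (1024 / (2e-6 + 1024 / 1.25e9)) / 1e6 }')
echo "16-byte messages:   ${small} MB/s effective payload throughput"
echo "1024-byte messages: ${large} MB/s effective payload throughput"
```

Under these assumptions the fixed cost dominates small messages, so larger payloads deliver far higher byte throughput.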

The tibthrurecv1to1 and tibthrusend1to1 applications work together to provide a throughput value by measuring the time needed to send 500 thousand messages from the sender to the receiver.

Note: The sample realm configuration, tibrealm.json, defines a persistence store with publisher mode store_confirm_send, which allows the persistence service to regulate the send rate of the publisher. If using a persistence store with publisher mode store_send_noconfirm, the application must regulate the send rate. For this purpose, both tibthrurecv1to1 and tibthrusend1to1 have a "--flow-control" command line option.

Perform these steps:

  1. Open two terminals. Make sure you have run the setup command and started the FTL server. See the Command Reference, if necessary. Note that the ftlstart script automatically loads the realm configuration needed for this test.

  2. In one terminal, change to samples/bin/advanced (samples\bin\advanced on Windows) and run:

    • Linux and macOS:

      $ ./tibthrurecv1to1 localhost:8080

    • Windows:

      > tibthrurecv1to1 localhost:8080
  3. In the other terminal, change to samples/bin/advanced (samples\bin\advanced on Windows) and run:

    • Linux and macOS:

      $ ./tibthrusend1to1 localhost:8080

    • Windows:

      > tibthrusend1to1 localhost:8080

The tibthrusend1to1 terminal displays a summary of:

  • The number of messages sent (together with the number of batches and the batch size, which can be set using the --batchsize command line option)

  • The aggregate message send rate
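The batch arithmetic is straightforward: the number of batches is the message count divided by the batch size. With the defaults used in the sample report in this section (flag names per the options described here):

```shell
# With 500000 messages and a batch size of 100 (the values shown in the
# sample tibthrusend1to1 report), the number of batches is count / batchsize.
batches=$(awk 'BEGIN { printf "%d", 500000 / 100 }')
echo "500000 messages in batches of 100 -> ${batches} batches"
```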

Typical output from tibthrusend1to1 follows.

#
# tibthrusend1to1
#
# TIBCO FTL Version <n.n.n>
#
Invoked as: ./tibthrusend1to1 http://localhost:8080
Calibrating tsc timer... done.
CPU speed estimated to be 3.19E+09 Hz
Sender Report:
Requested 500000 messages.
Sending 500000 messages (5000 batches of 100) with payload size 16
Sent 500.00E+03 messages in 2.16E+00 seconds. ( 231.57E+03 messages per second)
Sent 8.00E+06 bytes in 2.16E+00 seconds. ( 3.71E+06 bytes per second)
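The reported rates follow directly from the message count, payload size, and elapsed time. Recomputing them from the rounded values in the report (small differences from the report itself come from rounding the elapsed time):

```shell
# Recompute the aggregate rates from the reported counts and elapsed time:
# 500000 messages of 16 bytes sent in 2.16 seconds (rounded report values).
msg_rate=$(awk 'BEGIN { printf "%.0f", 500000 / 2.16 }')
byte_rate=$(awk 'BEGIN { printf "%.0f", (500000 * 16) / 2.16 }')
echo "~${msg_rate} messages per second, ~${byte_rate} bytes per second"
```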

Results are affected by the capabilities and current load of the system you are running on.

You can use these command line options, with the appropriate syntax for your platform:

  • --count to control the number of messages sent using tibthrusend1to1. Keep this value relatively large in order to get representative results.

  • --size to specify the size of the messages in bytes. The default size of 16 bytes is a relatively small message and might not be representative of what your applications would be using.

  • --help to see the complete set of command line options for both tibthrusend1to1 and tibthrurecv1to1.

Additional Tests to Try

Try different values for --size to see its impact on throughput using your hardware.

If your system has the capacity, try setting a receive spin limit of 10 milliseconds for all persistence transports.

By default, tibthrusend1to1 uses the vectored send call, tibPublisher_SendMessages, which offers maximum performance. However, if an application wishes to use the simple send call, tibPublisher_Send, it may still obtain good performance by using the non-inline send policy, which allows the FTL library to send messages and receive confirmations from the persistence service in the background. The tibthrusend1to1 application can demonstrate this with the --single command line option. For more details, see the Development guide, Publisher Mode and Send Policy.