Threading Models and Tuning

Event preprocessing is multithreaded. For each destination, you choose one of three threading options: Shared Queue, Destination Queue, or Caller's Thread.
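To make the three options concrete, here is a minimal conceptual sketch in Java. It is not the BusinessEvents implementation: the Dispatcher class, the dispatch() method, the preprocess task, and the pool sizes are all illustrative assumptions. Only the routing behavior of each option is meant to mirror the descriptions above.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Conceptual sketch only: how the three threading options route
    // preprocessing work to threads. Dispatcher and its members are
    // illustrative, not part of the BusinessEvents API.
    public class Dispatcher {

        enum ThreadingOption { SHARED_QUEUE, DESTINATION_QUEUE, CALLERS_THREAD }

        // Shared Queue: one pool of worker threads serves every destination.
        private final ExecutorService sharedQueue = Executors.newFixedThreadPool(8);

        // Destination Queue: each destination owns a private pool, so a slow
        // destination cannot starve the others.
        private final Map<String, ExecutorService> destinationQueues = new ConcurrentHashMap<>();

        void dispatch(String destination, Runnable preprocess, ThreadingOption option) {
            switch (option) {
                case SHARED_QUEUE:
                    sharedQueue.submit(preprocess);
                    break;
                case DESTINATION_QUEUE:
                    destinationQueues
                            .computeIfAbsent(destination, d -> Executors.newFixedThreadPool(2))
                            .submit(preprocess);
                    break;
                case CALLERS_THREAD:
                    // Caller's Thread: the thread that delivered the message
                    // (for example, a JMS session thread) runs the preprocessor.
                    preprocess.run();
                    break;
            }
        }
    }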

Note: Content relating to Cache OM is not relevant to TIBCO BusinessEvents Express edition.
Threading Models Quick Reference

The following detailed example illustrates the threading model.

Figure: Agent threading example — shared threads, concurrent RTC, cache aside
  • Shared Queue and Destination Queue threads are released at the end of the RTC (the post-RTC phase uses different threads).
  • For the RTC phase, you can choose single or concurrent RTC options.
  • For the post-RTC phase, you can choose a cache-aside or write-behind thread management strategy. Cache aside is shown in the diagram above.
  • With cache aside you can use parallel operations or (for special cases) serial operations. Use of parallel operations generally requires locking, as illustrated in the sketch after this list.
  • Events that are to be sent out, for example using Event.sendEvent(), are actually sent in the event preprocessor or in the post-RTC phase, depending on where the function is called.
  • Acknowledgements are sent out after the post-RTC phase.
    • The exception is events consumed in a preprocessor. In this case acknowledgements are sent immediately.
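As a rough illustration of the cache-aside, parallel-operations case, the sketch below writes each modified entity to the backing store on a post-RTC thread, takes a per-entity lock (since parallel operations generally require locking), and releases acknowledgements only after the post-RTC work completes. PostRtcWriter, Entity, and BackingStore are hypothetical names; the sketch only mirrors the ordering described above.

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Conceptual sketch of a cache-aside post-RTC phase with parallel
    // operations. Entity, BackingStore, and PostRtcWriter are hypothetical.
    public class PostRtcWriter {

        interface Entity { Object lock(); }            // per-entity lock object
        interface BackingStore { void write(Entity e); }

        private final ExecutorService postRtcPool = Executors.newFixedThreadPool(4);

        void completeRtc(List<Entity> modified, BackingStore store, Runnable sendAck)
                throws InterruptedException {
            CountDownLatch done = new CountDownLatch(modified.size());
            for (Entity e : modified) {
                // Parallel operations: each write runs on a post-RTC thread,
                // not on the thread that ran the RTC (those were released).
                postRtcPool.submit(() -> {
                    synchronized (e.lock()) {   // parallel operations generally require locking
                        store.write(e);
                    }
                    done.countDown();
                });
            }
            done.await();   // acknowledgements go out only after the post-RTC phase
            sendAck.run();
        }
    }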

Scaling Considerations

When you begin to scale up messaging, the following are the potential bottlenecks:

  • Messages are arriving at the inference engine too fast, or
  • Inference engines are not handing off objects fast enough to the cache agents (if write-behind strategy is used) or to the backing store (if cache-aside strategy is used), or
  • Cache agents are not accepting the objects fast enough, or
  • Backing store is not accepting the objects fast enough.

These points are related. You can add more inference agents and more cache agents to address these issues, depending on where the bottlenecks occur. A simple way to observe which stage is the limit is sketched below.
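One way to see which stage is the limit is to place a bounded hand-off queue between stages and watch its depth. The sketch below is illustrative only (HandoffMonitor is not a BusinessEvents API): a queue that stays full means the consuming side (cache agents or the backing store) is the bottleneck, while a queue that stays empty means the producing side (inference agents) is.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Illustrative sketch: a bounded hand-off queue between two stages
    // makes the bottleneck observable. HandoffMonitor is hypothetical.
    public class HandoffMonitor {

        private final BlockingQueue<Object> handoff = new ArrayBlockingQueue<>(10_000);

        // Producer side (e.g. an inference agent handing off objects).
        boolean produce(Object obj) {
            boolean accepted = handoff.offer(obj);
            // offer() returning false means the consumer side (cache agent or
            // backing store) is not draining fast enough: scale consumers.
            return accepted;
        }

        // Consumer side (e.g. a cache agent or backing-store writer).
        Object consume() throws InterruptedException {
            return handoff.take();
        }

        // A depth that hovers near capacity implicates the consumer; a depth
        // near zero implicates the producer.
        int depth() {
            return handoff.size();
        }
    }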