Threading Models and Tuning
Event preprocessing is multithreaded, and for each destination you need to choose a threading option: Shared Queue, Destination Queue, or the Caller’s Thread.
A detailed example of the threading model follows.
- Shared Queue and Destination Queue threads are released at the end of the RTC (post-RTC phase uses different threads).
- For the RTC phase, you can choose single or concurrent RTC options.
- For the post-RTC phase, thread management follows the cache-aside strategy, as shown in the diagram above.
- You can use parallel operations or (for special cases) serial operations. Use of parallel operations generally requires locking.
- Events that are to be sent out, for example using Event.sendEvent(), are actually sent in the event preprocessor or in the post-RTC phase, depending on where the function is called.
- Acknowledgements are sent out after the post-RTC phase. The exception is events consumed in a preprocessor; in that case, acknowledgements are sent immediately.
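The three destination threading options can be pictured with plain java.util.concurrent executors. This is an illustrative analogy only, not the engine's actual implementation; the class and method names below are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class ThreadingOptions {
    // Shared Queue: one bounded pool of worker threads serves all destinations.
    static final ExecutorService SHARED_QUEUE = Executors.newFixedThreadPool(4);

    // Destination Queue: a dedicated queue and worker per destination
    // (hypothetical factory method for illustration).
    static ExecutorService destinationQueue() {
        return Executors.newSingleThreadExecutor();
    }

    // Caller's Thread: the delivering thread runs the preprocessor itself,
    // so there is no hand-off to another thread.
    static String runOnCallersThread(Runnable preprocess) {
        preprocess.run();
        return Thread.currentThread().getName();
    }

    public static void main(String[] args) throws Exception {
        AtomicReference<String> worker = new AtomicReference<>();
        // Shared Queue: the work runs on a pool thread, not the caller.
        SHARED_QUEUE.submit(() -> worker.set(Thread.currentThread().getName())).get();
        System.out.println("shared-queue worker: " + worker.get());

        // Caller's Thread: the work runs inline on the current thread.
        System.out.println("caller's thread: " + runOnCallersThread(() -> {}));

        SHARED_QUEUE.shutdown();
        SHARED_QUEUE.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

The trade-off this models: a shared pool caps total threads across destinations, per-destination pools isolate slow destinations from fast ones, and the caller's thread avoids a hand-off at the cost of blocking the delivering thread.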
Scaling Considerations
When you begin to scale up messaging, the following are potential bottlenecks:
- Messages are coming into the inference engine too fast.
- Inference engines are not handing off objects to the backing store fast enough.
- Cache agents are not accepting the objects fast enough.
- The backing store is not accepting the objects fast enough.
These points are related. You can add more inference agents and more cache agents to address these issues, depending on where the bottlenecks are occurring.
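Why these bottlenecks are related can be sketched with a bounded hand-off queue between a producer (standing in for an inference engine) and consumers (standing in for cache agents or the backing store). When the consumers fall behind, the bounded queue fills and the producer's put() blocks, so the slowdown propagates upstream as back-pressure; adding consumer threads is the analogue of adding agents. This is a minimal sketch under those assumptions, not the product's actual hand-off mechanism.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Backpressure {
    // Hand a fixed number of objects from one producer to N consumers over a
    // bounded queue; returns how many objects the consumers accepted.
    static int runPipeline(int items, int consumers) throws InterruptedException {
        BlockingQueue<String> handOff = new ArrayBlockingQueue<>(100);
        AtomicInteger consumed = new AtomicInteger();

        for (int i = 0; i < consumers; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (consumed.get() < items) {
                        String obj = handOff.poll(100, TimeUnit.MILLISECONDS);
                        if (obj != null) {
                            // writeToBackingStore(obj) would go here (hypothetical)
                            consumed.incrementAndGet();
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.setDaemon(true);
            t.start();
        }

        for (int i = 0; i < items; i++) {
            handOff.put("object-" + i); // blocks (back-pressure) when the queue is full
        }
        while (consumed.get() < items) {
            Thread.sleep(10); // wait for consumers to drain the queue
        }
        return consumed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed: " + runPipeline(1000, 2));
    }
}
```

With too few consumers the producer spends its time blocked on put(), which is the same symptom as "messages are coming in too fast": the fix can be on either side of the queue, depending on where the bottleneck actually is.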