
Threading Models Overview
When you begin to scale up messaging, the potential bottlenecks are these: either the inference engines are not consuming and acknowledging incoming messages fast enough, or they are not handing off objects fast enough to the cache agents (if the write-behind strategy is used) or to the backing store (if the cache-aside strategy is used).
These points are related. Depending on where the bottlenecks occur, you can add more inference agents or more cache agents to address them. Below is a representative example of the flow through an inference agent. Later sections provide more detail on the options available at each phase.
Figure 2 Agent threading example — shared threads, concurrent RTC, cache aside
Event preprocessing is multithreaded. For each destination you choose one of three threading options: a shared queue and its threads, dedicated worker threads, or the caller's threads. These threads are released at the end of the run-to-completion cycle (RTC); the post-RTC phase uses different threads.
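To make the three options concrete, here is a minimal plain-Java sketch; it is not the product's API, and DestinationThreading, ThreadingOption, dispatch, and the pool sizes are all hypothetical stand-ins.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the three per-destination threading options.
public class DestinationThreading {

    enum ThreadingOption { SHARED_QUEUE, DEDICATED_WORKERS, CALLERS_THREADS }

    // One queue and thread pool shared by every destination that
    // selects the shared option.
    private static final ExecutorService SHARED_POOL = Executors.newFixedThreadPool(8);

    static final class Destination {
        final ThreadingOption option;
        // Pool owned by this destination; used only for DEDICATED_WORKERS.
        final ExecutorService dedicatedPool;

        Destination(ThreadingOption option, int dedicatedThreads) {
            this.option = option;
            this.dedicatedPool = (option == ThreadingOption.DEDICATED_WORKERS)
                    ? Executors.newFixedThreadPool(dedicatedThreads)
                    : null;
        }

        // preprocessAndRtc stands for the preprocessing + RTC work for one
        // event; the thread that runs it is released when the RTC ends.
        void dispatch(Runnable preprocessAndRtc) {
            switch (option) {
                case SHARED_QUEUE:
                    SHARED_POOL.submit(preprocessAndRtc);   // shared queue and threads
                    break;
                case DEDICATED_WORKERS:
                    dedicatedPool.submit(preprocessAndRtc); // workers private to this destination
                    break;
                case CALLERS_THREADS:
                    preprocessAndRtc.run();                 // run on the caller's own thread
                    break;
            }
        }
    }
}
```

The general trade-off is isolation versus footprint: dedicated workers keep a slow destination from starving others, shared threads keep the total thread count low, and the caller's threads avoid a hand-off entirely.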
For the post-RTC phase, you can choose a cache-aside or a write-behind thread management strategy. Cache-aside is shown in the diagram above.
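The key difference between the two strategies is which thread carries out the write to the backing store. Below is a minimal plain-Java sketch under that reading; BackingStore, cacheAside, and WriteBehind are hypothetical names, not the product's API.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch contrasting the two post-RTC write strategies.
public class PostRtcWriters {

    interface BackingStore { void persist(Map<String, Object> modifiedObjects); }

    // Cache-aside: the post-RTC thread updates the cache and writes the
    // modified objects through to the backing store synchronously, so the
    // write completes before the post-RTC phase finishes.
    static void cacheAside(Map<String, Object> modified,
                           Map<String, Object> cache,
                           BackingStore store) {
        cache.putAll(modified);
        store.persist(modified);
    }

    // Write-behind: the post-RTC thread only hands the modified objects to
    // the cache layer; a background thread persists them later.
    static final class WriteBehind {
        private final BlockingQueue<Map<String, Object>> pending =
                new LinkedBlockingQueue<>();

        WriteBehind(BackingStore store) {
            Thread writer = new Thread(() -> {
                try {
                    while (true) {
                        store.persist(pending.take()); // drained asynchronously
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "write-behind");
            writer.setDaemon(true);
            writer.start();
        }

        void submit(Map<String, Object> modified, Map<String, Object> cache) {
            cache.putAll(modified); // cache is current immediately
            pending.add(modified);  // persistence happens off the RTC threads
        }
    }
}
```

Write-behind frees the post-RTC threads sooner, at the cost of a window during which the cache and the backing store disagree.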
Events can be sent out (and acknowledged) in the event preprocessor. Otherwise, they are sent out in the post-RTC phase (number 3 in the diagram above).
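Where the send happens is easiest to see as a timeline. The sketch below is hypothetical plain Java, with Channel, sendAndAck, preprocess, and runRtc standing in for whatever performs those steps in a real deployment.

```java
// Sketch of the two points at which an outbound event can be sent.
public class SendPlacement {

    interface Channel { void sendAndAck(Object event); }

    static void process(Object inbound, Channel channel, boolean sendInPreprocessor) {
        Object outbound = preprocess(inbound); // multithreaded preprocessing phase
        if (sendInPreprocessor) {
            channel.sendAndAck(outbound);      // sent (and acknowledged) early
        }

        runRtc(inbound);                       // run to completion

        if (!sendInPreprocessor) {
            channel.sendAndAck(outbound);      // post-RTC send (number 3 in the figure)
        }
    }

    private static Object preprocess(Object event) { return event; } // placeholder
    private static void runRtc(Object event) { /* placeholder */ }
}
```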