Threading Models and Tuning

Event preprocessing is multithreaded, and for each destination you choose one of three threading options: Shared Queue, Destination Queue, or Caller's Thread.
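
To make the three options concrete, here is a minimal Java sketch of a simplified dispatcher; the class, method names, and pool sizes are invented for illustration and are not the product API. Shared Queue submits preprocessing work to a pool shared by all destinations, Destination Queue submits it to a pool dedicated to one destination, and Caller's Thread runs it directly on the thread that delivered the event.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative sketch only: shows which thread runs event preprocessing
    // under each threading option. Names and pool sizes are hypothetical.
    public class DestinationThreadingSketch {

        enum ThreadingOption { SHARED_QUEUE, DESTINATION_QUEUE, CALLERS_THREAD }

        // Shared Queue: one worker pool serves every destination.
        private final ExecutorService sharedPool = Executors.newFixedThreadPool(8);

        // Destination Queue: each destination gets its own dedicated pool.
        private final Map<String, ExecutorService> perDestinationPools = new ConcurrentHashMap<>();

        void dispatch(String destination, ThreadingOption option, Runnable preprocess) {
            switch (option) {
                case SHARED_QUEUE:
                    // Queued behind work from all destinations.
                    sharedPool.submit(preprocess);
                    break;
                case DESTINATION_QUEUE:
                    // Queued only behind work for this destination.
                    perDestinationPools
                        .computeIfAbsent(destination, d -> Executors.newFixedThreadPool(2))
                        .submit(preprocess);
                    break;
                case CALLERS_THREAD:
                    // No queue: runs on the thread that delivered the event.
                    preprocess.run();
                    break;
            }
        }
    }

In this sketch, Caller's Thread avoids a queue hop but occupies the delivering thread for the duration of the preprocessing, which is the trade-off that makes the choice a tuning decision.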

Threading Models Quick Reference

A detailed example of the threading models is shown below.

Agent threading example — shared threads, concurrent RTC, cache aside

Scaling Considerations

When you begin to scale up messaging, bottlenecks can occur at the following points:

  • Messages are coming into the inference engine too fast,
  • Inference engines are not handing off objects to the backing store fast enough,
  • Cache agents are not accepting objects fast enough, or
  • The backing store is not accepting objects fast enough.

These points are related. Depending on where the bottlenecks are occurring, you can add more inference agents or more cache agents to address them, as the sketch below illustrates.
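
To see how these points interact, the following toy Java sketch models the handoff as a bounded producer/consumer queue; the queue capacity, thread counts, and simulated latency are assumptions for illustration, not product behavior. When the consumers (standing in for cache agents or the backing store) cannot drain the queue fast enough, the backlog grows until put() blocks the producer (standing in for inference agents), so overall throughput is set by the slowest stage and adding agents on that side is what relieves it.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Toy model of a handoff bottleneck; names and rates are hypothetical.
    public class HandoffBottleneckSketch {

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> handoffQueue = new ArrayBlockingQueue<>(1_000);
            int consumerCount = args.length > 0 ? Integer.parseInt(args[0]) : 2;

            // Consumers: simulate cache agents writing objects to the backing store.
            for (int i = 0; i < consumerCount; i++) {
                Thread consumer = new Thread(() -> {
                    try {
                        while (true) {
                            handoffQueue.take();                 // accept one object
                            TimeUnit.MILLISECONDS.sleep(5);      // simulated write latency
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                consumer.setDaemon(true);                        // let the sketch exit with the producer
                consumer.start();
            }

            // Producer: simulates inference agents handing off objects.
            // Once the queue is full, put() blocks, so the producer is throttled
            // by the downstream drain rate.
            for (int n = 0; n < 10_000; n++) {
                handoffQueue.put("object-" + n);
                if (n % 1_000 == 0) {
                    System.out.println("queued=" + n + " backlog=" + handoffQueue.size());
                }
            }
        }
    }

Running the sketch with a larger consumer count (for example, 8 instead of 2) makes it finish noticeably sooner, which mirrors the effect of adding cache agents when the handoff is the bottleneck; adding inference agents helps instead when the intake side is the slow stage.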