


Foreground Processes
The following table summarizes the function of each foreground process.

Process                     Function
Work Queue Server (WQS)     Handles the listing of work queues and maps each queue to the WIS process that hosts it.
Work Item Server (WIS)      Handles the listing of work items in the work queues.
RPC Pool Server             Handles RPC requests from TIBCO iProcess Workspaces to access and update engine data.
RPC Listener                Accepts the initial client connections and starts the RPC pool servers.
WIS Mbox Daemon (WISMBD)    Forwards messages from the WIS Mbox sets to the appropriate WIS processes.
All of the foreground processes must operate on the master server. See Determining Where Processes Run for more information.
Work Queue Server
The iProcess work queues, which contain all users' work items, are managed by the following processes:
Work Queue Server (WQS), which handles the listing of queues. The process executable is SWDIR\etc\wqsrpc; only a single wqsrpc process runs at any time.
Work Item Server (WIS), which handles the listing of work items in the queues. The process executable is SWDIR\etc\wisrpc. The number of wisrpc processes that run is configured in the process_attribute table and managed by the Process Sentinels.
The work queue processes are started automatically when the other TIBCO iProcess Engine processes (such as the RPC pool servers) are started, and stopped when the other TIBCO iProcess Engine processes are stopped. The Process Sentinels start (and stop) processes in a specific sequence to ensure that each process's dependencies are already running when it starts.
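This start-up ordering can be thought of as a dependency graph. The following Python sketch is purely illustrative (the dependency map and process names are assumptions, not the sentinels' actual configuration); it shows how a start sequence can be derived from declared dependencies, with the stop sequence simply reversed:

    from graphlib import TopologicalSorter

    # Hypothetical dependency map: each process lists the processes that
    # must already be running before it can start. The real sentinel
    # configuration is internal to the iProcess Engine.
    DEPENDS_ON = {
        "rpc_listener": [],
        "rpc_pool_server": ["rpc_listener"],
        "wqs": ["rpc_listener"],
        "wis": ["wqs"],
        "wismbd": ["wis"],
    }

    start_order = list(TopologicalSorter(DEPENDS_ON).static_order())
    stop_order = list(reversed(start_order))

    print("start:", start_order)   # listeners first, dependents after
    print("stop: ", stop_order)    # dependents first, listeners last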
When a TIBCO iProcess Workspace logs in to the TIBCO iProcess Engine, the following sequence of events occurs:
1. The TIBCO iProcess Workspace contacts the RPC Listener on the published RPC number.
2. The Listener allocates the Workspace to an RPC pool server, which handles its subsequent requests.
3. The Workspace asks the WQS for the user's work queues.
4. The WQS returns the user and group queues the user can see, together with the WIS process that is handling each queue.
5. The Workspace contacts the appropriate WIS process for each queue.
6. The WIS returns the listing of work items in the queue from its cache.
The TIBCO iProcess Workspace communicates with the WQS and the WIS via RPC.
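The division of labor in this sequence can be modelled in a few lines. The classes and method names below are invented for illustration and are not the real Workspace RPC interface; the point is that the WQS answers the queue-to-WIS mapping question, while each WIS answers the work item listing question:

    # Illustrative model only: the class and method names are invented,
    # not the real TIBCO iProcess Workspace RPC interface.

    class WQSStub:
        """Answers: which queues can this user see, and which WIS hosts each?"""
        def __init__(self, queue_map):
            self.queue_map = queue_map            # queue name -> WIS id

        def queues_for(self, user, groups):
            names = [user] + list(groups)
            return {q: self.queue_map[q] for q in names if q in self.queue_map}

    class WISStub:
        """Answers: which work items are in this queue? (served from cache)"""
        def __init__(self, cache):
            self.cache = cache                    # queue name -> work items

        def work_items(self, queue):
            return self.cache.get(queue, [])

    # A login: ask the WQS for the queue -> WIS mapping, then ask each
    # WIS for the items in the queues it hosts.
    wqs = WQSStub({"alice": "wis1", "accounts": "wis2"})
    wis = {"wis1": WISStub({"alice": ["item-1"]}),
           "wis2": WISStub({"accounts": ["item-2", "item-3"]})}

    for queue, wis_id in wqs.queues_for("alice", ["accounts"]).items():
        print(queue, wis[wis_id].work_items(queue))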
Refer to TIBCO iProcess Workspace and TIBCO iProcess Engine Network Communication for more information about RPC calls and how the TIBCO iProcess Workspace and TIBCO iProcess Engine communicate over the network.
The Work Queue Server controls which user and group queues a user will see on their TIBCO iProcess Workspace and which WIS process is handling the queue. The WQS provides a mapping to the appropriate work queues that are held in the WIS processes. Each TIBCO iProcess Engine instance runs with one WQS process.
Shared memory is used to cache this information; the memory is released when the engine instance shuts down.
The WQS allocates work queues to the WIS processes that are running using either round robin or on-demand allocation. You can configure which allocation method is used by modifying the WQS_ROUND_ROBIN parameter in SWDIR\etc\staffcfg.
Allocation of Work Queues to WIS Processes
The WQS process performs the work queue allocation: it reads the list of users and groups from the database, sorts them alphabetically, and allocates each resulting work queue to a particular WIS. The WQS decides which WIS a work queue is sent to in one of two ways.
Round Robin Queue Allocation
The WQS allocates a work queue to each WIS alphabetically, cycling round until all the work queues are allocated. For example, if a system has 5 WIS processes and 15 work queues (A-O), then the following allocation is performed:

WIS 1: A, F, K
WIS 2: B, G, L
WIS 3: C, H, M
WIS 4: D, I, N
WIS 5: E, J, O
This method of allocation takes no account of queue size, so it is best used when work items are evenly distributed across queues and user access is evenly spread.
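A minimal sketch of the round-robin scheme (queue names and WIS numbering are illustrative):

    def round_robin(queues, n_wis):
        """Deal the alphabetically sorted queues out to WIS processes in rotation."""
        allocation = {i: [] for i in range(1, n_wis + 1)}
        for idx, queue in enumerate(sorted(queues)):
            allocation[(idx % n_wis) + 1].append(queue)
        return allocation

    queues = [chr(c) for c in range(ord("A"), ord("O") + 1)]   # 15 queues, A-O
    for wis, assigned in round_robin(queues, 5).items():
        print(f"WIS {wis}: {assigned}")
    # WIS 1: ['A', 'F', 'K'] ... WIS 5: ['E', 'J', 'O']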
On-Demand Queue Allocation
This method allocates work queues alphabetically, but only to the first available WIS. If a WIS is allocated a large work queue, it takes some time before it is ready to accept another queue, so other WIS processes with smaller queues can accept more queues.
The effect of using on-demand allocation is a more even distribution of work. However, the allocation is based upon the initial loading size of each queue, which may not be representative of the number of requests made to that work queue.
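On-demand allocation can be approximated as a greedy scheme: each queue, taken alphabetically, goes to whichever WIS is next free to take one, which in effect favors the least-loaded WIS. A sketch, using initial queue size as the load measure (the sizes are invented example values):

    import heapq

    def on_demand(queue_sizes, n_wis):
        """Assign queues alphabetically to the first WIS free to take one,
        modeled here as the WIS with the least accumulated load."""
        heap = [(0, wis) for wis in range(1, n_wis + 1)]   # (load, WIS id)
        heapq.heapify(heap)
        allocation = {wis: [] for wis in range(1, n_wis + 1)}
        for queue in sorted(queue_sizes):
            load, wis = heapq.heappop(heap)
            allocation[wis].append(queue)
            heapq.heappush(heap, (load + queue_sizes[queue], wis))
        return allocation

    sizes = {"A": 9000, "B": 50, "C": 40, "D": 30, "E": 20, "F": 700}
    print(on_demand(sizes, 3))
    # The WIS that takes the large queue A ends up hosting fewer queues.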
Controlling the Assignment of Queues to WIS Processes
There are two additional methods you can use to customize the assignment process to better reflect your system requirements, and so optimize performance.
User queues and group queues frequently have different characteristics, in terms of the amount of load they carry. For example, if group queues are far more active than user queues on your system, you may want to give them higher priority for WIS allocation.
If you have certain queues that are very large or very busy, you may find it useful to dedicate specific WIS processes to handling only those queues (leaving the remaining queues to be dynamically assigned to the remaining WIS processes).
Refer to “Administering the Work Queue Server and Work Item Servers” in the TIBCO iProcess Engine: Administrator's Guide for more information.
RPC Pool Server
This process is responsible for handling RPC requests from a TIBCO iProcess Workspace to access and update data in the iProcess Engine instance.
A number of RPC pool servers can be created by the RPC Listener when the TIBCO iProcess Engine is started, and each pool server is responsible for a configured pool of TIBCO iProcess Workspace connections. If many TIBCO iProcess Workspaces log in in quick succession, you can pre-load a number of pool servers by defining the PRE_LOAD_POOL_SERVERS parameter in the SWDIR\etc\staffcfg file. TIBCO iProcess Workspaces can be allocated to pool servers using either a round-robin or a load-balanced method.
The RPC Pool servers are started by the RPC TCP Listener. The number of users that each pool server can support is configured using the MAX_USERS_PER_PROCESS parameter in the SWDIR\etc\staffcfg file.
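The two connection allocation methods can be sketched as follows. MAX_USERS_PER_PROCESS is the staffcfg parameter named above; everything else is an illustrative model, not the listener's actual implementation:

    import itertools

    MAX_USERS_PER_PROCESS = 50      # staffcfg parameter: capacity per pool server

    class PoolServer:
        def __init__(self, name):
            self.name, self.users = name, 0

    servers = [PoolServer(f"pool{i}") for i in range(3)]   # e.g. pre-loaded servers
    rotation = itertools.cycle(servers)

    def allocate_round_robin():
        # Take the next pool server in rotation that still has spare capacity.
        for _ in range(len(servers)):
            server = next(rotation)
            if server.users < MAX_USERS_PER_PROCESS:
                server.users += 1
                return server
        return None     # every pre-loaded server is full in this model

    def allocate_load_balanced():
        # Pick the pool server with the fewest connected Workspaces.
        server = min(servers, key=lambda s: s.users)
        if server.users < MAX_USERS_PER_PROCESS:
            server.users += 1
            return server
        return None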
RPC Listeners
The RPC Listeners are started by the Process Sentinels and are the first TIBCO iProcess Engine foreground server processes to be started. A listener is started for both TCP and UDP protocols. The RPC number for the Listener process is the same for TCP and UDP and is a start-up configuration parameter for the TIBCO iProcess Engine. Line 11 of the SWDIR\swdefs file defines this RPC number.
This RPC number is the published initial connection port for a TIBCO iProcess Engine for use by any client applications built using a TIBCO iProcess Workspace interface (e.g. the iProcess Applications Layer).
Work Item Server
The WIS handles the listing of work items in the queues. The process executable is SWDIR\etc\wisrpc. Multiple WIS processes can run; the number is controlled by the Process Sentinels.
A WIS process is one instance of a Work Item Server and caches work queues that the WQS has allocated to it. Every iProcess work queue is hosted by a WIS process and each WIS can process more than one work queue. The iProcess Engine runs two WIS processes by default but you can increase or decrease this number using the SWDIR\util\swadm utility.
The WIS processes are multi-threaded. Different threads are used to perform different tasks: for example, responding to RPC requests, caching queue information, filtering queues, or updating CDQP information.
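As a rough illustration of that division of labor (thread roles only; this assumes nothing about the real WIS internals):

    import queue, threading, time

    rpc_requests = queue.Queue()

    def rpc_worker():
        # One thread responds to incoming RPC requests.
        while True:
            request = rpc_requests.get()
            print("answering", request)
            rpc_requests.task_done()

    def cache_refresher():
        # Another thread refreshes cached queue information in the background.
        while True:
            time.sleep(5)
            print("refreshing queue cache")

    threading.Thread(target=rpc_worker, daemon=True).start()
    threading.Thread(target=cache_refresher, daemon=True).start()

    rpc_requests.put("list work items in queue 'alice'")
    rpc_requests.join()    # wait until the worker has answered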
There are many work queue performance issues related to the number of WIS processes you have, how many work queues they process, how many threads they use for different tasks and so on. Refer to “Administering the Work Queue Server and Work Item Servers” in the TIBCO iProcess Engine: Administrator's Guide for more information.
The WIS processes maintain a cache of the information they serve (the users' work queues). This cache is synchronized with the same information stored in the user's or group's work queue (the staffo database table). You can view the information in this table using SWDIR\util\plist -m.
WIS Mbox Daemon
This process (WISMBD) operates between the WIS Mbox sets and the WISRPC processes, forwarding messages from one to the other. The executable is SWDIR\etc\wismbd.
The WISMBD process is configured to read from a configurable number of physical WIS Mbox sets in a round-robin manner, delivering the messages to the appropriate WIS process.
When the WIS Mbox sets are empty, the WISMBD will sleep for a configurable amount of time. This is defined by the EMPTYMBOXSLEEP, EMPTYMBOXSLEEP_INC and EMPTYMBOXSLEEP_MAX process attributes. Refer to “Administering Process Attributes” in the TIBCO iProcess Engine: Administrator's Guide.
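Taken together, these attributes describe an escalating backoff. A sketch of the polling loop they imply, with placeholder read() and deliver() callables standing in for the Mbox read and the forwarding RPC (the attribute values shown are examples, not defaults):

    import time

    EMPTYMBOXSLEEP = 1        # initial sleep (seconds) when every Mbox set is empty
    EMPTYMBOXSLEEP_INC = 1    # added to the sleep after each further empty pass
    EMPTYMBOXSLEEP_MAX = 10   # upper bound on the sleep interval

    def poll_mbox_sets(mbox_sets, deliver):
        sleep = EMPTYMBOXSLEEP
        while True:
            delivered = False
            for mbox in mbox_sets:            # round-robin over the Mbox sets
                message = mbox.read()         # placeholder for the Mbox read
                if message is not None:
                    deliver(message)          # placeholder for the forwarding RPC
                    delivered = True
            if delivered:
                sleep = EMPTYMBOXSLEEP        # reset the backoff after real work
            else:
                time.sleep(sleep)             # everything empty: back off
                sleep = min(sleep + EMPTYMBOXSLEEP_INC, EMPTYMBOXSLEEP_MAX)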
The WISMBD sends synchronous RPC requests to the WIS that maintains the work queue to which the message is addressed.
The WISMBD is initially set to read from the following Mbox sets:
