
The WIS Process
The Work Item Server (WIS) process handles the listing of work items in user and group queues. Each WIS process is allocated one or more queues to handle by the WQS process and responds to client RPC requests to process work items held in these queues.
You can use the swadm add_process and delete_process commands to change the number of WIS processes on your system according to your requirements. See Using SWDIR\util\swadm to Administer Server Processes for more information about how to use these commands.
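For example, to run an additional WIS process, you might use a command of the following form. This is a sketch only: the argument order shown (machine ID, then process name) is an assumption, so check the swadm chapter referenced above for the definitive syntax.

swadm add_process 1 WIS

where 1 is assumed to be the ID of the machine that is to host the new process.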
The WIS process is multi-threaded, allowing it to perform multiple tasks simultaneously. Different threads are used to:
respond to RPC requests from client applications (see Configuring WIS RPC Request Processing).
filter work queues (see Configuring How Work Queues are Filtered).
check work queues for expired deadlines, priority escalations, redirections, and new or purged work items (see Configuring Queue Updates).
cache the information that the WIS process maintains about each work queue that it is handling, allowing the WIS processes to respond quickly to RPC requests from client applications (see Configuring When WIS Processes Cache Their Queues).
update CDQP definitions for the work items in its queues (see Configuring CDQP Updates).
Monitoring the WIS Processes
You can use the SWDIR\util\plist -w command to monitor the operation of the WIS processes. TIBCO recommends that you do this regularly.
The format of the SWDIR\util\plist -w command is:
plist -w[V][v] [WIS]
where:
V can be used to display additional information (the LastCacheTime and CDQPVer columns)
v can be used to display additional information (the Version, NewVers, DelVers, ExpVers, UrgVers, and QParamV columns)
WIS is the number of a specific WIS process, and can be used to display details only for that WIS process. If this parameter is omitted, the command displays details for all the WIS processes.
Use the plist -w command to view detailed information about the WIS processes, such as the number of items in each queue, whether a queue is disabled, and the number of new items in each queue.
Use the plist -wVv command to view all the additional information that is returned by the plist -wV and the plist -wv commands.
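Combining these options gives, for example:

plist -w       displays the standard columns for all WIS processes
plist -wV 1    adds the LastCacheTime and CDQPVer columns, for WIS process 1 only
plist -wVv     displays the standard columns plus all the additional columns, for all WIS processes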
For example (using plist -wV):

 
WIS QueueName    Flags     #Items #New   #Dead  #Urgent  LastCacheTime(ms)  CDQPVer
-----------------------------------------------------------------------------------
1   sblanch      -----NM   3000   3000   0      0        766                -1
1   steveb       -------   0      0      0      0        11                 -1
1   swadmin      -----NM   2      2      0      0        29                 -1
1   swgrp0000    --G----   0      0      0      0        12                 -1
1   swgrp0001    --G----   0      0      0      0        11                 -1
1   swgrp0002    --G----   0      0      0      0        11                 -1
1   swgrp0003    --G----   0      0      0      0        -1                 -1

 
The plist -w[V][v] [WIS] command displays the following information:

WIS - The number of this WIS process instance.
QueueName - The name of the work queue.
Flags - Flags that describe the status of the queue:
  D = The queue is disabled (this would normally be when the system has just been started and the queues have not yet been allocated to a WIS process).
  U = There are urgent items in this queue.
  G = This is a group queue.
  T = This is a test queue.
  D = There are items in this queue with deadlines set.
  N = There is new mail in this queue.
  M = There is mail in this queue (i.e. it is not empty).
#Items - The number of work items in this queue.
#New - The number of new work items in this queue.
#Dead - The number of work items in this queue that have deadlines set.
#Urgent - The number of urgent work items in this queue.
LastCacheTime(ms) - Displayed if the -V option is used. The number of milliseconds that the WIS process took to cache this work queue. Note that:
  The time shown is the time taken when the queue was last cached (which could be either when the WIS process was started or when the queue was first accessed). The number of items in the queue at that time may have been different from the number of items currently in the queue as shown in the #Items column.
  A value of -1 indicates that the queue has not yet been cached.
CDQPVer - Displayed if the -V option is used.
Version, NewVers, DelVers, ExpVers, UrgVers, QParamV - Displayed if the -v option is used.
Configuring WIS RPC Request Processing
To process RPC requests, both the WIS and WQS processes access a pool of “worker” threads that is provided by a multi-threaded RPC server shared library (SWRPCMTS). You can use the RPC_SVR_NUM_THREADS process attribute to define the number of threads that are available in the SWRPCMTS library to process RPC requests.
You can adjust the value of this process attribute to balance the WQS and WIS processes' response times when processing RPC requests against available CPU capacity. Increasing the number of threads improves the throughput of client RPC requests, but at the cost of increased CPU usage.
The RPC processing threads perform their work independently of and concurrently with the queue update thread. In pre-10.4 versions of iProcess Engine, where the WIS process was single-threaded, the WIS process had to switch between processing RPC requests and updating work queues.
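For example, the following sketch raises the number of RPC processing threads for a WIS process. The set_attribute argument order shown (process name, process instance, attribute, value) is an assumption; see Using SWDIR\util\swadm to Administer Server Processes for the definitive syntax.

swadm set_attribute WIS 1 RPC_SVR_NUM_THREADS 8

With this setting, WIS process instance 1 could service up to 8 client RPC requests concurrently, at the cost of the CPU used by the additional threads.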
Configuring How Work Queues are Filtered
When filter criteria are applied to a work queue - for example, to show only work items started by a particular user - the WIS process must filter the work queue to find the correct items to display.
By default, the WIS process uses the thread that is processing an RPC request to perform any work queue filtering required by that RPC request. This is perfectly adequate if the queues are small and the filter criteria are simple. However, the time taken to filter a queue can increase significantly as the number of work items in the queue grows and/or the complexity of the filter criteria increases. This can result in a perceptible delay for the user viewing the work queue.
For example, filtering a queue that contains over 100000 work items using filter criteria that include CDQPs can take over 6 seconds. (CPU availability on the machine is also a factor in how long the filtering operation takes.)
To cope with this situation, the WIS process contains a pool of queue filtering threads that can be used to filter work queues more quickly. The following process attributes allow you to configure how and when these threads are used:
WIS_FILTER_THREAD_BOUNDARIES allows you to define when a work queue should be split into multiple "blocks" of work for filtering purposes. You can define up to 4 threshold values for the number of work items in a queue. As each threshold is passed, an additional block of filtering work is created, which will be handled by the first available queue filtering thread.
WIS_FILTER_THREAD_POOL_SIZE allows you to define the number of queue filtering threads in the pool. These threads are used to process all additional filtering blocks generated by the WIS_FILTER_THREAD_BOUNDARIES thresholds. Increasing the number of threads in this pool allows more blocks of filtering work to be processed in parallel, but at the cost of increasing the CPU usage of the WIS process.
For example, consider the following scenario:
WIS_FILTER_THREAD_BOUNDARIES has been set to create additional filtering blocks when a queue contains 100000 and 150000 work items.
The queue contains 180000 work items.
The WIS process receives 5 RPC requests to filter the queue.
Each RPC request on the queue generates 2 additional filtering blocks (each of 60000 work items). The first filtering block is still handled by the RPC processing thread that is handling the RPC request.
The 5 RPC requests therefore generate 10 blocks of additional filtering work to be processed by the queue filtering threads. If WIS_FILTER_THREAD_POOL_SIZE is set to:
10 or more, each block can be assigned to its own queue filtering thread, so all 10 blocks are processed in parallel.
5, the first 5 blocks are assigned to queue filtering threads immediately; the remaining blocks wait until a thread becomes free.
When altering the WIS_FILTER_THREAD_BOUNDARIES, WIS_FILTER_THREAD_POOL_SIZE or RPC_SVR_CONTROL process attributes, bear in mind that the larger the number of work items in a queue, the more queue filtering threads a single RPC request uses; and the more RPC processing threads there are, the more requests can be generating filtering work at the same time.
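As an illustrative sketch only, the scenario above might be configured as follows. The comma-separated value format for WIS_FILTER_THREAD_BOUNDARIES and the use of instance 0 to mean every WIS process are assumptions; check the process attribute documentation for your version.

swadm set_attribute WIS 0 WIS_FILTER_THREAD_BOUNDARIES 100000,150000
swadm set_attribute WIS 0 WIS_FILTER_THREAD_POOL_SIZE 10

A pool of 10 threads allows all of the additional filtering blocks in that scenario to run in parallel; a smaller pool trades filtering response time for lower CPU usage.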
Configuring Queue Updates
The queue update thread performs two functions:
It goes through all the queues handled by the WIS process and checks for expired deadlines, priority escalations, redirection work, new or purged work items and so on.
It calls the WQS process for a new queue to handle when required (i.e. when the WQS process has processed a MOVESYSINFO event and sent out an SE_WQSQUEUE_ADDED event to the WIS process).
The queue update thread performs updates for WIS_UPDATE_LENGTH seconds or until all queues have been processed, whichever comes first, and then sleeps for WIS_UPDATE_PERIOD seconds. If the thread has not gone through all the queues within the WIS_UPDATE_LENGTH time, it resumes from the point it reached on its previous update cycle.
The queue update thread performs its work independently of and concurrently with the RPC processing threads.
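For example, assuming the same swadm syntax as in the earlier sketches, the following commands would make the queue update thread work for up to 10 seconds per cycle and then sleep for 30 seconds (the values are illustrative only):

swadm set_attribute WIS 0 WIS_UPDATE_LENGTH 10
swadm set_attribute WIS 0 WIS_UPDATE_PERIOD 30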
Configuring When WIS Processes Cache Their Queues
The WQS/WIS processes maintain an in-memory cache of the information that each WIS process contains about each work queue that it is handling. Caching this information allows the WIS processes to respond quickly to RPC requests from client applications.
However, the amount of time that a WIS process takes to start up is heavily influenced by the number of queues that it has to cache, the number of work items in each queue, the number of CDQPs defined for each queue, and the general load on the machine.
You can monitor how long a WIS process takes to start up by using the plist -wV command in the SWDIR\util directory (see Monitoring the WIS Processes). The LastCacheTime column shows the number of milliseconds that the WIS process took to cache each queue when it was last cached.
You can tailor this behavior to suit your particular requirements by configuring work queues to be cached either:
when they are first handled by a WIS process. This will be either when iProcess Engine starts up or, for queues that are added while the system is running, after a MoveSysInfo event request.
or
when they are first accessed by a client application.
You control which queues are cached when they are first handled by a WIS process by using a combination of the WISCACHE queue attribute and the WIS_CACHE_THRESHOLD or RESTART_WIS_CACHE_THRESHOLD process attributes. When the WIS process first handles a queue, it checks the value of the queue’s WISCACHE attribute:
If WISCACHE is set to YES, the WIS process caches the queue (irrespective of how many work items the queue contains).
If WISCACHE has not been created, or has not been set, the WIS process only caches the queue if the queue contains a number of work items that equals or exceeds the value of the WIS_CACHE_THRESHOLD or RESTART_WIS_CACHE_THRESHOLD process attributes.
When the WIS process starts up, it reads the number of work items in each work queue from the total_items column in the wqs_index database table. This table is populated from the contents of the WQS/WIS shared memory, which is written to the database every WQS_PERSIST_SHMEM seconds.
Any queue that is not cached at this point will be cached when it is first accessed by a client application.
Note that:
Queues are cached by a pool of threads in the WIS process. You can configure the number of threads in this pool by using the WIS_CACHE_POOL_SIZE process attribute (see the example after these notes).
When an RPC client application makes an RPC call to a work queue that has not already been cached, the WIS process immediately begins caching it. If the value of the WIS_CACHE_WAIT_TIME process attribute is reached and the work queue has still not been cached, the WIS process returns an ER_CACHING error to the client application.
If the RPC client application is a TIBCO iProcess Workspace (Windows) session, the user will see the following message in the right-hand pane of the Work Queue Manager, instead of the expected list of work items:
The Work Item Server (WIS) is fetching the work items for this queue. Please wait...
The WISMBD process also makes RPC calls to WIS processes to pass instructions from the BG processes. If the WISMBD process receives an ER_CACHING error from the WIS process, it retries the connection a number of times. If the retries all fail, it requeues the message and writes a message (with ID 1984) to the sw_warn file, which is located in the SWDIR\logs directory.
See TIBCO iProcess Engine System Messages Guide for more information about this message.
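The following sketch tunes these caching attributes together. It assumes the same swadm syntax as in the earlier sketches and that WIS_CACHE_WAIT_TIME is expressed in seconds; the values are illustrative only.

swadm set_attribute WIS 0 WIS_CACHE_THRESHOLD 10000
swadm set_attribute WIS 0 WIS_CACHE_POOL_SIZE 4
swadm set_attribute WIS 0 WIS_CACHE_WAIT_TIME 10

With these values, queues holding 10000 or more work items (and any queue whose WISCACHE attribute is YES) are cached as soon as the WIS process handles them, a pool of 4 threads performs the caching, and a client that accesses a queue that is still being cached receives an ER_CACHING error after waiting 10 seconds.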
Configuring more work queues to be cached when they are first accessed obviously improves the startup time for the WIS processes, but the potential cost is that users may have to wait to access their queues while they are being cached.
Setting the WISCACHE Attribute for a Queue
The WISCACHE queue attribute does not exist by default. If you wish to use it, you must first create it and then assign a value to it for each queue that you want it to apply to. To do this:
1. Open User Manager in TIBCO iProcess Workspace (Windows).
2. Define a new attribute called WISCACHE. This should have a Type of Text, with a Length of 4.
See “Adding a New Attribute” in TIBCO iProcess Workspace (Windows) Manager's Guide for more information.
3. Assign a value of YES to WISCACHE for each queue that you want to be cached when the WIS process first handles it (irrespective of how many work items the queue contains).
All other queues (for which WISCACHE is not set) will be cached either when the WIS process first handles them or when they are first accessed by a client application, depending on the value of the WIS_CACHE_THRESHOLD process attribute.
See “Setting User Values for an Attribute” in TIBCO iProcess Workspace (Windows) Manager's Guide for more information.
4. Save your changes, exit from User Manager, and perform a MoveSysInfo to register your changes on iProcess Engine.
See “Moving System Information” in TIBCO iProcess Workspace (Windows) Manager's Guide for more information.
Configuring CDQP Updates
CDQPs (Case Data Queue Parameters) allow values from case data to be used by client applications to sort, display and filter work item lists, and to find specific work items.
When the WIS process starts up it caches all the CDQP definitions that are used by the queues it is handling, and uses the cached values when displaying CDQPs in its work queues.
The WIS process obtains the values of fields that are defined as CDQPs from the pack_data database table.
You can change existing CDQP definitions or create new ones by using the swutil QINFO command. By default, you then have to restart iProcess Engine to allow the WIS process to pick up the changed definitions and update its work queues with them.
However, you can dynamically pick up changes to CDQP definitions without having to restart iProcess Engine, by using the PUBLISH parameter with the QINFO command. This publishes an event that signals that updated CDQP definitions are available.
When the WIS process detects this event its CDQP update thread wakes up and updates the CDQP definitions for all work items in its queues. Work items are updated in batches, the size of which is determined by the value of the WIS_CDQP_DATA_RECACHE_BATCH process attribute.
See "Case Data Queue Parameters" in TIBCO iProcess swutil and swbatch Reference Guide for more information about CDQPs and the QINFO command.
