Copyright © TIBCO Software Inc. All Rights Reserved



Spin Verification and Cleanup
Spin verification takes you through a forced journal spin to familiarize you with TIBCO Object Service Broker journal processing and merging. When running TIBCO Object Service Broker in production, the Data Object Broker spins the journals automatically.
Prerequisite Tasks
Before verifying spin processing, be sure to back up the system in full by running member BACKUP as part of the initial installation. If you have not already done so, do it now. Spin verification requires a full system backup for the BKUPCON job.
Following are two prerequisite tasks.
Installation of the Initial TIBCO Object Service Broker Batch Server
A batch server facility in TIBCO Object Service Broker manages job queues for TIBCO Object Service Broker batch jobs that are initiated by the SCHEDULE statement. Install the initial batch server by following the procedure described in this section.
For details on operating and managing batch servers and queues, refer to Chapter 14, Managing Batch Processing.
Step 1: Customize the @BATCH_JCL Table
Do the following:
1.
2.
From the TIBCO Object Service Broker command line, invoke the TED tool and edit table @BATCH_JCL, instance (@DEFAULT,HURON), by typing the following on the command line and pressing Enter:
ex TED('@BATCH_JCL(@DEFAULT,HURON)')
For table instance @BATCH_JCL(@DEFAULT,HURON), change all DD statement data-set names from HURON.LOAD to HLQNONV.INSTVER.AUTH, where HLQNONV and INSTVER are OSEMOD installation variables (for example, OSB.R60.TST.AUTH). The data-set name must be in uppercase; a before-and-after illustration appears after this list. For information on the TED tool, see the TIBCO Object Service Broker Shareable Tools manual.
3.
From the TIBCO Object Service Broker command line, invoke the TED tool and edit table @BATCH_JCL, instance (@DEFAULT,JOBCARD), by typing the following on the command line and pressing Enter:
ex TED('@BATCH_JCL(@DEFAULT,JOBCARD)')
For table instance @BATCH_JCL(@DEFAULT,JOBCARD), change the JOB card ACCOUNT# information to the appropriate values for your site, as illustrated in the sketch after this list.
When updating the JCL images in the @BATCH_JCL table, ensure that the individual JCL statements are shorter than 71 bytes. All standard JCL rules apply. After you save the JCL, a warning message states that data is to be truncated. The truncated data is in columns 73 through 76, which are not used for coding JCL. This message is expected and you can ignore it.
4.
5.
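The following sketch illustrates the two kinds of changes described in the steps above. The DD name STEPLIB and the JOB card operands are placeholders chosen for illustration, not the statements distributed in @BATCH_JCL; take the actual images from your installed table.
A DD statement data-set name change, using the example AUTH data set from the earlier step:
Before:
//STEPLIB  DD DSN=HURON.LOAD,DISP=SHR
After:
//STEPLIB  DD DSN=OSB.R60.TST.AUTH,DISP=SHR
A JOB card with site-specific accounting information (the job name, account number, programmer name, and classes are hypothetical):
//OSBBATCH JOB (ACCT1234),'OSB BATCH',CLASS=A,MSGCLASS=X,
//             MSGLEVEL=(1,1),NOTIFY=&SYSUID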
Step 2: Define an Initial Batch Queue
Do the following:
1.
2.
Execute the BATCH tool from the EX execute rule menu option or from the command line and then press Enter, as follows:
COMMAND ==> EX BATCH
The BATCH submission menu is displayed. For details on the BATCH tool, see the TIBCO Object Service Broker Shareable Tools manual.
3.
The Batch Submission Facility Queue Definition screen is displayed.
4.
Specify one of the following queue names:
– Your site value for the OSEMOD installation variable $BATQNM$
– ADMIN, which is the default queue definition
5.
On the next screen, Queue Definition, type your site-selected parameters in at least the following two fields, and then press PF3 to save your changes.
Wait Duration: The time, in seconds, that the batch server waits if the queue becomes empty; for example, 3600.
Wait Limit: The number of times the batch server goes into a wait state before shutting down; for example, 8. With these example values, an idle batch server waits through eight 3600-second intervals, roughly eight hours, before shutting down.
6.
Step 3: Prepare the Initial Batch Server
Sample JCL in member BATSRVL1 of the JCL data set invokes the batch server.
The OSEMOD variable $BATSRI$, which defaults to XBATCHL1, defines the member name in the CNTL data set that holds the default batch server's initialization parameters. Customize members BATSRVL1 and XBATCHL1 with the OSEMOD ISPF edit macro and then edit the startup parameters, as described in the following subsections.
Start-up Parameters
Member XBATCHL1 in CNTL contains sample startup parameters. Note these two rules:
Code parameter statements as PARM=VALUE with no spaces on either side of the equal (=) sign.
Here are the parameters:
The pattern for the Execution Environment communications identifier adopted by a TIBCO Object Service Broker server for communications. If not specified, the value defaults to $TDS$.
The Data Object Broker login ID for the batch server. The OSEMOD installation default is BATSRVL1.
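As a minimal illustration of the PARM=VALUE rule above (KEYWORD is a placeholder, not an actual batch server parameter; use the keywords supplied in member XBATCHL1):
KEYWORD=VALUE
is accepted, whereas
KEYWORD = VALUE
is not, because of the spaces around the equal sign.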
Batch Server
You can run the batch server as a batch job or as a started task. For a started task, be sure to first complete the z/OS security setup. The batch server submits jobs queued to it through an internal reader. Place BATSRVL1 in your system PROCLIB, for example, SYS1.PROCLIB.
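Assuming BATSRVL1 has been copied to a system PROCLIB and the z/OS security setup is complete, you start the started-task form with the standard z/OS START command from an operator console; for the batch-job form, you simply submit member BATSRVL1 from the JCL data set. For example:
S BATSRVL1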
Here are the related references:
For information on running batch jobs, see the TIBCO Object Service Broker Programming in Rules manual.
For more details on running TIBCO Object Service Broker batch applications, see the TIBCO Object Service Broker for z/OS External Environments manual.
Creation of Education Workshop Objects
Create education workshop objects as follows:
Step 1: Log In to TIBCO Object Service Broker
To create a workshop environment, do the following:
1.
Customize the TSO Execution Environment EXEC, if you have not already done so. For details, see Step 3: Customize TSO Execution Environment EXEC.
2.
Log in to TIBCO Object Service Broker with the USER EXEC as user SYSADMIN, whose default password is SYSADMIN. For example:
TSO EX 'HLQNONV.INSTVER.CLIST(USER)' 'U(SYSADMIN) P(SYSADMIN)'
Step 2: Create User IDs
Create TIBCO Object Service Broker user IDs in the Security interface. For details on the procedure, see the TIBCO Object Service Broker Managing Security manual.
Step 3: Create Table Instances
From the administrator workbench, execute the SETUP_EDUC(userid) rule, which creates instances in two parameterized tables: @EMPLOYEE and @DEPARTMENT. The argument userid is the parameter value for each table instance. For details on creating tables, see the TIBCO Object Service Broker Managing Data manual.
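Following the EX pattern used earlier for the TED and BATCH tools, a call for a hypothetical user ID JSMITH might look like the following sketch; substitute a TIBCO Object Service Broker user ID that you created in Step 2: Create User IDs:
COMMAND ==> EX SETUP_EDUC('JSMITH')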
Do the following:
1.
2.
Assumptions
Table additions and updates are recorded as journal data after the Data Object Broker has performed a checkpoint. For the purpose of this activity, it is assumed that you must spin your journals and that you are using the continuous backup process. With that process, you merge spun journal data with the current backup to maintain a current system backup.
Spin Verification
TIBCO Object Service Broker merges journal accumulation data after every nth journal spin, where n is set to 2 during the initial installation by the OSEMOD variable $SPINLIM$. That value determines the number of times members SPIN01 and SPIN02 are submitted before member SPINMRG merges journal data into a single data set.
If you create the spins as jobs, one member exists for each journal up to a limit of 255. However, if you create them as started tasks, only one procedure applies, with the journal defined by JRNLDSN.
To verify the spin, follow these steps:
Step 1: Spin the Active Journal
While the Data Object Broker is running, it always has an active journal data set. A journal spin occurs when the data set becomes full or when an operator requests one; the latter is called a forced spin.
To force a journal spin, run this z/OS operator command from a z/OS operator console:
MODIFY jobname,SPINSUBMIT=I
where jobname is the name of the Data Object Broker for which a journal spin is forced.
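For example, if the Data Object Broker runs under the hypothetical job name DOBPROD, either of the following equivalent commands forces a spin (F is the standard z/OS abbreviation for MODIFY):
MODIFY DOBPROD,SPINSUBMIT=I
F DOBPROD,SPINSUBMIT=I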
Next, the Data Object Broker acknowledges the spin request. TIBCO Object Service Broker then starts a SPINxx job to copy the data from the active journal to an archive data set (SPINOUT). For your initial verification of the installation spin, this data contains page images produced during any postinstallation tasks, such as creation of the education workshop tables, batch server installation, and so on.
Reexecute the prerequisite tasks, as described in Prerequisite Tasks, to write data to the other journal. Then reenter the z/OS operator command to submit and run the other SPINxx job.
Spin jobs result in the following:
The first step in both SPIN01 and SPIN02 produces a return code of 0.
The third step in each job, TESTSPN, returns the number of journal generation data groups (GDGs) in existence.
Since the spin limit is 2 and you performed two journal spins, member SPINMRG is submitted by step SPAWN. SPINMRG merges journal accumulation GDGs into one manageable output data set and produces a return code of 0.
Step 2: Test the Continuous Backup JCL
The BKUPCON member in the JCL data set is the continuous backup job. It sorts the page images, merges them with the journal accumulation data, and then combines them with the latest backup to create a current backup.
To test the BKUPCON JCL, submit the job and ensure that it ends with RC=0.
Now that you have verified the journal spin, consider resetting the $SPINLIM$ value to something more appropriate for your site with the following steps:
1.
Delete and redefine the GDG base with the LIMIT parameter, specifying the number of generations required.
2.
Update the first condition test on step SPAWN of SPIN01 and SPIN02, as shown in the sketch after this list. With $SPINLIM$ initially set to 2, the condition test is coded in the JCL as (2,GT,TESTSPN). Change the 2 to the new spin limit. The maximum number of journal spin GDG data sets is defined by the OSEMOD variable $JSRGDG$ in STEP6 of member S6A3ALOC in the OSB.JOBS data set.
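The following is a minimal sketch of these two changes, not the distributed JCL. The GDG base name OSB.JRNLACUM and the limit of 7 are assumptions for illustration; take the actual data-set names and the SPAWN step coding from your installed S6A3ALOC, SPIN01, and SPIN02 members. Standard IDCAMS DEFINE GENERATIONDATAGROUP syntax sets the number of generations with the LIMIT parameter:
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DELETE OSB.JRNLACUM GENERATIONDATAGROUP FORCE
  DEFINE GENERATIONDATAGROUP -
         (NAME(OSB.JRNLACUM) LIMIT(7) NOEMPTY SCRATCH)
/*
In SPIN01 and SPIN02, the first condition test on step SPAWN then changes from
COND=(2,GT,TESTSPN)
to, for a spin limit of 7,
COND=(7,GT,TESTSPN)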
Cleanup
This section describes the cleanup process.
Backup Procedures
Create a procedure that suits your site requirements for merging the journal data into the current backup. For details on planning production backup and recovery procedures, see the TIBCO Object Service Broker for z/OS Managing Backup and Recovery manual.
The backup procedures are critical to the availability of your TIBCO Object Service Broker system. Involve the system administrator, operations staff, systems programmer, and other appropriate team members when creating and documenting those procedures. Afterwards, test them.
Backup Processing
The BACKUP member in the JCL data set serves as sample JCL to back up all page data sets within a segment. It produces a copy of the TIBCO Object Service Broker page data sets that you can restore with the S6BTLRPS utility. You also use S6BTLRPS before relocating page data sets. For details, see the TIBCO Object Service Broker for z/OS Utilities manual.
Member BACKUP uses the GDG created during installation for its output data set. That data group is defined in STEP6 of member S6A3ALOC in the OSB.JOBS data set.
You cannot back up or relocate page data sets with IDCAMS REPRO, because doing so destroys the internal structure of the page data sets.
Dump Processing
Members OSRUNSTC and OSRUN in the Data Object Broker JCL contain a SYSMDUMP DD statement. That data set is part of a GDG defined in STEP6 of member S6A3ALOC in the OSB.JOBS data set, with a limit of 5 set by the OSEMOD variable $DMPGDG$.
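As an illustration only, a SYSMDUMP DD statement that writes each dump to a new generation of such a GDG might look like the following; the data set name and allocation values are assumptions, not the values coded in OSRUNSTC or OSRUN:
//SYSMDUMP DD DSN=OSB.DOB.SYSMDUMP(+1),DISP=(NEW,CATLG,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(100,100),RLSE),
//            DCB=(RECFM=FB,LRECL=4160,BLKSIZE=4160)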
