Last April I wrote that I was going to try to go back to a once-a-quarter schedule. That didn’t happen.

So my resolution for this year is to get back to having an update on April 15 (pre-ASAP), July 15 (post-ASAP), and October 15 (focus on the future). 2023 should be the year of the 10.1 release, and right now the users of z Workload Scheduler fall roughly into three groups:

  • Running 9.3 or earlier, thinking about getting an extension of 9.3 support rather than migrating
  • On 9.5 or 10.1 and not thinking about future releases
  • On 10.1 and thinking seriously about future releases

If you are on 9.3 or earlier, my recommendation would be to skip 9.5 and go to 10.1, or even wait for the release after 10.1, although a direct one-step migration from 9.3 or earlier to a release after 10.1 may not be possible. (Two- or three-step migrations will be possible.)

Whether you are looking forward to a future release or dreading it depends largely upon whether you are still using any of the functionality that was announced as being removed after the 10.1 release:

  • Open Services for Lifecycle Collaboration (OSLC) integration
  • IBM Z Workload Scheduler Control Language (OCL)
  • Batch loader
  • Fault-tolerant agents (E2E / FTA)
  • Direct access storage device (DASD) connection method between controller and trackers

If you are using WAPL, z-centric agents, or TCP/IP-, XCF-, or VTAM-connected trackers, you are positioned to have an easier time getting to the 10.1+ level of zWS.

For z/OS, 2.4 and 2.5 are currently supported, and 2.6 should be available in 2024. zWS 10.1 has z/OS 2.4 as a prerequisite (mostly due to TLS 1.3 support), but all versions of zWS will run on z/OS 2.4 or 2.5 with no additional PTFs required.


There have not been any recent FLASHES; however, the following would have been issued as a flash if more customers were running the 10.1 release:

Additional ACTION HOLD items for PH45777 ( SPE10101 ) for z Workload Scheduler 10.1

If you don’t receive notification for flashes, check your settings at


A new technote for 10.1 has been added:

Need all HIPER maintenance for z Workload Scheduler 10.1.0 ( COMPID 5697WSZ01 )

The technotes for 9.3 and 9.5 are still being maintained:

(Need all HIPER maintenance for z Workload Scheduler 9.5.0 ( COMPID 5697WSZ01))

(Need all HIPER maintenance for IWS z/OS 9.3.0 ( COMPID 5697WSZ01 ))

The most important new HIPER is:

9.3 PTF UI81464

9.5 PTF UI81465

10.1 PTF UI82881
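To check whether HIPER PTFs like these are already applied, one common approach is SMP/E's REPORT ERRSYSMODS command, which lists held SYSMODs (including HIPERs) that are not yet resolved in a zone. The sketch below is not from this newsletter; the CSI data set name and target zone name are placeholders you would replace with your own.

```jcl
//CHKHIPER JOB (ACCT),'CHECK HIPERS',CLASS=A,MSGCLASS=H
//* Sketch only: ZWS.GLOBAL.CSI and TZONE are placeholder names.
//REPORT   EXEC PGM=GIMSMP
//SMPCSI   DD  DSN=ZWS.GLOBAL.CSI,DISP=SHR
//SMPCNTL  DD  *
  SET BOUNDARY(GLOBAL).
  REPORT ERRSYSMODS ZONES(TZONE).
/*
```

The report flags any SYSMODs in the target zone that are affected by unresolved error holds, which is a quick way to confirm the HIPER coverage the technotes above describe.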

DWC on z/OS (zliberty)

Last time I mentioned EJBROLE for assigning roles to DWC users/groups; however, I have since found out that this is not currently possible (see idea ZWS-I-153 below).

Cases for zWS have a category associated with them, and when I checked a few days ago, DWC on z/OS was the leading category. This is good, since running the DWC on z/OS is becoming more popular. There are still some issues with zDWC that we are working on.

The most recent DWC is 10.1 FP1, which requires Liberty.

Some issues for the DWC currently being worked on include:

  • Upgrading DWCZOS to a new fix pack or release level (instead of doing a new install)
  • Setting ROLES via XML files (since EJBROLE will not work)
  • Does installation require ROOT (UID=0) or DB2 SYSADM access?
  • JCL samples in addition to EQQINDWC (like started-task JCL)
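On the "ROLES via XML files" item: the usual Liberty mechanism for this is an application security-role binding in the server configuration. The snippet below is only a generic illustration of that mechanism, under the assumption that the application id, role, user, and group names shown apply; check the zDWC technotes for the actual file and role names your level uses.

```xml
<!-- Hypothetical sketch: binding users/groups to DWC roles in a
     Liberty configuration file, since EJBROLE mapping is not available.
     Application, role, user, and group names here are examples only. -->
<server>
  <application id="DWC" location="DWC.ear" type="ear">
    <application-bnd>
      <security-role name="TWSWEBUIAdministrator">
        <group name="DWCADMIN"/>
      </security-role>
      <security-role name="TWSWEBUIOperator">
        <user name="OPER1"/>
        <group name="DWCOPER"/>
      </security-role>
    </application-bnd>
  </application>
</server>
```

This is the same role-to-identity mapping that EJBROLE profiles would otherwise provide, just expressed statically in the server configuration rather than in RACF.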


As an example of the documentation issues, DOC APAR PH47477 refers to both the zWS Planning and Installation Guide and the IBM Workload Scheduler Planning and Installation manual. Other information is currently only in technotes. Much or all of this should be addressed in time for ASAP University in May (WKLD in Cincinnati).


Some new technotes which you may find useful:

Additional ACTION HOLD items for PH45777 ( SPE10101 ) for z Workload Scheduler 10.1

Installing a DWC on z/OS with a DB2 database (Release 9.5 Fix Pack 6)

zWS migration : Avoid loss of triggering records (job name or dataset)

DWC test connection message: AWSJCO005E WebSphere Application Server has given the following error: other: corbaname evaluation error:recv() failed: I/O error during read: Connection reset.

Configuring a Dynamic Domain Manager ( DDM ) for a z Workload Scheduler controller

Ideas (Formerly RFE)

These are the entries from December and January at

Ability to auto kill job once deadline is reached [ZWS-I-155]

Ability to send job late message to the error panel [ZWS-I-154]

Interface Dynamic Workload Console with EJBROLE RACF profile to define roles at login. [ZWS-I-153]

NOPs in AD should be ‘not in CP’ selectable by customer [ZWS-I-152]

Support Application groups in LTP [ZWS-I-151]

Time-Dependency-Flag in LTP [ZWS-I-150]

Modify Status Command gives no response to the Requester [ZWS-I-149]

Modify the workstation of operations in the LTP [ZWS-I-148]

Provide a new User exit to read the JCL for a FETCH directive and Automatic Recovery ADDPROC [ZWS-I-147]

Show all external dependencies in one panel [ZWS-I-146]

Clear command for IWS selection panels (ISPF dialog) [ZWS-I-145]

Support SYSTEM= parameter for WSSYS or add a new parameter [ZWS-I-144]

Support specific characters regarding MAILOPTS [ZWS-I-143]


You can see all the BLOGS related to workload automation (distributed workload scheduler, DWC and zWS) at

Some recent BLOGS which may be of interest are:

Wrap Up

I am thinking about submitting an “idea” for simplifying zWS migration to a new release by changing some of the things that currently make putting in a new release challenging. For example: suffixed load modules like EQQSSCMx and EQQMINOx; changes to the VSAM file structure, which are handled by the EQQICTOP program; and new or modified parameters and JCL (new files, changed LRECL and BLKSIZE); etc.

If this is of interest to you, or if you have some ideas about this, feel free to drop me an email at