When you make changes in Workload Automation, it’s always a good idea to have other people review those changes quickly and easily. One of the most important things to review is the dependencies between jobs.

But it can be time-consuming to go through every panel and check every value. What you really need are reports or extracts that you can send to people for review.

Did you know you can get details of all the dependencies of an application from your database and put them in a spreadsheet really quickly? Not only that, you can do the same thing in the Current Plan as well. All with the power of WAPL.

Today’s Tales from Across the Pond will show you how to build your own spreadsheets to do just that, along with many other formats. It will tell you where to get the information from to extract anything you like.

I will warn you that today’s Tales from Across the Pond is probably the longest we’ve ever done. This is because rather than just giving you the answers, it explains how data is structured inside Workload Automation, how to get at it and how to present it. This way you can write so much more.

It’s the old adage of “Teach someone how to WAPL and their workload will never croak”

Forget the Frog, let’s go the Whole Hog

Before we look at WAPL, let’s look at reports already in the product that can give you a HUGE amount of information. The ISPF menus have a lot of useful reports, if you know where to look. There is a whole set of reports for printing Applications that you can find at option 1.4.4 in ISPF.


Today we will just take a quick look at option 1 which gives you a detailed print out of your application in full, but by all means try out the others and see what they can do for you. Many of the other object types in the database also have their own Print options.

This option creates a comprehensive report for your selected application that tells you absolutely everything, including run cycles, job details and dependencies. The only problem is that it’s a bit of a beast of a report, far too big to show in this blog.

Selecting the option will bring up the input panel. Here you can enter the name of a single application, or you can insert more lines and list several applications at one time. Alas though you cannot use wildcards.


This creates a job that can print your application either to a dataset or to spool. You can also create this job by entering P next to an application in the list presented by option 1.4.3.

If you want to plug the job it creates into an automated process, enter E to edit the JCL on the next panel, so you can save it for later.

Another good source of information is the ‘Daily Operating Plan’ which is produced by both Current Plan Extend jobs and Replan jobs. This includes every job in the current plan, showing their Workstation, Input Arrival, Deadlines, durations, Dependencies and much more. Again, a bit of a beast as it covers every job in the plan, but if you want to check connections across a lot of applications, this is the place to look.

So, what can we do that is more concise, that shows just what we need and perhaps even puts it in a spreadsheet?

Let’s look at what information we can get out of WAPL

Pretty much any piece of information you can see in ISPF, you can get out of Workload Automation using WAPL. There are a few things you cannot get, but the most important things are reasonably easy to get at.

WAPL can extract data from every kind of object in the database, long term plan and current plan.

There are two kinds of data WAPL can create –

  • ISPF Loader Streamed Object Notation, ILSON for short. This was designed so it could be imported into ISPF tables easily using another utility. That utility was available in WAPL’s forerunner SOE, but didn’t make it as far as a formal release. Even without it, this format is flexible and can be formatted to extract JUST the segments and fields you want in a few formats, including Comma Separated Value format, making it ideal for creating spreadsheets.
  • Batch Loader. This format allows you to Export object definitions in a way that can be transformed by rules and imported elsewhere. Only available for database objects and current plan JCL.

Data in Workload Automation is split into RECORDS and SEGMENTS, and the more you get to know about this, the more sense things make.

For example, an application is stored in an AD record. When you look at an application, the high-level information you see is in the Common Segment (ADCOM), the Run Cycles in ADRUN segments, operations in ADOP segments, and dependencies in ADDEP segments.

The record is built in a Hierarchical structure, so ADCOM comes first, then one ADRUN for each run cycle and one ADOP for each operation. Each predecessor dependency ADDEP segment follows the ADOP to which it belongs but precedes the next ADOP. So, the sequence of the segments gives you the knowledge of which item they belong to within the record as a whole.

The segments of an application therefore form a tree: ADCOM at the top, the ADRUN and ADOP segments beneath it, and each ADDEP segment attached to the ADOP to which it belongs.

A full list of the records and fields that can be accessed from WAPL can be found in your installation SEQQWAPL library in member EQQFLALL. Each record and segment is tagged, so you can quickly find your way around.

For example, typing FIND SEGMENT=ADDEP will take you straight to the information for an application dependency segment or typing FIND RECORD= will take you to the start of the next record, and pressing PF5 will let you step through every available record that WAPL can access.

Introducing LIST and SELECT

There are 2 WAPL commands that help you find and retrieve the records you want.

  • LIST – Which lets you search for multiple records that match specified criteria and returns only the Common segments for all of the matching records.
  • SELECT – Which retrieves a single record identified by criteria and returns all segments of the entire record. If the criteria match more than one record, the SELECT will fail.

Think of LIST as performing searches and SELECT as fetching a whole record once you have found it. WAPL has a special trick to search and fetch the full records in a single action. You can either add the argument SELECT(Y) to the LIST command to make that individual LIST automatically SELECT every record it finds, or you can enter OPTIONS SELECT(Y) and every LIST command following will automatically SELECT the records it finds.

The LIST command syntax looks like this –

LIST <segment> <arguments>

e.g. LIST ADCOM ADID(ABC*) STATUS(P). This command will find all applications whose names begin with ABC and are in a STATUS of P.

The SELECT statement syntax looks like this –

SELECT <record> <arguments>.

e.g. SELECT AD ADID(MYGROUP) TYPE(G) STATUS(A) VALFROM(260325).

Applications have Valid-From and Valid-To dates as part of their key, so finding the correct version can be a little tricky. To make things easier WAPL has an additional argument of VALID(yymmdd) which will form a query against the VALFROM and VALTO fields to find the records valid on that specific date. A special form of VALID(=) will identify the version valid for today.

e.g. LIST ADCOM ADID(ABC*) VALID(=) SELECT(Y) will find today’s active version of each application whose name begins with ABC and then SELECT the whole record.

Unlike some of the older tools that told you way too much, making you wade through the data, by default WAPL gives you nothing. This is because LIST and SELECT commands are run many times for many things, both explicitly by the user and internally by other WAPL commands. If it generated the full output every time those commands ran your output would be awash with data that you didn’t want or need.

So, if you run a LIST command alone, even if it finds something and gives return code 0, it will not tell you what it found.

If you would like to simply see which records it found, add OPTIONS SHOWKEYS(Y) ahead of the LIST statement.

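Putting that together, a sketch of the input, reusing the example criteria from earlier:

OPTIONS SHOWKEYS(Y)
LIST ADCOM ADID(ABC*) STATUS(P)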

WAPL will then echo the key of each record the LIST found, one record per line.

But if you want more information than just the keys, then you need to tell WAPL exactly what records, segments and fields you need.

Introducing OUTPUT and LOADDEF

WAPL has the OUTPUT and LOADDEF commands, which tell WAPL what data you want and where to send it.

The OUTPUT command lets you pick an individual segment, decide which fields you want, how the records and fields are labelled and where to send the ILSON and Batch Loader data.

You need one OUTPUT statement for each segment you want to collect data from. The command looks something like this –

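As a sketch, using arguments that appear in the examples later in this post:

OUTPUT ADOP FIELDS(ADCOM.ADID,ADOPWSID,ADOPNO,ADOPJN) DATA(OUTDATA)

Here FIELDS picks which fields are output and in what order, and DATA names where the ILSON data is sent; a LOADER argument similarly directs the Batch Loader output.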

If you look in SEQQWAPL(EQQFLALL) you may also notice the KEYS argument as well. This is just a special case of the FIELDS argument to identify these fields as key fields, but this is only useful when loading data into ISPF. Using KEYS(ADCOM) FIELDS(ADDESC) is functionally equivalent to FIELDS(ADCOM,ADDESC) for most purposes.

You may also notice in EQQFLALL that some field names are double-barreled. For example, you might see something like this.

OUTPUT ADOP FIELDS(ADCOM.ADID,ADOPWSID,ADOPNO,ADOPJN)

If you look in the product record layouts, the ADOP segment does not contain the Application name (ADID). What is happening here is that WAPL allows inheritance from a higher-level segment. Because ADCOM is a parent segment of ADOP, the OUTPUT ADOP statement can refer to fields in a parent or grandparent segment in the format <segment>.<field> e.g. ADCOM.ADID

OUTPUT is great to define small collections of data, but if you want full Batch Loader output for an object, then you would need to define every field of every segment in the record. To make that job a little less tedious we have the LOADDEF command.

This is not really a command in its own right but an OUTPUT aggregator. What it does is LOAD the DEFinitions that you see in SEQQWAPL(EQQFLALL) for the segments you ask for.

So, LOADDEF AD* will load and execute all the OUTPUT statements for segments beginning with AD. Saving you a lot of typing.

In these definitions LOADER is sent to OUTBL and DATA is sent to OUTDATA. You can override any of the OUTPUT arguments on the LOADDEF statement.

For example, LOADDEF AD* DATA(-) will load the definitions for AD segments, the Batch Loader will be sent to OUTBL but the ILSON data will be turned off. This saves you generating dual data when you only want one type.

So now we know how data is structured in Workload Automation, which commands generate it and which commands let you select what output you want, we are ready to look at some real use cases putting it all together.

Generating Batch Loader

As mentioned earlier, we can use LOADDEF to define all the output you need. One thing we haven’t made really clear at this point: although a LIST alone will generate ILSON data for just the ADCOM segment, Batch Loader defines a whole record, so it will NOT be generated purely from a LIST. That would create only the high-level details of the record with no content, which would be useless, because all records in Workload Automation MUST have content, i.e. an application must have at least one job, a variable table must have at least one variable, a calendar must have 7 days of the week.

It is therefore important to remember that to get Batch Loader you must SELECT the whole record. Either by using LIST with SELECT or SELECT explicitly.

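As a sketch, the SYSIN for such a job might contain just two statements; the DATA(-) is optional and simply suppresses the ILSON output so that only Batch Loader is written to OUTBL:

LOADDEF AD* DATA(-)
LIST ADCOM ADID(DMOZ#DAILY1) VALID(=) SELECT(Y)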

This will generate the batch loader for the version of DMOZ#DAILY1 that is valid today. The output will look something like this.


As you can see, the output is quite verbose when you consider that this is just an application containing 3 jobs. It includes every value of every possible field in each segment, fully padded out to the field width. This is great if you want to learn the names of all the keywords, and also useful if you want to do some form of mass edit, possibly changing fields that were not explicitly set in the original object.

However, if you add the statement OPTIONS SHOWDFLT(N) STRIP(Y) before the LOADDEF statement this will suppress any fields that are set to their default values and strip any trailing spaces from all of the fields. This gives you much more concise output.


As well as being useful for reviewing, this output can be used to transfer the object to another controller. A job like this will do that, using the output from the previous job as the input to this one.


The DBMODE(REPLACE) in the ARGS is the same as entering OPTIONS DBMODE(REPLACE) in the SYSIN. The ARGS symbolic in the JCL is a way of setting execution options in WAPL without placing them in the SYSIN.

There are many uses for Batch Loader, such as Development Lifecycle transformation using the TRANSLATE command, dynamically generating workload in the current plan with ACTION(SUBMIT), and updating elements within objects using DBMODE(UPDATE). But these will be topics for another Tales from Across the Pond.

Generating ILSON data

To generate your own custom record layouts you really need to look at the ILSON data format.

A simple job like this –

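As a sketch, the SYSIN could be just the OUTPUT statement quoted earlier from EQQFLALL plus a LIST:

OUTPUT ADOP FIELDS(ADCOM.ADID,ADOPWSID,ADOPNO,ADOPJN)
LIST ADCOM ADID(DMOZ#DAILY1) VALID(=) SELECT(Y)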

Will generate the default ILSON format like this –

ADOP ADID=DMOZ#DAILY1 ADOPWSID=CPU# ADOPNO=010 ADOPJN=DMJOBA
ADOP ADID=DMOZ#DAILY1 ADOPWSID=CPU# ADOPNO=020 ADOPJN=DMJOBB
ADOP ADID=DMOZ#DAILY1 ADOPWSID=CPU# ADOPNO=030 ADOPJN=DMJOBC

Each record is made up of the following components –

  • Segment label – the ADOP at the start of each line
  • Field label – ADID, ADOPWSID, ADOPNO and ADOPJN
  • Label separator (LABELSEP) – the = between each label and its value
  • Field data – the value following each separator

What you can’t really see is the single character preceding each Field Label, which is known as the Field Separator (FIELDSEP) and is actually set to 00x. It only shows up if you browse the output in Hex.


It is set to 00x by default as that is a character that is guaranteed NOT to be in any data that comes from Workload Automation, so you can safely parse that data in other programs. The description fields, resource names and user fields can each contain commas and many other unusual characters.

Note that whatever format you decide upon, to save an ILSON file to a dataset the minimum record length must be the sum of the maximum length of each field you select for output, plus one extra character per field to account for separators, plus the length of any field and segment labels (and their separators) that you include in the output. If you don’t make the LRECL long enough you will get Return Code 4 and a truncation error message.

So, let’s look at making that output a little more friendly, such as getting rid of the 00x and making the labels more meaningful.

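A sketch of what this could look like. The exact blank-separator syntax is an assumption here; 40x is the EBCDIC space character, written by analogy with the FIELDSEP(05x) tab example later in this post, and the alternative labels are invented:

OPTIONS FIELDSEP(40x)
OUTPUT ADOP=OPER FIELDS(ADCOM.ADID=Appl,ADOPWSID=WS,ADOPNO=Op#,ADOPJN=Job)
LIST ADCOM ADID(DMOZ#DAILY1) VALID(=) SELECT(Y)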

The OPTIONS FIELDSEP sets the field separator to blank. Then where the segment names and field names are specified, appending = immediately followed by some text will display an alternative label for those elements. Note that the alternative label cannot contain spaces.

The output now looks like this, much easier to understand.


You can turn off individual labels by setting * as the label text e.g. ADOPNO=*.

You can turn off all labels by adding LABEL(NO) to the OUTPUT statement, or to just turn off Field or Segment labels with LABEL(NOFIELD) or LABEL(NOSEGMENT) respectively.

If instead of individual labels you wanted columns with a header, WRITE your own headers and turn off the field labels.

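As a sketch, based on the CSV version later in this post but with fixed-width columns; the exact header text, spacing and underline row are invented for illustration:

OUTPUT ADOP=OPER DATA(OUTDATA) LABEL(NOFIELD)
  FIELDS(ADCOM.ADID,ADOPWSID,ADOPNO,ADOPJN)
WRITE OUTDATA 'Seg  Application       WSID OP# Jobname'
WRITE OUTDATA '____ _________________ ____ ___ ________'
LIST ADCOM ADID(DMOZ#DAILY1) VALID(=) SELECT(Y)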

Which would look like this –


Now to get the dependency information in there too, we need to add the ADDEP segment into the mix.


We added in the OUTPUT statement for the ADDEP segment and added an extra column for the dependency type, which gives you a file that looks like this –


The first thing you might notice is that the Jobname is blank on the PRED rows. Even though the ADDEP segment has an ADDEPJOBN field, it is mostly empty. If you look in the Driving IBM Z Workload Scheduler manual and the record layouts, or even in SEQQWAPL(EQQFLALL), you will notice the comment ‘Not always set’. This is because the job name itself is not actually part of the dependency; the unique key is the Application ID and Operation number.

I believe the ADDEPJOBN field is only used when Building an AD record with the EQQYLTOP Batch Loader program, as it is possible to define dependencies by Jobname rather than operation number. So, the ADDEPJOBN field is used to store the jobname until the operation number has been found by EQQYLTOP, but it does not get stored in the database as this can lead to data getting out of step in some scenarios.

It is however useful to keep the field in the OUTPUT statement as it provides a fixed width placeholder to keep the Dependency Type column lined up. If you look at the report in Hex you will see that ADDEPJOBN is actually set to eight 00x characters, which is what keeps everything lined up.

The dependency type will be either I for Internal or E for External. Whilst it is easy enough to work that out from whether the PRED has a value in ADID, providing a column with I or E makes it easier to filter when we get as far as a spreadsheet.

As well as looking at dependencies in the database, once the application has arrived in the current plan you can check the dependencies there as well.

In the current plan things are a little different. Because the applications have arrived and all the external dependencies have been resolved, there is no longer any distinction between what was an internal and what was an external predecessor; everything is an Operation-to-Operation dependency. Both types of dependency will show the application name and the input arrival of the occurrence to which they are connected.

Additionally, as well as predecessors we can now see the successors. The Predecessor and Successor segments do not, however, include the workstation. As there is no field defined, we cannot use an empty field as a placeholder like before, so in this case the workstation from the operation segment is placed at the end to keep all the columns lined up.

The Current Plan version of the extract looks like this


And the output looks like this –


You may also notice here that we can see the external successor in application DMOZ#DAILY2, something that is not possible from the database.

Creating spreadsheets

We now have two reasonable mainframe reports, but how can we turn those into spreadsheets? The best way to do this is to turn them into Comma Separated Value (CSV) files. You can then take these files, send them to your PC and open them in your favourite spreadsheet application.

It is surprisingly easy to do with a few small changes –

OPTIONS FIELDSEP(,) STRIP(Y)
OUTPUT ADOP=OPER DATA(OUTDATA) LABEL(NOFIELD)
  FIELDS(ADCOM.ADID,ADOPWSID,ADOPNO,ADOPJN)
OUTPUT ADDEP=PRED DATA(OUTDATA) LABEL(NOFIELD)
  FIELDS(ADDEPADID,ADDEPWSID,ADDEPOPNO,DUMMY,ADDEPTYPE)
WRITE OUTDATA 'Seg,Application,WSID,OP#,Jobname,Type'
LIST ADCOM ADID(DMOZ#DAILY1) VALID(=) SELECT(Y)

Here is what you need to change:

  • FIELDSEP(,) – since this is a Comma Separated Value file, the field separator is changed to a comma.
  • STRIP(Y) – since this is going into a spreadsheet, the values no longer need to be padded with spaces to keep everything lined up. STRIP(Y) will remove all the padding.
  • DUMMY – the ADDEPJOBN field worked for padding as it was an 8-byte Hex 00 string, which STRIP(Y) will not remove. For a CSV file you don’t need the width, so coding the name of a non-existent field will return an empty cell in the spreadsheet. We used DUMMY in the example, but any word that is not a real field name will do.
  • The header row – the trailing spaces are removed, and commas added between the fields. The header underscore row isn’t needed in a spreadsheet.

This gives you a file that looks like this –

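Something like this, assuming the three jobs from the earlier DMOZ#DAILY1 examples plus a couple of internal dependencies invented for illustration:

Seg,Application,WSID,OP#,Jobname,Type
OPER,DMOZ#DAILY1,CPU#,010,DMJOBA
OPER,DMOZ#DAILY1,CPU#,020,DMJOBB
PRED,,CPU#,010,,I
OPER,DMOZ#DAILY1,CPU#,030,DMJOBC
PRED,,CPU#,020,,I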

You can then transfer this to your PC and give it a name with an extension of .csv

Clicking on this should open your favourite spreadsheet application, which might give you a pop-up warning something like this –


This is objecting to the operation numbers. My personal preference would be to click “Don’t Convert” to keep the operation numbers as 3 digits, but it’s entirely up to you what you do here.

Once opened your spreadsheet could look like this –

You can adapt the Current Plan job in a similar way.

Copy to Clipboard

One difference here is that because this is a CSV file we don’t need the width of the placeholder, so the Workstation name can be moved back into position before the operation number, and DUMMY used to create an empty field in the relevant column.

When you first open this spreadsheet, you may find that the IA time is shown in exponent format e.g. 2.6E+09 instead of 2603261900. This is easily cured by stretching the column width to accommodate the full 10 characters of the IA time.

One final tip. If your data contains Description fields, Resource names or Operation user fields, it is possible that commas may be part of the data, which makes comma separated value files unsuitable. However, another delimiter that many spreadsheet applications allow is the TAB character, which is 05x in EBCDIC. You can use this as the field separator by specifying FIELDSEP(05x) on the OPTIONS statement, and remember to replace any commas in your header row with 05x too. Then you must use the data import feature of your favourite spreadsheet application to tell it to use TAB as the delimiter instead of a comma. It’s a bit more fiddly, but it gets around the issue of commas being in your data.

In conclusion

It was an unusually long Tale this time, but hopefully now you have an understanding of many different ways to get information out of Workload Automation.

Don’t forget to look in the manual for more details about the commands and features discussed here. We have only just scratched the surface; there are many more things that WAPL can do now that you know where to look.

Also have a good look through SEQQWAPL(EQQFLALL) to see what information is stored in Workload Automation that you can access through WAPL.

Finally, the OUTPUT command lets you get data out in a simple way, but there is also the concept of OBJECT variables, which let you extract data, filter it and present it in far more precise ways. That is quite an advanced concept though, and definitely a Tale for another day.