SC26-4922-04
DFSMS/MVS Version 1 Release 5 IBM
Using Data Sets
SC26-4922-04
Note!
Before using this information and the product it supports, be sure to read the general information under “Notices” on page xxiii.
This edition applies to Version 1 Release 5 of DFSMS/MVS (5695-DF1), Release 7 of OS/390 (5647-A01), and any subsequent
releases until otherwise indicated in new editions. Make sure you are using the correct edition for the level of the product.
Order publications through your IBM representative or the IBM branch office serving your locality. Publications are not stocked at the
address given below.
A form for readers' comments appears at the back of this publication. If the form has been removed, address your comments to:
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes
appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 1979, 1999. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to
restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Programming Interface Information . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Contents v
Defining a Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Defining a Page Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Checking for Problems in Catalogs and Data Sets . . . . . . . . . . . . . . . 110
Using LISTCAT to List Catalog Entries . . . . . . . . . . . . . . . . . . . . . 111
Printing a Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Deleting a Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Connecting a Data Set to a Resource Pool: OPEN . . . . . . . . . . . . . 193
Deleting a Resource Pool Using the DLVRP Macro . . . . . . . . . . . . . 193
Managing I/O Buffers for Shared Resources . . . . . . . . . . . . . . . . . . . 194
Deferring Write Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Relating Deferred Requests by Transaction ID . . . . . . . . . . . . . . . . 195
Writing Buffers Whose Writing is Deferred: WRTBFR . . . . . . . . . . . . 195
Accessing a Control Interval with Shared Resources . . . . . . . . . . . . . 197
Restrictions for Shared Resources . . . . . . . . . . . . . . . . . . . . . . . . . 198
Actual and Relative Addressing . . . . . . . . . . . . . . . . . . . . . . . . . 265
Magnetic Tape Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Label Character Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Data Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Identifying Unlabeled Tapes . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Tape Marks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
SYSIN Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
SYSOUT Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
What Your Programs and Routines Should Do . . . . . . . . . . . . . . . . 345
Sequential Concatenation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Partitioned Concatenation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Converting PDSs to PDSEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Converting PDSEs to Partitioned Data Sets . . . . . . . . . . . . . . . . . . 433
Improving Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
Recovering from System Failures . . . . . . . . . . . . . . . . . . . . . . . . 434
Appendixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Using the Queued Indexed Sequential Access Method (QISAM) . . . . . . . 509
Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Data Set Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Prime Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Index Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Overflow Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
Creating an ISAM Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
One-Step Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
Full-Track-Index Write Option . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Multiple-Step Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
Resume Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Allocating Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Prime Data Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Specifying a Separate Index Area . . . . . . . . . . . . . . . . . . . . . . . 519
Specifying an Independent Overflow Area . . . . . . . . . . . . . . . . . . . 520
Calculating Space Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Step 1. Number of Tracks Required . . . . . . . . . . . . . . . . . . . . . . 520
Step 2. Overflow Tracks Required . . . . . . . . . . . . . . . . . . . . . . . 521
Step 3. Index Entries Per Track . . . . . . . . . . . . . . . . . . . . . . . . 521
Step 4. Determine Unused Space . . . . . . . . . . . . . . . . . . . . . . . 521
Step 5. Calculate Tracks for Prime Data Records . . . . . . . . . . . . . . 522
Step 6. Cylinders Required . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
Step 7. Space for Cylinder Indexes and Track . . . . . . . . . . . . . . . . 522
Step 8. Space for Master Indexes . . . . . . . . . . . . . . . . . . . . . . . 523
Summary of Indexed Sequential Space Requirements Calculations . . . . 524
Retrieving and Updating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
Sequential Retrieval and Update . . . . . . . . . . . . . . . . . . . . . . . . 524
Direct Retrieval and Update . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Adding Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Inserting New Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Adding New Records to the End of a Data Set . . . . . . . . . . . . . . . . 532
Maintaining an Indexed Sequential Data Set . . . . . . . . . . . . . . . . . . . 533
Buffer Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Work Area Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Space for Highest-Level Index . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Device Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
SETL—Specifying Start of Sequential Retrieval . . . . . . . . . . . . . . . . 538
ESETL—Ending Sequential Retrieval . . . . . . . . . . . . . . . . . . . . . 540
Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Figures
1. Data Management Access Methods . . . . . . . . . . . . . . . . . . . . . . . 8
2. HFS Directories and Files in a File System . . . . . . . . . . . . . . . . . 17
3. Data Set Activity for Non-System-Managed and System-Managed Data
Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4. REPRO Encipher/Decipher Operations . . . . . . . . . . . . . . . . . . . . 59
5. VSAM Logical Record Retrieval . . . . . . . . . . . . . . . . . . . . . . . . 65
6. Control Interval Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7. Relation between Records and RDFs . . . . . . . . . . . . . . . . . . . . . 67
8. Data Set With Nonspanned Records . . . . . . . . . . . . . . . . . . . . . 68
9. Data Set With Spanned Records . . . . . . . . . . . . . . . . . . . . . . . 68
10. Entry-Sequenced Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
11. Example of RBAs of an Entry-Sequenced Data Set . . . . . . . . . . . . 71
12. Entry-Sequenced Data Set Processing . . . . . . . . . . . . . . . . . . . . 71
13. Record of a Key-Sequenced Data Set . . . . . . . . . . . . . . . . . . . . 73
14. Inserting Records in a Key-Sequenced Data Set . . . . . . . . . . . . . . 74
15. Key-Sequenced Data Set Processing . . . . . . . . . . . . . . . . . . . . . 74
16. Inserting a Logical Record in a Control Interval . . . . . . . . . . . . . . . 75
17. Fixed-length Relative Record Data Set . . . . . . . . . . . . . . . . . . . . 77
18. RRDS Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
19. Variable-Length RRDS Processing . . . . . . . . . . . . . . . . . . . . . . 78
20. Comparison of ESDS, KSDS, Fixed-Length RRDS, Variable-Length
RRDS, and Linear Data Sets . . . . . . . . . . . . . . . . . . . . . . . . . . 78
21. Control Interval Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
22. Alternate Index Structure for a Key-Sequenced Data Set . . . . . . . . . 85
23. Alternate Index Structure for an Entry-Sequenced Data Set . . . . . . . . 86
24. Adding Data to Various Types of Output Data Sets . . . . . . . . . . . . 104
25. Effect of RPL Options on Data Buffers and Positioning . . . . . . . . . 132
26. VSAM Macro Relationships . . . . . . . . . . . . . . . . . . . . . . . . . 139
27. Skeleton VSAM Program . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
28. Control Interval Size, Physical Track Size, and Track Capacity . . . . . 142
29. Determining Free Space . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
30. Scheduling Buffers for Direct Access . . . . . . . . . . . . . . . . . . . . 155
31. General Format of a Control Interval . . . . . . . . . . . . . . . . . . . . 165
32. Format of Control Information for Nonspanned Records . . . . . . . . . 167
33. Format of Control Information for Spanned Records . . . . . . . . . . . 168
34. Exclusive Control Conflict Resolution . . . . . . . . . . . . . . . . . . . . 175
35. Relationship between the Base Cluster and the Alternate Index . . . . 177
36. Relationship between SHAREOPTIONS and VSAM Functions . . . . . 183
37. VSAM RLS Address/Data Spaces and Requestor Address Spaces . . 201
38. CICS VSAM Non-RLS Access . . . . . . . . . . . . . . . . . . . . . . . . 202
39. CICS VSAM RLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
40. VSAM User-Written Exit Routines . . . . . . . . . . . . . . . . . . . . . . 217
41. Contents of Registers at Entry to EODAD Exit Routine . . . . . . . . . 220
42. Contents of Registers at Entry to EXCEPTIONEXIT Routine . . . . . . 221
43. Contents of Registers at Entry to JRNAD Exit Routine . . . . . . . . . . 222
44. Example of a JRNAD Exit . . . . . . . . . . . . . . . . . . . . . . . . . . 224
45. Contents of Parameter List Built by VSAM for the JRNAD Exit . . . . . 226
46. Contents of Registers at Entry to LERAD Exit Routine . . . . . . . . . . 229
47. Contents of Registers for RLSWAIT Exit Routine . . . . . . . . . . . . . 230
48. Contents of Registers at Entry to SYNAD Exit Routine . . . . . . . . . . 231
Figures xxi
149. Option Mask Byte Settings . . . . . . . . . . . . . . . . . . . . . . . . . . 472
150. Conditions for Which Recovery Can Be Attempted . . . . . . . . . . . . 473
| 151. Recovery Work Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
152. System Response to Block Count Exit Return Code . . . . . . . . . . . 477
153. Defining an FCB Image for a 3211 . . . . . . . . . . . . . . . . . . . . . 479
154. Parameter List Passed to User Label Exit Routine . . . . . . . . . . . . 481
155. System Response to a User Label Exit Routine Return Code . . . . . . 482
156. IECOENTE Macro Parameter List . . . . . . . . . . . . . . . . . . . . . . 484
157. Saving and Restoring General Registers . . . . . . . . . . . . . . . . . . 485
158. IECOEVSE Macro Parameter List . . . . . . . . . . . . . . . . . . . . . . 487
159. Direct Access Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
160. Initial Volume Label Format . . . . . . . . . . . . . . . . . . . . . . . . . 494
161. User Header and Trailer Labels . . . . . . . . . . . . . . . . . . . . . . . 495
162. Creating a Direct Data Set (Tape-to-Disk) . . . . . . . . . . . . . . . . . 502
163. Adding Records to a Direct Data Set . . . . . . . . . . . . . . . . . . . . 506
164. Updating a Direct Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . 507
165. Indexed Sequential Data Set Organization . . . . . . . . . . . . . . . . . 511
166. Format of Track Index Entries . . . . . . . . . . . . . . . . . . . . . . . . 512
167. Creating an Indexed Sequential Data Set . . . . . . . . . . . . . . . . . 516
168. Requests for Indexed Sequential Data Sets . . . . . . . . . . . . . . . . 519
169. Sequentially Updating an Indexed Sequential Data Set . . . . . . . . . 525
170. Directly Updating an Indexed Sequential Data Set . . . . . . . . . . . . 528
171. Directly Updating an Indexed Sequential Data Set with Variable-Length
Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
172. Adding Records to an Indexed Sequential Data Set . . . . . . . . . . . 533
173. Deleting Records from an Indexed Sequential Data Set . . . . . . . . . 534
174. Use of ISAM Processing Programs . . . . . . . . . . . . . . . . . . . . . 542
175. QISAM Error Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
176. BISAM Error Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
177. Register Contents for DCB-Specified ISAM SYNAD Routine . . . . . . 545
178. Register Contents for AMP-Specified ISAM SYNAD Routine . . . . . . 545
179. Abend Codes Issued by the ISAM Interface . . . . . . . . . . . . . . . . 546
180. DEB Fields Supported by ISAM Interface . . . . . . . . . . . . . . . . . 546
181. DCB Fields Supported by ISAM Interface . . . . . . . . . . . . . . . . . 548
| 182. CCSID Conversion Group 1 . . . . . . . . . . . . . . . . . . . . . . . . . 566
| 183. CCSID Conversion Group 2 . . . . . . . . . . . . . . . . . . . . . . . . . 567
| 184. CCSID Conversion Group 3 . . . . . . . . . . . . . . . . . . . . . . . . . 567
| 185. CCSID Conversion Group 4 . . . . . . . . . . . . . . . . . . . . . . . . . 567
| 186. CCSID Conversion Group 5 . . . . . . . . . . . . . . . . . . . . . . . . . 568
| 187. Output DISP=NEW,OLD . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
| 188. Output DISP=MOD (IBM V4 tapes only) . . . . . . . . . . . . . . . . . . 570
| 189. Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
IBM may have patents or pending patent applications covering subject matter in
this document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
USA
Licensees of this program who wish to have information about it for the purpose of
enabling: (i) the exchange of information between independently created programs
and other programs (including this one) and (ii) the mutual use of the information
which has been exchanged, should contact:
IBM Corporation
Information Enabling Requests
Dept. DWZ
5600 Cottle Road
San Jose, CA 95193
Any pointers in this publication to non-IBM Web sites are provided for convenience
only, and do not in any manner serve as an endorsement of these Web sites.
The following terms are trademarks of the IBM Corporation in the United States
or other countries or both:
3090
ADSTAR
Advanced Function Printing
AIX
Application System/400
AS/400
AT
CICS
CUA
DATABASE 2
DB2
DFSMS
DFSMS/MVS
DFSMSdfp
DFSMSdss
DFSMShsm
DFSMSrmm
DFSORT
Enterprise Systems Architecture/390
ES/9000
ESA/370
ESA/390
ESCON
Hardware Configuration Definition
Hiperbatch
Hiperspace
IBM
IMS
IMS/ESA
MVS/DFP
MVS/ESA
MVS/SP
MVS/XA
OpenEdition
OS/2
OS/390
Personal System/2
RACF
RAMAC
Resource Measurement Facility
RISC System/6000
RMF
SAA
S/370
S/390
SP
System/370
System/390
Systems Application Architecture
VTAM
Other company, product, and service names, which may be denoted by a double
asterisk (**), may be trademarks or service marks of others.
| UNIX is a registered trademark in the United States and/or other countries licensed
| exclusively through X/Open Company Limited.**
This book is intended to help you use access methods to process VSAM data sets,
sequential data sets, partitioned data sets, PDSEs, and generation data sets in the
DFSMS/MVS environment. This book also explains how to use access method
services commands, macro instructions, and JCL to process data sets.
DFSMS/MVS Macro Instructions for Data Sets explains how to code the macro
instructions for data sets.
The following functions are shown in this publication for the sake of compatibility
only. Although still supported, their use is not recommended, and, where applicable,
alternatives are suggested.
BDAM (use VSAM instead)
EXCP and XDAP (use BSAM, BPAM, or QSAM instead)
BISAM (use VSAM instead)
QISAM (use VSAM instead)
CVOLs and VSAM catalogs (convert all CVOLs and VSAM catalogs to
integrated catalog facility catalogs).
Unsupported DASD:
IBM 2305
IBM 3330
IBM 3333
IBM 3340
IBM 3344
IBM 3350
IBM 3375
IBM 3851
You should be familiar with the information presented in the following publications:
Vertical lines to the left of the text indicate changes or additions to the text and
illustrations. For a book that has been updated in softcopy only, the vertical lines
indicate changes made since the last printed version.
Title                                                                  Order Number
OS/390 UNIX System Services User's Guide SC28-1891
OS/390 MVS Authorized Assembler Services Guide GC28-1763
OS/390 MVS Authorized Assembler Services Reference ALE-DYN GC28-1764
OS/390 MVS Authorized Assembler Services Reference ENF-IXG GC28-1765
OS/390 MVS Authorized Assembler Services Reference LLA-SDU GC28-1766
OS/390 MVS Authorized Assembler Services Reference SET-WTO GC28-1767
DFSMS/MVS Checkpoint/Restart SC26-4907
DFSMS/MVS DFSMSdfp Diagnosis Reference LY27-9606
DFSMS/MVS DFM/MVS Guide and Reference SC26-4915
DFSMS/MVS Installation Exits SC26-4908
DFSMS/MVS Macro Instructions for Data Sets SC26-4913
DFSMS/MVS Managing Catalogs SC26-4914
DFSMS/MVS DFSMShsm Managing Your Own Data SH21-1077
DFSMS/MVS OAM Application Programmer's Reference SC26-4917
DFSMS/MVS OAM Planning, Installation, and Storage Administration Guide for Object Support SC26-4918
DFSMS/MVS Planning for Installation SC26-4919
DFSMS/MVS DFSMSdss Storage Administration Guide SC26-4930
DFSMS/MVS DFSMSdfp Storage Administration Reference SC26-4920
DFSMS/MVS DFSMSdss Storage Administration Reference SC26-4929
DFSMS/MVS DFSMSdfp Advanced Services SC26-4921
DFSMS/MVS Using ISMF SC26-4911
DFSMS/MVS Using Magnetic Tapes SC26-4923
DFSMS/MVS Utilities SC26-4926
IBM 3800 Printing Subsystem Programmer’s Guide GC26-3846
IBM 3800 Printing Subsystem Models 3 and 8 Programmer’s Guide SH35-0061
ICKDSF R16 User's Guide GC35-0033
OS/390 MVS JCL Reference GC28-1757
OS/390 MVS JCL User's Guide GC28-1758
OS/390 MVS Assembler Services Guide GC28-1762
OS/390 MVS Assembler Services Reference GC28-1910
MVS Hiperbatch Guide GC28-1470
DFSMS/MVS Implementing System-Managed Storage SC26-3123
MVS/ESA SML: Managing Data SC26-3124
OS/390 MVS System Management Facilities (SMF) GC28-1783
OS/390 MVS Planning: Global Resource Serialization GC28-1759
All MVS book titles used in DFSMS/MVS publications refer to the OS/390 editions.
Users of MVS/ESA SP Version 5 should use the corresponding MVS/ESA book.
Refer to OS/390 Information Roadmap for titles and order numbers for all the
elements and features of OS/390.
For more information about OS/390 elements and features, including their
relationship to MVS/ESA SP and related products, please refer to OS/390 Planning
for Installation.
All RACF book titles refer to the Security Server editions. Users of RACF Version 2
should use the corresponding book for their level of the product. Refer to OS/390
Security Server (RACF) Introduction for more information about the Security Server.
CICS/MVS, 5665-403
CICS/ESA, 5685-083
All CICS book titles refer to the CICS Transaction Server for OS/390 editions.
Users of CICS/MVS and CICS/ESA should use the corresponding books for those
products. Please see CICS Transaction Server for OS/390: Planning for Installation
for more information.
Support for processing ISO/ANSI Version 4 tape labels was added to the
following chapters:
– “RACF Protection for Non-VSAM Data Sets” on page 49.
– “Data Conversion” on page 266.
– “Converting Character Data” on page 279.
– “Format-F Records” on page 280.
– “Format-D Records” on page 281.
– “Processing DS/DBS Tapes with QSAM” on page 286.
– “Processing Methods” on page 296.
– “DCB Parameters” on page 298.
– “Installation Exits” on page 310.
– “WRITE—Write a Block” on page 327.
– “Limitations in Using Chaining Scheduling with Non-DASD Data Sets” on
page 359.
– “Coded Character Sets, Sorted by CCSID” on page 555.
A new section was added, “Tables for Default Conversion Codes” on
page 573. These tables are used by data management when performing
default character conversion.
A new section was added, “CCSID Decision Tables” on page 568. These
tables show how CCSID information is determined under various conditions and
how it affects job processing.
“Defining an Extended Format Data Set” on page 79 has been modified to
document extended format support for ESDS, RRDS, VRRDS, and LDS data
sets.
In “Retrieving Records” on page 129 and “Making Concurrent Requests” on
page 134, additional information was added to clarify these sections.
In “Allocating Space for a Partitioned Data Set” on page 377, HFS was added
to the list of data set types that can have more than 64K tracks.
In “BLDL—Constructing a Directory Entry List” on page 381 and
“BLDL—Constructing a Directory Entry List” on page 411, additional
information was added about the DCB address.
Data can be stored on a direct access storage device (DASD) or magnetic tape
volume. The term “DASD” applies to disks or simulated equivalents of disks and
magnetic drums. A volume is a standard unit of secondary storage. All types of
data sets can be stored on DASD, but only sequential data sets can be stored on
magnetic tape. Mountable tape volumes can reside in an automated Tape Library
Dataserver.
Each block of data on a DASD volume has a distinct location and a unique
address, making it possible to find any record without extensive searching.
Records can be stored and retrieved either directly or sequentially. DASD volumes
are used for storing data and executable programs, including the operating system
itself, and for temporary working storage. One DASD volume can be used for many
different data sets, and space on it can be reallocated and reused.
Data management is the part of the operating system that organizes, identifies,
stores, catalogs, and retrieves all the information (including programs) that your
installation uses. Data management does these main tasks:
Sets aside (allocates) space on DASD volumes
Automatically retrieves cataloged data sets by name
Controls access to data
System-Managed Data
The Storage Management Subsystem (SMS) is an operating environment that
automates the management of storage. Storage management uses the values you
provide at allocation time to determine, for example, on which volume to place your
data set, and how many tracks to allocate for it. Storage management also
manages tape data sets on mountable volumes that reside in an automated Tape
Library Dataserver. With the Storage Management Subsystem, users can allocate
data sets more easily.
The data sets allocated through the Storage Management Subsystem are called
system-managed. See Chapter 2, “Using the Storage Management Subsystem” on
page 21 for how to allocate system-managed data sets.
Access methods are identified primarily by the data set organization to which they
apply. For example, you can use the basic sequential access method (BSAM) with
sequential data sets. However, there are times when an access method identified
with one organization can be used to process a data set organized in a different
manner. For example, a sequential data set (not an extended format data set) created
using BSAM can be processed by the basic direct access method (BDAM), and
vice versa.
Basic Direct Access Method (BDAM)
BDAM arranges records in any sequence your program indicates, and retrieves
records by actual or relative address. If you do not know the exact location of a
record, you can specify a point in the data set where a search for the record is
to begin. Data sets organized this way are called direct data sets.
IBM does not recommend using BDAM because it tends to require using
device-dependent code.
In addition, using keys with BDAM is much less efficient than with the virtual
storage access method (VSAM). BDAM is supported by DFSMS/MVS only for compatibility with
other IBM operating systems. For details, see Appendix C, “Processing Direct
Data Sets” on page 499.
Basic Partitioned Access Method (BPAM)
BPAM arranges records as members of a partitioned data set (PDS) or a
partitioned data set extended (PDSE) on DASD. You can view each member
like a sequential data set. A partitioned data set or PDSE includes a directory
that relates member names to locations within the data set. The directory is
used to retrieve individual members, and for program libraries (load modules
and program objects), contains program attributes required to load and re-bind
the member. The two types of partitioned organized data sets differ in several
respects.
HFS files
BSAM accesses an HFS file as if it were a single-volume physical
sequential data set residing on DASD. Although HFS files are not actually
stored as physical sequential data sets, the system attempts to simulate the
characteristics of such a data set.
HFS files are POSIX-conforming files that reside in HFS data sets. Unlike MVS
data sets, they are byte oriented rather than record oriented. They are
identified and accessed by specifying the “path” leading to them.
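As a sketch (the path name is invented for illustration), such a file can be allocated to a program through a JCL DD statement that specifies a path rather than a data set name:

```jcl
//* Allocate an HFS file for input; BSAM or QSAM treats it as a
//* single-volume physical sequential data set.
//HFSIN    DD PATH='/u/myuser/input.txt',
//            PATHOPTS=(ORDONLY),
//            FILEDATA=TEXT
```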
HFS files
Refer to HFS files under BSAM for parameter definition.
Fixed-length RRDS
The records must be fixed-length.
Variable-length RRDS
The records can vary in length.
HFS files
VSAM accesses an HFS file as if it were an ESDS. Although HFS files are
not actually stored as ESDS data sets, the system attempts to simulate the
characteristics of such a data set.
HFS files are POSIX-conforming files that reside in HFS data sets. Unlike MVS
data sets, they are byte oriented rather than record oriented. They are
identified and accessed by specifying the “path” leading to them.
Note: IBM does not recommend the use of BDAM, BISAM, and QISAM. Use
VSAM instead.
The following types of HFS files are supported through the MVS access methods:
Regular files
Character special files
FIFO special files
Symbolic links
The following types of HFS files are not supported through the MVS access
methods:
Directories
External links
Files can reside on other systems. The access method user can use NFS to
access them.
Also, you should choose a data organization according to the type of processing
you want to do: sequential or direct. For example, RRDSs or key-sequenced data
sets are best for applications that use only direct access, or both direct and
sequential access. Sequential or VSAM entry-sequenced data sets are best for
batch processing applications or sequential access.
Note: VSAM data sets cannot be processed using non-VSAM access
methods. Non-VSAM data sets cannot be processed using the VSAM
access method.
Data sets can also be organized as PDSE program libraries. PDSE program
libraries can be accessed with BSAM, QSAM, or the program management binder.
The first member written in a PDSE library determines the library type, program or
data.
You can allocate a data set by using any of the following methods:
Access method services. You can define data sets and establish catalogs
using a multifunction services program called access method services. For
information about access method services commands, see DFSMS/MVS
Access Method Services for ICF. You can use the following commands with all
data sets:
ALLOCATE to allocate a new data set
ALTER to change the attributes of a data set
DEFINE NONVSAM to catalog a data set
DELETE to delete a data set
LISTCAT to list catalog entries
PRINT to print a data set
REPRO to copy a data set.
ALLOCATE command. The ALLOCATE command, issued either through
access method services or TSO, can be used to define VSAM and non-VSAM
data sets. The TSO commands are described in OS/390 TSO/E Command
Reference.
Dynamic allocation. The DYNALLOC macro is described in OS/390 MVS
Authorized Assembler Services Guide.
JCL. All data sets can be defined directly through JCL. For information on
using JCL, see OS/390 MVS JCL Reference and OS/390 MVS JCL User's
Guide.
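For example, a batch job can invoke access method services to define a key-sequenced cluster and then list its catalog entry. The data set name and attribute values below are invented for this sketch:

```jcl
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MYID.SAMPLE.KSDS) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(80 120) -
         CYLINDERS(2 1))
  LISTCAT ENTRIES(MYID.SAMPLE.KSDS) ALL
/*
```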
All the access methods that are described in this book allow you to do the
following:
Share a data set among different systems, different jobs in a single system,
multiple access method control blocks (ACBs) or data control blocks (DCBs) in
a task, or different subtasks in an address space. For information about sharing
a VSAM data set, see Chapter 12, “Sharing VSAM Data Sets” on page 173.
For information about sharing a non-VSAM data set, see Chapter 24, “Sharing
Non-VSAM Data Sets” on page 337.
Share buffers and control blocks among VSAM data sets. See Chapter 13,
“Sharing Resources Among VSAM Data Sets” on page 189.
Provide user exit routines to analyze logical and physical errors, and to perform
end-of-data processing. See Chapter 31, “Non-VSAM User-Written Exit
Routines” on page 451 and Chapter 16, “VSAM User-Written Exit Routines” on
page 217.
Back up and recover data sets. See Chapter 4, “Backing Up and Recovering
Data Sets” on page 39.
Maintain data security and integrity. See Chapter 5, “Protecting Data Sets” on
page 47.
BSAM, QSAM, and VSAM convert between record-oriented data and the
byte-stream-oriented data that is stored in HFS files.
You can use either 24-bit or 31-bit addressing mode for VSAM programs. You can
issue the OPEN and CLOSE macros for any access method in 24-bit or 31-bit
addressing mode. VSAM allows you to create buffers, user exits, shared resource
pools, and some control blocks in virtual storage above 16 megabytes. Your
program must run in 31-bit addressing mode to access these areas above 16
megabytes. Detailed information is provided in Chapter 17, “Using 31-Bit
Addressing Mode With VSAM” on page 239.
Non-VSAM
Most BSAM, BPAM, QSAM, and BDAM macros can be run in 24-bit or 31-bit
addressing mode. Data referred to by the macros must reside below the 16
megabyte line if the macro is executed in 24-bit mode.
The BSAM, BPAM, QSAM, and BDAM access methods allow you to create certain
data areas, buffers, certain user exits, and some control blocks in virtual storage
above the 16MB line if the macro is executed in 31-bit mode. For more information,
refer to DFSMS/MVS Macro Instructions for Data Sets.
BPAM
Use BPAM to process the directory of a partitioned data set or PDSE.
Each partitioned data set or PDSE must be on one direct-access volume.
However, you can concatenate multiple input data sets that are on the same or
different volumes.
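For instance (the data set names are invented), concatenation of input libraries is coded in JCL by following a named DD statement with unnamed ones:

```jcl
//* Members are searched for in MYID.LIB1 first, then in MYID.LIB2.
//SYSLIB   DD DSN=MYID.LIB1,DISP=SHR
//         DD DSN=MYID.LIB2,DISP=SHR
```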
A PDSE can be used as a data library or program library, but not both. The first
member STOWed in the library determines the library type.
You can use either BSAM or QSAM macros to add or retrieve a member
without specifying the BLDL, FIND or STOW macro. Code the DSORG=PS
parameter in the DCB macro, and the DDNAME parameter of the JCL DD
statement with both the data set and member names, as shown:
//ddname DD DSN=LIBNAME(MEMNAME),...
(Data set positioning and directory maintenance are then handled by the OPEN
and CLOSE macros.) Because you are processing the member as if it were
part of a sequential data set, you are not using the complete capabilities of
BPAM.
When you create a partitioned data set, the SPACE parameter defines the size
of the data set and its directory so that the system can allocate data set space.
For a partitioned data set, the SPACE parameter preformats the directory.
Specifying SPACE for a PDSE is different from a partitioned data set. (See
“Allocating Space for a PDSE” on page 403.)
You can use the STOW macro to add, delete, change, or replace a member
name or alias in the partitioned data set or PDSE directory, or to clear a PDSE
directory. You can also use the STOW macro to delete all the members of a
PDSE. However, you cannot use the STOW macro to delete all the members of
a partitioned data set. For program libraries, STOW cannot be used to add or
replace a member name or alias in the directory.
You can process multiple data set members by passing a list of members to
BLDL. Then you can use the FIND macro to position to a member before
processing it.
You can code a DCBE and use 31-bit addressing for partitioned data sets and
PDSEs.
Partitioned data sets, PDSEs, and their members cannot use sequential data
striping.
For more information, see Chapter 27, “Processing a Partitioned Data Set” on
page 373 and Chapter 28, “Processing a PDSE” on page 395. Also refer to
DFSMS/MVS Macro Instructions for Data Sets for information on how to code the
DCB (BPAM) and DCBE macros.
BSAM
Use BSAM to process a sequential data set and members of a partitioned data set
or PDSE.
BSAM can read a member of a PDSE program library, but not write one.
The problem program must block and unblock its own input and output records.
BSAM only reads and writes data blocks.
The problem program must manage its own input and output buffers. It must
give BSAM a buffer address with the READ macro, and it must fill its own
output buffer before issuing the WRITE macro.
The problem program must synchronize its own I/O operations by issuing a
CHECK macro for each READ and WRITE macro issued.
BSAM lets you process blocks in a nonsequential order by repositioning with
the NOTE and POINT macros.
You can read and write direct access storage device record keys with BSAM;
PDSEs and extended format data sets are an exception.
BSAM automatically updates the directory when a member of a partitioned data
set or PDSE is added or deleted.
QSAM
Use QSAM to process a sequential data set and members of a partitioned data set
or PDSE.
QSAM processes all record formats except blocks with keys.
QSAM blocks and unblocks records for you automatically.
QSAM manages all aspects of I/O buffering for you automatically. The GET
macro retrieves the next sequential logical record from the input buffer. The
PUT macro places the next sequential logical record in the output buffer.
QSAM gives you three transmittal modes: move, locate, and data. The three
modes give you greater flexibility managing buffers and moving data.
You can be in 31-bit addressing mode when issuing BSAM, QSAM, and BPAM
macros. Data areas can be above the 16MB line for BSAM and QSAM macros, and
you can request that OPEN obtain buffers above the 16MB line for QSAM. This
allows larger amounts of data to be transferred.
Data can reside on a system other than the one the user program is running on
without using shared DASD. The other system can be MVS or non-MVS. NFS
transports the data.
You should be familiar with the information found in OS/390 UNIX System Services
User's Guide regarding HFS files.
Since HFS files are not actually stored as physical sequential data sets or ESDSs,
the system cannot simulate all the characteristics of a sequential data set.
Consequently, certain macros and services have incompatibilities or
restrictions when dealing with HFS files. For example:
Data set labels and UCBs do not exist for HFS files; therefore, any service
that relies on a DSCB or UCB for information might not work with these files.
With traditional MVS data sets, other than VSAM linear data sets, the system
maintains record boundaries. That is not true with byte-stream files such as
HFS files.
For more information, see “Accessing HFS Files Via BSAM and QSAM” on
page 369. Also see DFSMS/MVS Macro Instructions for Data Sets for how to code
the DCB (BSAM) and DCB (QSAM) macros, and DCBE (BSAM) and DCBE
(QSAM) macros.
Allocation Considerations
To allocate an HFS file to be accessed via BSAM/QSAM (DCB DSORG=PS) or
VSAM, specify the PATH=pathname parameter on the JCL DD statement, SVC 99,
or TSO ALLOCATE command. The OPEN macro supports DD statements that
specify the PATH parameter only for DCBs that specify DSORG=PS and for
ACBs. Note that EXCP is not allowed.
Refer to OS/390 MVS JCL Reference for a description of the PATH parameter,
including the list of parameters that can be coded with a PATH parameter and
the defaults for a DD statement with a PATH parameter.
Note: The JCL DD statement, SVC 99, and TSO ALLOCATE allow the following
DCB parameters with the PATH parameter:
BLKSIZE
LRECL
RECFM
BUFNO
NCP
BLKSIZE, RECFM, and LRECL are not stored with an HFS file. Therefore, you
must specify these fields in JCL, SVC 99, TSO ALLOCATE, or in the DCB if you do
not want the defaults.
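For example, a DD statement that reads an existing HFS file as a physical
sequential data set might look like the following sketch; the path name and
attribute values are illustrative:

```jcl
//INFILE  DD PATH='/u/smith/data/input.txt',PATHOPTS=(ORDONLY),
//           LRECL=80,RECFM=FB,BLKSIZE=8000
```

Because BLKSIZE, RECFM, and LRECL are not stored with the HFS file, coding
them here avoids the system defaults.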
FILEDATA parameter
The access methods support both EBCDIC text HFS files and binary HFS files.
When a file is created, the user is responsible for specifying whether the file
consists of text data or binary data by coding the FILEDATA parameter on the JCL
DD statement, SVC 99, or TSO ALLOCATE command. This value is saved with the
HFS file.
FILEDATA=TEXT indicates that the data consists of records that are separated
by a delimiter of the EBCDIC newline character (x'15').
The record delimiters are transparent to the user.
FILEDATA=BINARY indicates that the data does not contain record delimiters.
Note: The FILEDATA parameter is currently meaningful only when the PATH
parameter is also coded and the program uses BSAM, QSAM, or
VSAM.
When FILEDATA is coded for an existing file, the value specified temporarily
overrides the value saved with the file. However, there is one exception. If the
existing file does not already have a FILEDATA value saved with the file (that
is, the HFS file was created without coding FILEDATA) and the user has the
proper authority to update the HFS file, an attempt is made during OPEN to
save the specified FILEDATA value with the file.
When FILEDATA is not coded and a value has not been saved with the file, the
file is treated as binary.
File status
Since the DISP parameter cannot appear on a DD statement containing a PATH
parameter, use the PATHOPTS parameter to specify the status for an HFS file as
follows:
OLD status
– PATHOPTS is not on the DD statement.
– PATHOPTS does not contain a file option.
– PATHOPTS does not contain OCREAT.
MOD status
– PATHOPTS contains OCREAT but not OEXCL.
NEW status
– PATHOPTS contains both OCREAT and OEXCL.
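For example, the following DD statement gives the file NEW status by coding
both OCREAT and OEXCL, so the open fails if the file already exists; the path
name and permission settings are illustrative:

```jcl
//OUTFILE DD PATH='/u/smith/data/new.txt',
//           PATHOPTS=(OWRONLY,OCREAT,OEXCL),
//           PATHMODE=(SIRUSR,SIWUSR),FILEDATA=TEXT
```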
SMS constructs
For a DD statement containing a PATH parameter, DATACLAS, STORCLAS, and
MGMTCLAS cannot be specified and the ACS routines are not called.
When the application program issues an OPEN macro for an HFS file, the OPEN
SVC performs an open( ) function to establish a connection to the existing file. The
pathname from the PATH parameter is passed without modification. The options
from PATHMODE are not passed since the HFS file must already exist. The
options from PATHOPTS are passed with the following modifications:
OCREAT and OEXCL, if specified, are ignored by the system since the file
must already exist.
If the file is a regular file and OWRONLY is specified, ORDWR is set by the
system to ensure that both reads and writes are allowed.
If the file is a FIFO or character special file:
– and the DCB OPEN first option is INPUT, INOUT, or omitted, or the ACB
MACRF=OUT option is not coded, then ORDONLY is set by the system.
– and the DCB OPEN first option is OUTPUT, OUTIN, EXTEND, or OUTINX,
or the ACB MACRF=OUT option is coded, then OWRONLY is set by the system.
SMF records
CLOSE does not write SMF type 14/15 or 60-69 records for HFS files. DFSMS
relies on OS/390 UNIX MVS to write appropriate SMF records as requested by the
system programmer.
Migration/Coexistence considerations
An attempt to access an HFS file with BSAM, QSAM, or VSAM on a downlevel
DFSMS release results in ABEND 213-88.
System security
DFSMS depends on OS/390 UNIX MVS to address security as it applies to the files
being accessed. For HFS files, authorization (RACF) checking will actually be
performed by the file system called by OS/390 UNIX MVS.
For more information regarding BSAM and QSAM access, see Chapter 26,
“Processing a Sequential Data Set” on page 347.
For more information regarding VSAM access, see Chapter 6, “Organizing VSAM
Data Sets” on page 65.
(Figure: a hierarchical file system. A root directory contains files and
subdirectories; those directories in turn contain further files and directories.
A PATH identifies a file by traversing the directory hierarchy from the root.)
HFS data sets appear to the MVS system much as a PDSE does, but the internal
structure is entirely different. DFSMS/MVS manages HFS data sets, providing
access to the files of data within them as well as backup/recovery and
migration/recall capability.
HFS data sets, not HFS files, have the following requirements and restrictions:
They must be system managed, reside on DASD volumes managed by the
system, and be cataloged.
They cannot be processed by standard open, end-of-volume, or access
methods; POSIX system calls must be used instead.
They are supported by standard DADSM create, rename, and scratch.
They are supported by DFSMShsm for dump/restore and migrate/recall if
DFSMSdss is used as the data mover.
They are not supported by IEBCOPY or the DFSMSdss COPY function.
Distributed FileManager/MVS creates and associates DDM attributes with data sets.
The DDM attributes describe the characteristics of the data set such as the file size
class and last access date. The end user can determine whether a specific data set
has associated DDM attributes by using the ISMF Data Set List panel and the
IDCAMS DCOLLECT command.
Distributed FileManager/MVS also provides the ability to process data sets along
with their associated attributes. Any DDM attributes associated with a data set
cannot be propagated with the data set unless DFSMShsm uses DFSMSdss as its
data mover.
Refer to DFSMS/MVS DFM/MVS Guide and Reference for more information about
the DDM file attributes.
A VIO data set always occupies a single extent (area) on the simulated device. The
size of the extent is equal to the primary space amount plus 15 times the
secondary amount (VIO data size = primary space + (15 × secondary space)). An
easy way to specify the largest possible VIO data set in JCL is SPACE=(TRK,65535).
This limit can be set lower. Specifying ALX (all extents) or MXIG (maximum
contiguous extent) on the SPACE parameter results in the largest extent allowed on
the real device, which can be less than 65535 tracks.
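For example, a temporary VIO data set might be allocated as follows, assuming
the installation has defined VIO as a unit name that is eligible for virtual
I/O; the data set and unit names are illustrative:

```jcl
//TEMP    DD DSN=&&WORK,UNIT=VIO,DISP=(NEW,DELETE),
//           SPACE=(TRK,(10,5))
```

With this request, the single simulated extent can hold up to
10 + (15 × 5) = 85 tracks.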
Do not allocate a VIO data set with zero space. Failure to allocate space to a VIO
data set will cause unpredictable results when reading or writing.
A data set name can be one name segment, or a series of joined name segments.
Each name segment represents a level of qualification. For example, the data set
name DEPT58.SMITH.DATA3 is composed of three name segments. The first
name on the left is called the high-level qualifier, the last is the low-level qualifier.
The period (.) separates name segments from each other. Including all name
segments and periods, the length of the data set name must not exceed 44
characters. Thus, a maximum of 22 name segments can make up a data set name.
See “Naming a Cluster” on page 90 and “Naming an Alternate Index” on page 106
for examples of naming a VSAM data set.
Note: Do not use name segments longer than 8 characters, because
unpredictable results might occur.
You should only use the low-level qualifier GxxxxVyy, where xxxx and yy are
numbers, in the names of generation data sets. You can define a data set with
GxxxxVyy as the low-level qualifier of non-generation data sets only if a generation
data group with the same base name does not exist. However, we recommend that
you restrict GxxxxVyy qualifiers to generation data sets, to avoid confusing
generation data sets with other types of non-VSAM data sets.
The catalog describes data set attributes and indicates the volumes on which a
data set is located. Data sets can be cataloged, uncataloged, or recataloged. All
system-managed DASD data sets are cataloged automatically in an integrated
catalog facility catalog. Cataloging of data sets on magnetic tape is not required,
but it usually simplifies users' jobs. All data sets can be cataloged in an integrated
catalog facility or VSAM catalog. Non-VSAM data sets can also be cataloged in a
CVOL. The use of CVOLs and VSAM catalogs is not recommended.
Non-VSAM data sets can also be cataloged through the catalog management
macros (CATALOG and CAMLST). An existing data set can be cataloged through
the access method services DEFINE RECATALOG command.
Access method services is also used to establish aliases for data set names and to
connect user catalogs and CVOLs to the master catalog. For information on how to
use the catalog management macros, see DFSMS/MVS Managing Catalogs.
When you allocate or define a data set using SMS, you specify your data set
requirements using data class, storage class, and management class. Typically,
you do not need to specify these classes, because a storage administrator has
set up ACS routines to determine them.
A data class is a named list of data set allocation and space attributes that
SMS assigns to a data set when it is allocated. You can also use a data class
with a non-system-managed data set.
A management class is a named list of management attributes that DFSMShsm
uses to control action for retention, migration, backup, and release of allocated
but unused space of data sets. The object access method (OAM) uses
management classes to control action for the retention, backup, and class
transition of objects in an object storage hierarchy.
A storage class is a named list of data set service and performance objectives
that SMS uses to identify performance and availability requirements for data
sets. OAM uses storage classes to control the placement of objects in an
object storage hierarchy.
Note: Your storage administrator must code storage class ACS routines to ensure
that data sets allocated by remote applications using distributed file
management are system-managed.
Using data class, you can easily allocate data sets without specifying all of the data
set attributes normally required. Your storage administrator can define standard
data set attributes and use them to create data classes, which you can then use
when you allocate your data set. For example, your storage administrator might
define a data class for data sets whose names end in LIST or OUTLIST, because
such data sets have similar allocation attributes. The ACS routines can then
assign this data class to data sets whose names end in LIST or OUTLIST.
Another way to allocate a data set without specifying all of the data set attributes
normally required is to model the data set after an existing data set. You can do
this by referring to the existing data set in the DD statement for the new data set,
using the JCL keywords LIKE or REFDD. For more information on JCL keywords,
see OS/390 MVS JCL Reference and OS/390 MVS JCL User's Guide.
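For example, the following DD statement allocates a new data set with the same
allocation attributes as an existing cataloged data set; the data set names are
illustrative:

```jcl
//NEWDS   DD DSN=DEPT58.SMITH.DATA4,DISP=(NEW,CATLG),
//           LIKE=DEPT58.SMITH.DATA3
```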
Figure 3 lists the storage management functions and products you can use with
system-managed and non-system-managed data sets. For details, see MVS/ESA
SML: Managing Data.
Figure 3. Data Set Activity for Non-System-Managed and System-Managed Data Sets

Activity                               Non-System-Managed Data          System-Managed Data
Allocation
  Data placement                       JCL, storage pools               ACS, storage groups
  Allocation control                   Software user exits              ACS
  Allocation authorization,            RACF, JCL, IDCAMS, TSO,          RACF, data class, JCL, IDCAMS,
  definition                           DYNALLOC                         TSO, DYNALLOC
Access
  Access authorization                 RACF                             RACF
  Read/write performance,              Manual placement, DFSMSdss,      Storage class, management and
  availability                         DFSMShsm, JCL                    storage class
  Access method access to POSIX        Not available                    JCL (PATH=)
  (UNIX) byte-stream
Space Management
  Backup                               DFSMShsm, DFSMSdss, utilities    Management class
  Expiration                           JCL                              Management class
  Release unused space                 DFSMSdss, JCL                    Management class, JCL
  Deletion                             DFSMShsm, JCL, utilities         Management class, JCL
  Migration                            DFSMShsm                         Data and management class, JCL
Descriptions:
DFSMSdss: Moves data (dump, restore, copy, and move) between volumes on DASD devices, manages
space, and converts data sets or volumes to SMS control. See DFSMS/MVS DFSMSdss Storage
Administration Guide for how to use DFSMSdss.
DFSMShsm: Manages space, migrates data, and backs up data through SMS classes and groups. See
DFSMS/MVS DFSMShsm Managing Your Own Data for how to use DFSMShsm.
RACF: Controls access to data sets and use of system facilities. See OS/390 Security Server (RACF)
Introduction and associated books for how to use RACF.
Direct data sets (BDAM) can be system-managed, but if a program uses OPTCD=A,
the program might become dependent on where the data set is on the disk. For
example, the program might record the cylinder and head numbers in a data set.
Such a data set should not be migrated or moved. You can specify a management
class that prevents automatic migration.
To update an existing data set, specify a DISP of OLD, MOD, or SHR. Do not use
DISP=SHR while updating a sequential data set unless you have some other
means of serialization, because you might lose data.
To share the data set during access, specify a DISP of SHR. (If a DISP of NEW,
OLD, or MOD is specified, the data set cannot be shared.)
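For example (the data set name is illustrative):

```jcl
//UPDATE  DD DSN=DEPT58.SMITH.DATA3,DISP=OLD     exclusive use for update
//READER  DD DSN=DEPT58.SMITH.DATA3,DISP=SHR     shared access for reading
```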
Note: If SMS is active, it is impossible to determine if a data set will be
system-managed based solely on the JCL, because an ACS routine can
assign a STORCLAS to any data set.
HFS data sets must be system managed and reside on system managed volumes.
If HFS is specified as the DSNTYPE value, the data set is allocated as an HFS data
set. If PIPE is specified as the DSNTYPE value and the PATH parameter is also
specified, the result is a FIFO special file defined in an OS/390 UNIX file system.
IBM provides HFS data sets as one type of file system that supports FIFO special
files.
You can request the name of the data class, storage class, and management class
in the JCL DD statement. However, in most cases, the ACS routines pick the
classes needed for the data set.
Allocating a PDSE
PDSEs must be system-managed. The DSNTYPE parameter determines whether
the data set is allocated as a PDSE or as a partitioned data set. A DSNTYPE of
LIBRARY causes the data set to be a PDSE. The DSNTYPE parameter can be
specified in the JCL DD statement, the data class, or the system default, or by
using the JCL LIKE keyword to refer to an existing PDSE.
When first allocated, the PDSE is neither a program library nor a data library. If the
first member written, by either the binder or IEBCOPY, is a program object, the
library becomes a program library and remains such until the last member has
been deleted. If the first member written is not a program object, then the PDSE
becomes a data library. Program objects and other types of data cannot be mixed
in the same PDSE library.
Allocating a PDSE
The following example shows the ALLOCATE command being used with the
DSNTYPE keyword to allocate a PDSE. DSNTYPE(LIBRARY) means that the data
set being allocated is a system-managed PDSE.
//ALLOC EXEC PGM=IDCAMS,DYNAMNBR=1
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
ALLOC -
DSNAME(XMP.ALLOCATE.EXAMPLE1) -
NEW -
STORCLAS(SC06) -
MGMTCLAS(MC06) -
DSNTYPE(LIBRARY)
/*
This example allocates a new VSAM entry-sequenced data set, with a logical
record length of 80, a block size of 8192, on 2 tracks. To allocate a VSAM data set,
specify the RECORG keyword on the ALLOCATE command. RECORG is mutually
exclusive with DSORG and with RECFM. To allocate a key-sequenced data set,
you also must specify the KEYLEN parameter. RECORG specifies the type of data
set desired:
ES entry-sequenced data set
KS key-sequenced data set
LS linear data set
RR relative record data set
ALLOC DA(EX2.DATA) RECORG(ES) SPACE(2,0) TRACKS LRECL(80) -
BLKSIZE(8192) NEW
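By analogy, a key-sequenced data set requires the KEYLEN parameter, and you can
also code KEYOFF to position the key within the record. The following sketch
assumes a key in the first 8 bytes of each record; the data set name and values
are illustrative:

```jcl
ALLOC DA(EX3.DATA) RECORG(KS) KEYLEN(8) KEYOFF(0) -
SPACE(2,1) TRACKS LRECL(80) NEW
```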
The system can use data class if SMS is active even if the data set is not
SMS-managed. For system-managed data sets, the system selects the volumes.
Therefore, you do not need to specify a volume when you define your data set.
If you specify your space request by average record length, space allocation will be
independent of device type. Device independence is especially important to
system-managed storage.
By Blocks:
When the amount of space required is expressed in blocks, you must specify the
number and average length of the blocks within the data set, as in this example:
// DD SPACE=(300,(5000,100)), ...
From this information, the operating system estimates and allocates the number of
tracks required.
Note: For sequential and partitioned data sets, IBM recommends that you let the
system calculate the block size and that you not request space by average
block length. See “System-Determined Block Size” on page 298 for more
information.
If the average block length of the real data does not match the value coded here,
the system might allocate much too little or much too much space.
When the amount of space required is expressed in average record length, you
must specify the number of records within the data set and their average length.
Use the AVGREC keyword to modify the scale of your request. When AVGREC is
specified, the first subparameter of SPACE becomes average record length. The
system applies the scale value to the primary and secondary quantities specified in
the SPACE keyword. Possible values for the AVGREC keyword are:
U—use a scale of 1
K—use a scale of 1024
M—use a scale of 1048576
When the AVGREC keyword is specified, the values specified for primary and
secondary quantities in the SPACE keyword are multiplied by the scale and those
new values will be used in the space allocation. For example, the following request
will result in the primary and secondary quantities being multiplied by 1024:
// DD SPACE=(80,(20,2)),AVGREC=K, ...
From this information, the operating system estimates and allocates the number of
tracks required using one of the following block lengths, in the order indicated:
1. 4096, if the data set is a PDSE
2. The BLKSIZE parameter on the DD statement or the BLKSIZE subparameter of
the DCB parameter on the DD
3. The system determined block size, if available
4. A default value of 4096.
If it is an extended format data set, the operating system uses a value 32 bytes
larger than the above block size, because each block in an extended format data
set has a 32-byte suffix. The primary and secondary space are divided by the
block length to determine the number of blocks requested. The operating system
determines how many blocks of the block length can be written on one track of the
device. The primary and secondary space in blocks is then divided by the number
of blocks per track to obtain a track value, as shown in the examples below. These
examples assume a block length of 23200. Two blocks of block length 23200 can
be written on a 3380 device:
(1.6MB / 23200) / 2 = 36 = primary space in tracks
(160KB / 23200) / 2 = 4 = secondary space in tracks
In the calculations above, the system does not consider whether it is a compressed
format data set. This means the calculations are done with the user-perceived
uncompressed block size and not the actual block size that the system calculates.
By Tracks or Cylinders:
By Actual Address:
If the data set contains location-dependent information in the form of an actual track
address (MBBCCHHR), space should be requested in the number of tracks and the
beginning address. In this example, 500 tracks are required, beginning at relative
track 15, which is cylinder 1, track 0.
// DD SPACE=(ABSTR,(500,15)),UNIT=3380, ...
Note: Data sets that request space by actual address are not eligible to be system
managed. Avoid using actual address allocation.
If a secondary space allocation would result in more than 65,535 total tracks
allocated on this volume for such a data set, the secondary allocation fails on this
volume, and the system tries another volume if the volume count in the data
class or on the DD statement allows another volume.
The following types of data sets are not limited to 65,535 total tracks allocated
on any one volume:
Extended format sequential
HFS
PDSE
VSAM
Note: All other data set types are restricted to 65,535 total tracks allocated on any
one volume.
Allocate space for a multivolume data set the same way as for a single volume
data set. DASD space is initially allocated on the first volume only (an exception is
extended format data sets). When the primary allocation of space is filled, space is
allocated in secondary storage amounts (if specified). The extents can be allocated
on other volumes. VIO space allocation is handled a little differently from other data
sets. See “Estimating the Space Needed for VIO Data Sets” on page 18 for more
information.
When a multivolume VSAM data set extends to the next volume, the initial space
allocated on that volume is the primary amount. After the primary amount of space
is used up, space is allocated in secondary amounts. By using the DATACLASS
parameter, it is possible to indicate whether to take a primary or secondary amount
when VSAM extends to a new volume.
When a multivolume non-VSAM (non-extended format) data set extends to the next
volume, the initial space allocated on that volume is the secondary amount.
When space for a striped extended format data set is allocated, the system divides
the primary amount among the volumes. If it does not divide evenly, the system
rounds the amount up. For extended format data sets, when the primary space on
any volume is filled, the system allocates space on that volume. The amount is the
secondary amount divided by the number of stripes. If the secondary amount
cannot be divided evenly, the system rounds the amount up.
No secondary space is allocated until the preallocated primary space on all volumes is used up. The
secondary amount specified is divided by the number of volumes and rounded up
for allocation on each volume.
You can allocate space and load a multivolume data set in one step or in separate
steps. The following example allocates 100MB of primary space on each of five
volumes.
//DD1 DD DSN=ENG.MULTIFILE,DISP=(,KEEP),STORCLAS=GS,
// SPACE=(1,(1ðð,25)),AVGREC=M,UNIT=(338ð,5)
1. After 100MB is used on the first volume, 25MB extents of secondary space are
allocated on it until the extent limit is reached or the volume is full.
2. If more space is needed, 100MB of primary space is used on the second
volume. Then, more secondary space is allocated on that volume.
3. The same process is repeated on each volume.
Allocations and extensions to new volumes proceed normally until the system
determines that space cannot be obtained by normal means. Space constraint
relief, if requested, occurs in one or two steps, depending on the volume count
specified for the failing allocation.
1. Step 1: If the volume count is greater than 1, SMS will attempt to satisfy the
allocation by spreading the requested primary allocation over more than one
volume, but no more than the volume count specified.
2. Step 2: If Step 1 also fails, or if the volume count specified was 1 (in which
case Step 1 is not entered), SMS takes two actions simultaneously.
First, it modifies the requested primary space, or secondary space for
extensions, by the percentage specified in the REDUCE SPACE UP TO
parameter. Note that the user can specify 0% in the data class for this
parameter, in which case space is not reduced. Second, SMS removes the
five-extent limit; for example, for physical sequential data sets, as many as 16
extents can be used to satisfy the allocation.
If neither Step 1 nor Step 2 is successful, the allocation fails as before.
Each of these methods of backup and recovery has its advantages. You will need
to decide which method is best for the particular data that you need to back up.
MVS/ESA SML: Managing Data discusses the requirements and processes of
archiving, backing up, and recovering data sets using DFSMShsm, DFSMSdss, or
ISMF. It also contains information on disaster recovery.
3. Create a copy of a nonreusable VSAM data set on the same catalog, then
delete the original data set, define a new data set, and load the backup copy
into the newly defined data set.
a. To create a backup copy, define a data set, and use REPRO to copy the
original data set into the newly defined data set. If you define the backup
data set on the same catalog as the original data set or if the data set is
SMS-managed, the backup data set must have a different name.
b. To recover the data set, use the DELETE command to delete the original
data set if it still exists. Next, redefine the data set using the DEFINE
command, then restore it with the backup copy using the REPRO
command.
4. Create a copy of a reusable VSAM data set, then load the backup copy into the
original data set. When using REPRO, the REUSE attribute allows repeated
backups to the same VSAM reusable target data set.
a. To create a backup copy, define a data set, and use REPRO to copy the
original reusable data set into the newly defined data set.
b. To recover the data set, load the backup copy into the original reusable
data set.
5. Create a backup copy of a data set, then merge the backup copy with the
damaged data set. When using REPRO, the REPLACE parameter allows you
to merge a backup copy into the damaged data set. You cannot use the
REPLACE parameter with entry-sequenced data sets, because records are
always added to the end of an entry-sequenced data set.
a. To create a backup copy, define a data set, and use REPRO to copy the
original data set into the newly defined data set.
b. To recover the data set, use the REPRO command with the REPLACE
parameter to merge the backup copy with the destroyed data set. With a
key-sequenced data set, each source record whose key matches a target
record's key replaces the target record. Otherwise, the source record is
inserted into its appropriate place in the target cluster. With a fixed-length
or variable-length RRDS, each source record, whose relative record
number identifies a data record in the target data set, replaces the target
record. Otherwise, the source record is inserted into the empty slot its
relative record number identifies. When only part of a data set is damaged,
you can replace only the records in the damaged part of the data set. The
REPRO command allows you to specify a location at which copying is to
begin and a location at which copying is to end.
6. If the index of a key-sequenced data set or variable-length RRDS becomes
damaged, follow this procedure to rebuild the index and recover the data set.
This does not apply to a compressed key-sequenced data set. It is not possible
to REPRO just the data component of a compressed key-sequenced data set.
a. Use REPRO to copy the data component only. Sort the data.
b. Use REPRO with the REPLACE parameter to copy the cluster and rebuild
the index.
See DFSMS/MVS Access Method Services for ICF for an example of using
EXPORT with a base cluster and its associated alternate indexes.
Most catalog information is exported along with the data set, easing the problem of
redefinition. The backup copy contains all of the information necessary to redefine
the VSAM cluster or alternate index when you IMPORT the copy.
Using EXPORT/IMPORT
When you export a copy of a data set for backup, specify the TEMPORARY
attribute. Exporting a data set means that the data set is not to be deleted from the
original system.
You can export entry-sequenced or linear data set base clusters in control interval
mode by specifying the CIMODE parameter. When CIMODE is forced for a linear
data set, a RECORDMODE specification is overridden.
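A backup EXPORT might look like the following sketch, where the cluster and
output data set names are hypothetical:

```
//EXPORT   EXEC  PGM=IDCAMS
//BACKUP   DD    DSN=MY.EXPORT.COPY,DISP=(NEW,CATLG),
//         UNIT=TAPE,LABEL=(1,SL)
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* TEMPORARY keeps the original data set on the system. */
  /* CIMODE exports the ESDS in control interval mode.    */
  EXPORT MY.ESDS.CLUSTER -
         OUTFILE(BACKUP) -
         TEMPORARY CIMODE
/*
```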
Use the IMPORT command to totally replace a VSAM cluster whose backup copy
was built using the EXPORT command. The IMPORT command uses the backup
copy to replace the cluster's contents and catalog information.
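A minimal IMPORT sketch, again with hypothetical names, might be:

```
//IMPORT   EXEC  PGM=IDCAMS
//BACKUP   DD    DSN=MY.EXPORT.COPY,DISP=OLD
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* Replace the cluster's contents and catalog information */
  /* from the portable copy built by EXPORT.                */
  IMPORT INFILE(BACKUP) -
         OUTDATASET(MY.ESDS.CLUSTER)
/*
```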
IMPORT will not propagate Distributed Data Management (DDM) attributes if you
specify the INTOEMPTY parameter. Distributed FileManager/MVS will reestablish
the DDM attributes when the imported data set is first accessed.
Compressed data must not be considered portable. IMPORT will not propagate
extended format or compression information if you specify the INTOEMPTY
parameter.
Using concurrent copy for backup has the following advantages. It:
Causes little or no disruption
Is logically consistent
Does not require taking down the application that uses the data
Executes without regard to how the application is using the data
Works for any kind of DFSMSdss dump or copy operation
Eliminates the unavailability of DFSMShsm while control data sets are being
backed up.
DFSMShsm can use concurrent copy to copy its own control data sets and journal.
Running concurrent copy (like any copy or backup) during off-peak hours results in
better system throughput.
Backing up the data sets in a user catalog allows you to recover from damage to
the catalog. You can import the backup copy of a data set whose entry is lost or
you can redefine the entry and reload the backup copy.
If a subsequent OPEN is issued for update, VSAM turns off the open-for-output
indicator at CLOSE. If the data set is opened for input, however, the
open-for-output indicator is left on. It is the successful CLOSE after an OPEN for
OUTPUT that causes the OPENIND to get turned off.
When the data set is subsequently opened and the user's program attempts to
process records beyond end-of-data or end-of-key range, a read operation results
in a “no record found” error, and a write operation might write records over
previously written records. To avoid this, you can use the VERIFY command, which
corrects the catalog information. For additional information about recovering a data
set, see DFSMS/MVS Managing Catalogs.
The IDCAMS VERIFY command can also be used to verify a VSAM data set. The
IDCAMS VERIFY command causes IDCAMS to open the VSAM data set for
output, issue a VSAM VERIFY macro call, and then close the data set.
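Assuming a hypothetical cluster name, an IDCAMS VERIFY job is a short sketch:

```
//VERIFY   EXEC  PGM=IDCAMS
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* Open the data set for output, issue the VERIFY macro, */
  /* and close it so the catalog is updated.               */
  VERIFY DATASET(MY.KSDS.CLUSTER)
/*
```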
The catalog will be updated from the verified information in the VSAM control
blocks when the VSAM data set which was opened for output is successfully
closed.
The actual VSAM control block fields that get updated depend on the type of data
set being verified. VSAM control block fields that may be updated include “High
used RBA/CI” for the data set, “High key RBA/CI”, “number of index levels”, and
“RBA/CI of the first sequence set record”.
The VERIFY command should be used following a system failure that caused a
component opened for update processing to be improperly closed. Clusters,
alternate indexes, entry-sequenced data sets, and catalogs can be verified. Paths
over an alternate index and linear data sets cannot be verified. Paths defined
directly over a base cluster can be verified. The VERIFY macro will perform no
function when VSAM RLS is being used. VSAM RLS is responsible for maintaining
data set information in a shared environment.
Duplicate data in a key-sequenced data set, the least likely error to occur, can
result from a failure during a control interval or control area split. To reduce the
number of splits, specify free space for both control intervals and control areas. If
the failure occurred before the index was updated, the insert is lost, no duplicate
exists, and the data set is usable.
If the failure occurred between updating the index and writing the updated control
interval into secondary storage, some data is duplicated. However, you can access
both versions of the data by using addressed processing. If you want the current
version, use REPRO to copy it to a temporary data set and again to copy it back to
a new key-sequenced data set. If you have an exported copy of the data, use the
IMPORT command to obtain a reorganized data set without duplicate data.
If the index is replicated and the error occurred between the write operations for the
index control intervals, but the output was not affected, both versions of the data
can be retrieved. The sequence of operations for a control area split is similar to
that for a control interval split. To recover the data, use the REPRO or IMPORT
command in the same way as for the failure described in the previous paragraph.
Use the journal exit (JRNAD) to determine control interval and control area splits
and the RBA range affected.
You cannot use VERIFY to correct catalog records for a key-sequenced data set,
or a fixed-length or variable-length RRDS after load mode failure. An
entry-sequenced data set defined with the RECOVERY attribute can be verified
after a create (load) mode failure; however, you cannot run VERIFY against an
empty data set or a linear data set. Any attempt to do either will result in a VSAM
logical error. See “Opening a Data Set” on page 121 for information about VSAM
issuing the implicit VERIFY command.
RACF and password protection can coexist for the same data set. The
RACF-authorization levels of ALTER, CONTROL, UPDATE, and READ correspond
to the VSAM password levels of master, control, update, and read.
To have password protection take effect for a data set, the catalog containing it
must be either RACF-protected or password-protected and the data set itself must
not be defined to RACF. Although passwords are not supported for a
RACF-protected data set, they can still provide protection if the data set is moved
to a system that does not have RACF protection. If a user-security-verification
routine (USVR) exists, it is not invoked for RACF-defined data sets.
To have the system erase sensitive data with RACF, the system programmer must
do the following:
Activate the erase feature with the RACF SETROPTS command.
Ensure that the VSAM cluster or non-VSAM data set is RACF-protected, with
the RACF ERASE option specified in the generic or discrete profile.
Specifying NOERASE using the access methods services commands will not
override erasure if the RACF profile has the ERASE option set.
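A sketch of those two steps, using a hypothetical profile name (the commands
are issued from TSO by a suitably authorized user):

```
SETROPTS ERASE
ALTDSD 'MY.SENSITIVE.DATA' ERASE
```

The first command activates the erase-on-scratch feature; the second sets the
ERASE option in the discrete profile for MY.SENSITIVE.DATA.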
If the data set is sequential, partitioned, PDSE, or VSAM extended format, and you
specified ERASE, the released area is also overwritten when you use any of the
following:
The RLSE subparameter in the JCL SPACE parameter on a DD statement that
a program writes to
The partial release option in management class
The PARTREL macro.
If you request the ERASE option on a RAMAC Virtual Array, the system optimizes
the erasure so that it happens much faster than on other kinds of disks. The
erasure is guaranteed to complete before data set deletion or space release. After
a successful erasure, your data is not accessible by any software.
To protect multivolume non-VSAM DASD and tape data sets, you must define each
volume of the data set to RACF as part of the same volume set.
When a RACF-protected data set is opened for output and extended to a new
volume, the new volume is automatically defined to RACF as part of the same
volume set.
When a multivolume physical-sequential data set is opened for output, and any
of the data set's volumes are defined to RACF, either each subsequent volume
must be RACF-protected as part of the same volume set, or the data set must
not yet exist on the volume.
The system automatically defines all volumes of an extended sequential data
set to RACF when the space is allocated.
When a RACF-protected multivolume tape data set is opened for output, either
each subsequent volume must be RACF-protected as part of the same volume
set, or the tape volume must not yet be defined to RACF.
If the first volume opened is not RACF-protected, no subsequent volume can
be RACF-protected. If a multivolume data set is opened for input (or a
nonphysical-sequential data set is opened for output), no such consistency
check is performed when subsequent volumes are accessed.
You can use RACF to provide access control to tape volumes that have no labels
(NL), standard labels (SL), ISO/ANSI labels (AL), or to tape volumes that are
referred to with bypass label processing (BLP).
RACF protection of tape data sets is provided on a volume basis or on a data set
basis. A tape volume is defined to RACF explicitly by use of the RACF command
language, or automatically. A tape data set is defined to RACF whenever a data set
is opened for OUTPUT, OUTIN, or OUTINX and RACF tape data set protection is
active, or when the data set is the first file in a sequence. All data sets on a tape
volume are RACF protected if the volume is RACF protected.
If a data set is defined to RACF and is password protected, access to the data set
is authorized only through RACF. If a tape volume is defined to RACF and the data
sets on the tape volume are password protected, access to any of the data sets is
authorized only through RACF. Tape volume protection is activated by issuing the
RACF command SETROPTS CLASSACT(TAPEVOL). Tape data set name
protection is activated by issuing the RACF command SETROPTS
CLASSACT(TAPEDSN). Data set password protection is bypassed. The system
ignores data set password protection for system-managed DASD data sets.
ISO/ANSI Version 3 and Version 4 installation exits that execute under RACF
receive control during ANSI volume label processing. Control goes to the
RACHECK preprocessing and postprocessing installation exits. The same
IECIEPRM exit parameter list passed to ANSI installation exits is passed to the
RACF installation exits if the accessibility code is any alphabetic character from A
through Z. See DFSMS/MVS Installation Exits for more information.
Note: ISO/ANSI Version 4 tapes also allow the special characters
!*"%&'()+,-./:;<=>?_ and the numerics 0–9.
The following are examples of passwords required for defining, listing, and deleting
non-system-managed catalog entries:
Defining a non-system-managed data set in a password-protected catalog
requires the catalog's update (or higher) password.
Listing, altering, or deleting a data set's catalog entry requires the appropriate
password of either the catalog or the data set. However, if the catalog, but not
the data set, is protected, no password is needed to list, alter, or delete the
data set's catalog entry.
OPEN and CLOSE operations on a data set can be authorized by the password
pointed to by the PASSWD parameter of the ACB macro. The “ACB Macro” section
in DFSMS/MVS Macro Instructions for Data Sets describes which level of password
is required for each type of operation.
Control access. The control access password authorizes you to use control
interval access. For further information, see Chapter 11, “Processing Control
Intervals” on page 163.
Update access. The update access password authorizes you to retrieve,
update, insert, or delete records in a data set. The update password does not
allow you to alter passwords or other security information.
Read access. The read-only password allows you to examine data records and
catalog records, but not to add, alter, or delete them, nor to see password
information in a catalog record.
Each higher-level password allows all operations permitted by lower levels. Any
level can be null (not specified), but if a low-level password is specified, the
DEFINE and ALTER commands give the higher passwords the value of the highest
password specified. For example, if only a read-level password is specified, the
read-level becomes the update-, control-, and master-level password as well. If you
specify a read password and a control password, the control password value
becomes the master-level password as well. However, in this case, the update-level
password is null because the value of the read-level password is not given to
higher passwords.
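The second example above can be sketched with a DEFINE CLUSTER command.
All names and values are hypothetical, and the space and key attributes are
illustrative only:

```
//DEFINE   EXEC  PGM=IDCAMS
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* READPW and CONTROLPW are given; CONTROLPW also becomes */
  /* the master password, and the update password is null.  */
  DEFINE CLUSTER( -
         NAME(MY.KSDS.CLUSTER) -
         INDEXED -
         KEYS(4 0) -
         RECORDSIZE(100 100) -
         TRACKS(10 5) -
         VOLUMES(VSER01) -
         READPW(READKEY) -
         CONTROLPW(CTLKEY))
/*
```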
Catalogs are themselves VSAM data sets, and can have passwords. For some
operations (for example, listing all the catalog's entries with their passwords or
deleting catalog entries), the catalog's passwords can be used instead of the entry's
passwords. If the master catalog is protected, the update- or higher-level password
is required when defining a user catalog, because all user catalogs have an entry in
the master catalog. When deleting a protected user catalog, the user catalog's
master password must be specified.
Some access method services operations might involve more than one password
authorization. For example, importing a data set involves defining the data set and
loading records into it. If the catalog into which the data set is being imported is
password-protected, its update-level (or higher-level) password is required for the
definition; if the data set is password-protected, its update-level (or higher-level)
password is required for the load. The IMPORT command allows you to specify the
password of the catalog; the password, if any, of the data set being imported is
obtained by the commands from the exported data.
For a Data Set: Observe the following precautions when using protection
commands for data sets:
To access a VSAM data set using its cluster name instead of data or
index names, you must specify the proper level password for the cluster even
if the data or index passwords are null.
To access a VSAM data set using its data or index name instead of its
cluster name, you must specify the proper data or index password. However,
if cluster passwords are defined, the master password of the cluster can be
specified instead of the data or index password.
Null means no password was specified. If a cluster has only null passwords,
you can access the data set using the cluster name without specifying
passwords, even if the data and index entries of the cluster have defined
passwords. Using null passwords allows unrestricted access to the VSAM
cluster but protects against unauthorized modification of the data or index as
separate components.
Password Prompting
Computer operators and TSO terminal users can supply a correct password if a
processing program does not give the correct one when it tries to open a
password-protected data set. When the data set is defined, you can use the CODE
parameter to specify a code instead of the data set name to prompt the operator or
terminal user for a password. The prompting code keeps your data secure by not
allowing the operator or terminal user to know both the name of the data set and its
password.
A data set's code is used for prompting for any operation against a
password-protected data set. The catalog code is used for prompting when the
catalog is opened as a data set, when an attempt is made to locate catalog entries
that describe the catalog, and when an entry is to be defined in the catalog.
If you do not specify a prompting code, VSAM identifies the job for which a
password is needed with the JOBNAME and DSNAME for background jobs or with
the DSNAME alone for foreground (TSO) jobs.
When you define a data set, you can use the ATTEMPTS parameter to specify the
number of times the computer operator or terminal user is allowed to give the
password when a processing program is trying to open a data set.
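The CODE and ATTEMPTS parameters can also be set on an existing data set
with ALTER, as in this sketch with hypothetical names and values:

```
//ALTER    EXEC  PGM=IDCAMS
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* Prompt with the code SECRET1 instead of the data set */
  /* name, and allow at most two password attempts.       */
  ALTER MY.KSDS.CLUSTER -
        CODE(SECRET1) -
        ATTEMPTS(2)
/*
```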
The system ignores data set password protection for system-managed data sets.
Assign a Password:
When you define a non-VSAM data set in an integrated catalog facility catalog, the
data set is not protected with passwords in its catalog entry. However, you can
password protect the catalog. Two levels of protection options are available.
Specify these options in the LABEL field of a DD statement with the parameter
PASSWORD or NOPWREAD. (Refer to OS/390 MVS JCL Reference for more
details.)
Password protection (specified by the PASSWORD parameter) makes a data
set unavailable for all types of processing until a correct password is entered by
the system operator, or for a TSO job by the TSO user.
No-password-read protection (specified by the NOPWREAD parameter) makes
a data set available for input without a password, but requires that the
password be entered for output or delete operations.
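The two options appear as the third subparameter of the LABEL parameter, as in
this sketch (data set names are hypothetical):

```
//* Password protection: password required for all processing.
//DD1     DD  DSN=MY.PROT.DATA,DISP=(NEW,CATLG),UNIT=TAPE,
//            LABEL=(1,SL,PASSWORD)
//* No-password-read: input allowed without a password.
//DD2     DD  DSN=MY.READ.DATA,DISP=OLD,UNIT=TAPE,
//            LABEL=(1,SL,NOPWREAD)
```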
The system sets the data set security byte either in the standard header label 1, as
shown in DFSMS/MVS Using Magnetic Tapes, or in the identifier data set control
block (DSCB). After you have requested security protection for magnetic tapes, you
cannot remove it with JCL unless you overwrite the protected data set.
Besides requesting password protection in your JCL, you must enter at least one
record for each protected data set in a data set named PASSWORD, which must
be created on the system-residence volume. The system-residence volume
contains the nucleus of the operating system. You should also request password
protection for the PASSWORD data set itself to prevent both reading and writing
without knowledge of the password.
For a data set on direct access storage devices, you can place the data set under
protection when you enter its password in the PASSWORD data set. You can use
the PROTECT macro or the IEHPROGM utility program to add, change, or delete
an entry in the PASSWORD data set. With either of these methods, the system
updates the DSCB of the data set to reflect its protected status. Therefore, you do
not need to use JCL whenever you add, change, or remove security protection for
a data set on direct access storage devices. DFSMS/MVS DFSMSdfp Advanced
Services describes how to maintain the PASSWORD data set, including the
PROTECT macro. DFSMS/MVS Utilities describes the IEHPROGM utility program.
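An IEHPROGM sketch for adding an entry to the PASSWORD data set follows.
The data set name and password are hypothetical, and the statement is reduced
to its simplest form (note that the utility's own keyword spelling is PASWORD,
with one S); see DFSMS/MVS Utilities for the full ADD statement syntax:

```
//PROTECT  EXEC  PGM=IEHPROGM
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  ADD DSNAME=MY.PROT.DATA,PASWORD2=SECRET
/*
```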
All access method services load modules are contained in SYS1.LINKLIB, and the
root segment load module (IDCAMS) is link-edited with the SETCODE AC(1)
attribute.
APF authorization is established at the job step level. If, during the execution of an
APF-authorized job step, a load request is satisfied from an unauthorized library,
the task is abnormally terminated. It is the installation's responsibility to ensure that
a load request cannot be satisfied from an unauthorized library during access
method services processing.
The following situations could cause the invalidation of APF authorization for
access method services:
An access method services module is loaded from an unauthorized library.
A user-security-verification routine (USVR) is loaded from an unauthorized
library during access method services processing.
An exception exit routine is loaded from an unauthorized library during access
method services processing.
A user-supplied special graphics table is loaded from an unauthorized library
during access method services processing.
Because APF authorization is established at the job step task level, access method
services is not authorized if invoked by an unauthorized problem program or an
unauthorized terminal monitor program (TMP).
The system programmer must enter the names of those access method services
commands requiring APF authorization to execute under TSO in the authorized
command list.
Programs that are designed to be called as exit routines should never be linked or
bound with APF authorization. This is because someone could directly invoke the
routine via JCL and it would be operating with APF authorization in an environment
it was not designed for.
If the above functions are required and access method services is invoked from a
problem program or a TSO terminal monitor program, the invoking program must
be authorized.
If a password exists for the type of operation being performed, the password must
be given, either in the command or in response to prompting. The
user-security-verification routine is called only after the password specified is
verified; it is bypassed whenever a correct master password is specified, whether
or not the master password is required for the requested operation.
Enciphering sensitive data adds to its security in offline environments, such as
when data sets are:
Transported to another installation, where data security is required during
transportation and while the data is stored at the other location
Stored for long periods of time at a permanent storage location
You can use REPRO to copy a plaintext (not enciphered) data set to another data
set in enciphered form. Enciphering converts data to an unintelligible form called a
ciphertext. The enciphered data set can then be stored offline or sent to a remote
location. When desired, the enciphered data set can be brought back online and
you can use REPRO to recover the plaintext from the ciphertext by copying the
enciphered data set to another data set in plaintext (deciphered) form.
Enciphering and deciphering are based on an 8-byte binary value called the key.
Using the REPRO DECIPHER option, the data can be either deciphered on the
system it was enciphered on, or deciphered on another system that has this
functional capability and the required key to decipher the data. Given the same key,
encipher and decipher are inverse operations. The REPRO DECIPHER option uses
the services of the Programmed Cryptographic Facility or the Cryptographic Unit
Support to encipher/decipher the data, and uses block chaining with ciphertext
feedback during the encipher/decipher operation.
Except for catalogs, all data sets supported for copying by REPRO are supported
as input (SAM, ISAM, VSAM) for enciphering and as output (SAM, VSAM) for
deciphering. The input data set for the decipher operation must be an enciphered
copy of a data set produced by REPRO. The output data set for the encipher
operation can only be a VSAM entry-sequenced, linear, or sequential (SAM) data
set. The target (output) data set of both an encipher and a decipher operation must
be empty. If the target data set is a VSAM data set that has been defined with the
reusable attribute, you can use the REUSE parameter of REPRO to reset it to an
empty status.
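An encipher operation might be sketched as follows, with hypothetical data set
names and a privately managed key (the PRIVATEKEY subparameter); this is an
illustrative sketch, not a complete key-management setup:

```
//ENCIPHR  EXEC  PGM=IDCAMS
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* Copy the plaintext KSDS to an empty ESDS in enciphered */
  /* form, using a privately managed data encrypting key.   */
  REPRO INDATASET(MY.PLAIN.KSDS) -
        OUTDATASET(MY.CIPHER.ESDS) -
        ENCIPHER(PRIVATEKEY)
/*
```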
Figure 4 on page 59 is a graphic representation of the input and output data sets
involved in REPRO ENCIPHER/DECIPHER operations.
Figure 4. REPRO ENCIPHER/DECIPHER data sets. For ENCIPHER, the source
(plaintext) data set can be VSAM (ESDS, KSDS, LDS, RRDS), ISAM, or SAM, and
the target (ciphertext) data set is a VSAM ESDS, a linear data set, or SAM. For
DECIPHER, the source is the enciphered (ciphertext) copy and the target
(plaintext) data set can be VSAM or SAM.
You should not build an alternate index over a VSAM entry-sequenced data set that
is the output of a REPRO ENCIPHER operation.
When a VSAM relative record data set is enciphered, the record size of the output
data set must be at least 4 bytes greater than the record size of the relative record
data set. (The extra 4 bytes are needed to prefix a relative record number to the
output record.) You can specify the record size of an output VSAM
entry-sequenced data set through the RECORDSIZE parameter of the DEFINE
CLUSTER command. You can specify the record size of an output sequential
(SAM) data set through the DCB LRECL parameter in the output data set's DD
statement. When an enciphered VSAM relative record data set is subsequently
deciphered with a relative record data set as the target, any empty slots in the
original data set are reestablished. When a linear data set is enciphered, both the
source (input) and target (output) data sets must be linear data sets.
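For example, if the source RRDS has 100-byte records, the output ESDS must
accept 104-byte records. A DEFINE sketch for such a target, with hypothetical
names and illustrative space values:

```
//DEFTGT   EXEC  PGM=IDCAMS
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* Source RRDS records are 100 bytes; the ESDS target must  */
  /* allow 104 bytes for the prefixed relative record number. */
  DEFINE CLUSTER( -
         NAME(MY.CIPHER.ESDS) -
         NONINDEXED -
         RECORDSIZE(104 104) -
         TRACKS(10 5) -
         VOLUMES(VSER01))
/*
```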
When you encipher a data set, you can specify any of the delimiter parameters
available with the REPRO command (SKIP, COUNT, FROMADDRESS,
FROMKEY, FROMNUMBER, TOADDRESS, TOKEY, TONUMBER) that are
appropriate to the data set being enciphered. However, no delimiter parameter can
be specified when a data set is deciphered. If DECIPHER is specified together with
any REPRO delimiter parameter, your REPRO command terminates with a
message.
If you specify the REPRO command's REPLACE parameter and either the
ENCIPHER or the DECIPHER parameter, access method services ignores
REPLACE.
When the REPRO command copies and enciphers a data set, it precedes the
enciphered data records with one or more records of clear header data. The header
data preceding the enciphered data contains information necessary for the
deciphering of the enciphered data, such as:
Number of header records
Number of records to be ciphered as a unit
Key verification data
Enciphered data encrypting keys.
Note: If the output data set for the encipher operation is a compressed format
data set, little or no space is saved, because enciphered data is
pseudo-random and compresses poorly. You can save space for the input if the
input data set is in compressed format.
Key Management
This section is intended to give you an overview of key management (used when
enciphering or deciphering data via the REPRO command).
For both private- and system-key management, REPRO allows you to supply an
8-byte value to be used as the plaintext data encrypting key. If you do not supply
the data encrypting key, REPRO provides an 8-byte value to be used as the
plaintext data encrypting key. The plaintext data encrypting key is used to
encipher/decipher the data using the Data Encryption Standard.
If you supply your own plaintext data encrypting key on ENCIPHER or DECIPHER
through the REPRO command, you risk exposing that key when the command is
listed on SYSPRINT. To avoid this exposure, you can direct REPRO to a data
encrypting key data set to obtain the plaintext data encrypting key. You identify the
DD statement for this data set through the DATAKEYFILE parameter of the
REPRO command. REPRO obtains the data encrypting key from this data set by
locating the first nonblank column in the first record of the data set and processing
16 consecutive bytes of data. These 16 consecutive bytes contain the hexadecimal
character representation for the 8-byte key. Each byte of data must contain the
EBCDIC character representation of a hexadecimal digit (that is, 0 through F). If a
blank byte is found before 16 nonblank bytes have been processed, the data key is
padded to the right with EBCDIC blanks. If the end of the first record is
encountered before 16 bytes have been processed, the data key is considered not
valid. If a valid data encrypting key cannot be obtained from the first record of the
data encrypting key data set, then REPRO either generates the data encrypting key
(ENCIPHER) or terminates with a message (DECIPHER).
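A decipher sketch using a data encrypting key data set follows. The DD name,
data set names, and the assumption that the key data set's first record holds the
16 EBCDIC hexadecimal characters of the key are hypothetical illustrations:

```
//DECIPHR  EXEC  PGM=IDCAMS
//KEYIN    DD    DSN=MY.KEY.DATA,DISP=SHR
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* Obtain the plaintext data encrypting key from the   */
  /* first record of the data set allocated to DD KEYIN. */
  REPRO INDATASET(MY.CIPHER.ESDS) -
        OUTDATASET(MY.PLAIN.KSDS) -
        DECIPHER(DATAKEYFILE(KEYIN))
/*
```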
If you allow (or force, by a not valid data encrypting key data set record) REPRO to
provide the data encrypting key on ENCIPHER, you risk exposing the data
encrypting key only if you manage your keys privately. REPRO lists the generated
data encrypting key only for privately managed keys.
With system key management, REPRO lists the data encrypting key enciphered
under a secondary file key, rather than the plaintext data encrypting key. In this
way, the actual plaintext data encrypting key is not revealed. (Note that the
secondary file key is used only in key communication, not when enciphering data.)
If you choose to manage your own private keys, no secondary file keys are used to
encipher the data encrypting key; it is your responsibility to ensure the secure
nature of your private data encrypting key.
Although both internal and external secondary file keys can be used when doing
a REPRO ENCIPHER, only internal secondary file keys can be used when doing a
REPRO DECIPHER. Thus, external secondary file keys cannot be used to do a
REPRO ENCIPHER and REPRO DECIPHER on the same system. External
secondary file keys are used when the enciphered data is to be deciphered on
another system. The REPRO DECIPHER is done on the other system using the
same secondary file key. However, in this other system's CKDS, the secondary file
key must be defined as an internal secondary file key.
Requirements
In planning to use the ENCIPHER or DECIPHER functions of the REPRO
command, you should be aware of the following requirements:
Either the Programmed Cryptographic Facility, program number 5740-XY5, or
the Cryptographic Unit Support Facility, program number 5740-XY6, and its
prerequisites must be installed on your system.
Before issuing the REPRO command (via a batch job or from a TSO terminal),
the Programmed Cryptographic Facility must have already been started through
a START command at the operator's console.
Data Storage
Logical records of VSAM data sets are stored differently from logical records in
non-VSAM data sets. VSAM stores records in control intervals. A control interval is
a continuous area of direct access storage that VSAM uses to store data records
and control information that describes the records. Whenever a record is retrieved
from direct access storage, the entire control interval containing the record is read
into a VSAM I/O buffer in virtual storage. The desired record is transferred from the
VSAM buffer to a user-defined buffer or work area. Figure 5 shows how a logical
record is retrieved from direct access storage.
Figure 5. Retrieving a logical record: the control interval containing records R1,
R2, and R3 is read from DASD storage over the I/O path into a VSAM I/O buffer in
virtual storage, and the desired record (R2) is then moved to the user's work area.
(CI = control interval, R = record)
Control Intervals
The size of control intervals can vary from one VSAM data set to another, but all
the control intervals within the data portion of a particular data set must be the
same length. You can let VSAM select the size of a control interval for a data set,
or you can request a particular control interval size using the access method
services DEFINE command. For information on selecting the best control interval
size, see “Optimizing Control Interval Size” on page 141.
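Requesting a particular control interval size is a DEFINE option, as in this sketch
with hypothetical names and illustrative key, record, and space values:

```
//DEFINE   EXEC  PGM=IDCAMS
//SYSPRINT DD    SYSOUT=*
//SYSIN    DD    *
  /* Request 4096-byte control intervals rather than */
  /* letting VSAM choose the size.                   */
  DEFINE CLUSTER( -
         NAME(MY.KSDS.CLUSTER) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(200 400) -
         CONTROLINTERVALSIZE(4096) -
         TRACKS(10 5) -
         VOLUMES(VSER01))
/*
```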
A control interval consists of:
Logical records
Free space
Control information fields.
Note: In a linear data set all of the control interval bytes are data bytes. There is
no imbedded control information.
Control Information Fields
CIDFs are 4 bytes long, and contain information about the control interval, including
the amount and location of free space. RDFs are 3 bytes long, and describe the
length of records and how many adjacent records are of the same length.
If two or more adjacent records have the same length, only two RDFs are used for
this group. One RDF gives the length of each record, and the other gives the
number of consecutive records of the same length. Figure 7 on page 67 shows
RDFs for records of the same and different lengths.
Control Interval 1
┌──────┬──────┬──────┬──────┬─────┬─────┬──────┐
│ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │
│ R1 │ R2 │ R3 │ FS │ RDF │ RDF │ CIDF │
│ │ │ │ │ 2 │ 1 │ │
└──────┴──────┴──────┴──────┴─────┴─────┴──────┘
Record length 160 160 160 22 3 3 4
Control Interval 2
┌─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬──────┐
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ │
│ R1 │ R2 │ R3 │ R4 │ FS │ RDF │ RDF │ RDF │ RDF │ CIDF │
│ │ │ │ │ │ 4 │ 3 │ 2 │ 1 │ │
└─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴──────┘
130 70 110 140 46 3 3 3 3 4
Control Interval 3
┌─────┬─────┬─────┬─────┬──────┬─────┬─────┬─────┬─────┬─────┬──────┐
│ │ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ │ │
│ R1 │ R2 │ R3 │ R4 │ R5 │ FS │ RDF │ RDF │ RDF │ RDF │ CIDF │
│ │ │ │ │ │ │ │ │ │ │ │
└─────┴─────┴─────┴─────┴──────┴─────┴─────┴─────┴─────┴─────┴──────┘
80 80 80 100 93 63 3 3 3 3 4
FS = Free Space
Figure 7. Relation between Records and RDFs
For a data set in compressed format, VSAM must be used to extract and expand
each record in order to obtain usable data. If a compressed data set is processed
by methods other than VSAM, which do not recognize the record prefix, the
results are unpredictable and could include loss of data. See “Compressed Data”
on page 80 for additional information.
Control Areas
The control intervals in a VSAM data set are grouped together into fixed-length
contiguous areas of direct access storage called control areas. A VSAM data set is
actually composed of one or more control areas. The number of control intervals in
a control area is fixed by VSAM.
The maximum size of a control area is one cylinder, and the minimum size is one
track of DASD storage. When you specify the amount of space to be allocated to a
data set, you implicitly define the control area size. For information on finding the
best control area size, see “Optimizing Control Area Size” on page 145.
Spanned Records
Sometimes a record is larger than the control interval size used for a particular data
set. In VSAM, you do not need to break apart or reformat such records, because
you can specify spanned records when defining a data set. The SPANNED
parameter allows a record to extend across or span control interval boundaries.
Spanned records might reduce the amount of DASD space required for a data set
when data records vary significantly in length, or when the average record length is
large compared to the CI size. The following figures show how spanned records
allow more efficient use of space.
Figure 9 shows a data set with the same space requirements as in Figure 8, but
one that allows spanned records.
CI1 CI2 CI3 CI4 CI5
┌─────┬────┬──────┐ ┌─────────┬──────┐ ┌─────────┬──────┐ ┌─────┬────┬──────┐ ┌─────┬────┬──────┐
│ │ │ Con- │ │ │ Con- │ │ │ Con- │ │ R │ │ Con- │ │ │ │ Con- │
│ R │ FS │ trol │ │ R Seg 1 │ trol │ │ R Seg 2 │ trol │ │ Seg │ FS │ trol │ │ R │ FS │ trol │
│ │ │ Info │ │ │ Info │ │ │ Info │ │ 3 │ │ Info │ │ │ │ Info │
└─────┴────┴──────┘ └─────────┴──────┘ └─────────┴──────┘ └─────┴────┴──────┘ └─────┴────┴──────┘
The control interval size is reduced to 4096 bytes. When the record to be stored is
larger than the control interval size, the record is spanned between control
intervals. In Figure 9, control interval 1 contains a 2000-byte record. Control
intervals 2, 3, and 4 together contain one 10000-byte record. Control interval 5
contains a 2000-byte record. By changing the control interval size and allowing
spanned records, the three records can be stored in 20,480 bytes, reducing the
amount of storage needed by 10,240 bytes.
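A data set like the one in Figure 9 could be defined with the SPANNED parameter of access method services. This is a sketch only; the data set name, volume serial, and sizes are illustrative assumptions:

```
DEFINE CLUSTER -
      (NAME(EXAMPLE.ESDS2) -         /* illustrative name            */
      NONINDEXED -
      SPANNED -                      /* records may span CIs         */
      RECORDSIZE(2000 10000) -       /* average and maximum lengths  */
      CONTROLINTERVALSIZE(4096) -
      TRACKS(2 1) -
      VOLUMES(VSER01))               /* illustrative volume serial   */
```

With this definition, a 10000-byte record occupies several 4096-byte control intervals, as the figure shows.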
Linear data sets are rarely used by application programs. They are most
effective for specialized applications. For example, Data-In-Virtual (DIV) stores
data in linear data sets.
Records are added only at the end of the data set. Existing records cannot be
deleted. If you want to delete a record, you must flag that record as inactive; as far
as VSAM is concerned, the record is not deleted. Records can be updated, but
they cannot be lengthened.
When a record is loaded or added, VSAM indicates its relative byte address (RBA).
The RBA is the offset of this logical record from the beginning of the data set. The
first record in a data set has an RBA of 0; the second record has an RBA equal to
the length of the first record, and so on. The RBA of a logical record depends only
on the record's position in the sequence of records. The RBA is always expressed
as a fullword binary integer.
Note: You can access an HFS file with VSAM. The HFS file is simulated as an
ESDS.
When you use VSAM, the system attempts to support an HFS file compatibly, so
that the application program sees the file as if it were an ESDS.
Because HFS files are not actually stored as ESDSs, the system cannot simulate all
the characteristics of an ESDS. As a result, certain macros and
services have incompatibilities or restrictions when dealing with HFS files.
IBM has identified the macros and services whose use with HFS files is
affected.
Record Processing: Record boundaries are not maintained within binary files.
Text files are presumed to be EBCDIC. Repositioning functions (such as POINT,
CLOSE TYPE=T, GET DIRECT) are not allowed for FIFO or character special files.
When the file is accessed as binary, each record will be returned as length ACB
RPLBUFL (move mode), except, possibly, for the last one.
When the file is accessed as text, if any record in the file consists of zero bytes
(that is, a text delimiter is followed by another text delimiter), the record returned
will consist of one blank. If any record is longer than ACB RPLBUFL bytes, it will
result in an error return code on GET (for an ACB).
VSAM restrictions: The following are VSAM restrictions associated with this
function:
Only ESDS is simulated
No file sharing or buffer sharing is supported except for multiple readers and, of
course, for a reader and a writer for a FIFO.
STRNO > 1 is not supported
Chained RPLs are not supported
The shared resources option is not supported
Update and backward processing are not supported
Direct processing or POINT for FIFO and character special files is not
supported
There is no catalog support for HFS files
Alternate indexes are not supported
There is no support for JRNAD, UPAD, or the EXCEPTION exit
There is no cross memory support
ERASE, VERIFY, and SHOWCAT are not supported
Variable length binary records do not retain record boundaries during
conversion to a byte stream. During reading, each record, except the last, is
assumed to be the maximum length.
To specify the maximum record size, code the LRECL keyword on the JCL DD
statement, SVC 99, or TSO ALLOCATE. If not specified, the default is 32767.
On return from a synchronous PUT or a CHECK associated with an
asynchronous PUT, it is not guaranteed that data written has been
synchronized to the output device. To ensure data synchronization, use
ENDREQ, CLOSE, or CLOSE TYPE=T.
Services and utilities: The following services and utilities support HFS files:
Access Method Services (AMS) REPRO
REPRO by DD name is supported and uses QSAM.
AMS PRINT
PRINT by DD name is supported and uses QSAM. Instead of displaying 'RBA
OF RECORD', it displays 'RECORD NUMBER'.
DEVTYPE macro
DEVTYPE provides information related to the HFS file. Instead of giving return
code 8, as it did before DFSMS/MVS V1R3M0, if PATH is specified on the DD
statement, it returns a return code 0, a UCBTYP simulated value of
X'00000103', and a maximum block size of 32760.
The following services and utilities do not support HFS files. Unless stated
otherwise, they will return an error or unpredictable value when issued for an HFS
file.
Access Method Services (AMS)
– ALTER, DEFINE, DELETE, DIAGNOSE, EXAMINE, EXPORT, IMPORT,
LISTCAT, VERIFY
OBTAIN, SCRATCH, RENAME, TRKCALC, PARTREL
These macros require a DSCB or UCB. HFS files do not have DSCBs or valid
UCBs.
Note: ISPF Browse/Edit does not support HFS files.
See DFSMS/MVS Macro Instructions for Data Sets for additional information
regarding VSAM interfaces and HFS files.
Unique
In the same position in each record
In the first segment of a spanned record
The key must be in the same position in each record, the key data must be
contiguous, and each record's key must be unique. After it is specified, the value of
the key cannot be altered, but the entire record can be erased or deleted. For
compressed data sets, the key itself and any data before the key will not be
compressed.
When a new record is added to the data set, it is inserted in its collating sequence
by key, as shown in Figure 14.
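A key field meeting these requirements is declared with the KEYS parameter of DEFINE CLUSTER, which gives the key length and its offset in the record. The following sketch uses illustrative names and values:

```
DEFINE CLUSTER -
      (NAME(EXAMPLE.KSDS1) -         /* illustrative name           */
      INDEXED -
      KEYS(3 0) -                    /* 3-byte key at offset 0      */
      RECORDSIZE(100 200) -
      TRACKS(2 1) -
      VOLUMES(VSER01))
```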
Free Space
When a key-sequenced data set is defined, unused space can be scattered
throughout the data set to allow records to be inserted or lengthened. The unused
space is called free space. When a new record is added to a control interval or an
existing record is lengthened, subsequent records are moved into the following free
space to make room for the new or lengthened record. Conversely, when a record
is deleted or shortened, the space given up is reclaimed as free space for later use.
When you define your data set, you can use the FREESPACE parameter to specify
what percentage of each control interval is to be set aside as free space when the
data set is initially loaded.
Within each control area, you can reserve free space by using free control intervals.
If you have free space in your control area, it is easier to avoid splitting your control
area when you want to insert additional records or lengthen existing records. When
you define your data set, you can specify what percentage of the control area is to
be set aside as free space, using the FREESPACE parameter.
For information on specifying the optimal amount of control interval and control area
free space, see “Optimizing Free Space Distribution” on page 146.
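The FREESPACE parameter takes two percentages, the first for each control interval and the second for each control area. The following is a sketch with illustrative names and values:

```
DEFINE CLUSTER -
      (NAME(EXAMPLE.KSDS2) -
      INDEXED -
      KEYS(8 0) -
      RECORDSIZE(200 400) -
      FREESPACE(20 10) -             /* 20% of each CI, 10% of each CA */
      CYLINDERS(2 1) -
      VOLUMES(VSER01))
```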
After
┌─────┬─────┬─────┬─────────────┬───┬───┬───┬───┐
│ │ │ │ │ │ │ │ C │
│ │ │ │ │ R │ R │ R │ I │
│ 11 │ 12 │ 14 │ Free Space │ D │ D │ D │ D │
│ │ │ │ │ F │ F │ F │ F │
└─────┴─────┴─────┴─────────────┴───┴───┴───┴───┘
Figure 16. Inserting a Logical Record in a Control Interval
Two logical records are stored in the first control interval shown in Figure 16. Each
logical record has a key (11 and 14). The second control interval shows what
happens when you insert a logical record with a key of 12.
When a record is deleted, the procedure is reversed, and the space occupied by
the logical record and corresponding RDF is reclaimed as free space.
Prime Index
A key-sequenced data set always has a prime index that relates key values to the
relative locations of the logical records in a data set. The prime index, or simply
index, has two uses:
To locate the collating position when inserting records
To locate records for retrieval
When initially loading a data set, records must be presented to VSAM in key
sequence. The index for a key-sequenced data set is built automatically by VSAM
as the data set is loaded with records.
When a data control interval is completely loaded with logical records, free space,
and control information, VSAM makes an entry in the index. The entry consists of
the highest possible key in the data control interval and a pointer to the beginning
of that control interval.
Key Compression
The key in an index entry is stored by VSAM in a compressed form. Compressing
the key eliminates from the front and back of a key those characters that are not
necessary to distinguish it from the adjacent keys. Compression helps achieve a
smaller index by reducing the size of keys in index entries. VSAM automatically
does key compression in any key-sequenced data set. It is independent of whether
the data set is in compressed format.
Each slot has a unique relative record number, and the slots are sequenced by
ascending relative record number. Each record occupies a slot, and is stored and
retrieved by the relative record number of that slot. The position of a data record is
fixed; its relative record number cannot change. A fixed-length RRDS cannot have
a prime index or an alternate index.
Because the slot can either contain data or be empty, a data record can be
inserted or deleted without affecting the position of other data records in the
fixed-length RRDS. The record definition field (RDF) shows whether the slot is
occupied or empty. Free space is not provided in a fixed-length RRDS because the
entire data set is divided into fixed-length slots.
In a fixed-length RRDS, each control interval contains the same number of slots.
The number of slots is determined by the control interval size and the record
length. Figure 17 on page 77 shows the structure of a fixed-length RRDS after
adding a few records. Each slot has a relative record number and an RDF.
┌──────┬──────┬──────┬──────┬───┬───┬───┬───┬───┐
│ │ │ │ │ R │ R │ R │ R │ C │
│ │ │ │ │ D │ D │ D │ D │ I │
│ 1 │ 2 │ 3(E) │ 4(E) │ F │ F │ F │ F │ D │
│ │ │ │ │ 4 │ 3 │ 2 │ 1 │ F │
└──────┴──────┴──────┴──────┴───┴───┴───┴───┴───┘
┌──────┬──────┬──────┬──────┬───┬───┬───┬───┬───┐
│ │ │ │ │ R │ R │ R │ R │ C │
│ 5 │ 6(E) │ 7(E) │ 8(E) │ D │ D │ D │ D │ I │
│ │ │ │ │ F │ F │ F │ F │ D │
│ │ │ │ │ 8 │ 7 │ 6 │ 5 │ F │
└──────┴──────┴──────┴──────┴───┴───┴───┴───┴───┘
┌──────┬───────┬─────┬──────┬───┬───┬───┬───┬───┐
│ │ │ │ │ R │ R │ R │ R │ C │
│ 9 │ 10(E) │ 11 │ 12 │ D │ D │ D │ D │ I │
│ │ │ │ │ F │ F │ F │ F │ D │
│ │ │ │ │ 12│ 11│ 10│ 9 │ F │
└──────┴───────┴─────┴──────┴───┴───┴───┴───┴───┘
(E)-Empty Slot
Figure 17. Fixed-length Relative Record Data Set
You must load the variable-length RRDS sequentially in ascending relative record
number order. To define a variable-length RRDS, specify NUMBERED and
RECORDSIZE. The average record length and maximum record length in
RECORDSIZE must be different.
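Following that rule, a variable-length RRDS definition might look like this sketch (names and sizes are illustrative):

```
DEFINE CLUSTER -
      (NAME(EXAMPLE.VRRDS1) -
      NUMBERED -
      RECORDSIZE(100 200) -          /* average and maximum must differ */
      TRACKS(2 1) -
      VOLUMES(VSER01))
```

If the average and maximum values in RECORDSIZE were equal, NUMBERED would define a fixed-length RRDS instead.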
Free space is used for inserting and lengthening variable-length RRDS records.
When a record is deleted or shortened, the space given up is reclaimed as free
space for later use. When you define your data set, use the FREESPACE
parameter to specify what percentage of each control interval and control area is to
be set aside as free space when the data set is initially loaded. “Insertion of a
Logical Record in a Control Interval” on page 75 shows how free space is used to
insert and delete a logical record.
Spanned records and alternate indexes are not allowed for a variable-length RRDS.
A variable-length RRDS is a KSDS processed as an RRDS, so a prime index is created.
Note: Variable-length RRDS performance is similar to that of a key-sequenced data
set, and is slower than that of a fixed-length RRDS.
Figure 20. Comparison of ESDS, KSDS, Fixed-Length RRDS, Variable-Length RRDS, and Linear
Data Sets

Record order:
  ESDS: records are in order as they are entered.
  KSDS: records are in collating sequence by key field.
  Fixed-length RRDS: records are in relative record number order.
  Variable-length RRDS: records are in relative record number order.
  Linear data sets: no processing at the record level.

Direct access:
  ESDS: direct access by RBA.
  KSDS: direct access by key or by RBA.
  Fixed-length RRDS: direct access by relative record number.
  Variable-length RRDS: direct access by relative record number.
  Linear data sets: access with Data-In-Virtual (DIV).

Alternate indexes:
  ESDS: allowed (1).
  KSDS: allowed.
  Fixed-length RRDS: not allowed.
  Variable-length RRDS: not allowed.
  Linear data sets: not allowed.

Record identity:
  ESDS: a record's RBA cannot change.
  KSDS: a record's RBA can change.
  Fixed-length RRDS: a record's relative record number cannot change.
  Variable-length RRDS: a record's relative record number cannot change.
  Linear data sets: no processing at the record level.

Space for additions:
  ESDS: space at the end of the data set is used for adding records.
  KSDS: free space is used for inserting and lengthening records.
  Fixed-length RRDS: empty slots in the data set are used for adding records.
  Variable-length RRDS: free space is used for inserting and lengthening records.
  Linear data sets: no processing at the record level.

Deletion:
  ESDS: a record cannot be deleted, but you can reuse its space for a record of
  the same length (1).
  KSDS: space given up by a deleted or shortened record becomes free space.
  Fixed-length RRDS: a slot given up by a deleted record can be reused.
  Variable-length RRDS: space given up by a deleted or shortened record becomes
  free space.
  Linear data sets: no processing at the record level.

Spanned records:
  ESDS: allowed.
  KSDS: allowed.
  Fixed-length RRDS: not allowed.
  Variable-length RRDS: not allowed.
  Linear data sets: not allowed.

Extended format:
  ESDS: extended format allowed.
  KSDS: extended format or compression allowed.
  Fixed-length RRDS: extended format allowed.
  Variable-length RRDS: extended format allowed.
  Linear data sets: extended format allowed.

Note:
1. Not supported for HFS files.
| When a data set is allocated as an extended format data set, the data and index
| are extended format. Any alternate indexes related to an extended format cluster
| are also extended format.
If a data set is allocated as an extended format data set, 32 bytes (X'20') are added
to each physical block. Consequently, when the control interval size is calculated or
explicitly specified, this physical block overhead may increase the amount of space
actually needed for the data set. Figure 21 shows the control interval sizes affected
and the percentage increase in space for each; other control interval sizes do not
increase the space needed.
Restrictions
| The following are restrictions for defining an extended format data set:
| An extended format data set does not allow the indexes to be imbedded with
| the data (IMBED parameter), or the data to be split into keyranges
| (KEYRANGES parameter).
| Extended format data sets must be SMS-managed. The mechanism for
| requesting extended format for VSAM data sets is via the SMS data class
| DSNTYPE=EXT parameter and subparameters 'R' or 'P' to indicate required or
| preferred.
| An open for improved control interval (ICI) processing is not allowed for
| extended format data sets.
| A checkpoint will not be allowed for an environment that has any extended
| format data sets open. The checkpoint will fail with an IHJ000I message.
| Hiperbatch processing is not supported for extended format data sets.
Compressed Data
In order to use compression, the data set must be in extended format; only
extended format key-sequenced data sets can be compressed. The compressed
data records have a slightly different format from logical records in a data set that
does not hold compressed data. This results in several incompatibilities that can
affect the definition of the data set or access to records in the data set:
| The maximum record length for nonspanned data sets is three bytes less than
| the maximum record length of data sets that do not contain compressed data
| (this length is CISIZE-10).
| The relative byte address (RBA) of another record, or the address of the next
| record in a buffer, cannot be determined using the length of the current record
| or the length of the record provided to VSAM.
| The length of the stored record can change when updating a record without
| any length change.
| The key and any data in front of the key will not be compressed. Data sets with
| large key lengths and RKP data lengths may not be good candidates for
| compression.
| Only the data component of the base cluster is eligible for compression. AIXs
| are not eligible for compression.
| The Global Resource Serialization (GRS) option is not allowed for compressed
| data sets.
In addition to these incompatibilities, the data set must meet certain requirements to
allow compression at the time it is allocated:
The data set must have a primary allocation of at least 5 megabytes, or 8
megabytes if no secondary allocation is specified.
The maximum record length specified must be at least key offset plus key
length plus forty bytes.
Compressed data sets must be SMS-managed. The mechanism for requesting
compression for VSAM data sets is via the SMS data class COMPACTION=Y
parameter.
Spanned record data sets require the key offset plus the key length to be less than
or equal to the control interval size minus fifteen. These specifications regarding the
key apply to alternate keys as well as primary keys.
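Because compression is requested through the SMS data class, the DEFINE itself only names a class. In this sketch the class name is an assumption for a class your storage administrator has defined with COMPACTION=Y:

```
DEFINE CLUSTER -
      (NAME(EXAMPLE.KSDS3) -
      INDEXED -
      KEYS(8 0) -                    /* offset + length + 40 = 48, within max RECORDSIZE */
      RECORDSIZE(250 500) -
      DATACLASS(COMPRESS) -          /* assumed class with COMPACTION=Y */
      MEGABYTES(10 5))               /* primary allocation of at least 5MB */
```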
Compressed data sets cannot be accessed using control interval (CI) processing
except for VERIFY and VERIFY REFRESH processing and may not be opened for
improved control interval (ICI) processing. A compressed data set can be created
using the LIKE keyword and not just via a data class.
All types of VSAM data sets, including linear, can be accessed by control interval
access, but this is used only for very specific applications. CI mode processing is
not permitted when accessing a compressed data set. The data set can be opened
for CI mode processing to allow for VERIFY and VERIFY REFRESH processing
only. Control interval access is described in Chapter 11, “Processing Control
Intervals” on page 163.
To access a record directly from an entry-sequenced data set, you must supply the
RBA for the record as a search argument. For information on how to obtain the
RBA, refer to “Entry-Sequenced Data Sets” on page 70.
Keyed-Sequential Access
Sequential processing can be started anywhere within the data set. Positioning is
necessary if your starting point is not the beginning of the data set. Positioning can
be done in two ways:
Using the POINT macro
Issuing a direct request, then changing the request parameter list with the
MODCB macro from “direct” to “sequential.”
Sequential access allows you to avoid searching the index more than once.
Sequential access is faster than direct access for retrieving multiple data records in
ascending key order.
Keyed-Direct Access
Direct access is used to retrieve, update, delete and add records. When direct
processing is used, VSAM searches the index from the highest level index-set
record to the sequence set for each record to be accessed. Searches for single
records with random keys are usually done faster with direct processing. You need
to supply a key value for each record to be processed.
For retrieval processing, you can either supply the full key or a generic key. The
generic key is the high-order portion of the full key. For example, you might want to
retrieve all records whose keys begin with the generic key AB, regardless of the full
key value. Direct access allows you to avoid retrieving the entire data set
sequentially to process a small percentage of the total number of records.
Skip-Sequential Access
Skip-sequential access is used to retrieve, update, delete, and add records. When
skip-sequential is specified as the mode of access, VSAM retrieves selected
records, but in ascending sequence of key values. Skip-sequential processing
allows you to:
Avoid retrieving the entire data set sequentially to process a small percentage
of the total number of records
Avoid retrieving the desired records directly, which causes the prime index to be
searched from the top level to the bottom level for each record
Addressed Access
Keyed-Sequential Access
Skip-Sequential Access
Keyed-Direct Access
Unlike primary keys, which must be unique, the key of an alternate index can refer to
more than one record in the base cluster. An alternate key value that points to
more than one record is nonunique. If the alternate key points to only one record, it
is unique.
Note: The maximum number of nonunique pointers associated with an alternate
index data record cannot exceed 32,767.
Alternate indexes are not supported for linear data sets, RRDS, or reusable data
sets (data sets defined with the REUSE attribute). For information on defining and
building alternate indexes, see “Defining an Alternate Index” on page 105.
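The usual sequence is to define the alternate index, define a path over it, and build it with BLDINDEX. This is a sketch; the names, key position, and sizes are illustrative:

```
DEFINE ALTERNATEINDEX -
      (NAME(EXAMPLE.AIX1) -
      RELATE(EXAMPLE.KSDS1) -        /* the base cluster                     */
      KEYS(10 20) -                  /* alternate key: 10 bytes at offset 20 */
      NONUNIQUEKEY -                 /* key may point to several records     */
      UPGRADE -
      TRACKS(2 1) -
      VOLUMES(VSER01))

DEFINE PATH -
      (NAME(EXAMPLE.PATH1) -
      PATHENTRY(EXAMPLE.AIX1))

BLDINDEX INDATASET(EXAMPLE.KSDS1) OUTDATASET(EXAMPLE.AIX1)
```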
Each record in the data component of an alternate index is of variable length and
contains header information, the alternate key, and at least one pointer to a base
data record. Header information is fixed length and shows:
Whether the alternate index data record contains prime keys or RBA pointers
Whether the alternate index data record contains unique or nonunique keys
The length of each pointer
The length of the alternate key
The number of pointers.
Figure 22. Alternate Index Structure for a Key-Sequenced Data Set. (The diagram is
not reproducible here.) The alternate index data records pair each alternate key
with the prime keys of the base cluster records that contain it (for example,
BEN 21; BILL 36; FRED 15 23 39; MIKE 12; TOM 10 41 54). The base cluster's own
index and data control intervals are ordered by prime key.
If you ask to access records with the alternate key of BEN, VSAM does the
following:
1. VSAM scans the index component of the alternate index, looking for a value
greater than or equal to BEN.
2. The entry FRED points VSAM to a data control interval in the alternate index.
3. VSAM scans the alternate index data control interval looking for an entry that
matches the search argument, BEN.
4. When located, the entry BEN has an associated key, 21. The key, 21, points
VSAM to the index component of the base cluster.
5. VSAM scans the index component for an entry greater than or equal to the
search argument, 21.
6. The index entry, 38, points VSAM to a data control interval in the base cluster.
The record with a key of 21 is passed to the application program.
Figure 23. Alternate Index Structure for an Entry-Sequenced Data Set. (The diagram
is not reproducible here.) Here the alternate index data records pair each alternate
key with the RBAs of the base cluster records that contain it (for example,
BEN 400; BILL 000; FRED 140 540 940), and the base data control intervals hold
the records at those RBAs.
If you ask to access records with the alternate key of BEN, VSAM does the
following:
1. VSAM scans the index component of the alternate index, looking for a value
greater than or equal to BEN.
2. The entry FRED points VSAM to a data control interval in the alternate index.
3. VSAM scans the alternate index data control interval looking for an entry that
matches the search argument, BEN.
4. When located, the entry BEN has an associated pointer, 400, that points to an
RBA in the base cluster.
5. VSAM retrieves the record with an RBA of X'400' from the base cluster.
When building an alternate index, the alternate key can be any field in the base
data set's records having a fixed length and a fixed position in each record. The
alternate key field must be in the first segment of a spanned record. Keys in the
data component of an alternate index are not compressed; the entire key is
represented in the alternate index data record.
A search for a given alternate key reads all the base cluster records containing this
alternate key. For example, Figure 22 on page 85 and Figure 23 show that one
salesman has several customers. For the key-sequenced data set, several prime
key pointers (customer numbers) are in the alternate index data record. There is
one for each occurrence of the alternate key (salesman's name) in the base data
set. For the entry-sequenced data set, several RBA pointers are in the alternate
index data record. There is one for each occurrence of the alternate key
(salesman's name) in the base data set. The pointers are ordered by arrival time.
See “Defining an Alternate Index” on page 105 for how to define an alternate
index. See “Defining a Path” on page 109 for how to define a path.
Any program, other than DFSMSdss, REPRO, or another physical data
copy/move program, that does direct input/output to DASD against data sets that
hold data in compressed format can compromise data integrity. Such programs must
be modified to access the data using VSAM keyed access, which permits expansion
of the compressed data.
After any of these steps, you can use the access method services LISTCAT and
PRINT commands to verify what has been defined, loaded, and processed. The
LISTCAT and PRINT commands are useful for identifying and correcting problems.
Cluster Concept
For a key-sequenced data set, a cluster is the combination of the data component
and the index component. The cluster provides a way to treat the index and data
components as a single component with its own name. You can also give each
component a name. Fixed-length RRDSs, entry-sequenced data sets, and linear
data sets are considered to be clusters without index components. To be
consistent, they are given cluster names that are normally used when processing
the data set.
For a key-sequenced data set, an index entry describes the cluster's index
component.
All of the cluster's attributes are recorded in the catalog. The information that is
stored in the catalog provides the details needed to manage the data set and to
access the VSAM cluster or the individual components.
If you use DEFINE CLUSTER, attributes of the data and index components can be
specified separately from attributes of the cluster.
If attributes are specified for the cluster and not the data and index
components, the attributes of the cluster (except for password and USVR
security attributes) apply to the components.
If an attribute that applies to the data or index component is specified for both
the cluster and the component, the component specification overrides the
cluster's specification.
If you use ALLOCATE, attributes can be specified only at the cluster level.
Naming a Cluster
You specify a name for the cluster when defining it. Usually, the cluster name is
given as the dsname in JCL. A cluster name that contains more than 8 characters
must be segmented by periods; 1 to 8 characters can be specified between
periods. A name with a single segment is called an unqualified name. A name with
more than 1 segment is called a qualified name. Each segment of a qualified
name is called a qualifier.
You can, optionally, name the components of a cluster. Naming the data
component of an entry-sequenced cluster or a linear data set, or the data and index
components of a key-sequenced cluster, makes it easier to process the
components individually.
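Component names are given in the DATA and INDEX parameters of DEFINE CLUSTER. This sketch uses illustrative names and sizes:

```
DEFINE CLUSTER -
      (NAME(EXAMPLE.KSDS1) -
      INDEXED -
      KEYS(8 0) -
      RECORDSIZE(100 200) -
      CYLINDERS(1 1) -
      VOLUMES(VSER01)) -
   DATA (NAME(EXAMPLE.KSDS1.DATA)) -
   INDEX (NAME(EXAMPLE.KSDS1.INDEX))
```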
If you do not explicitly specify a data or index component name when defining a
VSAM data set or alternate index, VSAM will generate a name. Also, when you
define a user catalog, VSAM will generate only an index name for the user catalog
(the name of the user catalog is also the data component name). VSAM uses the
following format to generate names for both system-managed and
non-system-managed data sets:
1. If the last qualifier of the name is “CLUSTER,” replace the last qualifier with
“DATA” for the data component and “INDEX” for the index component.
EXAMPLE:
After a name is generated, VSAM searches the catalog to ensure that the name is
unique. If a duplicate name is found, VSAM continues generating new names using
the format outlined in 4 until a unique one is produced.
DFSMS/MVS Access Method Services for ICF describes the order in which one of
the catalogs available to the system is selected to contain the to-be-defined catalog
entry. When you define an object, ensure that the catalog the system
selects is the catalog in which you want the object entered.
Note also that data set name duplication is not prevented when a user catalog is
imported into a system. No check is made to determine whether the imported
catalog contains an entry name that already exists in another catalog in the system.
Temporary system-managed VSAM data sets do not require that you specify a data
set name. Should you choose to specify a data set name, however, it must begin
with & or &&:
DSNAME(&CLUSTER)
See “Define Temporary VSAM Data Sets” on page 115 for an example using the
ALLOCATE command to define a temporary system-managed VSAM data set. See
“Temporary VSAM Data Sets” on page 243 for restrictions on using temporary data
sets.
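As a sketch, a temporary system-managed VSAM data set might be allocated from TSO as follows; the record organization and sizes are illustrative assumptions:

```
ALLOCATE -
   DSNAME(&CLUSTER) -                /* temporary name begins with & */
   NEW -
   RECORG(ES) -                      /* entry-sequenced              */
   LRECL(80) -
   SPACE(10,2) TRACKS
```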
If the Storage Management Subsystem (SMS) is active, and you are defining a
system-managed cluster, you can explicitly specify the data class, management
class, and storage class parameters and take advantage of attributes defined by
your storage administrator. You can also implicitly specify the SMS classes by
taking the system determined defaults if such defaults have been established by
your storage administrator. The SMS classes are assigned only at the cluster level.
You cannot specify them at the data or index level.
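The SMS class names in this sketch are assumptions for classes your storage administrator would define; note that they appear at the cluster level only:

```
DEFINE CLUSTER -
      (NAME(EXAMPLE.KSDS4) -
      INDEXED -
      KEYS(8 0) -
      RECORDSIZE(100 200) -
      DATACLASS(DCLAS1) -            /* assumed installation-defined names */
      MANAGEMENTCLASS(MCLAS1) -
      STORAGECLASS(SCLAS1))
```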
If SMS is active and you are defining a non-system-managed cluster, you can also
explicitly specify the data class or take the data class default if one is available.
Management class and storage class are not supported for non-system-managed
data sets.
If you are defining a non-system-managed data set and you do not specify the data
class, you must explicitly specify all necessary descriptive, performance, security,
and integrity information through other access method services parameters. Most of
these parameters can be specified for the data component, the index component,
or both. Specify information for the entire cluster with the CLUSTER parameter.
Specify information for only the data component with the DATA parameter and for
only the index component with the INDEX parameter. For an explanation of the
types of descriptive, performance, security, and integrity information specified using
these parameters, see “Using Access Method Services Parameters.”
Both the data class and certain other access method services parameters can
supply a value for the same attribute, for example, the control interval
size. The system uses the following order of precedence, or filtering, to determine
which value to assign.
1. Explicitly specified DEFINE command parameters
2. Modeled attributes (assigned by specifying the MODEL parameter on the
DEFINE command)
3. Data class attributes
4. DEFINE command parameter defaults.
Notes:
1. A variable-length RRDS is defined using NUMBERED and RECORDSIZE,
where the average and maximum record length must be different.
2. If the actual length of an entry-sequenced, key-sequenced, or variable-length
RRDS record is less than the maximum record length, VSAM saves disk space
by storing the actual record length in the record definition field (RDF). The RDF
is not adjusted for fixed-length RRDSs.
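For example, a variable-length RRDS might be defined as in the following sketch (names are hypothetical); note that the average and maximum values in RECORDSIZE differ:

```
DEFINE CLUSTER -
  (NAME(HYPO.VRRDS) -      /* hypothetical data set name */
  NUMBERED -               /* relative record organization */
  RECORDSIZE(100 200) -    /* average and maximum must be different */
  TRACKS(1 1) -
  VOLUMES(VSER01))
```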
KEYS parameter: Specifies the length and position of the key field in the records
of a key-sequenced data set.
CATALOG parameter: Specifies the name and password of the catalog in which
the cluster is to be defined.
On DASD types that are no longer supported, non-VSAM data sets could occupy up
to 255 volumes. Whether a data set is system-managed is not relevant to these limits.
RECORDS|KILOBYTES|MEGABYTES|TRACKS|CYLINDERS parameter:
Specifies the amount of space to allocate for the cluster. The CYLINDERS,
TRACKS, MEGABYTES, KILOBYTES and RECORDS parameters are allowed for a
linear data set.
If RECORDS is specified for a linear data set, space is allocated with the number
of control intervals equal to the number of records. (Linear data sets do not have
records; they have objects which are contiguous strings of data.)
The size of the control interval must be large enough to hold a data record of the
maximum size specified in the RECORDSIZE parameter unless the data set was
defined with the SPANNED parameter.
Specify the CONTROLINTERVALSIZE parameter for data sets that use shared
resource buffering, so that you will know what control interval size to code on the
BLDVRP macro.
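For example, the following sketch (names are hypothetical) fixes the data control interval size explicitly, so the same value can later be coded on the BLDVRP macro:

```
DEFINE CLUSTER -
  (NAME(HYPO.SHARED.KSDS) -     /* hypothetical data set name */
  INDEXED -
  CYLINDERS(1 1) -
  VOLUMES(VSER01)) -
  DATA -
  (CONTROLINTERVALSIZE(8192))   /* known size to code on BLDVRP */
```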
SPANNED parameter: Specifies whether records can span control intervals. The
SPANNED parameter is not allowed for fixed-length and variable-length RRDSs,
and linear data sets.
VOLUMES parameter for the index component: Specifies whether to place the
cluster's index on a separate volume from data.
All these performance options are discussed in Chapter 10, “Optimizing VSAM
Performance” on page 141.
ERASE parameter: Specifies whether to erase the information a data set contains
when you delete the data set.
Note: To control the erasing of data in a VSAM component whose cluster is
RACF-protected and is cataloged in an integrated catalog facility catalog,
you can use an ERASE attribute that is kept in a generic or discrete profile.
For information about how to specify and use the ERASE option, refer to
OS/390 Security Server (RACF) Introduction and associated RACF
publications. See also “Erasing Residual Data” on page 48.
See Chapter 5, “Protecting Data Sets” on page 47 for more information on the
types of data protection available.
When the primary amount on the first volume is used up, a secondary amount is
allocated on that volume. Each time a new record does not fit in the allocated
space, the system allocates more space in the secondary space amount. This can
be repeated until the volume is out of space or the extent limit is reached. For
non-guaranteed space allocations, when you allocate space for your data set, you
can specify both a primary and a secondary allocation. A primary space allocation
is the initial amount of space allocated. The primary amount is also used when
extending to a new candidate volume. A secondary allocation is the amount of
space allocated each time space is used up. You can specify whether to use the
primary or secondary allocation amount when extending to a new volume. This is
done with an SMS data class attribute for VSAM data sets.
Space for a data set defined in an integrated catalog facility catalog, or defined with
the SUBALLOCATION parameter in a VSAM catalog, can be expanded to a
maximum of 119 to 123 extents. The last four extents are reserved for extending a
data set when the last extent cannot be allocated in one piece. For data sets
defined in VSAM catalogs defined with the UNIQUE parameter, the space for a
data set can be expanded to a maximum of 16 extents per volume.
You can specify space allocation at the cluster or alternate index level, at the data
level only, or at both the data and index levels. It is best to allocate space at the
cluster or data levels. VSAM allocates space as follows:
If allocation is specified at the cluster or alternate index level only, the amount
needed for the index is subtracted from the specified amount. The remainder of
the specified amount is assigned to data.
If allocation is specified at the data level only, the specified amount is assigned
to data. The amount needed for the index is in addition to the specified amount.
If allocation is specified at both the data and index levels, the specified data
amount is assigned to data and the specified index amount is assigned to the
index.
If secondary allocation is specified at the data level, secondary allocation must
be specified at the index level or the cluster level.
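For example, this sketch (names and amounts are illustrative) allocates space at both the data and index levels, so the specified amounts are assigned directly to each component:

```
DEFINE CLUSTER -
  (NAME(HYPO.ALLOC.KSDS) -   /* hypothetical data set name */
  INDEXED -
  VOLUMES(VSER01)) -
  DATA -
  (CYLINDERS(10 2)) -        /* primary 10, secondary 2, for the data */
  INDEX -
  (TRACKS(1 1))              /* separate allocation for the index */
```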
VSAM acquires space in increments of control areas. The control area size
generally is based on primary and secondary space allocations. See “Optimizing
Control Area Size” on page 145 for information on how to optimize control area
size.
Partial Release
Partial release is used to release unused space from the end of an extended format
data set and is specified through SMS management class or by the JCL RLSE
subparameter. All space after the high used RBA is released on a CA boundary up
to the high allocated RBA. If the high used RBA is not on a CA boundary, the high
used amount is rounded to the next CA boundary. Partial release restrictions are:
Partial release processing is supported only for extended format data sets.
Only the data component of the VSAM cluster is eligible for partial release.
Alternate indexes (AIXs) opened for path or upgrade processing are not eligible
for partial release. The data component of an AIX when opened as cluster
could be eligible for partial release.
Partial release processing is not supported for temporary close.
Partial release processing is not supported for data sets defined with
guaranteed space.
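As one illustrative way to request partial release through JCL (the data set name is hypothetical), the RLSE subparameter can be coded on the SPACE parameter of the DD statement:

```
//DD1     DD  DSNAME=HYPO.EXTFMT.KSDS,DISP=OLD,
//            SPACE=(CYL,(0,0),RLSE)
```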
VSAM checks the smaller of primary and secondary space values against the
specified device's cylinder size. If the smaller quantity is greater than or equal to
the device's cylinder size, the control area is set equal to the cylinder size. If the
smaller quantity is less than the device's cylinder size, the size of the control area
is set equal to the smaller space quantity. The minimum control area size is one
track. See “Optimizing Control Area Size” on page 145 for information on how to
create small control areas.
When you allocate space for your cluster, you must consider the index options you
have chosen:
If IMBED is specified (to place the sequence set with the data), the data
allocation includes the sequence set. When IMBED is specified, more space
must be given for data allocation.
If REPLICATE is specified and IMBED is not specified, an allocation of 3
primary and 1 secondary track for the index set is a valid choice.
If the REPLICATE option is not used, specify 1 primary and 1 secondary track
for the index set.
For information about index options, refer to “Index Options” on page 158.
When a linear data set is defined, the catalog forces the block size to 4096 bytes.
You can specify a control interval size of 4096 to 32,768 bytes in increments of
4096 bytes. If not an integer multiple of 4096, the control interval size is rounded up
to the next 4096 increment. If the specified BUFFERSPACE is greater than 8,192
bytes, it is decremented to a multiple of 4096. If BUFFERSPACE is less than
8,192, access method services issues a message and fails the command.
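For example, a linear data set might be defined as in the following sketch (names are hypothetical); a CONTROLINTERVALSIZE that is not a multiple of 4096 would be rounded up to the next 4096 increment:

```
DEFINE CLUSTER -
  (NAME(HYPO.LINEAR.DS) -       /* hypothetical data set name */
  LINEAR -
  CONTROLINTERVALSIZE(8192) -   /* 4096 to 32768, multiple of 4096 */
  CYLINDERS(1 1) -
  VOLUMES(VSER01))
```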
3 The value (1024 – 10) is the control interval length minus 10 bytes for 2 RDFs and 1 CIDF. The record size is 200 bytes.
4 On an IBM 3380, 31 physical blocks with 1024 bytes can be stored on one track.
5 The value (31 × 14) is the number of physical blocks per track multiplied by the number of data tracks per cylinder (15 minus 1 for
the imbedded sequence set control interval).
For information about the JCL keywords used to define a VSAM data set, see
Chapter 18, “Using Job Control Language for VSAM” on page 241. For more
information about JCL keywords and the use of JCL, see OS/390 MVS JCL
Reference and OS/390 MVS JCL User's Guide.
| Allocate all of the partitions in a single IEFBR14 job step using JCL. As long as
| there are an adequate number of volumes in the storage groups, and the volumes
| are not above the allocation threshold, the SMS allocation algorithms with SRM will
| ensure each partition is allocated on a separate volume.
With entry-sequenced or key-sequenced data sets, or RRDSs, you can load all the
records either in one job or in several jobs. If you use multiple jobs to load records
into a data set, VSAM stores the records from subsequent jobs in the same manner
that it stored records from preceding jobs, extending the data set as required.
When records are to be stored in key sequence, index entries are created and
loaded into an index component as data control intervals and control areas are
filled. Free space is left as indicated in the cluster definition in the catalog, and, if
indicated in the definition, records are stored on particular volumes according to key
ranges.
VSAM data sets must be cataloged. Sequential and indexed sequential data sets
need not be cataloged. Sequential data sets that are system-managed must be
cataloged.
The DCB parameter DSORG must be supplied via the DD statement. The DCB
parameters RECFM, BLKSIZE, and LRECL can be supplied via the DSCB or
header label of a standard labeled tape, or by the DD statement. The system can
determine the optimum block size.
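For example, a DD statement for a sequential output data set might supply the DCB parameters as in this sketch (the data set name is hypothetical); BLKSIZE is omitted so that the system can determine the optimum block size:

```
//OUTSEQ  DD  DSNAME=HYPO.SEQ.DATA,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(5,1)),
//            DCB=(DSORG=PS,RECFM=FB,LRECL=80)
```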
If you are loading a VSAM data set into a sequential data set, you must remember
that the 3-byte VSAM record definition field (RDF) is not included in the VSAM
record length. When REPRO attempts to copy a VSAM record whose length is
within 4 bytes of LRECL, a recoverable error occurs and the record is not copied.
Access method services does not support records greater than 32760 bytes for
non-VSAM data sets (LRECL=X is not supported). If the logical record length of a
non-VSAM input data set is greater than 32760 bytes, or if a VSAM data set
defined with a record length greater than 32760 is to be copied to a sequential data
set, the REPRO command terminates with an error message.
VSAM uses the high-used RBA field to determine if a data set is empty or not. An
implicit verify can update the high-used RBA. Immediately after a data set is
defined, the high-used RBA value is zero. A newly defined key range data set has
a high-used RBA for each key range. The high-used RBA is the first byte of the first
unused control area for the key range. In any case, an empty data set cannot be
verified.
The terms “create mode,” “load mode,” and “initial data set load” are synonyms for
the process of inserting records into an empty VSAM data set. To start loading the
empty VSAM data set, call the VSAM OPEN macro. The load continues while
records are added following the successful open and concludes when the data set
is closed.
Note: If the loading of an entry-sequenced data set fails, you cannot open it.
VERIFY will issue a message with an OPEN error. The data set must be
defined with the RECOVERY option before it can be opened or verified.
You must delete and redefine the data set to make it empty.
If the design of your application calls for direct processing during load mode, you
can avoid this restriction by following these steps:
1. Open the empty data set for load mode processing.
2. Sequentially write one or more records, which could be “dummy” records.
3. Close the data set to terminate load mode processing.
4. Reopen the data set for normal processing. You can now resume loading or do
direct processing. When using this method to load a VSAM data set, be
cautious about specifying partial release. Once the data set is closed, partial
release will attempt to release all space not used.
For information on how to use user-written exit routines when loading records into a
data set, see Chapter 16, “VSAM User-Written Exit Routines” on page 217.
During load mode, each control area can be preformatted as records are loaded
into it. Preformatting is useful for recovery if an error occurs during loading.
However, performance is better during initial data set load without preformatting.
The RECOVERY parameter of the access method services DEFINE command is
used to indicate that VSAM is to preformat control areas during load mode. In the
case of a fixed-length RRDS and SPEED, a control area in which a record is
inserted during load mode will always be preformatted. With RECOVERY, all
control areas will be preformatted.
Preformatting clears all previous information from the direct access storage area
and writes end-of-file indicators. For VSAM, an end-of-file indicator consists of a
control interval with a CIDF equal to zeros.
For an entry-sequenced data set, VSAM writes an end-of-file indicator in every
control interval in the control area.
For a key-sequenced data set, VSAM writes an end-of-file indicator in the first
control interval in the control area following the preformatted control area. (The
preformatted control area contains free control intervals.)
For a fixed-length RRDS, VSAM writes an end-of-file indicator in the first control
interval in the control area following the preformatted control area. All RDFs in
an empty preformatted control interval are marked “slot empty.”
The SPEED parameter does not preformat the data control areas. It writes an
end-of-file indicator only after the last record is loaded. Performance is better if you
use the SPEED parameter, particularly with extended format data sets. Extended
format data sets can use system-managed buffering, which allows the number of
data buffers to be optimized for load mode processing. This can be used with the
REPRO command for a new data set for reorganization or recovery. If an error
occurs that prevents loading from continuing, you cannot identify the last
successfully loaded record and you might have to reload the records from the
beginning. For a key-sequenced data set, the SPEED parameter affects only the
data component.
Note: Remember that, if you specify SPEED, it will be in effect for load mode
processing. After load mode processing, RECOVERY will be in effect,
regardless of the DEFINE specification.
A data set that is not reusable can be loaded only once. After the data set is
loaded, it can be read, written into, and the data in it can be modified. However,
the only way to remove the set of data is to use the access method services
command DELETE, which deletes the entire data set. If the data set is to be used
again, it must be redefined with the access method services command DEFINE or
by JCL or dynamic allocation.
Instead of using the DELETE — DEFINE sequence, you can specify the REUSE
parameter in the DEFINE CLUSTER|ALTERNATEINDEX command. The REUSE
parameter allows you to treat a filled data set as if it were empty and load it again
and again regardless of its previous contents.
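For example, a reusable cluster might be defined as in the following sketch (names are hypothetical); a later REPRO with the REUSE parameter can then reload it as if it were empty:

```
DEFINE CLUSTER -
  (NAME(HYPO.WORK.KSDS) -     /* hypothetical data set name */
  INDEXED -
  REUSE -                     /* data set can be reset and reloaded */
  CYLINDERS(1 1) -
  VOLUMES(VSER01))

REPRO INFILE(INDD) -          /* INDD is a hypothetical ddname */
  OUTDATASET(HYPO.WORK.KSDS) -
  REUSE                       /* treat the filled data set as empty */
```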
VSAM uses a high-used RBA (relative byte address) field to determine if a data set
is empty or not. Immediately after a data set is defined, the high-used RBA value
is zero. After loading and closing the data set, the high-used RBA is equal to the
offset of the last byte in the data set. In a reusable data set, this high-used RBA
field can be reset to zero at OPEN by specifying MACRF=RST in the ACB at
OPEN. VSAM can use this reusable data set like a newly-defined data set.
For compressed format data sets, in addition to the high-used RBA field being reset
to zero for MACRF=RST, OPEN resets the compressed and uncompressed data
set sizes to zero. The compression dictionary token will not be reset and will be
reused to compress the new data. Because the dictionary token was derived from
previous data, this could affect the compression ratio depending on the nature of
the new data.
Because data is copied as single logical records in either key order or physical
order, automatic reorganization takes place. The reorganization can include:
Physical relocation of logical records
Alteration of a record's physical position within the data set
Redistribution of free space throughout the data set
Reconstruction of the VSAM indexes.
If you are copying to or from a sequential data set that is not cataloged, you must
include the appropriate volume and unit parameters on your DD statements. For
more information on these parameters, see “Using REPRO to Load a Data Set” on
page 100.
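For example, copying from an uncataloged sequential data set might look like this sketch (the input data set name and volume serial are hypothetical):

```
//INSEQ   DD  DSNAME=HYPO.SEQ.INPUT,DISP=OLD,
//            VOL=SER=VSER02,UNIT=3380    /* required: not cataloged */
//SYSIN   DD  *
  REPRO INFILE(INSEQ) -
        OUTDATASET(EXAMPL1.KSDS)
/*
```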
Figure 24 describes how the data from the input data set is added to the output
data set when the output data set is an empty or nonempty entry-sequenced,
sequential, key-sequenced, or linear data set, or fixed-length or variable-length
RRDS.
Figure 24. Adding Data to Various Types of Output Data Sets

Entry-Sequenced/Sequential
  Empty: Loads a new data set in sequential order.
  Nonempty: Adds records in sequential order to the end of the data set.

Key-Sequenced
  Empty: Loads a new data set in key sequence and builds an index.
  Nonempty: Merges records by key and updates the index. Unless the REPLACE
  option is specified, records whose key duplicates a key in the output data set
  are lost.

Linear
  Empty: Loads a new linear data set in relative byte order.
  Nonempty: Adds data to control intervals in sequential order to the end of the
  data set.

Fixed-length RRDS/Variable-length RRDS
  Empty: Loads a new data set in relative record sequence, beginning with
  relative record number 1.
  Nonempty: Records from another fixed-length or variable-length RRDS are
  merged, keeping their old record numbers. Unless the REPLACE option is
  specified, a new record whose number duplicates an existing record number is
  lost. Records from any other type of organization cannot be copied into a
  nonempty fixed-length RRDS.
Except for data class, attributes of the alternate index's data and index components
can be specified separately from the attributes of the whole alternate index. If
attributes are specified for the whole alternate index and not for the data and index
components, these attributes (except for password and USVR security attributes)
apply to the components as well. If the attributes are specified for the components,
they override any attributes specified for the entire alternate index.
Data class, for alternate indexes, to take advantage of the attributes assigned
by the storage administrator.
Length and position of the alternate key field in data records of the base
cluster, as specified in the KEYS parameter.
Average and maximum lengths of alternate index records, as specified in the
RECORDSIZE parameter.
Whether the alternate index is reusable, as specified in the REUSE parameter.
Whether the data set is extended format and whether it has extended
addressability. These characteristics for the alternate index are the same as
those for the cluster.
The performance options and the security and integrity information for the alternate
index are the same as that for the cluster. See “Using Access Method Services
Parameters” on page 92.
Access method services opens the base cluster to read the data records
sequentially, sorts the information obtained from the data records, and builds the
alternate index data records.
The base cluster's data records are read and information is extracted to form the
key-pointer pair:
When the base cluster is entry-sequenced, the alternate key value and the data
record's RBA form the key-pointer pair.
When the base cluster is key-sequenced, the alternate key value and the data
record's prime key value form the key-pointer pair.
For information on calculating the amount of virtual storage required to sort records,
and on using the BLDINDEX internal sort, see the BLDINDEX command in
DFSMS/MVS Access Method Services for ICF.
| After the key-pointer pairs are sorted into ascending alternate key order, access
| method services builds alternate index records for key-pointer pairs. When all
| alternate index records are built and loaded into the alternate index, the alternate
| index and its base cluster are closed. You cannot build an alternate index over an
| extended addressable entry-sequenced data set.
You can maintain your own alternate indexes or you can have VSAM maintain
them. When the alternate index is defined with the UPGRADE attribute of the
DEFINE command, VSAM updates the alternate index whenever there is a change
to the associated base cluster. VSAM opens all upgrade alternate indexes for a
base cluster whenever the base cluster is opened for output. If you are using
control interval processing, you cannot use UPGRADE. (See Chapter 11,
“Processing Control Intervals” on page 163.)
You can define a maximum of 125 alternate indexes in a base cluster with the
UPGRADE attribute.
All the alternate indexes of a given base cluster that have the UPGRADE attribute
belong to the upgrade set. The upgrade set is updated whenever a base data
record is inserted, erased, or updated. The upgrading is part of a request and
VSAM completes it before returning control to your program. If upgrade processing
is interrupted because of a machine or program error so that a record is missing
from the base cluster but its pointer still exists in the alternate index, record
management will synchronize the alternate index with the base cluster by allowing
you to reinsert the missing base record. However, if the pointer is missing from the
alternate index, that is, the alternate index does not reflect all the base cluster data
records, you must rebuild your alternate index to resolve this discrepancy.
Note that when you use SHAREOPTIONS 2, 3, or 4, you must continue to ensure
read/write integrity when issuing concurrent requests (such as GETs and PUTs) on
the base cluster and its associated alternate indexes. Failure to ensure read/write
integrity might temporarily cause “No Record Found” or “No Associated Base
Record” errors for a GET request. You can bypass such errors by reissuing the
GET request, but it is best to prevent the errors by ensuring read/write integrity.
If you specify NOUPGRADE in the DEFINE command when the alternate index is
defined, insertions, deletions, and changes made to the base cluster will not be
reflected in the associated alternate index.
When a path is opened for update, the base cluster and all the alternate indexes in
the upgrade set are allocated. If updating the alternate indexes is unnecessary, you
can specify NOUPDATE in the DEFINE PATH command and only the base cluster
is allocated. In that case, VSAM does not automatically upgrade the alternate
index. If two paths are opened with MACRF=DSN specified in the ACB macro, the
NOUPDATE specification of one can be nullified if the other path is opened with
UPDATE specified.
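For example, a read-only path might be defined with NOUPDATE as in the following sketch (names are hypothetical), so that opening the path allocates only the base cluster:

```
DEFINE PATH -
  (NAME(HYPO.READ.PATH) -   /* hypothetical path name */
  PATHENTRY(HYPO.AIX) -     /* hypothetical alternate index */
  NOUPDATE)                 /* upgrade set is not allocated */
```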
Defining a Path
After an alternate index is defined, you need to establish the relationship between
an alternate index and its base cluster, using the access method services
command, DEFINE PATH. You must name the path and can also give it a
password. The path name refers to the base cluster/alternate index pair. When you
access the data set through the path, you must specify the path name and the
DSNAME parameter in the JCL.
When your program opens a path for processing, both the alternate index and its
base cluster are opened. When data in a key-sequenced base cluster is read or
written using the path's alternate index, keyed processing is used. RBA processing
is allowed only for reading or writing an entry-sequenced data set's base cluster.
See DFSMS/MVS Access Method Services for ICF for how to use the DEFINE
PATH command.
| The considerations for defining a page space are much like those for defining a
| cluster. The DEFINE PAGESPACE command has many of the same parameters as
| the DEFINE CLUSTER command has, and, therefore, the information you must
| supply for a page space is similar to what you would specify for a cluster. A page
| space data set cannot be extended format. For details on specifying information for
| a data set, and especially for system-managed data sets, see “Specifying Cluster
| Information” on page 92 through Using Access Method Services Parameters.
A page space has a maximum usable size equal to 16 MB paging slots (records).
For further details, see the description for space size declarations in DFSMS/MVS
Access Method Services for ICF.
You can define a page space in a user catalog, then move the catalog to a new
system and establish it as the system's master catalog. To be system-managed,
page spaces must be cataloged in integrated catalog facility catalogs, and you must
let the system determine in which catalog to catalog the page space. Page spaces
also cannot be duplicate data sets. See “Duplicate Data Set Names” on page 91
for information on how VSAM handles duplicate data sets. A page space cannot be
used if its entry is in a user catalog.
When you issue a DEFINE PAGESPACE command, the system creates an entry in
the catalog for the page space, then preformats the page space. If an error occurs
during the preformatting process (for example, an I/O error or an allocation error),
the page space's entry remains in the catalog even though no space for it exists.
Issue a DELETE command to remove the page space's catalog entry before you
redefine the page space.
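A minimal DEFINE PAGESPACE might look like this sketch (the page space name and volume are hypothetical):

```
DEFINE PAGESPACE -
  (NAME(SYS1.HYPO.PAGE) -   /* hypothetical page space name */
  CYLINDERS(100) -
  VOLUMES(VSER01))
```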
Each page space is represented by two entries in the catalog: a cluster entry and a
data entry. (A page space is conceptually an entry-sequenced cluster.) Both of
these entries should be password-protected if the page space is
password-protected.
The system recognizes a page space if it is defined as a system data set at system
initialization time or if it is named in SYS1.PARMLIB. To be used as a page space,
it must be defined in a master catalog.
Notes:
1. The passwords you specify with the DEFINE PAGESPACE command are put in
both the page space's cluster entry and its data entry. When you define page
spaces during system initialization, use the ALTER command to add passwords
to each entry, because passwords cannot be specified during system
initialization. Unless you ensure that the catalog containing the page space
entry is either password- or RACF-protected, a user can list the catalog's
contents and find out each entry's passwords.
2. Passwords are ignored for system-managed data sets. You must have RACF
alter authority.
You can also use the access method services REPRO command to copy a data
set to an output device. For more information on REPRO, see “Copying a Data Set”
on page 103.
The access method services VERIFY command provides a means of checking and
restoring end-of-data-set values after system failure. For more information on
VERIFY, see “Using VERIFY to Process Improperly Closed Data Sets” on page 45.
For more information on using the DIAGNOSE command to indicate the presence
of invalid data or relationships in the BCS and VVDS, see DFSMS/MVS Managing
Catalogs.
The access method services EXAMINE command allows the user to analyze and
report on the structural inconsistencies of key-sequenced data set clusters. The
EXAMINE command is described in Chapter 15, “Checking a VSAM
Key-Sequenced Data Set Cluster for Structural Errors” on page 211.
The listing can be customized by limiting the number of entries, and the information
about each entry, that is printed.
You can obtain the same list while using the Interactive Storage Management
Facility (ISMF) by issuing the CATLIST line operator on the Data Set List panel.
The list is placed into a data set, which you can view immediately after issuing the
request. For more information about the CATLIST line operator, see DFSMS/MVS
Using ISMF.
Entry-sequenced and linear data sets are printed in physical sequential order.
Key-sequenced data sets can be printed in key order or in physical sequential
order. Fixed-length or variable-length RRDSs are printed in relative record number
sequence. A base cluster can be printed in alternate key sequence by specifying a
path name as the data set name for the cluster.
Only the data content of logical records is printed. System-defined control fields are
not printed. Each record printed is identified by one of the following:
The relative byte address (RBA) for entry-sequenced data sets
The key for indexed-sequential and key-sequenced data sets, and for alternate
indexes
To use the PRINT command to print a catalog, access method services must be
authorized. See “Authorized Program Facility (APF) and Access Method Services”
on page 56. For information about program authorization, see “Using the
Authorized Program Facility (APF)” in OS/390 MVS Authorized Assembler Services
Guide.
Note: If four logical and/or physical errors are found while trying to read the input,
printing is terminated.
Use the ERASE parameter to erase the components of a cluster or alternate index
when deleting it. ERASE overwrites the data set. Use the NOSCRATCH parameter
if you do not want the data set removed from the VTOC.
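For example (the cluster name is hypothetical):

```
DELETE HYPO.OLD.KSDS -
  CLUSTER -
  ERASE          /* overwrite the data before deleting */
```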
IF LASTCC = 0 THEN -
DEFINE CLUSTER(NAME (EXAMPL1.KSDS) VOLUMES(VSER05)) -
DATA (KILOBYTES (50 5))
/*
//STEP2 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//AMSDUMP DD SYSOUT=*
//INDSET4 DD DSNAME=SOURCE.DATA,DISP=OLD,
// VOL=SER=VSER02,UNIT=3380
//SYSIN DD *
IF LASTCC = 0 THEN -
LISTCAT ENTRIES(EXAMPL1.KSDS)
IF LASTCC = 0 THEN -
PRINT INDATASET(EXAMPL1.KSDS)
/*
The following access method services commands are used in this example:
DEFINE USERCATALOG—to create the catalog
DEFINE CLUSTER—to define the data set
REPRO—to load the data set
LISTCAT—to list the catalog entry
PRINT—to print the data set.
See Chapter 18, “Using Job Control Language for VSAM” on page 241 for
examples of creating VSAM data sets through JCL. For more details and examples
of these or other access method services commands, see DFSMS/MVS Access
Method Services for ICF.
Explanation of Commands
The first DEFINE command defines a user catalog called USERCATX. The
USERCATALOG keyword specifies that a user catalog is to be defined. The
command's parameters are:
NAME: is required and names the catalog being defined.
ICFCATALOG: specifies that the catalog is to be in the integrated catalog
facility catalog format.
CYLINDERS: specifies the amount of space to be allocated from the volume's
available space. If it is specified for a system-managed catalog, it will override
the DATACLASS space specification. A space parameter is required.
VOLUMES: specifies the volume to contain the catalog. You should not specify
the VOLUMES parameter if the catalog is system-managed.
DATA: is required when attributes are to be explicitly specified for the data
component of the cluster.
CYLINDERS: specifies the amount of space allocated for the data component.
A space parameter is required.
The REPRO command here loads the VSAM key-sequenced data set named
EXAMPL1.KSDS from an existing data set called SOURCE.DATA (that is described
by the INDSET4 DD statement). The command's parameters are:
INFILE: identifies the data set containing the source data. The ddname of the
DD statement for this data set must match the name specified on this
parameter.
OUTDATASET: identifies the name of the data set to be loaded. Access
method services dynamically allocates the data set. The data set is cataloged
in the master catalog. Note that, because the cluster component is not
password-protected, a password is not required.
If the REPRO operation is successful, the data set's catalog entry is listed, and the
contents of the data set just loaded are printed.
LISTCAT: lists catalog entries. The ENTRIES parameter identifies the names of
the entries to be listed.
PRINT: prints the contents of a data set. The INDATASET parameter is
required and identifies the name of the data set to be printed. Access method
services dynamically allocates the data set. The data set is cataloged in the
master catalog. No password is required because the cluster component is not
password-protected.
ALLOC -
DSNAME(&CLUSTER) -
NEW PASS -
RECORG(ES) -
SPACE(1,10) -
AVGREC(M) -
LRECL(256) -
STORCLAS(TEMP)
/*
AVGREC specifies that the primary quantity specified on the SPACE keyword
represents a number of records with a multiplier of M (1,048,576).
LRECL specifies a record length of 256 bytes.
STORCLAS specifies a storage class for the temporary data set. The
STORCLAS keyword is optional. If you do not specify STORCLAS for the new
data set and your storage administrator has provided an ACS routine, the ACS
routine can select a storage class.
Define Alternate Index Over a Cluster, a Path Over the Alternate Index,
and Build the Alternate Index
The following example defines an alternate index over a previously loaded VSAM
key-sequenced base cluster, defines a path over the alternate index to provide a
means for processing the base cluster through the alternate index, and builds the
alternate index. The alternate index, path, and base cluster must all be defined in
the same catalog, in this case, the master catalog. Because the master catalog is
recoverable, access method services dynamically allocates the volume containing
the base cluster's index component to access its recovery area.
DEFINE ALTERNATEINDEX -
(NAME(EXAMPL1.AIX) -
RELATE(EXAMPL1.KSDS) -
MASTERPW(AIXMRPW) -
UPDATEPW(AIXUPPW) -
KEYS(3 0) -
RECORDSIZE(40 50) -
VOLUMES(VSER01) -
CYLINDERS(2 1) -
NONUNIQUEKEY -
UPGRADE ) -
CATALOG(ICFMAST1/MASTMPW1)
DEFINE PATH -
(NAME(EXAMPL1.PATH) -
PATHENTRY(EXAMPL1.AIX/AIXMRPW) -
READPW(PATHRDPW)) -
CATALOG(ICFMAST1/MASTMPW1)
BLDINDEX INDATASET(EXAMPL1.KSDS) -
OUTDATASET(EXAMPL1.AIX/AIXUPPW) -
CATALOG(ICFMAST1/MASTMPW1)
PRINT INDATASET(EXAMPL1.PATH/PATHRDPW)
/*
Explanation of Commands
The first DEFINE command defines a VSAM alternate index over the base cluster
EXAMPL1.KSDS.
NAME: is required and names the object being defined.
RELATE: is required and specifies the name of the base cluster over which the
alternate index is defined.
MASTERPW and UPDATEPW: specify the master and update passwords,
respectively, for the alternate index.
KEYS: specifies the length of the alternate key and its offset in the base cluster
record.
RECORDSIZE: specifies the length of the alternate index record. The average
and maximum record lengths are 40 and 50 bytes, respectively. Because the
alternate index is being defined with the NONUNIQUEKEY attribute, the record
size must be large enough to contain the prime keys for all occurrences of any
one alternate key.
VOLUMES: is required only for non-system-managed data sets, and specifies
the volume containing the alternate index EXAMPL1.AIX.
CYLINDERS: specifies the amount of space allocated to the alternate index. A
space parameter is required.
The second DEFINE command defines a path over the alternate index. After the
alternate index is built, opening with the path name causes processing of the base
cluster via the alternate index.
The NAME parameter is required and names the object being defined.
The PATHENTRY parameter is required and specifies the name of the
alternate index over which the path is defined and its master password.
The READPW parameter specifies a read password for the path; it will be
propagated to master-password level.
The CATALOG parameter is required because the master catalog is
password-protected. It specifies the name of the master catalog and its update
or master password, which is required to define objects in a protected catalog.
The BLDINDEX command builds an alternate index. Assume that enough virtual
storage is available to perform an internal sort. However, DD statements with the
default ddnames of IDCUT1 and IDCUT2 are provided for two external sort work
data sets if the assumption is incorrect and an external sort must be performed.
The INDATASET parameter identifies the base cluster. Access method services
dynamically allocates the base cluster. The base cluster's cluster entry is not
password-protected even though its data and index components are.
The OUTDATASET parameter identifies the alternate index. Access method
services dynamically allocates the alternate index. The update- or higher-level
password of the alternate index is required.
The CATALOG parameter specifies the name of the master catalog. If it is
necessary for BLDINDEX to use external sort work data sets, they will be
defined in and deleted from the master catalog. The master password permits
these actions.
The PRINT command causes the base cluster to be printed using the alternate key,
using the path defined to create this relationship. The INDATASET parameter
identifies the path object. Access method services dynamically allocates the path.
The read password of the path is required.
The virtual resource pool for all components of the clusters or alternate indexes
must be successfully built before any open is issued to use the resource pool;
otherwise, the results might be unpredictable or performance problems might occur.
For information on the syntax of each macro, and for coded examples of the
macros, see DFSMS/MVS Macro Instructions for Data Sets.
The ACB, RPL, and EXLST are created by the caller of VSAM. When storage is
obtained for these blocks, virtual storage management assigns the PSW key of the
requestor to the subpool storage. An authorized task can change its PSW key.
Since VSAM record management runs in the protect key of its caller, such a
change might make previously acquired control blocks unusable because the
storage key of the subpool containing these control blocks no longer matches the
VSAM caller's key.
Include the following information in your ACB for OPEN to prepare the kind of
processing your program requires:
The address of an exit list for your exit routines. You use the EXLST macro to
construct the list.
If you are processing concurrent requests, the number of requests (STRNO)
defined for processing the data set. For more information on concurrent
requests, refer to “Making Concurrent Requests” on page 134.
The size of the I/O buffer virtual storage space and/or the number of I/O buffers
that you are supplying for VSAM to process data and index records.
The password required for the type of processing desired. Passwords are not
supported for system-managed data sets. You must have RACF authorization
for the type of operation to be performed.
The processing options which you plan to use:
– Keyed, addressed, or control interval access, or a combination
– Sequential, direct, or skip sequential access, or a combination
– Retrieval, storage, or update (including deletion), or a combination
– Shared or nonshared resources.
The address and length of an area for error messages from VSAM.
If using RLS, refer to Chapter 14, “Using VSAM Record-Level Sharing” on
page 201.
You can use the ACB macro to build an access method control block when the
program is assembled, or the GENCB macro to build a control block when the
program is executed. For information on the advantages and disadvantages of
using GENCB, see “Manipulating Control Block Contents” on page 124.
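As a sketch only (the label, ddname, and operand values here are illustrative,
not taken from the text, and column-72 continuation conventions are abbreviated
for readability), an ACB built at assembly time that specifies several of the
items listed above might look like this:

```
*        Exit list address, two concurrent strings, four data and
*        two index I/O buffers, keyed sequential and direct access
*        for update
ACCESS   ACB   DDNAME=INDSET,EXLST=EXITS,STRNO=2,BUFND=4,BUFNI=2,      X
               MACRF=(KEY,SEQ,DIR,OUT)
```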
Analyzing physical errors: When VSAM finds an error in an I/O operation that the
operating system's error routine cannot correct, the error routine formats a
message for your physical error analysis routine (the SYNAD user exit) to act on.
Analyzing logical errors: Errors not directly associated with an I/O operation, such
as an invalid request, cause VSAM to exit to your logical error analysis routine (the
LERAD user exit).
The EXLST macro is coordinated with the EXLST parameter of an ACB or GENCB
macro used to generate an ACB. To use the exit list, you must code the EXLST
parameter in the ACB.
You can use the EXLST macro to build an exit list when the program is assembled,
or the GENCB macro to build an exit list when the program is executed. For
information on the advantages and disadvantages of using GENCB, see
“Manipulating Control Block Contents” on page 124.
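A minimal sketch of that coordination (the routine and list labels are
illustrative): the EXLST macro names the exit routines, and the ACB points to
the list through its EXLST parameter:

```
*        Exit list naming the logical and physical error routines
EXITS    EXLST LERAD=LOGICAL,SYNAD=PHYSICAL
*        The ACB must reference the list for the exits to be used
ACCESS   ACB   DDNAME=INDSET,EXLST=EXITS
```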
amount specified when the data set was defined, or when buffer space is not
specified in either the DD statement or the macro.
Check for consistency of updates to the prime index and data components if
you are opening a key-sequenced data set, an alternate index or a path. If a
data set and its index have been updated separately, VSAM issues a warning
message to indicate a time stamp discrepancy.
Check the password your program specified in the ACB PASSWD parameter
against the appropriate password (if any) in the catalog definition of the data.
Passwords are not supported for system-managed data sets. You must have
RACF authorization for the type of operation being performed. The password
required depends on the kind of access specified in the access method control
block (for example, access for retrieval or for update), as follows:
– Full access allows you to perform all operations (retrieving, updating,
inserting, and deleting) on a data set and any index or catalog record
associated with it. The master password allows you to delete or alter the
catalog entry for the data set or catalog it protects.
– Control interval update access requires the control password or RACF
control authority. The control password allows you to use control interval
access and to retrieve, update, insert, or delete records in the data set it
protects. For information on the use of control interval access, see
Chapter 11, “Processing Control Intervals” on page 163.
Control interval read access requires only the read password or RACF read
authority, which allows you to examine control intervals in the data set it
protects. The read password or RACF read authority does not allow you to
add, change, or delete records.
– Update access requires the update password, which allows you to retrieve,
update, insert, or delete records in the data set it protects.
– Read access requires the read password, which allows you to examine
records in the data set it protects. The read password does not allow you to
add, change, or delete records.
A password of one level authorizes you to do everything authorized by a
password of a lower level.
Besides passwords, you can have protection provided by RACF. When RACF
protection and password protection are both applied to a data set, password
protection is bypassed, and use is authorized solely through the RACF
checking system. RACF checking is bypassed for a caller that is in supervisor
state or Key 0. Password and RACF protection are further described in
Chapter 5, “Protecting Data Sets” on page 47.
If an error occurs during open, a component opened for update processing can
improperly close (leaving the open-for-output indicator on). At OPEN, VSAM
issues an implicit VERIFY command when it detects an open-for-output
indicator on and issues an informational message stating whether the VERIFY
command is successful.
If a subsequent OPEN is issued for update, VSAM turns off the open-for-output
indicator at CLOSE. If the data set is opened for input, however, the
open-for-output indicator is left on.
You can use the RPL macro to generate a request parameter list (RPL) when your
program is assembled, or the GENCB macro to build a request parameter list when
your program is executed. For information on the advantages and disadvantages of
using GENCB, see “Manipulating Control Block Contents” on page 124.
When you define your request, specify only the processing options appropriate to
that particular request. Parameters not required for a request are ignored. For
example, if you switch from direct to sequential retrieval with a request parameter
list, you do not have to zero out the address of the field containing the search
argument (ARG=address).
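For example (labels and lengths are illustrative), a request parameter list
generated at assembly time can later be switched from direct to sequential
retrieval with MODCB, leaving the ARG address in place to be ignored:

```
RETRVE   RPL   ACB=ACCESS,AREA=WORK,AREALEN=100,ARG=SEARCH,            X
               OPTCD=(KEY,DIR)
         ...
*        Switch to sequential retrieval; ARG need not be zeroed
         MODCB RPL=RETRVE,OPTCD=(KEY,SEQ)
         LTR   15,15            test the MODCB return code
         BNZ   ERROR
```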
Each request parameter list in a chain should have the same OPTCD
subparameters. Having different subparameters can cause logical errors. You
cannot chain request parameter lists for updating or deleting records—only for
retrieving records or storing new records. You cannot process records in the I/O
buffer with chained request parameter lists. (RPL OPTCD=UPD and RPL
OPTCD=LOC are invalid for a chained request parameter list.)
When you are using chained RPLs, if an error occurs anywhere in the chain, the
RPLs following the one in error are made available without being processed and
are posted complete with a feedback code of zero.
The GENCB, MODCB, TESTCB, and SHOWCB macros build a parameter list that
describes, in codes, the actions indicated by the parameters you specify. The
parameter list is passed to VSAM to take the indicated actions. An error can occur
if you specify the parameters incorrectly.
You can use the WAREA parameter to provide an area of storage in which to
generate the control block. This work area has a 64K (X'FFFF') size limit. If you
do not provide storage when you generate control blocks, the ACB, RPL, and
EXLST will reside below 16 megabytes unless LOC=ANY is specified.
After issuing a TESTCB macro, you examine the PSW condition code. If the
TESTCB is not successful, register 15 contains an error code and VSAM passes
control to an error routine, if one has been specified. For a keyword specified as an
option or a name, you test for an equal or unequal comparison; for a keyword
specified as an address or a number, you test for an equal, unequal, high, low,
not-high, or not-low condition.
VSAM compares A to B, where A is the contents of the field and B is the value to
which it is to be compared. A low condition means, for example, that A is lower
than B — that is, that the value in the control block is lower than the value you
specified. If you specify a list of option codes for a keyword (for example,
MACRF=(ADR,DIR)), each of them must equal the corresponding value in the
control block for you to get an equal condition.
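As an illustrative sketch (the branch and routine labels are not from the
text), a TESTCB that tests whether a data set is open, with an error routine
specified through ERET:

```
         TESTCB ACB=ACCESS,OFLAGS=OPEN,ERET=TESTERR
         BE    ISOPEN           equal condition: the data set is open
*        TESTERR receives control if TESTCB itself fails;
*        register 15 then contains the error code
```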
Some of the fields can be tested at any time; others, only after a data set is
opened. The ones that can be tested only after a data set is opened can, for a
key-sequenced data set, pertain either to the data or to the index, as specified in
the OBJECT parameter.
You can display fields using the SHOWCB macro at the same time that you test
the fields.
With addressed access of a key-sequenced data set, VSAM does not insert or add
new records.
Sequential Insertion: If the new record belongs after the last record of the control
interval and the free space limit has not been reached, the new record is inserted
into the existing control interval. If the control interval does not contain sufficient
free space, the new record is inserted into a new control interval without a true
split.
If the new record does not belong at the end of the control interval and there is free
space in the control interval, it is placed in sequence into the existing control
interval. If adequate free space does not exist in the control interval, a control
interval split occurs at the point of insertion. The new record is inserted into the
original control interval and the following records are inserted into a new control
interval.
Mass Sequential Insertion: When VSAM detects that two or more records are to
be inserted in sequence into a collating position (between two records) in a data
set, VSAM uses a technique called mass sequential insertion to buffer the records
being inserted, and to reduce I/O operations. Using sequential instead of direct
access allows you to take advantage of this technique. You can also extend your
data set (resume loading) by using sequential insertion to add records beyond the
highest key or relative record number. There are possible restrictions to extending
a data set into a new control area depending on the share options you specify. See
Chapter 12, “Sharing VSAM Data Sets” on page 173.
Mass sequential insertion observes control interval and control area free space
specifications when the new records are a logical extension of the control interval
or control area (that is, when the new records are added beyond the highest key or
relative record number used in the control interval or control area).
When several groups of records in sequence are to be mass inserted, each group
can be preceded by a POINT with RPL OPTCD=KGE to establish positioning.
KGE specifies that the key you provide for a search argument must be equal to the
key or relative record number of a record.
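A sketch of the positioning described above (labels are illustrative): each
group of records to be mass-inserted is preceded by a POINT whose RPL
specifies OPTCD=KGE:

```
INSRPL   RPL   ACB=ACCESS,AREA=WORK,AREALEN=100,ARG=GRPKEY,            X
               OPTCD=(KEY,SEQ,KGE)
         POINT RPL=INSRPL       position at or after the key in GRPKEY
         LTR   15,15
         BNZ   ERROR
*        PUT requests issued next are mass-inserted at this position
```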
Direct Insertion: If the new record belongs after the last record of the control
interval and the free space limit has not been reached, the new record is added to
the end of the existing control interval. If the control interval does not contain
sufficient free space, the new record is inserted into an unused control interval.
If the new record belongs between existing records in the control interval and there
is sufficient free space, the new record is placed in sequence into the control
interval. If the control interval does not contain sufficient free space, the control
interval is split. Approximately half of the records are moved to a new control
interval. If there is still not enough free space, the control interval is split again. The
process of splitting the control interval continues until there is enough free space to
insert the new record.
If the insertion is to the end of the control interval, the record is placed in a new
control interval.
Retrieving Records
The GET macro is used to retrieve records. To retrieve records for update, use the
GET macro with the PUT macro. You can retrieve records sequentially or directly.
In either case, VSAM returns the length of the retrieved record to the RECLEN field
of the RPL.
Sequential Retrieval
Records can be retrieved sequentially using keyed access or addressed access.
| Keyed sequential retrieval. The first time your program accesses a data set
| for keyed sequential access (RPL OPTCD=(KEY,SEQ)), VSAM is positioned at
| the first record in the data set in key sequence if and only if the following is
| true:
| 1. Nonshared resources are being used.
| 2. There have not been any previous requests against the file.
| If VSAM picks a string that has been used previously, this implicit positioning
| will not occur. Therefore, with concurrent or multiple RPLs, it is best to initiate
| your own POINTs and positioning to prevent logic errors.
| With shared resources, you must always use a POINT macro to establish
| position. A GET macro can then retrieve the record. Certain direct requests can
| also hold position. See Figure 25 on page 132 for details on when positioning
| is retained or released. VSAM checks positioning when processing modes are
| changed between requests.
| For keyed sequential retrieval of a fixed-length or variable-length RRDS, the
| relative record number is treated as a full key. If a deleted record is found
| during sequential retrieval, it is skipped and the next record is retrieved. The
| relative record number of the retrieved record is returned in the argument field
| of the RPL.
Addressed sequential retrieval. Retrieval by address is identical to retrieval
by key, except that the search argument is an RBA, which must be matched to
the RBA of a record in the data set. When a processing program opens a data
set with nonshared resources for addressed access, VSAM is positioned at the
record with RBA of zero to begin addressed sequential processing. A
sequential GET request causes VSAM to retrieve the data record at which it is
positioned and then positions VSAM at the next record. The address specified
for a GET or a POINT must correspond to the beginning of a data record;
otherwise the request is not valid. Spanned records stored in a key-sequenced
data set cannot be retrieved using addressed retrieval.
You cannot predict the RBAs of compressed records.
If, after positioning, you issue a direct request through the same request parameter
list, VSAM drops positioning unless NSP or UPD was specified in the RPL OPTCD
parameter.
When a POINT is followed by a VSAM GET/PUT request, both the POINT and the
subsequent request must be in the same processing mode. For example, a POINT
with RPL OPTCD=(KEY,SEQ,FWD) must be followed by GET/PUT with RPL
OPTCD=(KEY,SEQ,FWD); otherwise, the GET/PUT request is rejected.
For skip-sequential retrieval, you must indicate the key of the next record to be
retrieved. VSAM skips to the next record's index entry by using horizontal pointers
in the sequence set to find the appropriate sequence-set index record and scan its
entries. The key of the next record to be retrieved must always be higher in
sequence than the key of the preceding record retrieved.
Direct Retrieval
Records can also be retrieved directly using keyed access or addressed access.
Keyed direct retrieval for a key-sequenced data set does not depend on prior
positioning. VSAM searches the index from the highest level down to the
sequence set to retrieve a record. You can specify the record to be retrieved by
supplying one of the following:
The exact key of the record
An approximate key, less than or equal to the key field of the record
A generic key.
You can use an approximate specification when you do not know the exact
key. If a record actually has the key specified, VSAM retrieves it. Otherwise, it
retrieves the record with the next higher key. Generic key specification for direct
processing causes VSAM to retrieve the first record having that generic key. If
you want to retrieve all the records with the generic key, specify RPL
OPTCD=NSP in your direct request. That causes VSAM to position itself at the
next record in key sequence. You can then retrieve the remaining records
sequentially.
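For example (labels and the key length are illustrative), retrieving the first
record with a generic key directly and the remaining records sequentially:

```
GENRPL   RPL   ACB=ACCESS,AREA=WORK,AREALEN=100,ARG=GENKEY,            X
               KEYLEN=3,OPTCD=(KEY,DIR,NSP,GEN)
         GET   RPL=GENRPL       first record with the generic key
*        NSP left VSAM positioned at the next record in key
*        sequence; switch to sequential to get the remainder
         MODCB RPL=GENRPL,OPTCD=(KEY,SEQ)
LOOP     GET   RPL=GENRPL
```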
To use direct or skip-sequential access to process a fixed-length or
variable-length RRDS, you must supply the relative record number of the record
you want in the argument field of the RPL macro. For a variable-length RRDS,
you also must supply the record length in the RECLEN field of the RPL macro.
If you request a deleted record, the request will cause a no-record-found logical
error.
A fixed-length RRDS has no index. VSAM takes the number of the record to be
retrieved and calculates the control interval that contains it and its position
within the control interval.
Addressed direct retrieval requires that the RBA of each individual record be
specified; previous positioning is not applicable.
With direct processing, you can optionally specify RPL OPTCD=NSP to indicate
that position be maintained following the GET. Your program can then process the
following records sequentially in either a forward or backward direction.
Updating a Record
The GET and PUT macros are used to update records. A GET for update retrieves
the record and the following PUT for update stores the record that the GET
retrieved.
When you update a record in a key-sequenced data set, you cannot alter the prime
key field.
You can update the contents of a record with addressed access, but you cannot
alter the record's length. To change the length of a record in an entry-sequenced
data set, you must store it either at the end of the data set (as a new record) or in
the place of an inactive record of the same length. You are responsible for marking
the old version of the record as inactive.
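A minimal update sketch (labels are illustrative): the GET specifies
OPTCD=UPD, and the PUT that follows stores the record the GET retrieved:

```
UPDRPL   RPL   ACB=ACCESS,AREA=WORK,AREALEN=100,ARG=KEYFLD,            X
               OPTCD=(KEY,DIR,UPD)
         GET   RPL=UPDRPL       retrieve the record and hold position
*        ... modify fields in WORK, but not the prime key ...
         PUT   RPL=UPDRPL       store the updated record
```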
Deleting a Record
After a GET for update retrieves a record, an ERASE macro can delete the record.
The ERASE macro can be used only with a key-sequenced data set, fixed-length or
variable-length RRDS. When you delete a record in a key-sequenced data set or
variable-length RRDS, the record is physically erased. The space the record
occupied is then available as free space.
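Deletion follows the same pattern (labels are illustrative): a GET for update
followed by an ERASE through the same request parameter list:

```
DELRPL   RPL   ACB=ACCESS,AREA=WORK,AREALEN=100,ARG=KEYFLD,            X
               OPTCD=(KEY,DIR,UPD)
         GET   RPL=DELRPL       retrieve the record to be deleted
         ERASE RPL=DELRPL       erase it; the space becomes free space
```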
You can erase a record from the base cluster of a path only if the base cluster is a
key-sequenced data set. If the alternate index is in the upgrade set (that is,
UPGRADE was specified when the alternate index was defined), it is modified
automatically when you erase a record. If the alternate key of the erased record is
unique, the alternate index data record with that alternate key is also deleted.
When you erase a record from a fixed-length RRDS, the record is set to binary
zeros and the control information for the record is updated to indicate an empty
slot. You can reuse the slot by inserting another record of the same length into it.
With an entry-sequenced data set, you are responsible for marking a record you
consider to be deleted. As far as VSAM is concerned, the record is not deleted.
You can reuse the space occupied by a record marked as deleted by retrieving the
record for update and storing in its place a new record of the same length.
The operation that uses but immediately releases a buffer and does not retain
positioning is:
GET RPL OPTCD=(DIR,NUP,MVE)
If you are doing multiple string update processing, you must consider VSAM
look-aside processing and the rules surrounding exclusive use. Look-aside means
VSAM checks its buffers to see if the control interval is already present when
requesting an index or data control interval.
For GET to update requests, the buffer is obtained in exclusive control, and then
read from the device for the latest copy of the data. If the buffer is already in
exclusive control of another string, the request fails with an exclusive control
feedback code. If you are using shared resources, the request can be queued, or
can return an exclusive control error.
If you are using nonshared resources, VSAM does not queue requests that have
exclusive control conflicts, and you are required to clear the conflict. If a conflict is
found, VSAM returns a logical error return code, and you must stop activity and
clear the conflict. If the RPL that caused the conflict had exclusive control of a
control interval from a previous request, issue an ENDREQ before you attempt
to clear the problem. You can clear the conflict in one of three ways:
Queue until the RPL holding exclusive control of the control interval releases
that control and then reissue the request
Issue an ENDREQ against the RPL holding exclusive control to force it to
release control immediately
Use shared resources and issue MRKBFR MARK=RLS.
Note: If your RPL has provided a correctly specified MSGAREA and MSGLEN,
the address of the RPL holding exclusive control is provided in the first word
of the MSGAREA. Your RPL field, RPLDDDD, contains the RBA of the
requested control interval.
| Strings (sometimes called “place holders”) are like cursors: each represents a
| position in the data set, much as holding your finger in a book keeps your place.
| The same ACB is used for all requests, and the data set needs to be opened only
| once. This means, for example, you could be processing a data set sequentially
| using one RPL, and at the same time, using another RPL, directly access selected
| records from the same data set.
| Keep in mind, though, that strings are not “owned” by the RPL any longer than the
| request holds position. Once a request gives up position (for example, with an
| ENDREQ), that string is free to be used by another request and must be
| repositioned in the data set by the user.
| For each request, a string defines the set of control blocks for the exclusive use of
| one request. For example, if you use three RPLs, you should specify three strings.
| If the number of strings you specify is not sufficient, and you are using NSR, the
| operating system dynamically extends the number of strings as needed by the
| concurrent requests for the ACB. Strings allocated by dynamic string addition are
| not necessarily in contiguous storage.
| Dynamic string addition does not occur with LSR and GSR. Instead, you will get a
| logic error if you have more requests than available strings.
| The maximum number of strings that can be defined or added by the system is
| 255. Therefore, the maximum number of concurrent requests holding position in
| one data set at any one time is 255.
When you use direct or skip-sequential access to process a path, a record from the
base data set is returned according to the alternate key you specified in the
argument field of the RPL macro. If the alternate key is not unique, the record first
entered with that alternate key is returned and a feedback code (duplicate key) is
set in the RPL. To retrieve the remaining records with the same alternate key,
specify RPL OPTCD=NSP when retrieving the first record with a direct request, and
then switch to sequential processing.
You can insert and update data records in the base cluster using a path if:
The PUT request does not result in nonunique alternate keys in an alternate
index, which you have defined with the UNIQUEKEY attribute. However, if a
nonunique alternate key is generated and you have specified the
NONUNIQUEKEY attribute, updating can occur.
You do not change the key of reference between the time the record was
retrieved for update and the PUT is issued.
You do not change the prime key.
When the alternate index is in the upgrade set, the alternate index is modified
automatically by inserting or updating a data record in the base cluster. If the
updating of the alternate index results in an alternate index record with no pointers
to the base cluster, the alternate-index record is erased.
Note that when you use SHAREOPTIONS 2, 3, or 4, you must continue to
ensure read/write integrity when issuing concurrent requests (such as GETs and
PUTs) on the base cluster and its associated alternate indexes. Failure to ensure
read/write integrity might temporarily cause “No Record Found” or “No Associated
Base Record” errors for a GET request. You can bypass such errors by reissuing
the GET request, but it is best to prevent the errors by ensuring read/write integrity.
Once the request is completed, CHECK releases control to the next instruction in
your program, and frees up the RPL for use by another request.
Ending a Request
Suppose you determine, after initiating a request but before it is completed, that
you do not want to complete the request. For example, suppose you determine
during the processing immediately following a GET that you do not want the record
you just requested. You can use the ENDREQ macro to cancel the request.
ENDREQ has the following advantages:
It allows you to avoid checking an unwanted asynchronous request.
It writes any unwritten data or index buffers in use by the string.
It cancels the VSAM positioning on the data set for the RPL.
| Note: If you issue the ENDREQ macro, it is important that you check the
| ENDREQ return code to make sure it completes successfully. If an
| asynchronous request does not complete ENDREQ successfully, you must
| issue the CHECK macro. The data set cannot be closed until all
| asynchronous requests successfully complete either ENDREQ or CHECK.
| ENDREQ will wait for the target RPL to post and, for that reason, it should
| not be issued in an attempt to terminate a hung request.
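In line with the note above, a sketch (labels are illustrative) of canceling a
request and verifying that the cancellation completed:

```
         ENDREQ RPL=RETRVE      cancel the outstanding request
         LTR    15,15           check the ENDREQ return code
         BZ     CANCELED
*        ENDREQ did not complete successfully for this
*        asynchronous request; CHECK must be issued instead
         CHECK  RPL=RETRVE
```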
For data sets that did not go through a normal CLOSE processing (for example,
due to power failure), VSAM OPEN will perform an implicit VERIFY. This performs
the same function as when the VERIFY is explicitly performed with the Access
Method Services command. If the VERIFY is successful, the application is passed
a zero return code and a unique reason code, indicating that the VERIFY was
performed on its behalf. If the VERIFY is unsuccessful, VSAM OPEN will pass a
unique return code and reason code. This is a warning and it is the responsibility of
the application to determine if processing should continue. Implicit VERIFYs are not
performed for linear data sets. VSAM OPEN completes with return and reason
codes as described above.
If a record management error occurs while CLOSE is flushing buffers, the data
set's catalog information is not updated. The catalog cannot properly reflect the
data set's status and the index cannot accurately reflect some of the data records.
If the program enters an abnormal termination routine (abend), all open data sets
are closed. The VSAM CLOSE invoked by abend does not update the data set's
catalog information, does not complete outstanding I/O requests, and does not
flush buffers. The catalog cannot properly reflect the cluster's status, and the index
cannot accurately reference some of the data records. You can use the access
method services VERIFY command to correct catalog information. The use of
VERIFY is described in “Using VERIFY to Process Improperly Closed Data Sets”
on page 45.
CLOSE TYPE=T causes VSAM to complete any outstanding I/O operations, update
the catalog if necessary, and write any required SMF records. Processing can
continue after a temporary CLOSE without issuing an OPEN macro.
If a VSAM data set is closed and CLOSE TYPE=T is not specified, you must
reopen the data set before performing any additional processing on it.
When you issue a temporary or a permanent CLOSE macro, VSAM updates the
data set's catalog records. If your program ends with an abnormal end (ABEND)
without closing a VSAM data set, the data set's catalog records are not updated
and contain inaccurate statistics.
In VSAM, you can operate in cross-memory or SRB mode only for synchronous,
supervisor state requests with shared resources or improved control interval (ICI)
access. For data sets not processed with ICI, an attempt to invoke VSAM
asynchronously, in problem state, or with nonshared resources in either
cross-memory or SRB mode results in a logical error. This error is not generated
for ICI.
For cross-memory mode requests, VSAM does not do wait processing when a
UPAD taken for wait returns to VSAM. For non-cross-memory task mode, however,
if the UPAD taken for wait returns with the ECB not posted, VSAM issues a WAIT
supervisor call instruction (SVC). For either mode, when a UPAD taken for post
processing returns, VSAM assumes the ECB has been marked complete and does
not do post processing.
SRB mode does not require UPAD. If a UPAD is provided for an SRB mode
request, it is taken only for I/O wait and resource wait processing.
See Chapter 16, “VSAM User-Written Exit Routines” on page 217 for more
information on VSAM user-written exit routines.
//ddname  DD  DSNAME=dsname,DISP={OLD|SHR}
          [,OPTCD={DIR|SEQ|SKP}]
          ...
          [,BUFND=number]
          [,BUFNI=number]
          [,BUFSP=number]
          [,MACRF=([DIR][,SEQ][,SKP]
                   [,IN][,OUT]
                   [,NRS][,RST])]
          [,STRNO=number]
          [,PASSWD=address]
          [,EXLST=address]
          ...
Figure 27 on page 140 is a skeleton program that shows the relationship of VSAM
macros to each other and to the rest of the program.
START CSECT
SAVE(14,12) Standard entry code
.
B INIT Branch around file specs
Most of the options are specified in the access method services DEFINE command
when a data set is defined. Some options can also be specified in the ACB and
GENCB macros and in the DD AMP parameter.
Control interval size affects record processing speed and storage requirements in
these ways:
Buffer space. Data sets with large control interval sizes require more buffer
space in virtual storage. For information on how much buffer space is required,
see “Determining I/O Buffer Space for Nonshared Resource” on page 152.
I/O operations. Data sets with large control interval sizes require fewer I/O
operations to bring a given number of records into virtual storage; fewer index
records must be read. It is best to use large control interval sizes for sequential
and skip-sequential access. Large control intervals are not beneficial for keyed
direct processing of a key-sequenced data set or variable-length RRDS.
Free space. Free space is used more efficiently (fewer control interval splits
and less wasted space) as control interval size increases relative to data record
size. For more information on efficient use of free space, see “Optimizing Free
Space Distribution” on page 146.
The valid control interval sizes and block sizes are from 512 to 8192 bytes in
increments of 512 bytes, and from 8KB to 32KB in increments of 2KB for objects
cataloged in integrated catalog facility catalogs and for the integrated catalog facility
catalogs themselves. Control interval size must be a whole number of physical
blocks. If you specify a control interval size that is not a multiple of the
supported sizes, VSAM increases it to the next valid multiple. For example, 2050 is
increased to 2560.
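As a sketch of the rounding rule just described (the function name and sample values are illustrative, not part of VSAM):

```python
def round_ci_size(requested):
    """Round a requested control interval size up to the next valid
    VSAM size: a multiple of 512 bytes up to 8192, then a multiple
    of 2048 (2KB) up to 32768."""
    step = 512 if requested <= 8192 else 2048
    return -(-requested // step) * step  # ceiling to the next multiple

print(round_ci_size(2050))  # 2560, as in the example above
```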
The valid control interval and block sizes for index components cataloged in VSAM
catalogs are 512, 1024, 2048 and 4096 bytes.
The block size of the index component is always equal to the control interval size.
However, the block size for the data component and index component might differ.
Unless the data set was defined with the SPANNED attribute, the control interval
must be large enough to hold a data record of the maximum size specified in the
RECORDSIZE parameter. Because the minimum amount of control information in a
control interval is 7 bytes, a control interval is normally at least 7 bytes larger than
the largest record in the component. For compressed data sets, a control interval is
at least 10 bytes larger than the largest record after it is compressed. This allows
for the control information and record prefix. Since the length of a particular record
is hard to predict and since the records might not compress, it is best to assume
that the largest record is not compressed. If the control interval size you specify is
not large enough to hold the maximum size record, VSAM increases the control
interval size to a multiple of the minimum physical block size. The control interval
size VSAM provides is large enough to contain the record plus the overhead.
Note: For a variable-length RRDS, a control interval is at least 11 bytes larger
than the largest record.
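The minimum control interval size implied by these overheads can be sketched as follows (the overhead values come from the text; the function names are illustrative):

```python
def min_ci_overhead(record_kind):
    # bytes of control information plus record prefix, per the text:
    # 7 for ordinary records, 10 for compressed data sets,
    # 11 for a variable-length RRDS
    overhead = {"fixed": 7, "compressed": 10, "variable_rrds": 11}
    return overhead[record_kind]

def min_ci_size(max_record, record_kind="fixed"):
    # the control interval must hold the largest record plus its overhead
    return max_record + min_ci_overhead(record_kind)

print(min_ci_size(4089))  # 4096 for an ordinary KSDS record
```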
The use of the SPANNED parameter removes this constraint by allowing data
records to be continued across control intervals. The maximum record size is then
equal to the number of control intervals per control area multiplied by control
interval size minus 10. The use of the SPANNED parameter places certain
restrictions on the processing options that can be used with a data set. For
example, records of a data set with the SPANNED parameter cannot be read or
written in locate mode. For more information on spanned records, see “Spanned
Records” on page 68.
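Under the SPANNED rule above, the maximum record size can be sketched as follows (180 control intervals per control area is a hypothetical value for illustration):

```python
def max_spanned_record_size(ci_per_ca, ci_size):
    # Each control interval carries 10 bytes of control information,
    # so a spanned record can use at most ci_size - 10 bytes in each
    # control interval of the control area.
    return ci_per_ca * (ci_size - 10)

# Hypothetical data set: 180 data CIs per CA, 4096-byte CIs
print(max_spanned_record_size(180, 4096))  # 735480
```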
PB - Physical Block
Figure 28. Control Interval Size, Physical Track Size, and Track Capacity
The information on a track is divided into physical blocks. Control interval size must
be a whole number of physical blocks. Control intervals can span tracks. However,
poor performance results if a control interval spans a cylinder boundary, because
the read/write head must move between cylinders.
The physical block size is always selected by VSAM. VSAM chooses the largest
physical block size that exactly divides into the control interval size. The block size
is also based on device characteristics.
If you specify free space for a key-sequenced data set or variable-length RRDS,
the system determines the number of bytes to be reserved for free space. For
example, if control interval size is 4096, and the percentage of free space in a
control interval has been defined as 20%, 819 bytes are reserved. Free space
calculations drop the fractional value and use only the whole number.
To find out what values are actually set in a defined data set, you can issue the
access method services LISTCAT command.
The size of the index control interval must be compatible with the size of the data
control interval. If you select too small a data control interval size, the number of
data control intervals in a control area can be large enough to cause the index
control interval size to exceed the maximum. VSAM first tries to increase the index
control interval size and, if not possible, then starts decreasing the number of
control intervals per control area. If this does not solve the problem, the DEFINE
fails.
A 512-byte index control interval is normally chosen, because this is the smallest
amount of data that can be transferred to virtual storage using the least amount of
device and channel resources. A larger control interval size can be needed,
depending on the allocation unit, the data control interval size, the key length, and
the key content as it affects compression. (It is rare to have the entire key
represented in the index, because of key compression.) After the first define
(DEFINE), a catalog listing (LISTCAT) shows the number of control intervals in a
control area and the key length of the data set.
The number of control intervals and the key length can be used to estimate the size
of index record necessary to avoid control area splits, which occur when the index
control interval size is too small. To make a general estimate of the index control
interval size needed, multiply one half of the key length (KEYLEN) by the number
of data control intervals per control area (DATA CI/CA):
(KEYLEN/2) * DATA CI/CA ≤ INDEX CISIZE
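This rule of thumb can be sketched as follows; the assumption that roughly half of each key survives key compression comes from the text, and the sample values are hypothetical:

```python
def estimated_index_cisize(key_length, data_ci_per_ca):
    # (KEYLEN/2) * DATA CI/CA gives a rough lower bound on the index
    # control interval size needed to avoid control area splits
    return (key_length * data_ci_per_ca) // 2

# Hypothetical: 16-byte keys, 180 data CIs per CA
print(estimated_index_cisize(16, 180))  # 1440, so a 2048-byte index CI suffices
```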
The following examples show how the control area size is generally determined by
the primary and secondary allocation amount. The index control interval size and
buffer space can also affect the control area size. The following examples assume
the index CI size is large enough to handle all the data CIs in the CA and the buffer
space is large enough not to affect the CI sizes.
CYLINDERS(5,10) Results in a 1-cylinder control area size
KILOBYTES(100,50) The system determines the control area based on 50 KB,
resulting in a 1-track control area size
RECORDS(2000,5) Assuming 10 records would fit on a track, results in a 1-track
control area size
TRACKS(100,3) Results in a 3-track control area size
TRACKS(3,100) Results in a 3-track control area size
A spanned record cannot be larger than a control area less the control information
(10 bytes per control interval). Therefore, do not specify large spanned records
with a primary or secondary allocation too small to contain the largest
spanned record.
VSAM data sets cataloged in an integrated catalog facility catalog are allocated
with the CONTIG attribute if the allocation unit is TRACKS. Therefore, the primary
and secondary allocations are in contiguous tracks.
Note: If space is allocated in kilobytes, megabytes, or records, the system sets
the control area size equal to multiples of the minimum number of tracks or
cylinders required to contain the specified kilobytes, megabytes, or records.
Space is not assigned in units of bytes or records.
If the control area is smaller than a cylinder, its size will be an integral multiple of
tracks, and it can span cylinders. However, a control area can never span an extent
of a data set, which is always composed of a whole number of control areas. For
more information on allocating space for a data set, see “Allocating Space for a
VSAM Data Set” on page 95.
Free space improves performance by reducing the likelihood of control interval and
control area splits. This, in turn, reduces the likelihood of VSAM moving a set of
records to a different cylinder away from other records in the key sequence. When
there is a direct insert or a mass sequential insert that does not result in a split,
VSAM inserts the records into available free space.
The amount of free space you need depends on the number and location of
records to be inserted, lengthened, or deleted. Too much free space can result in:
Increased number of index levels, which affects run times for direct processing.
More direct access storage required to contain the data set.
Too little free space can result in an excessive number of control interval and
control area splits. These splits are time consuming, and have the following
additional effects:
More time is required for sequential processing because the data set is not in
physical sequence.
More seek time is required during processing because of control area splits.
Use LISTCAT or the ACB JRNAD exit to monitor control area splits. For more
information, refer to “JRNAD Exit Routine to Journalize Transactions” on page 221.
When splits become frequent, you can reorganize the data set using REPRO or
EXPORT. Reorganization creates a smaller, more efficient data set with fewer
control intervals. However, reorganizing a data set is time consuming.
[Figure not reproduced: a 4096-byte control interval containing records R1
through R6 at 500-byte intervals, with free space between the free space
threshold at byte 3267 and the control information at the end of the control
interval.]
FREESPACE(20 10)
CONTROLINTERVALSIZE(4096)
RECORDSIZE(500 500)
For this data set, each control interval is 4096 bytes. In each control interval, 10
bytes are reserved for control information. Because control interval free space is
specified as 20%, 819 bytes are reserved as free space (4096 * 0.20 = 819.2, and
free space calculations drop the fraction). The free space threshold is at byte
3267. The space between the threshold and the control information is reserved as
free space.
Because the records loaded in the data set are 500-byte records, there is not
enough space for another record between byte 3000 and the free space threshold
at byte 3267. These 267 bytes of unused space are also used as free space. This
leaves 1086 bytes of free space; enough to insert two 500-byte records. Only 86
bytes are left unusable.
When you specify free space, ensure that the percentages of free space you
specify yield full records and full control intervals with a minimum amount of
unusable space.
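The arithmetic in the example above can be traced with a short sketch (the variable names are illustrative):

```python
CI_SIZE = 4096
CONTROL_INFO = 10        # bytes of control information per CI
FREESPACE_PCT = 20       # control interval free space percentage
RECORD_SIZE = 500

reserved = CI_SIZE * FREESPACE_PCT // 100        # 819; fraction dropped
threshold = CI_SIZE - CONTROL_INFO - reserved    # free space threshold: byte 3267
loaded = threshold // RECORD_SIZE                # 6 records fit below the threshold
unused = threshold - loaded * RECORD_SIZE        # 267 bytes, also used as free space
total_free = reserved + unused                   # 1086 bytes of free space
inserts = total_free // RECORD_SIZE              # room to insert 2 more records
unusable = total_free - inserts * RECORD_SIZE    # 86 bytes left unusable

print(reserved, threshold, total_free, inserts, unusable)
```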
Few additions. If few records will be added to the data set, consider a free
space specification of (0 0). When records are added, new control areas are
created to provide room for additional insertions.
If the few records to be added are fairly evenly distributed, control interval free
space should be equal to the percentage of records to be added. (FSPC (nn
0), where nn equals the percentage of records to be added.)
Evenly distributed additions. If new records will be evenly distributed
throughout the data set, control area free space should equal the percentage of
records to be added to the data set after the data set is loaded. (FSPC (0 nn),
where nn equals the percentage of records to be added.)
Unevenly distributed additions. If new records will be unevenly distributed
throughout the data set, specify a small amount of free space. Additional splits,
after the first, in that part of the data set with the most growth will produce
control intervals with only a small amount of unneeded free space.
Mass insertion. If you are inserting a group of sequential records, you can
take full advantage of mass insertion by using the ALTER command to change
free space to (0 0) after the data set is loaded. For more information on mass
insertion, refer to “Inserting and Adding Records” on page 126.
Additions to a specific part of the data set. If new records will be added to
only a specific part of the data set, load those parts where additions will not
occur with a free space of (0 0). Then, alter the specification to (n n) and load
those parts of the data set that will receive additions. The example in “Altering
the Free Space Specification When Loading a Data Set” demonstrates this.
Assume that a large key-sequenced data set is to contain records with keys from 1
through 300000. It is expected to have no inserts in key range 1 through 100000,
some inserts in key range 100001 through 200000, and heavy inserts in key range
200001 through 300000.
Key Ranges
With a key-sequenced data set, you can designate specific key ranges to be stored
on specific volumes. For example, if you have three volumes, you might assign
records with keys A through E to the first volume, F through M to the second, and
N through Z to the third. Key range data sets cannot be defined as reusable or as
extended format.
Volume serial numbers specified for system-managed key range data sets do not
guarantee that defined key ranges will be on specific volumes.
All the volumes specified for the cluster's data component or for the index
component must be of the same device type. However, the data component and
the index component can be on different device types. When you define a key
range data set, all volumes to contain key ranges must be mounted.
When you define a key range data set, the primary space allocation is acquired
from each key range. Secondary allocations are supplied as the data set is
extended within a particular key range.
Example 1
This example illustrates how space is allocated for a key range data set with three
key ranges on three volumes.
DEFINE CLUSTER ... VOLUMES (V1,V2,V3) -
KEYRANGES ((A,F) (G,P) (Q,Z)) -
CYL (100,50)
Example 2
This example illustrates how space is allocated for a key range data set with two
key ranges on three volumes. When there are more volumes than there are key
ranges, the excess volumes are marked as candidates and are not required to be
mounted. The candidate volumes are used for overflow records from any key
range.
DEFINE CLUSTER ... VOLUMES (V1,V2,V3) -
KEYRANGES ((A,F) (G,Z)) -
CYL (100,50)
V1           V2           V3 (Overflow)
A-F prim.    G-Z prim.    A-F prim.
A-F sec.     G-Z sec.     G-Z prim.
A-F sec.     G-Z sec.     A-F sec.
A-F sec.     G-Z sec.     G-Z sec.
A-F sec.     G-Z sec.
A-F sec.     G-Z sec.
A-F sec.     G-Z sec.
Example 3
This example illustrates how space is allocated for a key range data set with five
key ranges on three volumes.
DEFINE CLUSTER ... VOLUMES (V1,V2,V3) -
KEYRANGES ((A,D) (E,H) (I,M) (N,R) (S,Z)) -
CYL (100,50)
V1           V2           V3
A-D prim.    E-H prim.    I-M prim.
A-D sec.     E-H sec.     N-R prim.
             E-H sec.     S-Z prim.
                          S-Z sec.
                          I-M sec.
Example 4
This example illustrates how space is allocated for a key range data set with four
key ranges on two volumes when each volume is specified twice. When a volume
serial number is duplicated in the VOLUMES parameter, more than one key range
is allocated space on the volume.
DEFINE CLUSTER ... VOLUMES (V1,V2,V1,V2) -
KEYRANGES ((A,D) (E,H) (I,M) (N,Z)) -
CYL (100,50)
V1           V2
A-D prim.    E-H prim.
I-M prim.    N-Z prim.
I-M sec.     E-H sec.
A-D sec.     N-Z sec.
Format-1 DSCB names are used for data sets defined in an integrated catalog
facility catalog or defined with the UNIQUE attribute in a VSAM catalog.
For a multivolume key range cluster, the name specified on the data component is
used for the first key range on each volume. If more than one key range resides on
a volume, a special key range qualifier is appended to a generated name.
A key range qualifier is a unique qualifier appended to the generated name to form
a unique name for a particular key range. A key range qualifier is four characters,
starting with an alphabetic “A” followed by three digits that start at 001. The first
generated key range name would therefore be system-generated-name.A001.
If the data component name is longer than 39 characters, the first 37 characters
are joined to the last 2 characters before the 4-character key range qualifier is
appended.
If a duplicate generated name is found on a volume, the key range qualifier starts
with the letter “B” or “C” and so on, until a unique name is found.
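A minimal sketch of this naming scheme (the function name is hypothetical; only the qualifier format and the 37-plus-2 truncation rule come from the text):

```python
def keyrange_name(data_component_name, seq, letter="A"):
    """Build a generated key range name: the base name plus a
    4-character qualifier (letter + 3 digits starting at 001).
    The letter advances to "B", "C", ... when a duplicate generated
    name is found on the volume."""
    base = data_component_name
    if len(base) > 39:
        # join the first 37 characters to the last 2 characters
        base = base[:37] + base[-2:]
    return f"{base}.{letter}{seq:03d}"

print(keyrange_name("SAMPLE.KSDS.DATA", 1))  # SAMPLE.KSDS.DATA.A001
```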
VSAM must always have sufficient space available to process the data set as
directed by the specified processing options.
Optionally, you can specify RMODE31 on the JCL DD AMP parameter, which lets
you override any RMODE31 values specified when the ACB was created.
| JCL takes precedence over the specification in the data class. NSR must be
| specified. SMB will either weight or optimize buffer handling toward sequential or
| direct processing.
| To optimize your extended format data sets you can use the AMP parameter
| ACCBIAS subparameter along with related subparameters SMBVSP, SMBDFR,
| and SMBHWT. These subparameters can also be used in conjunction with
| DATACLASS Record Access Bias=SYSTEM. These subparameters are used for
| Direct Optimized processing only.
SMBVSP
SMBVSP specifies the amount of virtual storage to obtain for buffers when opening
the data set. Virtual buffer size can be specified in kilobytes with values ranging
from 1K to 2048000K or in megabytes with values ranging from 1M to 2048M. This
value is the total amount of virtual storage which can be addressed in a single
address space. It does not take into consideration the storage required by the
system or the access method.
SMBDFR
| SMBDFR allows you to specify that writing buffers to the medium is to be deferred
| until the buffer is required for a different request or until the data set is closed.
| CLOSE TYPE=T will not write the buffers to the medium when LSR processing is
| used for direct optimization. Defaults for deferred write processing depend upon the
| SHAREOPTIONS, which you specify when you define the data set. The default for
| SHAREOPTIONS (1,3) and (2,3) is deferred write. The default for
| SHAREOPTIONS (3,3), (4,3), and (x, 4) is non-deferred write. If the user specifies
| a value for SMBDFR, this value always takes precedence over any defaults.
SMBHWT
SMBHWT specifies a whole decimal value from 1 to 99. This value is used to
allocate hiperspace buffers as a multiple of the number of virtual buffers that
have been allocated.
When processing a data set directly, VSAM reads only one data control interval at
a time. For output processing (PUT for update), VSAM immediately writes the
updated control interval, if OPTCD=NSP is not specified in the RPL macro.
Unused index buffers do not degrade performance, so you should always specify
an adequate number. For optimum performance, the number of index buffers
should at least equal the number of high-level index set control intervals plus one
per string, so that the entire high-level index set and one sequence-set control
interval per string can be held in virtual storage. Note that additional index buffers
will not be used for more than one sequence set buffer per string unless shared
resource pools are used. For large data sets, specify the number of index buffers
equal to the number of index levels.
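The guideline above amounts to simple arithmetic, sketched here with hypothetical counts:

```python
def recommended_bufni(high_level_index_cis, strno):
    # Hold the entire high-level index set in storage, plus one
    # sequence-set buffer per concurrent string
    return high_level_index_cis + strno

# Hypothetical: 4 high-level index set CIs, 2 concurrent strings
print(recommended_bufni(4, 2))  # BUFNI=6
```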
VSAM reads index buffers one at a time, and if you are using shared resources,
you can keep your entire index set in storage. Index buffers are loaded when the
index is referred to. When many index buffers are provided, index buffers are not
reused until a requested index control interval is not in storage.
VSAM keeps as many index set records as the buffer space allows in virtual
storage. Ideally, the index would be small enough to allow the entire index set to
remain in virtual storage. Because the characteristics of the data set might not
allow a small index, you should be aware of how index I/O buffers are used so that
you can determine how many to provide.
The following example, illustrated in Figure 30, demonstrates how buffers are
scheduled for direct access. Assume the following:
Two strings
Three-level index structure as shown
Three data buffers (one for each string, and one for splits)
Four index buffers (one for highest level of index, one for second level, and one
for each string)
Direct Get    Index Buffers                            Data Buffers
from CA       Pool        SS String 1   SS String 2    String 1      String 2
─────────────────────────────────────────────────────────────────────────────
CA2           IS1 IS2     SS2                          CA2-CI
CA3                       SS3                          CA3-CI
CA6-1st CI    IS3                       SS6                          CA6-1st CI
CA6-1st CI                              SS6                          CA6-1st CI
IS1
┌──────────────────────────────┼──────────────────────────────┐
│ │ │
│ │ │
IS2 IS3 IS4
┌──────┬──┴──┬──────┐ ┌───────┬──┴───┬──────┐ ┌───────┬──┴────┬───────┐
│ │ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ │ │
SS1 SS2 SS3 SS4 SS5 SS6 SS7 SS8 SS9 SS10 SS11 SS12
│ │ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ │ │
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 CA9 CA10 CA11 CA12
IS - Index Set
SS - Sequence Set
CA - Control Area
Figure 30. Scheduling Buffers for Direct Access
When SHAREOPTIONS 4 is specified for the data set, the read-ahead function can
be ineffective because the buffers are refreshed when each control interval is read.
Therefore, for SHAREOPTIONS 4, keeping data buffers at a minimum can actually
improve performance.
If you experience a performance problem waiting for input from the device, you
should specify more data buffers to improve your job's run time. More data buffers
allow you to do more read-ahead processing. An excessive number of buffers,
however, can cause performance problems, because of excessive paging.
For mixed processing situations (sequential and direct), start with two data buffers
per string and increase BUFND to three per string, if paging is not a problem.
When processing the data set sequentially, VSAM reads ahead as buffers become
available. For output processing (PUT for update), VSAM does not immediately
write the updated control interval from the buffer unless a control interval split is
required. The POINT macro does not cause read-ahead processing unless RPL
OPTCD=SEQ is specified; POINT positions the data set for subsequent sequential
retrieval.
The BUFSP, BUFND, BUFNI, and STRNO parameters apply only to the path's
alternate index when the base cluster is opened for processing with its alternate
index. The minimum number of buffers are allocated to the base cluster unless the
cluster's BUFFERSPACE value (specified in the DEFINE command) or BSTRNO
value (specified in the ACB macro) allows for more buffers. VSAM assumes direct
processing and extra buffers are allocated between data and index components
accordingly.
Two data buffers and one index buffer are always allocated to each alternate index
in the upgrade set. If the path's alternate index is a member of the upgrade set, the
minimum buffer increase for each allocation is one for data buffers and one for
index buffers. Buffers are allocated to the alternate index as though it were a
key-sequenced data set. When a path is opened for output and the path alternate
index is in the upgrade set, you can specify ACB MACRF=DSN and the path
alternate index shares buffers with the upgrade alternate index.
Acquiring Buffers
Data and index buffers are acquired and allocated only when the data set is
opened. VSAM dynamically allocates buffers based on parameters in effect when
the program opens the data set. Parameters that influence the buffer allocation are
in the program's ACB: MACRF=(IN|OUT, SEQ|SKP, DIR), STRNO=n, BUFSP=n,
BUFND=n, and BUFNI=n. Other parameters that influence buffer allocation are in
the DD statement's AMP specification for BUFSP, BUFND, and BUFNI, and the
BUFFERSPACE value in the data set's catalog record.
If you open a data set whose ACB includes MACRF=(SEQ,DIR), buffers are
allocated according to the rules for sequential processing. If the RPL is modified
later in the program, the buffers allocated when the data set was opened do not
change.
Data and index buffer allocation (BUFND and BUFNI) can be specified only by the
user with access to modify the ACB parameters, or via the AMP parameter of the
DD statement. Any program can be assigned additional buffer space by modifying
the data set's BUFFERSPACE value, or by specifying a larger BUFSP value with
the AMP parameter in the data set's DD statement.
When a buffer's contents are written, the buffer's space is not released. The control
interval remains in storage until overwritten with a new control interval; if your
program refers to that control interval, VSAM does not have to reread it. VSAM
checks to see if the desired control interval is already in storage. Therefore, when
your program processes records in a limited key range, you might increase
throughput by providing extra data buffers. Buffer space is released when the data
set is closed.
Note: More buffers (either data or index) than necessary might cause excessive
paging or excessive internal processing. There is an optimum point at which
more buffers will not help. You should attempt to have data available just
before it is to be used. If data is read into buffers too far ahead of its use in
the program, it can be paged out.
Index Options
The following options influence performance when using the index of a
key-sequenced data set or variable-length RRDS. Each option improves
performance, but some require that you provide additional virtual storage or
auxiliary storage space. The options are:
Specifying enough virtual storage to contain all index-set records (if you are
using shared resources).
Ensuring that the index control interval is large enough to contain the key of
each control interval in the control area.
Placing the index and the data set on separate volumes.
Replicating index records (REPL option).
Imbedding sequence-set records adjacent to control areas (IMBED option).
You ensure virtual storage for index-set records by specifying enough virtual
storage for index I/O buffers when you begin to process a key-sequenced data set
or variable-length RRDS. VSAM keeps as many index-set records in virtual storage
as possible. Whenever an index record must be retrieved to locate a data record,
VSAM makes room for it by deleting the index record that VSAM judges to be least
useful under the prevailing circumstances. It is generally the index record that
belongs to the lowest index level or that has been used the least. VSAM does not
keep more than one sequence set index record per string unless shared resource
pools are used.
Using different volumes lets VSAM gain access to an index and to data at the
same time. Also, the smaller amount of space required for an index makes it
economical to use a faster storage device for it.
Note: A performance improvement due to separate volumes generally requires
asynchronous processing or multitasking with multiple strings.
Because there are usually few control intervals in the index set, the cost in direct
access storage space is small. If the entire index set is not being held in storage
and there is significant random processing, then replication is a good choice. If not,
replication does very little. Because replication requires little direct access storage,
and it is an attribute that cannot be altered, it can be desirable to choose this
option.
Note: Updating a replicated index generally is slower because all copies of an
index record must be updated.
You probably will get better read performance with a non-replicated index on a
cache controller than with a replicated index on a non-cache controller.
When the data set is defined, you can specify that the sequence-set index record
for each control area is to be imbedded on the first track of the control area. This
allows you to separate the sequence set from the index set and place them on
separate devices. You can put the index set on a faster device to retrieve higher
levels of the index. Sequence set records can be retrieved almost as fast because
there is little rotational delay. (When you choose the IMBED option, sequence set
records are replicated regardless of whether you also chose the REPL option.)
IMBED is not a valid define option for extended format or compressed data sets.
With the IMBED option, one track of each control area is used for sequence set
records. In some situations, this can be too much space for index in relation to the
data. IMBED usually improves performance only for sequential access of data
sets where both the index and the data are under the same DASD actuator. IMBED
must be specified explicitly to get the performance benefits of a replicated,
imbedded sequence set.
Using Hiperbatch
Hiperbatch is a VSAM extension designed to improve performance in specific
situations. It uses the data lookaside facility (DLF) services in MVS to provide an
alternate fast path method of making data available to many batch jobs. Through
Hiperbatch, applications can take advantage of the performance benefits of MVS
without changing existing application programs or the JCL used to run them. For
further information on using Hiperbatch, see MVS Hiperbatch Guide.
Control interval access gives you access to the contents of a control interval; keyed
access and addressed access give you access to individual data records.
Note: Control interval access is not permitted when accessing a compressed data
set. The data set can be opened for control interval access to allow for
VERIFY and VERIFY REFRESH processing only.
With control interval access, you have the option of letting VSAM manage I/O
buffers or managing them yourself (user buffering). With keyed and addressed
access, VSAM always manages I/O buffers. If you choose user buffering, you have
the further option of using improved control interval access, which provides faster
processing than normal control interval access. With user buffering, only control
interval processing is allowed. See “Improved Control Interval Access” on
page 169.
When using control interval processing, you are responsible for maintaining
alternate indexes. If you have specified keyed or addressed access (ACB
MACRF={KEY|ADR},...) and control interval access, then those requests for keyed
or addressed access (RPL OPTCD={ KEY|ADR},...) cause VSAM to upgrade the
alternate indexes. Those requests specifying control interval access will not
upgrade the alternate indexes. You are responsible for upgrading them. Upgrading
an alternate index is described in “Alternate Index Maintenance” on page 108.
With NUB (no user buffering) and NCI (normal control interval access), you can
specify, in the MACRF parameter, that the data set is to be opened for keyed and
addressed access, and for control interval access. For example, MACRF=(CNV,
KEY, SKP, DIR, SEQ, NUB, NCI, OUT) is a valid combination of subparameters.
Usually, control interval access with no user buffering has the same freedoms and
limitations as keyed and addressed access have. Control interval access can be
synchronous or asynchronous, can have the contents of a control interval moved to
your work area (OPTCD=MVE) or left in VSAM's I/O buffer (OPTCD=LOC), and
can be defined by a chain of request parameter lists (except with OPTCD=LOC
specified).
Except for ERASE, all the request macros (GET, PUT, POINT, CHECK, and
ENDREQ) can be used for normal control interval access. To update the contents
of a control interval, you must (with no user buffering) previously have retrieved the
contents for update. You cannot alter the contents of a control interval with
OPTCD=LOC specified.
Both direct and sequential access can be used with control interval access, but skip
sequential access may not. That is, you can specify OPTCD=(CNV,DIR) or
(CNV,SEQ), but not OPTCD=(CNV,SKP).
With sequential access, VSAM takes an EODAD exit when you try to retrieve the
control interval whose CIDF is filled with 0's or, if there is no such control interval,
when you try to retrieve a control interval beyond the last one. A control interval
with such a CIDF contains no data or unused space, and is used to represent the
software end-of-file. However, VSAM control interval processing does not prevent
you from using a direct GET or a POINT and a sequential GET to retrieve the
software end-of-file. The search argument for a direct request with control interval
access is the RBA of the control interval whose contents are desired.
The RPL (or GENCB) parameters AREA and AREALEN have the same use for
control interval access related to OPTCD=MVE or LOC as they do for keyed and
addressed access. With OPTCD=MVE, AREA gives the address of the area into
which VSAM moves the contents of a control interval. With OPTCD=LOC, AREA
gives the address of the area into which VSAM puts the address of the I/O buffer
containing the contents of the control interval.
You can load an entry-sequenced data set with control interval access. If you open
an empty entry-sequenced data set, VSAM allows you to use only sequential
storage. That is, you can issue only PUTs, with OPTCD=(CNV,SEQ,NUP). PUT
with OPTCD=NUP stores information in the next available control interval (at the
end of the data set).
You cannot load or extend a data set with improved control interval access. VSAM
also prohibits you from extending a fixed-length or variable-length RRDS through
normal control interval access.
You can update the contents of a control interval in one of two ways:
Retrieve the contents with OPTCD=UPD and store them back. In this case, the
RBA of the control interval is specified during the GET for the control interval.
Without retrieving the contents, store new contents in the control interval with
OPTCD=UPD. (You must specify UBF for user buffering.) Because no GET (or
a GET with OPTCD=NUP) precedes the PUT, you have to specify the RBA of
the control interval as the argument addressed by the RPL.
Figure 31 shows the relative positions of data, unused space, and control
information in a control interval.
┌───────────────────┬────────────────┬────────┬────────┐
│ Records, │ │ │ │
│ Record Slots, or │ Unused Space │ RDFs │ CIDF │
│ Record Segment │ │ │ │
└───────────────────┴────────────────┴────────┴────────┘
└───────────────────┘ └─────────────────┘
Data Control Information
Figure 31. General Format of a Control Interval
For more information on the structure of a control interval, see “Control Intervals”
on page 65.
Control information consists of a CIDF (control interval definition field) and, for a
control interval containing at least one record, record slot, or record segment, one
or more RDFs (record definition fields). The CIDF and RDFs are ordered from right
to left. The format of the CIDF is the same even if the control interval
consists of multiple smaller physical records.
In an entry-sequenced data set, when there are unused control intervals beyond
the last one that contains data, the first of the unused control intervals contains a
CIDF filled with 0's. In a key-sequenced data set, a key-range portion of a
key-sequenced data set, or a RRDS, the first control interval in the first unused
control area (if any) contains a CIDF filled with 0's. The CIDF filled with 0's
represents the software end-of-file.
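The end-of-file test implied above can be sketched in a few lines. This is a hypothetical illustration (the function name is an invention, not a VSAM service): a control interval is the software end-of-file when its 4-byte CIDF, the last 4 bytes of the CI, is all zeros.

```python
# Illustrative sketch only: a control interval buffer is the software
# end-of-file when its CIDF (the last 4 bytes of the CI) contains zeros.
def is_software_eof(ci: bytes) -> bool:
    cidf = ci[-4:]                     # the CIDF occupies the last 4 bytes
    return cidf == b"\x00\x00\x00\x00"
```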
An RDF is a 3-byte field that contains a 1-byte control field and a 2-byte binary
number:
Offset  Length  Bit Pattern  Description
0(0)    1                    Control field:
                x... ..xx      Reserved.
                .x.. ....      Indicates whether there is (1) or is not (0) a
                               paired RDF to the left of this RDF.
                ..xx ....      Indicates whether the record spans control intervals:
                                 00  No.
                                 01  Yes; this is the first segment.
                                 10  Yes; this is the last segment.
                                 11  Yes; this is an intermediate segment.
                .... x...      Indicates what the 2-byte binary number gives:
                                 0  The length of the record, segment, or slot
                                    described by this RDF.
                                 1  The number of consecutive nonspanned records of
                                    the same length, or the update number of the
                                    segment of a spanned record.
                .... .x..      For a fixed-length RRDS, indicates whether the slot
                               described by this RDF does (0) or does not (1)
                               contain a record.
1(1)    2                    Binary number:
                               When bit 4 of byte 0 is 0, gives the length of the
                               record, segment, or slot described by this RDF.
                               When bit 4 of byte 0 is 1 and bits 2 and 3 of byte 0
                               are 0, gives the number of consecutive records of the
                               same length.
                               When bit 4 of byte 0 is 1 and bits 2 and 3 of byte 0
                               are not 0, gives the update number of the segment
                               described by this RDF.
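The bit layout above can be made concrete with a small decoder. This is a hedged, hypothetical Python sketch: `decode_rdf` and its result keys are inventions for illustration, not part of VSAM or any IBM API. Bits are numbered in the IBM convention, with bit 0 the most significant.

```python
# Hypothetical sketch: decode a 3-byte VSAM RDF (record definition field)
# according to the bit layout described above. Not an IBM API.
import struct

def decode_rdf(rdf: bytes) -> dict:
    assert len(rdf) == 3, "an RDF is always 3 bytes"
    ctl = rdf[0]
    number = struct.unpack(">H", rdf[1:3])[0]   # the 2-byte binary number
    span = (ctl >> 4) & 0b11                    # bits 2-3 of byte 0
    number_is_count = bool(ctl & 0b00001000)    # bit 4 of byte 0
    return {
        "paired": bool(ctl & 0b01000000),       # bit 1: paired RDF to the left
        "span": ("none", "first", "last", "intermediate")[span],
        # bit 4 = 0: the number is a length; bit 4 = 1: a count (nonspanned)
        # or the update number of a spanned-record segment.
        "number_kind": ("count" if span == 0 else "update_number")
                       if number_is_count else "length",
        "empty_slot": bool(ctl & 0b00000100),   # bit 5: 1 = empty RRDS slot
        "number": number,
    }
```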
Figure 32 shows the contents of the CIDF and RDFs of a 512-byte control interval
containing nonspanned records of different lengths.
┌──────────────────────────────────────────────────────────────────────┐
│ │
Length a a a 3a 2a 496-8a 12 4
┌──────┬──────┬──────┬────────────┬──────┬──────────────┬────────┬─────┐
│ │ │ │ │ │ │ │ C │
│ │ │ │ │ │ │ │ I │
│ R1 │ R2 │ R3 │ R4 │ R5 │ FREE SPACE │ RDFs │ D │
│ │ │ │ │ │ │ │ F │
└──────┴──────┴──────┴────────────┴──────┴──────────────┼────────┴─────┤
| └--------┐
| |
| |
┌-------------------------------------------------------┘ |
| |
| |
├───────────────┬───────────────┬───────────────┬───────────────┬───────────────┤
│ RDF4 │ RDF3 │ RDF2 │ RDF1 │ CIDF │
├───────┬───────┼───────┬───────┼───────┬───────┼───────┬───────┼───────┬───────┤
│ │ │ │ │ │ │ │ │ │ 496- │
│ X'00' │ 2a │ X'00' │ 3a │ X'08' │ 3 │ X'40' │ a │ 8a │ 8a │
│ │ │ │ │ │ │ │ │ │ │
└───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┴───────┘
│ R5 │ R4 │ R1 through R3 │
└───────────────┴───────────────┴───────────────────────────────┘
a = Unit of length
Figure 32. Format of Control Information for Nonspanned Records
The four RDFs and the CIDF comprise 16 bytes of control information.
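That accounting can be checked with a little arithmetic. The sketch below is illustrative only (`free_space` is not an API); `a` is the unit of length from Figure 32, and the record lengths come from the figure.

```python
# A quick check (sketch) of the space accounting in Figure 32 for a
# 512-byte control interval: five records totaling 8a, four 3-byte RDFs,
# one 4-byte CIDF, and the remainder free space.
def free_space(ci_size: int, a: int) -> int:
    records = a + a + a + 3 * a + 2 * a   # R1..R5 from the figure = 8a
    control = 4 * 3 + 4                   # four RDFs + CIDF = 16 bytes
    return ci_size - records - control    # = 496 - 8a for a 512-byte CI
```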
Figure 33 shows contents of the CIDF and RDFs for a spanned record with a
length of 1306 bytes.
CI Length—512 Bytes
┌──────────────────────────────────────────────────────────────────────┐
┌─────────────┬─────────────┬─────────────┐
│ RDF2 │ RDF1 │ CIDF │
┌────────────────────────────┼──────┬──────┼──────┬──────┼──────┬──────┤
│ Segment 1 │ X'18'│ n │ X'50'│ 502 │ 502 │ 0 │
└────────────────────────────┴──────┴──────┴──────┴──────┴──────┴──────┘
Length 502 3 3 4
┌─────────────┬─────────────┬─────────────┐
│ RDF2 │ RDF1 │ CIDF │
┌────────────────────────────┼──────┬──────┼──────┬──────┼──────┬──────┤
│ Segment 2 │ X'38'│ n │ X'70'│ 502 │ 502 │ 0 │
└────────────────────────────┴──────┴──────┴──────┴──────┴──────┴──────┘
502 3 3 4
┌─────────────┬─────────────┬─────────────┐
│ RDF2 │ RDF1 │ CIDF │
┌────────────┬───────────────┼──────┬──────┼──────┬──────┼──────┬──────┤
│ Segment 3 │ Free Space │ X'28'│ n │ X'60'│ 302 │ 302 │ 200 │
└────────────┴───────────────┴──────┴──────┴──────┴──────┴──────┴──────┘
302 200 3 3 4
n - Update Number
There are three 512-byte control intervals that contain the segments of the record.
The number “n” in RDF2 is the update number. Only the control interval that
contains the last segment of a spanned record can have free space. Each of the
other segments uses all but the last 10 bytes of a control interval.
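The segment arithmetic above can be sketched as follows (illustrative Python; the function name is an invention): each segment's control interval holds the segment plus 10 bytes of control information (two 3-byte RDFs and a 4-byte CIDF), and only the last segment's control interval can hold free space. For the 1306-byte record in 512-byte control intervals this yields 3 segments, a 302-byte last segment, and 200 bytes of free space, matching Figure 33.

```python
# Sketch of spanned-record segmentation: how many control intervals a
# record occupies, the length of its last segment, and the free space
# left in the last segment's CI. Each segment CI reserves 10 bytes for
# control information (two RDFs + CIDF).
import math

def spanned_layout(record_len: int, ci_size: int):
    seg_capacity = ci_size - 10                 # data bytes per segment CI
    segments = math.ceil(record_len / seg_capacity)
    last_seg = record_len - (segments - 1) * seg_capacity
    free_in_last = seg_capacity - last_seg
    return segments, last_seg, free_in_last
```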
In a key-sequenced data set, the control intervals might not be contiguous or in the
same order as the segments (for example, the RBA of the second segment can be
lower than the RBA of the first segment).
All the segments of a spanned record must be in the same control area. When a
control area does not have enough control intervals available for a spanned record,
the entire record is stored in a new control area.
Every control interval in a fixed-length RRDS contains the same number of slots
and the same number of RDFs; one for each slot. The first slot is described by the
rightmost RDF. The second slot is described by the next RDF to the left, and so on.
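That right-to-left mapping places the RDF for any given slot at a fixed offset from the end of the control interval. A hypothetical Python sketch (VSAM provides no such call; this just restates the geometry):

```python
# Illustrative only: in a fixed-length RRDS control interval, the first
# slot is described by the rightmost RDF (immediately left of the 4-byte
# CIDF), the second slot by the next RDF to the left, and so on, with
# each RDF occupying 3 bytes.
def rdf_offset_for_slot(ci_size: int, slot_index: int) -> int:
    """Byte offset within the CI of the RDF describing slot_index (0-based)."""
    return ci_size - 4 - 3 * (slot_index + 1)
```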
User Buffering
With control interval access, you have the option of user buffering. If you use the
user buffering option, you need to provide buffers in your own area of storage for
use by VSAM.
User buffering is required for improved control interval access (ICI) and for PUT
with OPTCD=NUP.
If you specify user buffering, you cannot specify KEY or ADR in the MACRF
parameter; you can specify only CNV. That is, you cannot intermix keyed and
addressed requests with requests for control interval access.
Improved control interval access is faster than normal control interval access.
However, with user buffering, you can only have one control interval scheduled at a
time. Improved control interval access works well for direct processing.
You cannot load or extend a data set using improved control interval processing.
Improved control interval processing is not allowed for extended format data sets.
A processing program can achieve the best performance with improved control
interval access by combining it with SRB dispatching. (SRB dispatching is
described in OS/390 MVS Authorized Assembler Services Guide and in
“Operating in SRB or Cross Memory Mode” on page 137.)
To release exclusive control after a GET for update, you must issue a PUT for
update, a GET without update, or a GET for update for a different control interval.
With improved control interval access, VSAM assumes (without checking) that:
an RPL whose ACB has MACRF=ICI has OPTCD=(CNV, DIR, SYN),
a PUT is for update (RPL OPTCD=UPD), and
your buffer length (specified in RPL AREALEN=number) is correct.
Because VSAM does not check these parameters, you should debug your program
with ACB MACRF=NCI, then change to ICI.
With improved control interval access, VSAM does not take JRNAD exits and does
not keep statistics (which are normally available through SHOWCB).
The CBIC option is invoked when a VSAM data set is opened. To invoke the CBIC
option, you set the CBIC flag (located at offset X'33' (ACBINFL2) in the ACB, bit 2
(ACBCBIC)) to one. When your program opens the ACB with the CBIC option set,
your program must be in supervisor state with a protect key from 0 to 7. Otherwise,
VSAM will not open the data set.
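The flag location given above can be illustrated with a few lines of byte arithmetic. This is a sketch only (the function name is an invention, and a real ACB is manipulated with IBM-supplied macros, not raw byte pokes): bit 2 in IBM numbering, where bit 0 is the most significant, corresponds to mask X'20'.

```python
# Illustrative only: the CBIC flag is bit 2 (IBM numbering, bit 0 = most
# significant) of the byte at offset X'33' (ACBINFL2) in the ACB.
CBIC_OFFSET = 0x33   # ACBINFL2
CBIC_BIT = 0x20      # bit 2, counting from the most significant bit

def set_cbic(acb: bytearray) -> None:
    acb[CBIC_OFFSET] |= CBIC_BIT   # turn on the ACBCBIC flag
```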
If another address space accesses the data set's control block structure in the CSA
via VSAM record management, the following conditions should be observed:
An OPEN macro should not be issued against the data set.
The ACB of the user who opened the data set with the CBIC option must be
used.
CLOSE and temporary CLOSE cannot be issued for the data set (only the user
who opened the data set with the CBIC option can close the data set).
The address space accessing the data set control block structure must have
the same storage protect key as the user who opened the data set with the
CBIC option.
User exit routines should be accessible from all regions accessing the data set
with the CBIC option.
This chapter is intended to help you understand how to share data sets within a
single system and among multiple systems. It also describes considerations when
sharing VSAM data sets for NSR or LSR/GSR access. Refer to Chapter 14, “Using
VSAM Record-Level Sharing” on page 201 for considerations when sharing VSAM
data sets for RLS access.
When you define VSAM data sets, you can specify how the data is to be shared
within a single system or among multiple systems that can have access to your
data and share the same direct access devices. Before you define the level of
sharing for a data set, you must evaluate the consequences of reading incorrect
data (a loss of read integrity) and writing incorrect data (a loss of write
integrity)—situations that can result when one or more of the data set's users do
not adhere to guidelines recommended for accessing shared data sets.
The extent to which you want your data sets to be shared depends on the
application. If your requirements are similar to those of an integrated catalog facility
catalog, where there can be many users on more than one system, more than one
user should be allowed to read and update the data set simultaneously. At the
other end of the spectrum is an application where high security and data integrity
require that only one user at a time have access to the data.
When your program issues a GET request, VSAM reads an entire control interval
into virtual storage (or obtains a copy of the data from a control interval already in
virtual storage). If your program modifies the control interval's data, VSAM ensures
within a single control block structure that you have exclusive use of the information
in the control interval until it is written back to the data set. If the data set is
accessed by more than one program at a time, and more than one control block
structure contains buffers for the data set's control intervals, VSAM cannot ensure
that your program has exclusive use of the data. You must obtain exclusive control
yourself, using facilities such as ENQ/RESERVE and DEQ.
Two ways to establish the extent of data set sharing are the data set disposition
specified in the JCL and the share options specified in the access method services
DEFINE or ALTER command. If the VSAM data set cannot be shared because of
the disposition specified in the JCL, a scheduler allocation failure occurs. If your
program attempts to open a data set that is in use and the share options specified
do not allow concurrent use of the data, the open fails, and a return code is set in
the ACB error field.
During load mode processing, you cannot share data sets. Share options are
overridden during load mode processing. When a shared data set is opened for
create or reset processing, your program has exclusive control of the data set
within your operating system.
Note: You can use ENQ/DEQ to issue VSAM requests, but not to serialize the
system resources that VSAM uses.
Subtask Sharing
Subtask sharing is the ability to perform multiple OPENs to the same data set
within a task or from different subtasks in a single address space and still share a
single control block structure. Subtask sharing allows many logical views of the
data set while maintaining a single control block structure. With a single control
block structure, you can ensure that you have exclusive control of the buffer when
updating a data set.
If you share multiple control block structures within a task or address space, VSAM
treats this like cross-address space sharing. You must adhere to the guidelines and
restrictions specified in “Cross-Region Sharing” on page 178.
To share successfully within a task or between subtasks, you should ensure that
VSAM builds a single control block structure for the data set. This control block
structure includes blocks for control information and input/output buffers. All
subtasks access the data set through this single control block structure,
independent of the SHAREOPTION or DISP specifications. The three methods of
achieving a single control block structure for a VSAM data set while processing
multiple concurrent requests are:
A single access method control block (ACB) and a STRNO>1
Data definition name (ddname) sharing, with multiple ACBs (all from the same
data set) pointing to a single DD statement. This is the default.
For example:
//DD1 DD DSN=ABC
OPEN ACB1,DDN=DD1
OPEN ACB2,DDN=DD1
Data set name sharing, with multiple ACBs pointing to multiple DD statements
with different ddnames. The data set names are related with an ACB open
specification (MACRF=DSN).
For example:
//DD1 DD DSN=ABC
//DD2 DD DSN=ABC
OPEN ACB1,DDN=DD1,MACRF=DSN
OPEN ACB2,DDN=DD2,MACRF=DSN
| Multiple ACBs must be in the same address space, and they must be opening to
| the same base cluster. The connection occurs independently of the path selected to
| the base cluster. If the ATTACH macro is used to create a new task that will be
| processing a shared data set, allow the ATTACH keyword SZERO to default to
| YES or code SZERO=YES. This causes subpool 0 to be shared with the subtasks.
| For more information on the ATTACH macro, see OS/390 MVS Authorized
| Assembler Services Reference ALE-DYN. This also applies when you are
| sharing one ACB in a task or different subtasks. To ensure correct processing in
| the shared environment, all VSAM requests should be issued in the same key as
| the job step TCB key.
User B Has:
┌────────────────┬────────────────┐
│ EXCLUSIVE │ │
│ CONTROL │ SHARED │
┌───────────┼────────────────┼────────────────┤
User │ EXCLUSIVE │ User A gets │ OK │
A │ CONTROL │ logical error │ User A gets │
Wants: │ │ │ second copy │
│ │ │ of buffer │
├───────────┼────────────────┼────────────────┤
│ SHARED │ OK │ OK │
│ │ User A gets │ User A gets │
│ │ second copy │ second copy │
│ │ of buffer │ of buffer │
└───────────┴────────────────┴────────────────┘
User B has:
┌────────────────┬────────────────┐
│ EXCLUSIVE │ │
│ CONTROL │ SHARED │
User ┌───────────┼────────────────┼────────────────┤
A │ EXCLUSIVE │ User A gets │ VSAM queues │
Wants: │ CONTROL │ logical error │ user A until │
│ │ │ buffer is │
│ │ │ released by │
│ │ │ user B or │
│ │ │ user A gets │
│ │ │ logical error │
│ │ │ via ACB option│
├───────────┼────────────────┼────────────────┤
│ SHARED │ User A gets │ OK │
│ │ logical error │ User A shares │
│ │ │ same buffer │
│ │ │ with user B │
└───────────┴────────────────┴────────────────┘
returns the exclusive control return code 20 (X'14') to the application program. The
application program is then able to determine the next action.
Alternatively, you can do this by changing the GENCB ACB macro in the
application program.
To test to see if the new function is in effect, the TESTCB ACB macro can be
coded into the application program.
The application program must be altered to handle the exclusive control error return
code. Register 15 will contain 8 and the RPLERRCD field will contain 20 (X'14').
The address of the RPL which owns the resource will be placed in the first word in
the RPL error message area. The VSAM avoid LSR exclusive control wait option
cannot be changed after OPEN.
A sphere is a VSAM cluster and its associated data sets. The cluster is originally
defined with the access method services ALLOCATE command, the DEFINE
CLUSTER command, or through JCL. The most common use of the sphere is to
open a single cluster. The base of the sphere is the cluster itself. When opening a
path (which is the relationship between an alternate index and base cluster) the
base of the sphere is again the base cluster. Opening the alternate index as a data
set results in the alternate index becoming the base of the sphere. In Specifying
Cluster Information, DSN is specified for each ACB and output processing is
specified.
CLUSTER.REAL.PATH
CLUSTER.REAL
CLUSTER.REAL.AIX(UPGRADE) CLUSTER.ALIAS
Figure 35. Relationship between the Base Cluster and the Alternate Index
If you add this fourth statement, the base of the sphere changes, and multiple
control block structures are created for the alternate index CLUSTER.REAL.AIX.
4. OPEN ACB=(CLUSTER.REAL.AIX)
Does not add to existing structure as the base of the sphere is not the same.
SHAREOPTIONS are enforced for CLUSTER.REAL.AIX since multiple control
block structures exist.
To be compatible, both the new ACB and the existing control block structure must
be consistent in their specification of the following processing options.
The data set specification must be consistent in both the ACB and the existing
control block structure. This means that an index of a key-sequenced data set
that is opened as an entry-sequenced data set, does not share the same
control block structure as the key-sequenced data set opened as a
key-sequenced data set.
The MACRF options DFR, UBF, ICI, CBIC, LSR, and GSR must be consistent.
For example, if the new ACB and the existing structure both specify
MACRF=DFR, the connection is made. If the new ACB specifies MACRF=DFR
and the existing structure specifies MACRF=DFR,UBF, no connection is made.
If compatibility cannot be established, OPEN tries (within the limitations of the share
options specified when the data set was defined) to build a new control block
structure. If it cannot, OPEN fails.
When processing multiple subtasks sharing a single control block, concurrent GET
and PUT requests are allowed. A control interval is protected for write operations
using an exclusive control facility provided in VSAM record management. Other
PUT requests to the same control interval are not allowed and a logical error is
returned to the user issuing the request macro. Depending on the selected buffer
option, nonshared (NSR) or shared (LSR/GSR) resources, GET requests to the
same control interval as that being updated might or might not be allowed. Figure 34 on
page 175 illustrates the exclusive control facility.
When a subtask issues OPEN to an ACB that will share a control block structure
that might have been previously used, issue the POINT macro to obtain the position
for the data set. In this case, it should not be assumed that positioning is at the
beginning of the data set.
Cross-Region Sharing
The extent of data set sharing within one operating system depends on the data set
disposition and the cross region share option specified when you define the data
set. Independent job steps or subtasks in an MVS system or multiple systems with
global resource serialization (GRS) can access a VSAM data set simultaneously.
For more information about GRS, see OS/390 MVS Planning: Global Resource
Serialization. To share a data set, each user must specify DISP=SHR in the data
set's DD statement.
| If the data set has already been opened for RLS processing, non-RLS
| accesses for read are allowed. VSAM provides full read and write integrity to its
| RLS users, but it is the non-RLS user's responsibility to ensure read integrity.7
If you require read integrity, it is your responsibility to use the ENQ and DEQ
macros appropriately to provide read integrity for the data the program obtains.
For information on using ENQ and DEQ, see OS/390 MVS Authorized
Assembler Services Reference ALE-DYN and OS/390 MVS Authorized
Assembler Services Reference ENF-IXG.
Cross-region SHAREOPTIONS 3: The data set can be fully shared by any
number of users. With this option, each user is responsible for maintaining both
read and write integrity for the data the program accesses. This setting does
not allow any type of non-RLS access when the data set is already open for
RLS processing.
This option requires that the user's program use ENQ/DEQ to maintain data
integrity while sharing the data set, including the OPEN and CLOSE
processing. User programs that ignore the write integrity guidelines can cause
VSAM program checks, lost or inaccessible records, uncorrectable data set
failures, and other unpredictable results. This option places responsibility on
each user sharing the data set.
Cross-region SHAREOPTIONS 4: The data set can be fully shared by any
number of users, and buffers used for direct processing are refreshed for each
request. This setting does not allow any type of non-RLS access when the data
set is already open for RLS processing. With this option, as in
SHAREOPTIONS 3, each user is responsible for maintaining both read and
write integrity for the data the program accesses. Refer to the description of
SHAREOPTIONS 3 for ENQ/DEQ and warning information that applies equally
to SHAREOPTIONS 4.
With options 3 and 4 you are responsible for maintaining both read and write
integrity for the data the program accesses. These options require your program to
use ENQ/DEQ to maintain data integrity while sharing the data set, including the
OPEN and CLOSE processing. User programs that ignore the write integrity
guidelines can cause VSAM program checks, lost or inaccessible records,
uncorrectable data set failures, and other unpredictable results. These options
place heavy responsibility on each user sharing the data set.
When your program requires that no updating from another control block structure
occur before it completes processing of the requested data record, your program
can issue an ENQ to obtain exclusive use of the VSAM data set. If your program
completes processing, it can relinquish control of the data set with a DEQ. If your
program is only reading data and not updating, it is probably a good practice to
serialize the updates and have the readers wait while the update is occurring. If
your program is updating, after the update has completed the ENQ/DEQ bracket,
the reader must determine the required operations for control block refresh and
buffer invalidation based on a communication mechanism or assume that
everything is down-level and refresh each request.
| 7 You must apply APARs OW25251 and OW25252 to allow non-RLS read access to data sets already opened for RLS processing.
Protecting the cluster name with DISP processing and the components by VSAM
OPEN SHAREOPTIONS is the normally accepted procedure. When a shared data
set is opened with DISP=OLD, or is opened for create or reset processing, your
program has exclusive control of the data set within your operating system.
Note: Scheduler “disposition” processing is the same for VSAM and non-VSAM
data sets. This is the first level of share protection.
The following should be considered when you are providing read integrity:
Establish ENQ/DEQ procedures for all requests, read and write.
Decide how to determine and invalidate buffers (index and/or data) that are
possibly down-level.
Do not allow secondary allocation for an entry-sequenced data set, or
fixed-length or variable-length RRDS. If you do allow secondary allocation you
should provide a communication mechanism to the read-only tasks that the
extents are increased, force a CLOSE, and then issue another OPEN.
Providing a buffer refresh mechanism for index I/O will accommodate
secondary allocations for a key-sequenced data set.
With an entry-sequenced data set, or fixed-length or variable-length RRDS, you
must also use the VERIFY macro before the GET macro to update possible
down-level control blocks.
Generally, the loss of read integrity results in down-level data records and
erroneous no-record-found conditions.
POINT
PUT RPL OPTCD=NUP
To invalidate data buffers, ensure that all requests are for positioning by specifying
one of the following:
GET/PUT RPL OPTCD=(DIR,NSP) followed by ENDREQ
POINT GET/PUT RPL OPTCD=SEQ followed by ENDREQ
The considerations that apply to read integrity also apply to write integrity. The
serialization for read could be done as a shared ENQ and for write as an exclusive
ENQ. You must ensure that all I/O is performed to DASD before dropping the
serialization mechanism (usually the DEQ).
Cross-System Sharing
Use the following share options when you define a data set that must be accessed
and/or updated by more than one operating system simultaneously.
Note: System-managed volumes and catalogs containing system-managed data
sets must not be shared with non-system-managed systems.
Cross-system SHAREOPTION 3: The data set can be fully shared. With this
option, the access method uses the control block update facility (CBUF) to
help. With this option, as in cross region SHAREOPTIONS 3, each user is
responsible for maintaining both read and write integrity for the data the
program accesses. User programs that ignore write integrity guidelines can
cause VSAM program checks, uncorrectable data set failures, and other
unpredictable results. This option places heavy responsibility on each user
sharing the data set. The RESERVE and DEQ macros are required with this
option to maintain data set integrity.
Cross-system SHAREOPTION 4: The data set can be fully shared, and buffers
used for direct processing are refreshed for each request.
This option requires that you use the RESERVE and DEQ macros to maintain
data integrity while sharing the data set. Output processing is limited to update
and/or add processing that does not change either the high-used RBA or the
RBA of the high key data control interval if DISP=SHR is specified.
Job steps of two or more systems can gain access to the same data set regardless
of the disposition specified in each step's JCL. To get exclusive control of a
volume, a task in one system must issue a RESERVE macro. For other methods of
obtaining exclusive control using global resource serialization (GRS), see
DFSMS/MVS Planning for Installation and OS/390 MVS Planning: Global Resource
Serialization.
CBUF eliminates the restriction that prohibits control area splits under cross-region
SHAREOPTION 4. Therefore, you do not need to restrict code to prevent control
area splits, or allow for the control area split error condition. The restriction to
prohibit control area splits for cross-systems SHAREOPTION 4 still exists.
CBUF processing is not provided if the data set has cross-system SHAREOPTION
4, but does not reside on shared DASD when it is opened. That is, the data set is
still processed as a cross-system SHAREOPTION 4 data set on shared DASD.
When a key-sequenced data set or variable-length RRDS has cross-system
SHAREOPTION 4, control area splits are prevented. Also, split of the control
interval containing the high key of a key range (or data set) is prevented. With
control interval access, adding a new control interval is prevented.
Note that if you use SHAREOPTION 3, you must continue to provide read/write
integrity. Although VSAM ensures that SHAREOPTION 3 and 4 users will have
correct control block information if serialization is done correctly, the
SHAREOPTION 3 user will not get the buffer invalidation that will occur with
SHAREOPTION 4.
Figure 36 shows how the SHAREOPTIONS specified in the catalog and the
disposition specified on the DD statement interact to affect the type of processing.
Techniques of Sharing—Examples
This section describes the different techniques of data sharing.
Cross-Region Sharing
To maintain write integrity for the data set, your program must ensure that there is
no conflicting activity against the data set until your program completes updating
the control interval. Conflicting activity can be divided into two categories:
1. A data set that is totally preformatted and the only write activity is
update-in-place.
In this case, the sharing problem is simplified by the fact that data cannot
change its position in the data set. The lock that must be held for any write
operation (GET/PUT RPL OPTCD=UPD) is the unit of transfer that is the
control interval. It is your responsibility to associate a lock with this unit of
transfer; the record key is not sufficient unless only a single logical record
resides in a control interval.
The following is an example of the required procedures:
a. Issue a GET for the RPL that has the parameters
OPTCD=(SYN,KEY,UPD,DIR),ARG=MYKEY.
b. Determine the RBA of the control interval (RELCI) where the record
resides. This is based on the RBA field supplied in the RPL(RPLDDDD).
RELCI = CISIZE × integer-part-of (RPLDDDD / CISIZE)
c. Enqueue MYDATA.DSNAME.RELCI (the calculated value).
d. Issue an ENDREQ.
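Step b of this procedure is a simple round-down to a control interval boundary. The following is an illustrative Python sketch of that calculation only (it does not issue any VSAM requests; the names relci, rpldddd, and cisize are chosen for the example):

```python
def relci(rpldddd, cisize):
    """Compute the RBA of the control interval containing a record.

    Models step b above: the record RBA supplied in the RPL (RPLDDDD)
    is rounded down to the start of its control interval, i.e.
    RELCI = CISIZE × integer-part-of (RPLDDDD / CISIZE).
    """
    return cisize * (rpldddd // cisize)

# A record at RBA 9000 in a data set with 4096-byte control intervals
# lies in the control interval that starts at RBA 8192.
print(relci(9000, 4096))  # 8192
```

The value computed here is what would be appended to the data set name to form the enqueue resource name in step c.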
Lock               Condition
Control Interval   Updating a record in place or adding a record to a
                   control interval without causing a split.
Control Area       Adding a record or updating a record with a length
                   change, causing a control interval split, but not a
                   control area split.
Data Set           Adding a record or updating a record with a length
                   change, causing a control area split.
The following is a basic procedure to provide the necessary protection. Note
that, with this procedure, all updates are locked at the data set level:
SHAREOPTIONS(4 3)            CBUF processing
Enqueue MYDATA.DSNAME        Shared for read only;
                             exclusive for write
Issue VSAM request macros
...
Dequeue MYDATA.DSNAME
In any sharing situation, a general rule is that all resources must be obtained
and released within the locking protocol. All positioning must be released,
either by using only direct requests or by issuing the ENDREQ macro, before
ending the procedure with the DEQ.
Cross-System Sharing
With cross-system SHAREOPTIONS 3, you have the added responsibility of
passing the VSAM shared information (VSI) and invalidating data and/or index
buffers. This can be done by using an informational control record as the low-key or
first record in the data set. The following information is required to accomplish the
necessary index record invalidation:
1. Number of data control interval splits and index updates for sequence set
invalidation
2. Number of data control area splits for index set invalidation
All data buffers should always be invalidated. See “Techniques of
Sharing—Examples” on page 185 for the required procedures for invalidating
buffers. To perform selective buffer invalidation, an internal knowledge of the VSAM
control blocks is required.
Your program must serialize the following types of requests (precede the request
with an ENQ and, when the request completes, issue a DEQ):
All PUT requests.
POINT, GET-direct-NSP, GET-skip, and GET-for-update requests that are
followed by a PUT-insert, PUT-update, or ERASE request.
VERIFY requests. When VERIFY is executed by VSAM, your program must
have exclusive control of the data set.
Sequential GET requests.
Similarly, the VSI on the receiving processor can be located. The
VSI level number must be incremented in the receiving VSI to inform the receiving
processor that the VSI has changed. To update the level number, assuming the
address of the VSI is in register 1:
LA    0,1          Place increment into register 0
AL    0,64(,1)     Add level number to increment
ST    0,64(,1)     Save new level number
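The effect of this assembler sequence is to add 1 to the fullword level number at offset 64 of the VSI and store it back. The following Python sketch models that effect on a byte buffer; the VSI layout beyond "fullword at offset 64" is not described here, so the 128-byte image is a hypothetical placeholder:

```python
def bump_vsi_level(vsi):
    """Model of the assembler sequence above: add 1 to the fullword
    level number at offset 64 of the VSI image and store it back.
    AL is a logical add, so the value wraps at 32 bits."""
    level = (int.from_bytes(vsi[64:68], "big") + 1) % 0x100000000
    vsi[64:68] = level.to_bytes(4, "big")
    return level

vsi = bytearray(128)        # hypothetical VSI image; level starts at 0
bump_vsi_level(vsi)
print(int.from_bytes(vsi[64:68], "big"))  # 1
```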
If the data set can be shared between MVS/ESA operating systems, a user's
program in another system can concurrently access the data set. Before you open
the data set specifying DISP=OLD, it is your responsibility to protect across
systems with ENQ/DEQ using the UCB option. This protection is available with
GRS or equivalent functions.
Sharing these resources is not the same as sharing a data set itself (that is,
sharing among different tasks that independently open it). Data set sharing can be
done with or without sharing I/O buffers and I/O-related control blocks. For a
discussion of data set sharing, see Chapter 12, “Sharing VSAM Data Sets” on
page 173.
There are also macros that allow you to manage I/O buffers for shared resources.
Sharing resources does not improve sequential processing. VSAM does not
automatically position itself at the beginning of a data set opened for sequential
access, because placeholders belong to the resource pool, not to individual data
sets. When you share resources for sequential access, positioning at the beginning
of a data set has to be specified explicitly with the POINT macro or the direct GET
macro with RPL OPTCD=NSP. You may not use a resource pool to load records
into an empty data set.
When you issue BLDVRP, you specify for the resource pool the size and number of
virtual address space buffers for each virtual buffer pool.
The use of Hiperspace buffers can reduce the amount of I/O to a direct access
storage device (DASD) by caching data in expanded storage. The data in a
Hiperspace buffer is preserved unless there is an expanded storage shortage and
the expanded storage that backs the Hiperspace buffer is reclaimed by the system.
VSAM invalidates a Hiperspace buffer when it is copied to a virtual address space
buffer and, conversely, invalidates a virtual address space buffer when it is copied
to a Hiperspace buffer, so there is altogether at most only one copy of the control
interval in virtual address space and Hiperspace. When a modified virtual address
space buffer is reclaimed, it is copied to Hiperspace and to DASD.
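The single-copy rule described here (a control interval is in the virtual address space pool or the Hiperspace pool, never both) can be illustrated with a toy model. This sketch is purely conceptual; the pool structures and function names are invented for the example:

```python
# Toy model of the single-copy rule: copying a control interval between
# the virtual-storage pool and the Hiperspace pool invalidates the source
# copy, so at most one copy exists at any time.
virtual_pool = {}   # CI RBA -> contents (hypothetical representation)
hiper_pool = {}

def to_virtual(rba):
    """Copy a CI from Hiperspace to virtual storage; the Hiperspace
    copy is invalidated (removed), as VSAM does."""
    data = hiper_pool.pop(rba)
    virtual_pool[rba] = data
    return data

def to_hiperspace(rba):
    """Copy a CI from virtual storage to Hiperspace; the virtual-storage
    copy is invalidated (removed)."""
    hiper_pool[rba] = virtual_pool.pop(rba)

hiper_pool[8192] = b"ci-contents"
to_virtual(8192)
print(8192 in hiper_pool, 8192 in virtual_pool)  # False True
```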
For the data pool or the separate index pool at OPEN time, a data set is assigned
the one buffer pool with buffers of the appropriate size—either the exact control
interval size requested, or the next larger size available.
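The pool-assignment rule in the preceding paragraph (exact control interval size, otherwise the next larger size available) amounts to a smallest-sufficient-size selection. A minimal sketch, with invented names:

```python
def assign_pool(cisize, pool_sizes):
    """Pick the buffer pool for a component at OPEN time: the pool with
    the exact CI size if one exists, otherwise the next larger size
    available, per the rule described above."""
    candidates = sorted(s for s in pool_sizes if s >= cisize)
    if not candidates:
        raise ValueError("no buffer pool large enough for this CI size")
    return candidates[0]

print(assign_pool(4096, [2048, 4096, 8192]))  # 4096 (exact match)
print(assign_pool(3072, [2048, 4096, 8192]))  # 4096 (next larger)
```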
You may have both a global resource pool and one or more local resource pools.
Tasks in an address space that have a local resource pool may use either the
global resource pool, under the restrictions described below, or the local resource
pool. There may be multiple buffer pools based on buffer size for each resource
pool.
You may share resources locally or globally, with the following restrictions:
LSR (local shared resources)
You can build up to 255 data resource pools and 255 index resource pools in
one address space. Each resource pool must be built individually. The data
pool must exist before the index pool with the same share pool identification
can be built. The parameter lists for these multiple LSR pools can reside above
or below 16 megabytes. The BLDVRP macro RMODE31 parameter indicates
where VSAM is to obtain virtual storage when the LSR pool control blocks and
data buffers are built.
These resource pools are built with the BLDVRP macro TYPE=LSR and
DATA|INDEX specifications. Specifying MACRF=LSR on the ACB or
GENCB-ACB macros causes the data set to use the LSR pools built by the
BLDVRP macro. The DLVRP macro processes both the data and index
resource pools.
GSR (global shared resources)
All address spaces for a given protection key in the system share one resource
pool. Only one resource pool can be built for each of the protection keys 0
through 7. With GSR, an access method control block and all related request
parameter lists, exit lists, data areas, and extent control blocks must be in the
common area of virtual storage with protection key the same as the resource
pool. To get storage in the common area with that protection key, issue the
GETMAIN macro while in that key, for storage in subpool 241.
The separate index resource pools are not supported for GSR.
The Hiperspace buffers (specified on the BLDVRP macro) are not supported for
GSR.
Generate ACBs, RPLs, and EXLSTs with the GENCB macro—code the
WAREA and LENGTH parameters. The program that issues macros related to
that global resource pool must be in supervisor state with the same key. (The
macros are: BLDVRP, CHECK, CLOSE, DLVRP, ENDREQ, ERASE, GENCB,
GET, GETIX, MODCB, MRKBFR, OPEN, POINT, PUT, PUTIX, SCHBFR,
SHOWCB, TESTCB, and WRTBFR. The SHOWCAT macro is not related to a
resource pool, because it is issued independently of an opened data set.)
For example, to find the control interval size using SHOWCB: open the data set for
nonshared resources processing, issue SHOWCB, close the ACB, issue BLDVRP,
open the ACB for LSR or GSR.
Note: Because Hiperspace buffers are in expanded storage, you do not need to
consider their size and number when you calculate the size of the virtual
resource pool.
For each VSAM cluster that will share the virtual resource pool you are building,
follow this procedure:
1. Determine the number of concurrent requests you expect to process. The
number of concurrent requests represents STRNO for the cluster.
2. Specify BUFFERS=(SIZE(STRNO+1)) for the data component of the cluster.
If the cluster is a key-sequenced cluster and the index CISZ (control
interval size) is the same as the data CISZ, change the specification to
BUFFERS=(SIZE((2 × STRNO)+1)).
If the index CISZ is not the same as the data component CISZ, specify
BUFFERS=(dataCISZ(STRNO+1),indexCISZ(STRNO)).
Following this procedure will provide the minimum number of buffers needed to
support concurrently active STRNO strings. An additional string is not dynamically
added to a shared resource pool. The calculation can be repeated for each cluster
which will share the resource pool, including associated alternate index clusters and
clusters in the associated alternate index upgrade sets.
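The per-cluster buffer procedure above can be expressed as a small calculation. This is an illustrative sketch of the arithmetic only, returning buffer counts keyed by CI size; the function name and return shape are invented for the example:

```python
def lsr_buffers(strno, data_cisz, index_cisz=None):
    """Minimum buffer counts for one cluster, per the procedure above.
    index_cisz=None means the cluster has no index component.
    Returns a mapping of CI size -> number of buffers of that size."""
    if index_cisz is None:
        return {data_cisz: strno + 1}              # data component only
    if index_cisz == data_cisz:
        return {data_cisz: 2 * strno + 1}          # one pool serves both
    return {data_cisz: strno + 1,                  # separate data buffers
            index_cisz: strno}                     # and index buffers

print(lsr_buffers(4, 4096, 4096))   # {4096: 9}
print(lsr_buffers(4, 4096, 2048))   # {4096: 5, 2048: 4}
```

Repeating this per cluster sharing the pool (including alternate index clusters and upgrade sets) and summing the counts per CI size gives the BUFFERS= specification.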
For each cluster component having a different CISZ, add another ‘,SIZE(NUMBER)’
range to the ‘BUFFERS=’ specification. Note that the data component and index
component buffers may be created as one set of buffers, or, by use of the ‘TYPE=’
statement, may be created in separate index and data buffer sets.
If the specified number of buffers is not adequate, VSAM will return a logical error
indicating the out-of-buffer condition.
The statistics cannot be used to redefine the resource pool while it is in use. You
have to make adjustments the next time you build it.
For buffer pool statistics, the keywords described below are specified in FIELDS.
These fields may be displayed only after the data set described by the ACB is
opened. Each field requires one fullword in the display work area:
Field Description
BFRFND The number of requests for retrieval that could be satisfied without
an I/O operation (the data was found in a buffer)
BUFRDS The number of reads to bring data into a buffer
NUIW The number of nonuser-initiated writes (that VSAM was forced to do
because no buffers were available for reading the contents of a
control interval)
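The BFRFND and BUFRDS fields together give a buffer hit ratio for the pool, a natural way to evaluate whether the pool is large enough. A small sketch of that derived statistic (the function name is invented; the fields themselves are the ones described above):

```python
def hit_ratio(bfrfnd, bufrds):
    """Fraction of retrieval requests satisfied from the buffer pool
    without an I/O operation: BFRFND / (BFRFND + BUFRDS).
    Returns 0.0 when no retrievals have occurred."""
    total = bfrfnd + bufrds
    return bfrfnd / total if total else 0.0

print(hit_ratio(900, 100))  # 0.9
```

A low ratio suggests the pool has too few buffers for the workload; since the pool cannot be redefined while in use, the adjustment is made the next time it is built.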
If the VSAM control blocks and data buffers reside above 16 megabytes,
RMODE31=ALL must be specified in the ACB before OPEN is issued. If the OPEN
parameter list or the VSAM ACB resides above 16 megabytes, the MODE=31
parameter of the OPEN macro must also be coded.
When an ACB indicates LSR or GSR, VSAM ignores its BSTRNO, BUFNI, BUFND,
BUFSP, and STRNO parameters because VSAM will use the existing resource pool
for the resources associated with these parameters.
If more than one ACB is opened for LSR processing of the same data set, the LSR
pool identified by the SHRPOOL parameter for the first ACB will be used for all
subsequent ACBs.
For a data set described by an ACB with MACRF=GSR, the ACB and all related
RPLs, EXLSTs, ECBs, and data areas must be in the common area of virtual
storage with the same protection key as the resource pool.
Deferring writes saves I/O operations when subsequent requests can be satisfied
by the data in the buffer pool. If you are going to update control intervals more than
once, data processing performance will be improved by deferring writes.
You indicate that writes are to be deferred by coding MACRF=DFR in the ACB,
along with MACRF=LSR or GSR.
ACB MACRF=({LSR|GSR},{DFR|NDF},...),...
VSAM notifies the processing program when an unmodified buffer has been found
for the current request and there will be no more unmodified buffers into which to
read the contents of a control interval for the next request. (VSAM will be forced to
write a buffer to make a buffer available for the next I/O request.) VSAM sets
register 15 to 0 and puts 12 (X'0C') in the feedback field of the RPL that defines
the PUT request detecting the condition.
VSAM also notifies the processing program when there are no buffers available to
be assigned to a placeholder for a request. This is a logical error (register 15
contains 8 unless an exit is taken to a LERAD routine). The feedback field in the
RPL contains 152 (X'98'). You may retry the request; it gets a buffer if one is
freed.
TRANSID specifies a number from 0 to 31. The number 0, which is the default,
indicates that requests defined by the RPL are not associated with other requests.
A number from 1 to 31 relates the requests defined by this RPL to the requests
defined by other RPLs with the same transaction ID.
You can find out what transaction ID an RPL has by issuing SHOWCB or TESTCB.
SHOWCB FIELDS=([TRANSID],...),...
If the ACB to which the RPL is related has MACRF=GSR, the program issuing
SHOWCB or TESTCB must be in supervisor state with the same protection key as
the resource pool. With MACRF=GSR specified in the ACB to which the RPL is
related, a program check can occur if SHOWCB or TESTCB is issued by a
program that is not in supervisor state with protection key 0 to 7. For more
information on the use of SHOWCB and TESTCB, see “Manipulating Control Block
Contents” on page 124.
You can specify the DFR option in an ACB without using WRTBFR to write
buffers—a buffer is written when VSAM needs one to satisfy a GET request, or all
modified buffers are written when the last of the data sets that uses them is closed.
Besides using WRTBFR to write buffers whose writing is deferred, you can use it to
write buffers that are marked for output with the MRKBFR macro, which is
described in “Marking a Buffer for Output: MRKBFR” on page 198.
VSAM notifies the processing program when there are no more unmodified buffers
into which to read the contents of a control interval. (VSAM would be forced to write
buffers when another GET request required an I/O operation.) VSAM sets register
15 to 0 and puts 12 (X'0C') in the feedback field of the RPL that defines the PUT
request that detects the condition.
VSAM also notifies the processing program when there are no buffers available to
which to assign a placeholder for a request. This is a logical error (register 15
contains 8 unless an exit is taken to a LERAD routine); the feedback field in the
RPL contains 152 (X'98'). You may retry the request; it gets a buffer if one is
freed.
When sharing the data set with a user in another region, your program might want
to write the contents of a specified buffer without writing all other modified buffers.
Your program issues the WRTBFR macro to search your buffer pool for a buffer
containing the specified RBA. If found, the buffer is examined to verify that it is
modified and has a use count of zero. If so, VSAM writes the contents of the buffer
into the data set.
Note: Before using WRTBFR TYPE=CHK|TRN|DRBA, be sure to release all
buffers. (For details about releasing buffers, see “Processing Multiple
Strings” on page 133.) If one of the buffers is not released, VSAM defers
processing until the buffer is released.
The ddname field of the physical error message identifies the data set that was
using the buffer, but, because the buffer might have been released, its contents
might be unavailable. You can provide a JRNAD exit routine to record the contents
of buffers for I/O errors. It can be coordinated with a physical error analysis routine
to handle I/O errors for buffers whose writing has been deferred. If a JRNAD exit
routine is used to cancel I/O errors during a transaction, the physical error analysis
routine will get only the last error return code. See “SYNAD Exit Routine to
Analyze Physical Errors” on page 231 and “JRNAD Exit Routine to Journalize
Transactions” on page 221 for more information on the SYNAD and JRNAD
routines.
The buffer pool to be searched is the one used by the data component defined by
the ACB to which your RPL is related. If the ACB names a path, VSAM searches
the buffer pool used by the data component of the alternate index. (If the path is
defined over a base cluster alone, VSAM searches the buffer pool used by the data
component of the base cluster.) VSAM begins its search at the buffer you specify
and continues until it finds a buffer that contains an RBA in the range or until the
highest numbered buffer is searched.
For the first buffer that satisfies the search, VSAM returns its address
(OPTCD=LOC) or its contents (OPTCD=MVE) in the work area whose address is
specified in the AREA parameter of the RPL and returns its number in register 0.
If the search fails, register 0 contains the user-specified buffer number and a
one-byte SCHBFR code of X'0D'. To find the next buffer that contains an RBA in
the range, issue SCHBFR again and specify the number of the next buffer after the
first one that satisfied the search. You continue until VSAM indicates it found no
buffer that contains an RBA in the range or until you reach the end of the pool.
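The repeated-SCHBFR loop described here (start at a given buffer, find the first qualifying buffer, then resume at the next buffer number) can be sketched as follows. This is a conceptual model, not the SCHBFR interface: buffers are represented simply as a list mapping buffer number to the RBA of the CI each holds:

```python
def search_buffers(buffers, lo, hi, start=0):
    """Model of the SCHBFR search loop above: beginning at buffer number
    `start`, return the number of the first buffer whose CI RBA falls in
    the range [lo, hi], or None when the end of the pool is reached with
    no qualifying buffer."""
    for n in range(start, len(buffers)):
        if lo <= buffers[n] <= hi:
            return n
    return None

pool = [0, 4096, 8192, 12288]                         # RBA held by buffers 0..3
first = search_buffers(pool, 4096, 9000)              # buffer 1 qualifies
second = search_buffers(pool, 4096, 9000, first + 1)  # resume: buffer 2
print(first, second)  # 1 2
```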
Finding a buffer that contains a desired RBA does not get you exclusive control of
the buffer. You may get exclusive control only by issuing GET for update. SCHBFR
does not return the location or the contents of a buffer that is already under the
exclusive control of another request.
MRKBFR MARK=OUT indicates that the buffer's contents are modified. You must
modify the contents of the buffer itself—not a copy. Therefore, when you issue
SCHBFR or GET to locate the buffer, you must specify RPL OPTCD=LOC. (If you
use OPTCD=MVE, you get a copy of the buffer but do not learn its location.) The
buffer is written when a WRTBFR is issued or when VSAM is forced to write a
buffer to satisfy a GET request.
If you are sharing a buffer or have exclusive control of it, you can release it from
shared status or exclusive control with MRKBFR MARK=RLS. If the buffer was
marked for output, MRKBFR with MARK=RLS does not nullify it; the buffer is
eventually written. Sequential positioning is lost. MRKBFR with MARK=RLS is
similar to the ENDREQ macro.
VSAM requests related to the global resource pool may be issued only by a
program in supervisor state with protection key 0 to 7 (the same as the
resource pool).
Checkpoints cannot be taken for data sets whose resources are shared in a
global resource pool. When a program in an address space that opened a data
set whose ACB has MACRF=GSR issues the CHKPT macro, 8 is returned in
register 15. If a program in another address space issues the CHKPT macro,
the checkpoint is taken, but only for data sets that are not using the global
resource pool.
Checkpoint/restart can be used with data sets whose resources are shared in a
local resource pool, but the restart program does not reposition for processing
at the point where the checkpoint occurred—processing is restarted at a data
set's highest used RBA. Refer to DFSMS/MVS Checkpoint/Restart for
information about restarting the processing of VSAM data.
If a physical I/O error is found while writing a control interval to the direct
access device, the buffer remains in the resource pool. The write-required flag
(BUFCMW) and associated mod bits (BUFCMDBT) are turned off, and the
BUFC is flagged in error (BUFCER2=ON). The buffer is not replaced in the
pool, and buffer writing is not attempted. To release this buffer for reuse, a
WRTBFR macro with TYPE=DS can be issued or the data set can be closed
(CLOSE issues the WRTBFR macro).
When you use the BLDVRP macro to build a shared resource pool, some of
the VSAM control blocks are placed in a system subpool and others in subpool
0. When a task ends, the system frees subpool 0 unless it is shared with
another task. The system does not free the system subpool until the job step
ends. Then, if another task attempts to use the resource pool, an abend might
occur when VSAM attempts to access the freed control blocks. This problem
does not occur if the two tasks share subpool 0. To share subpool 0, code the
SZERO=YES parameter, or the SHSPL or SHSPV parameters, in the ATTACH macro.
SZERO=YES is the default.
GSR is not allowed for compressed data sets.
RLS access is supported for KSDS, ESDS, RRDS, and VRRDS data sets. RLS
access is supported for VSAM alternate indexes. RLS is not supported for control
interval mode access (CNV or ICI) or for HFS files.
The VSAM RLS functions are provided by the SMSVSAM server. This server
resides in a system address space. The address space is created and the server is
started at MVS IPL time. VSAM internally performs cross-address space accesses
and linkages between requestor address spaces and the SMSVSAM server
address space.
The SMSVSAM server owns two data spaces. One data space is called the
SMSVSAM data space. It contains some VSAM RLS control blocks and a
system-wide buffer pool. The other data space is used by VSAM RLS to collect
activity monitoring information which is used to produce SMF records. See
Figure 37 for an example.
┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ CICS AOR │ │ CICS AOR │ │ Batch Job │. . . │ VSAM RLS ├──────────────┬──────────────┐
│Address Space │ │Address Space │ │Address Space │ │ (SMSVSAM) │ Data Space │ Data Space │
│ │ │ │ │ │ │Address Space │ │ │
│ OPEN │ │ OPEN │ │ OPEN │ │ │ RLS Buffer │ RLS Activity │
│ ACB │ │ ACB │ │ ACB │ │ │ Pool │ │
│ MACRF=RLS │ │ MACRF=RLS │ │ MACRF=RLS │ │ │ │ Monitoring │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ Information │
│ GET/PUT │ │ GET/PUT │ │ GET/PUT │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ RLS internal │ │
│ │ │ │ │ │ │ │control blocks│ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │
└──────────────┘ └──────────────┘ └──────────────┘ └─────────────┬┘ │ │
└───────────────┴──────────────┘
Figure 37. VSAM RLS Address/Data Spaces and Requestor Address Spaces. VSAM
provides the cross-address space access/linkage between the requestor address spaces and
the SMSVSAM address/data spaces.
Figure 38. CICS VSAM Non-RLS Access. CICS AORs function ship VSAM requests to
access a specific file to the CICS FOR that owns that file. This distributed access form of
data sharing has existed in CICS for some time.
┌─────────────────────────────┐ ┌─────────────────────────────┐
│┌───────┐ ┌───────┐│ . . . │┌───────┐ ┌───────┐│
││ CICS │ . . . │ CICS ││ ││ CICS │ . . . │ CICS ││
││ AOR │ │ AOR ││ ││ AOR │ │ AOR ││
││ │ │ ││ ││ │ │ ││
││ │ │ ││ ││ │ │ ││
│└─────┬─┘ └──┬────┘│ │└────┬──┘ └──┬────┘│
│ └────┐ ┌────┘ │ │ └────┐ ┌─────┘ │
│ │ │ │ │ │ │ │
│ ┌───┴──────┴──┐ │ │ ┌───┴──────┴──┐ │
│ │ VSAM RLS │ │ │ │ VSAM RLS │ │
│ │ Instance 1 │ │ │ │ Instance n │ │
│ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │
│ │SMSVSAM │ │ │ │SMSVSAM │ │
│ │Address Space│ │ │ │Address Space│ │
│ └─────┬───────┘ │ │ └─────┬───────┘ │
│ │ │ │ │ │
│ │ │ │ │ │
│ │ MVS 1 │ │MVS n │ │
└─────────────┼───────────────┘ └────────────┼────────────────┘
│ │
┌──┴────────────────────────────────────┴─┐
│ Coupling Facility (CF) │
│ │
└─────────────────────────────────────────┘
Figure 39. CICS VSAM RLS. VSAM RLS is a multisystem server. The CICS AORs access
the shared files by submitting requests directly to the VSAM RLS server. The server uses
the CF to serialize access at the record level.
CICS File Control provides transactional commit, rollback, and forward recovery
logging functions for VSAM recoverable files. With VSAM RLS, CICS continues to
provide the transactional functions. The transactional functions are not provided by
VSAM RLS itself. VSAM RLS provides CF-based record locking and CF data
caching.
CICS recoverable file attributes can be used with VSAM data set attributes
specifiable on IDCAMS (access method services) DEFINE and ALTER commands.
LOG can be specified in the data class along with BWO and LOGSTREAMID. Only
a CICS application can open a recoverable data set for output. The reason for this
is that VSAM RLS itself does not provide the logging and other transactional file
functions required when writing to a recoverable file.
When a data set is opened in non-RLS (NSR, LSR, GSR) access mode, the
recoverable attributes of the data set do not apply and are ignored. Thus, existing
programs which do not use RLS access are not impacted by the recoverable data
set rules.
The CICS rollback (backout) function removes changes made to the recoverable
files by a transaction. When a transaction abnormally terminates, CICS implicitly
performs a rollback.
The commit and rollback functions isolate a transaction from the changes made to
recoverable files or in general, recoverable resources by other transactions. This
allows the transaction logic to focus on the function it is providing and not have to
be concerned with data recovery or cleanup in the event of problems or failures.
VSAM RLS can ensure read integrity across splits. It uses the cross-invalidate
function of the CF to invalidate copies of data and index CIs in buffer pools other
than the writer's buffer pool. This ensures that all RLS readers, CICS and
non-CICS, are able to see any records moved by a concurrent CI or CA split. On
each GET request, VSAM RLS tests validity of the buffers and when invalid, the
buffers are refreshed from the CF or DASD.
VSAM RLS provides record locking and buffer coherency across the CICS and
non-CICS read/write sharers of non-recoverable data sets. However, the record
lock on a new or changed record is released as soon as the buffer containing the
change has been written to the CF cache and DASD. This differs from the case
where a CICS transaction modifies VSAM RLS recoverable data sets and the
corresponding locks on the added and changed records remain held until
end-of-transaction.
For sequential and skip-sequential processing, VSAM RLS does not write a
modified CI (control interval) until the processing moves to another CI or an
ENDREQ is issued by the application. In the event of an application abnormal
termination or abnormal termination of the VSAM RLS server, these buffered
changes are lost.
While VSAM RLS permits read/write sharing of non-recoverable data sets across
CICS and non-CICS programs, most programs are not designed to tolerate this
sharing. The absence of transactional recovery requires very careful design of the
data and the programs.
CICS and VSAM RLS provide a quiesce function to assist in the process of
switching a sphere from CICS RLS usage to non-RLS usage.
Share Options
For non-RLS access, VSAM uses the share options settings to determine the type
of sharing allowed. If you set the cross-region share option to 2, a non-RLS open
for input is allowed while the data set is already open for RLS access. VSAM
provides full read and write integrity for the RLS users, but does not provide read
integrity for the non-RLS user. A non-RLS open for output is not allowed when
already opened for RLS.
VSAM RLS provides full read and write sharing for multiple users; it does not use
share options settings to determine levels of sharing. When an RLS open is
requested and the data set is already open for non-RLS input, VSAM does check
the cross-region setting. If it is 2, then the RLS open is allowed. The open fails for
any other share option or if the data set has been opened for non-RLS output.
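The compatibility rules in this section can be summarized as a decision function. The following is an illustrative sketch only, not an exhaustive statement of VSAM's open logic; the mode names are invented for the example:

```python
def open_allowed(request, existing, cr_shareopt):
    """Sketch of the RLS/non-RLS open compatibility rules above.
    request/existing: 'rls', 'nonrls-in', or 'nonrls-out';
    existing=None means the data set is not currently open.
    cr_shareopt is the cross-region share option from the catalog."""
    if existing is None:
        return True
    if request == "nonrls-in" and existing == "rls":
        return cr_shareopt == 2     # allowed, but no read integrity
    if request == "nonrls-out" and existing == "rls":
        return False                # non-RLS output never allowed under RLS
    if request == "rls" and existing == "nonrls-in":
        return cr_shareopt == 2     # RLS open allowed only for option 2
    if request == "rls" and existing == "nonrls-out":
        return False                # fails if open for non-RLS output
    return True  # combinations this section does not restrict

print(open_allowed("nonrls-in", "rls", 2))   # True
print(open_allowed("nonrls-out", "rls", 2))  # False
```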
Locking
Non-RLS provides local locking (within the scope of a single buffer pool) at the
VSAM control interval level of granularity. Locking contention can result in an
“exclusive control conflict” error response to a VSAM record management request.
8 You must apply APARs OW25251 and OW25252 to allow non-RLS read access to
data sets already opened for RLS processing.
When lock contention occurs on a VSAM record, the request that encountered the
contention waits for the
contention to be removed. The DFSMS/MVS lock manager provides deadlock
detection. When a lock request is found to be in deadlock, the request is rejected
resulting in the VSAM record management request completing with a “deadlock”
error response.
VSAM RLS supports a timeout value which can be specified via the RPL. CICS will
use this parameter to ensure that a transaction does not wait indefinitely for a lock
to become available. VSAM RLS uses a timeout function of the DFSMS/MVS lock
manager.
Retaining Locks
VSAM RLS uses share and exclusive record locks to control access to the shared
data. An exclusive lock is used to ensure that a single user is updating a specific
record. The exclusive lock causes any read-with-integrity request for the record by
another user (CICS transaction or non-CICS application) to wait until the update is
finished and the lock released.
When a transaction enters the in-doubt state or a CICS AOR terminates, any exclusive locks
on records of recoverable files held by the transaction must remain held. However,
other users waiting for these locks should not continue to wait. The outage is likely
to be longer than the user would want to wait. When these conditions occur, VSAM
RLS converts these exclusive locks into retained locks.
Neither exclusive nor retained locks are available to other users. When another
user encounters lock contention with an exclusive lock, the user's lock request
waits. When another user encounters lock contention with a retained lock, the lock
request is immediately rejected with “retained lock” error response. This results in
the VSAM record management request that produced the lock request failing with
“retained lock” error response.
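The contrast between the two contention outcomes can be stated as a tiny decision sketch. This is purely illustrative (the state and outcome names are invented; the behavior is the one described above):

```python
def contend(lock_state):
    """Outcome of a lock request that hits contention, per the text
    above: contention with an exclusive lock makes the requester wait;
    contention with a retained lock fails the request immediately with
    a 'retained lock' error response."""
    if lock_state == "exclusive":
        return "wait"
    if lock_state == "retained":
        return "retained-lock-error"
    return "granted"            # no contention: the request proceeds

print(contend("exclusive"))  # wait
print(contend("retained"))   # retained-lock-error
```

The immediate rejection for retained locks reflects the design point above: the outage behind a retained lock is likely to be longer than a requester would want to wait.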
The VSAM RLS record management request task must be the same task that
opened the ACB, or the task that opened the ACB must be in the task hierarchy.
That is, the record management task was attached by the task that opened the
ACB, or by a task that was attached by the task that opened the ACB.
EXAMINE
EXAMINE is an access method services command that allows the user to analyze
and collect information on the structural consistency of key-sequenced data set
clusters. This service aid consists of two tests: INDEXTEST and DATATEST.
INDEXTEST examines the index component of the key-sequenced data set cluster
by cross-checking vertical and horizontal pointers contained within the index control
intervals, and by performing analysis of the index information. It is the default test
of EXAMINE.
DATATEST evaluates the data component of the key-sequenced data set cluster
by sequentially reading all data control intervals, including free space control
intervals. Tests are then carried out to ensure record and control interval integrity,
free space conditions, spanned record update capacity, and the integrity of various
internal VSAM pointers contained within the control interval.
Refer to DFSMS/MVS Access Method Services for ICF for a description of the
EXAMINE command syntax.
EXAMINE Users
EXAMINE end users fall into two categories:
1. Application Programmer/Data Set Owner. These users want to know of any
structural inconsistencies in their data sets, and they are directed to
corresponding IBM-supported recovery methods by the appropriate summary
messages. The users' primary focus is the condition of their data sets;
therefore, they should use the ERRORLIMIT(0) parameter of EXAMINE to
suppress printing of detailed error messages.
Users must have master level access to a catalog or control level access to a data
set to examine it. Master level access to the master catalog is also sufficient to
examine an integrated catalog facility user catalog.
Sharing Considerations
No other users should have the data set open during the EXAMINE run.
When you run EXAMINE against a catalog, concurrent access can occur
without any message being issued. Because it can be difficult to stop system
access to a catalog, do not run jobs that would cause an update to the
catalog while EXAMINE is running.
For further considerations of data set sharing, see Chapter 12, “Sharing VSAM
Data Sets” on page 173.
DATATEST reads the sequence set from the index component and the entire data
component of the KSDS cluster. Consequently, it takes considerably more time
and more system resources than INDEXTEST.
If you are using EXAMINE to document an error in the data component, run both
tests. If you are using EXAMINE to document an error in the index component, it is
usually not necessary to run the DATATEST.
If you are using EXAMINE to confirm a data set's integrity, your decision to run one
or both tests depends on the time and resources available.
If the master catalog is protected by RACF or an equivalent product and you do not
have alter authority to it, a message can be issued indicating an authorization
failure when the check indicated above is made. This is normal, and, if you have
master level access to the catalog being examined, the examination can continue.
When analyzing an ICF catalog, it is recommended that a VERIFY command be
performed before the EXAMINE command is used.
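For example, assuming a hypothetical user catalog named UCAT.EXAMPLE (the name
is illustrative only), the two commands might be coded in an access method
services job as:

```
  VERIFY DATASET(UCAT.EXAMPLE)

  EXAMINE NAME(UCAT.EXAMPLE) -
          INDEXTEST -
          DATATEST -
          ERRORLIMIT(0)
```

The VERIFY command ensures that the catalog's end-of-data information is
correct before the examination begins.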
Message Hierarchy
Messages describing errors or inconsistencies are generated during EXAMINE
processing as each condition is detected. The detection of one error condition
can result in the generation of many messages. There are five distinct types of
EXAMINE error messages:
1. Status and Statistical Messages. This type of message tells you the status of
the EXAMINE operation, such as the beginning and completion of each test. It
provides general statistical data, such as the number of data records, the
percentage of free space in data control intervals (CIs), and the number of
deleted CIs. The four status messages are: IDC01700I, IDC01701I, IDC01709I,
and IDC01724I. The five statistical messages are IDC01708I, IDC01710I,
IDC01711I, IDC01712I, and IDC01722I.
2. Supportive (Informational) Messages. Supportive messages (all remaining
IDC0-type messages) issued by EXAMINE clarify individual data set structural
errors and provide additional information pertinent to an error.
3. Individual Data Set Structural Error Messages. The identification of an error
is always reported by an individual data set structural error (IDC1-type)
message that can be immediately followed by one or more supportive
messages.
4. Summary Error Messages. One or more summary error (IDC2-type)
messages are generated at the completion of either INDEXTEST or
DATATEST to categorize all individual data set structural error (IDC1-type)
messages displayed during the examination. The summary error message
represents the final analysis of the error(s) found, and the user should follow
the course of recovery action as prescribed by the documentation.
5. Function-Not-Performed Messages. Function-not-performed messages (all of
the IDC3-type messages) indicate that the function you requested cannot be
successfully performed by EXAMINE. In each case, the test operation
terminates before the function completes.
Chapter 15. Checking a VSAM Key-Sequenced Data Set Cluster for Structural Errors 213
KSDS Cluster Errors
EXAMINE NAME(EXAMINE.KD05) -
        INDEXTEST -
        DATATEST
Because of this severe INDEXTEST error, DATATEST did not run in this particular
case.
EXAMINE then displayed the prior key (11), the data control interval at relative byte
address decimal 512, and the offset address hexadecimal 9F into the control
interval where the duplicate key was found.
General Guidance
VSAM user-written routines can be supplied to:
Analyze logical errors
Analyze physical errors
Perform end-of-data processing
Record transactions made against a data set
Perform special user processing
Perform wait user processing
Perform user-security verification.
VSAM user-written exit routines are identified by macro parameters and in
access method services commands.
You use the EXLST VSAM macro to create an exit list. The EXLST parameters
EODAD, JRNAD, LERAD, SYNAD, and UPAD specify the addresses of your
user-written routines. Only the exits marked active are executed. For more
information on the EXLST macro, see DFSMS/MVS Macro Instructions for Data
Sets.
You can use access method services commands to specify the addresses of
user-written routines that perform exception processing and user-security
verification processing. For more information on exits from access method
services commands, see DFSMS/MVS Access Method Services for ICF.
The exit locations available from VSAM are outlined in the following table.
Programming Considerations
Information
To code VSAM user exit routines, you should be familiar with, and have
available, the following DFSMS/MVS manuals:
DFSMS/MVS Macro Instructions for Data Sets
DFSMS/MVS Access Method Services for ICF
Coding Guidance
Usually, you should observe these guidelines in coding your routine:
Code your routine reentrant
Save and restore registers (see individual routines for other requirements)
Be aware of registers used by the VSAM request macros
Be aware of the addressing mode (24 bit or 31 bit) in which your exit routine
will receive control
Determine if VSAM or your program should load the exit routine.
If the exit routine is used by a program that is doing asynchronous processing
with multiple request parameter lists (RPLs), or if the exit routine is used by
more than one data set, it must be coded so that it can handle an entry made
before the previous entry's processing is complete. Coding the exit routine to
be reentrant is the best way to ensure that registers are correctly saved and
restored by the exit routine and by any routines it calls. Another approach is
to develop a technique for associating a unique save area with each RPL.
If the LERAD, EODAD, or SYNAD exit routine reuses the RPL passed to it, you
should be aware that:
The exit routine is called again if the request issuing the reused RPL results in
the same exception condition that caused the exit routine to be entered
originally.
The original feedback code is replaced with the feedback code that indicates
the status of the latest request issued against the RPL. If the exit routine
returns to VSAM, VSAM (when it returns to the user's program) sets register 15
to also indicate the status of the latest request.
The JRNAD and UPAD exits and the exception exit are extensions of VSAM and
therefore must return to VSAM in the same processing mode (that is,
cross-memory, SRB, or task mode) in which they were entered.
A user exit that is loaded by VSAM will be invoked in the addressing mode
specified when the module was link-edited. A user exit that is not loaded by VSAM
will receive control in the same addressing mode as the issuer of the VSAM
record-management, OPEN, or CLOSE request that causes the exit to be taken. It
is the user's responsibility to ensure that the exit is written for the correct
addressing mode.
Your exit routine can be loaded within your program, or you can use a JOBLIB
or STEPLIB DD statement to point to the library containing your exit routine.
Register Contents
Figure 41 gives the contents of the registers when VSAM exits to the EODAD
routine.
Programming Considerations
The typical actions of an EODAD routine are to:
Examine RPL for information you need, for example, type of data set
Issue completion messages
Close the data set
Terminate processing without returning to VSAM.
If the routine returns to VSAM and another GET request is issued for access to the
data set, VSAM exits to the LERAD routine.
The type of data set whose end was reached can be determined by examining the
RPL for the address of the access method control block that connects the program
to the data set and testing its attribute characteristics.
If the exit routine issues GENCB, MODCB, SHOWCB, or TESTCB and returns to
VSAM, it must provide a save area and restore registers 13 and 14, which are used
by these macros.
When your EODAD routine completes processing, return to your main program as
described in “Returning to Your Main Program” on page 219.
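A minimal sketch of such an EODAD routine follows; it is illustrative only
(the labels EODRTN, SAVE, and ACBADDR are assumptions, and the message and
CLOSE logic is omitted). It provides its own save area and restores registers
13 and 14, as required when SHOWCB is issued:

```
EODRTN   CSECT                    Hypothetical EODAD exit routine
         STM   14,12,12(13)       Save caller's registers
         BALR  12,0               Establish base register
         USING *,12
         ST    13,SAVE+4          Chain save areas
         LA    13,SAVE            Own save area for SHOWCB
         SHOWCB RPL=(1),AREA=ACBADDR,LENGTH=4,FIELDS=(ACB)
*                                 Identify the data set whose end
*                                 was reached (R1 = RPL address)
         L     13,SAVE+4          Restore caller's save area
         LM    14,12,12(13)       Restore registers
         BR    14                 Return to VSAM
SAVE     DS    18F                Exit routine save area
ACBADDR  DS    F                  ACB address returned by SHOWCB
```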
Register Contents
Figure 42 gives the contents of the registers when VSAM exits to the
EXCEPTIONEXIT.
Programming Considerations
The exception exit is taken for the same errors as a SYNAD exit. If you have both
an active SYNAD routine and an EXCEPTIONEXIT routine, the exception exit
routine is processed first.
The exception exit is associated with the attributes of the data set (specified by the
DEFINE) and is loaded on every call. Your exit must reside in the LINKLIB and the
exit cannot be called when VSAM is in cross-memory mode.
When your exception exit routine completes processing, return to your main
program as described in “Returning to Your Main Program” on page 219.
For information about how exception exits are established, changed, or
nullified, see DFSMS/MVS Access Method Services for ICF.
Register Contents
Figure 43 gives the contents of the registers when VSAM exits to the JRNAD
routine.
Programming Considerations
If the JRNAD exit is taken for I/O errors, the routine can zero out, or
otherwise alter, the physical-error return code, so that a series of operations
can continue to completion even though one or more of the operations failed.
The contents of the parameter list built by VSAM, pointed to by register 1, can be
examined by the JRNAD exit routine, which is described in Figure 45 on page 225.
If the exit routine issues GENCB, MODCB, SHOWCB, or TESTCB, it must restore
register 14, which is used by these macros, before it returns to VSAM.
If the exit routine uses register 1, it must restore it with the parameter list address
before returning to VSAM. (The routine must return for completion of the request
that caused VSAM to exit.)
The JRNAD exit must be indicated as active before the data set for which the exit
is to be used is opened, and the exit must not be made inactive during processing.
If you define more than one access method control block for a data set and want to
have a JRNAD routine, the first ACB you open for the data set must specify the exit
list that identifies the routine.
When the data set being processed is extended addressable, the JRNAD exits
dealing with RBAs are not taken or are restricted, because the field required
to address RBAs that can be greater than 4GB is larger. The restrictions apply
to the entire data set, regardless of the specific RBA value.
Journalizing Transactions
For journalizing transactions (when VSAM exits because of a GET, PUT, or
ERASE), you can use the SHOWCB macro to display information in the request
parameter list about the record that was retrieved, stored, or deleted
(FIELDS=(AREA,KEYLEN,RBA,RECLEN), for example). You can also use the
TESTCB macro to find out whether a GET or a PUT was for update
(OPTCD=UPD).
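As an illustrative sketch (the label names are assumptions), a JRNAD routine
journaling a PUT might extract information about the record from the RPL,
whose address is in the first word of the parameter list pointed to by
register 1:

```
         L     2,0(,1)            R2 = address of the caller's RPL
*                                 (word 0 of the JRNAD parameter list)
         SHOWCB RPL=(2),AREA=JNLINFO,LENGTH=12,                        X
               FIELDS=(RBA,RECLEN,KEYLEN)
*                                 RBA, record length, and key length
*                                 of the record just stored
         TESTCB RPL=(2),OPTCD=UPD Was the GET or PUT for update?
         BE    JNLUPD             Branch if an update request
*        (journal a nonupdate request here)
JNLUPD   DS    0H                 (journal an update request here)
JNLINFO  DS    3F                 Fields returned by SHOWCB
```

Because SHOWCB and TESTCB use register 14, the routine must restore it (and
register 1, if used) before returning to VSAM, as described earlier.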
If your JRNAD routine only journals transactions, it should ignore reason X'0C'
and return to VSAM; conversely, it should ignore reasons X'00', X'04', and X'08'
if it records only RBA changes.
You should provide a routine to keep track of RBA changes caused by control
interval and control area splits. RBA changes that occur through keyed access to a
key-sequenced data set must also be recorded if you intend to process the data set
later by direct-addressed access.
You might also want to use the JRNAD exit to maintain shared or exclusive control
over certain data or index control intervals; and in some cases, in your exit routine
you can reject the request for certain processing of the control intervals. For
example, if you used this exit to maintain information about a data set in a shared
environment, you might reject a request for a control interval or control area split
because the split might adversely affect other users of the data set.
USERPROG CSECT
SAVE(R14,R12) Standard entry code
.
.
.
BLDVRP BUFFERS=(512(3)), Build resource pool X
KEYLEN=4, X
STRNO=4, X
TYPE=LSR, X
SHRPOOL=1, X
RMODE31=ALL
OPEN (DIRACB) Logically connect KSDS1
.
.
.
PUT RPL=DIRRPL This PUT causes the exit routine USEREXIT
to be taken with an exit code X'50' if
there is a CI or CA split
LTR R15,R15 Check return code from PUT
BZ NOCANCEL Retcode = 0 if USEREXIT did not cancel
CI/CA split
= 8 if cancel was issued, in which
case we know a CI or CA split
occurred
.
. Process the cancel situation
.
NOCANCEL . Process the noncancel situation
.
.
CLOSE (DIRACB) Disconnect KSDS1
DLVRP TYPE=LSR,SHRPOOL=1 Delete the resource pool
.
.
.
RETURN Return to caller.
.
.
.
DIRACB ACB AM=VSAM, X
DDNAME=KSDS1, X
BUFND=3, X
BUFNI=2, X
MACRF=(KEY,DDN,SEQ,DIR,OUT,LSR), X
SHRPOOL=1, X
EXLST=EXITLST
*
DIRRPL RPL AM=VSAM, X
ACB=DIRACB, X
AREA=DATAREC, X
AREALEN=128, X
ARG=KEYNO, X
KEYLEN=4, X
OPTCD=(KEY,DIR,FWD,SYN,NUP,WAITX), X
RECLEN=128
*
DATAREC DC CL128'DATA RECORD TO BE PUT TO KSDS1'
KEYNO DC F'0' Search key argument for RPL
EXITLST EXLST AM=VSAM,JRNAD=(JRNADDR,A,L)
JRNADDR DC CL8'USEREXIT' Name of user exit routine
END End of USERPROG
Parameter List
The parameter list built by VSAM contains reason codes to indicate why the exit
was taken, and also locations where you can specify return codes for VSAM to take
or not take an action on returning from your routine. The information provided in the
parameter list varies depending on the reason the exit was taken. Figure 45 shows
the contents of the parameter list.
The parameter list will reside in the same area as the VSAM control blocks, either
above or below the 16MB line. For example, if the VSAM data set was opened and
the ACB stated RMODE31=CB, the exit parameter list will reside above the 16MB
line. To access a parameter list that resides above the 16MB line, you will need to
use 31-bit addressing.
Figure 45 (Page 1 of 3). Contents of Parameter List Built by VSAM for the JRNAD Exit
Offset Bytes Description
0(X'0') 4 Address of the RPL that defines the request that caused VSAM to exit to the routine.
4(X'4') 4 Address of a 5-byte field that identifies the data set being processed. This field has the
format:
4 bytes Address of the access method control block specified by the RPL that
defines the request occasioned by the JRNAD exit.
1 byte Indication of whether the data set is the data (X'01') or the index (X'02')
component.
8(X'8') 4 Variable, depends on the reason indicator at offset 20:
Offset 20 Contents at offset 8
X'0C' The RBA of the first byte of data that is being shifted or moved.
X'20' The RBA of the beginning of the control area about to be split.
X'24' The address of the I/O buffer into which data was going to be read.
X'28' The address of the I/O buffer from which data was going to be written.
X'2C' The address of the I/O buffer that contains the control interval contents that
are about to be written.
X'30' Address of the buffer control block (BUFC) that points to the buffer into
which data is about to be read under exclusive control.
X'34' Address of BUFC that points to the buffer into which data is about to be
read under shared control.
X'38' Address of BUFC that points to the buffer which is to be acquired in
exclusive control. The buffer is already in the buffer pool.
X'3C' Address of the BUFC that points to the buffer which is to be built in the
buffer pool in exclusive control.
X'40' Address of BUFC which points to the buffer whose exclusive control has just
been released.
X'44' Address of BUFC which points to the buffer whose contents have been
made invalid.
X'48' Address of the BUFC which points to the buffer into which the READ
operation has just been completed.
X'4C' Address of the BUFC which points to the buffer from which the WRITE
operation has just been completed.
Figure 45 (Page 2 of 3). Contents of Parameter List Built by VSAM for the JRNAD Exit
Offset Bytes Description
12(X'C') 4 Variable, depends on the reason indicator at offset 20:
Offset 20 Contents at offset 12
X'0C' The number of bytes of data that is being shifted or moved (this number
doesn't include free space, if any, or control information—except for a
control area split, when the entire contents of a control interval are moved to
a new control interval.)
X'20' Unpredictable.
X'24' Unpredictable.
X'28' Bits 0 through 31 correspond with transaction IDs 0 through 31. Bits set to 1
indicate that the buffer that was being written when the error occurred was
modified by the corresponding transactions. You can set additional bits to 1
to tell VSAM to keep the contents of the buffer until the corresponding
transactions have modified the buffer.
X'2C' The size of the control interval whose contents are about to be written.
X'30' Zero.
X'34' Zero.
X'38' Zero.
X'3C' Size of the buffer which is to be built in the buffer pool in exclusive control.
X'48' Size of the buffer into which the READ operation has just been completed.
X'4C' Size of the buffer from which the WRITE operation has just been completed.
16(X'10') 4 Variable, depends on the reason indicator at offset 20:
Offset 20 Contents at offset 16
X'0C' The RBA of the first byte to which data is being shifted or moved.
X'20' The RBA of the last byte in the control area about to be split.
X'24' The fourth byte contains the physical error code from the RPL FDBK field.
You use this fullword to communicate with VSAM. Setting it to 0 indicates
that VSAM is to ignore the error, bypass error processing, and let the
processing program continue. Leaving it nonzero indicates that VSAM is to
continue as usual: terminate the request that occasioned the error and
proceed with error processing, including exiting to a physical error analysis
routine.
X'28' Same as for X'24'.
X'2C' The RBA of the control interval whose contents are about to be written.
X'48' Unpredictable.
X'4C' Unpredictable.
Figure 45 (Page 3 of 3). Contents of Parameter List Built by VSAM for the JRNAD Exit
Offset Bytes Description
20(X'14') 1 Indication of the reason VSAM exited to the JRNAD routine:
X'00' GET request.
X'04' PUT request.
X'08' ERASE request.
X'0C' RBA change.
X'10' Read spanned record segment.
X'14' Write spanned record segment.
X'18' Reserved.
X'1C' Reserved.
The following codes are for shared resources only:
X'20' Control area split.
X'24' Input error.
X'28' Output error.
X'2C' Buffer write.
X'30' A data or index control interval is about to be read in exclusive
control.
X'34' A data or index control interval is about to be read in shared status.
X'38' Acquire exclusive control of a control interval already in the buffer
pool.
X'3C' Build a new control interval for the data set and hold it in exclusive
control.
X'40' Exclusive control of the indicated control interval already has been
released.
X'44' Contents of the indicated control interval have been made invalid.
X'48' Read completed.
X'4C' Write completed.
X'50' Control interval or control area split.
X'54'–X'FF' Reserved.
21(X'15') 1 JRNAD exit code set by the JRNAD exit routine. Indication of action to be taken by
VSAM after resuming control from JRNAD (for shared resources only):
X'80' Do not write control interval.
X'84' Treat I/O error as no error.
X'88' Do not read control interval.
X'8C' Cancel the request for control interval or control area split.
Register Contents
Figure 46 gives the contents of the registers when VSAM exits to the LERAD exit
routine.
Programming Considerations
The typical actions of a LERAD routine are:
1. Examine the feedback field in the RPL to determine what error occurred
2. Determine what action to take based on error
3. Close the data set
4. Issue completion messages
5. Terminate processing and exit VSAM or return to VSAM.
If the LERAD exit routine issues GENCB, MODCB, SHOWCB, or TESTCB and
returns to VSAM, it must restore registers 1, 13, and 14, which are used by
these macros. It must also provide two save areas: one whose address is loaded
into register 13 before the GENCB, MODCB, SHOWCB, or TESTCB is issued, and a
second in which to separately store registers 1, 13, and 14.
If the error cannot be corrected, close the data set and either terminate processing
or return to VSAM.
If a logical error occurs and no LERAD exit routine is provided (or the LERAD exit
is inactive), VSAM returns codes in register 15 and in the feedback field of the RPL
to identify the error.
When your LERAD exit routine completes processing, return to your main program
as described in “Returning to Your Main Program” on page 219.
The RLSWAIT exit can do its own wait processing associated with the record
management request that is being asynchronously executed. When the record
management request is complete, VSAM posts the ECB that the user specified in
the RPL. Unlike the current VSAM UPAD exit, the exit is not entered for each
suspend condition, and VSAM RLS does not enter the RLSWAIT exit for ECB post
processing.
The RLSWAIT exit is optional. It is used by applications that cannot tolerate
VSAM suspending the execution unit that issued the original record management
request. The RLSWAIT exit is required for record management requests issued in
cross-memory mode.
RLSWAIT should be specified on each ACB which requires the exit. If the exit is
not specified on the ACB via the EXLST, there is no RLSWAIT exit processing for
record management requests associated with that ACB. This differs from non-RLS
VSAM where the UPAD exit is associated with the control block structure so that all
ACBs connected to that structure inherit the exit of the first connector.
Register Contents
Figure 47 gives the contents of the registers when RLSWAIT is entered in 31 bit
mode.
Request Environment
VSAM RLS record management requests must be issued in primary ASC mode
and cannot be issued in home, secondary, or AR ASC mode. The user RPL,
EXLST, and ACB must be addressable from the primary address space. OPEN must
have been issued from the same primary address space. A VSAM RLS record
management request must be issued from the same task that opened the ACB, or
the task that opened the ACB must be in the task hierarchy (that is, the
record management task was attached by the task that opened the ACB, or by a
task that was attached by the task that opened the ACB). VSAM RLS record
management requests must not be issued in SRB mode, and must not have a
functional recovery routine (FRR) in effect.
If the record management request is issued in cross memory mode, then the caller
must be in supervisor state and must specify that an RLSWAIT exit is associated
with the request (RPLWAITX = ON). The request must be synchronous.
The RLSWAIT exit, if specified, is entered at the beginning of the request and
VSAM processes the request asynchronously under a separate execution unit.
VSAM RLS does not enter the RLSWAIT exit for post processing.
VSAM assumes that the ECB supplied with the request is addressable from both
home and primary, and that the key of the ECB is the same as the key of the
record management caller.
Register Contents
Figure 48 gives the contents of the registers when VSAM exits to the SYNAD
routine.
Programming Considerations
A SYNAD routine should typically:
Examine the feedback field in the request parameter list to identify the type of
physical error that occurred.
Get the address of the message area, if any, from the request parameter list, to
examine the message for detailed information about the error
Recover data if possible
Print error messages if uncorrectable error
Close data set
Terminate processing.
The main problem with a physical error is the possible loss of data. You should try
to recover your data before continuing to process. Input operation (ACB
MACRF=IN) errors are generally less serious than output or update operation
(MACRF=OUT) errors, because your request was not attempting to alter the
contents of the data set.
If the routine cannot correct an error, it might print the physical-error message,
close the data set, and terminate the program. If the error occurred while VSAM
was closing the data set, and if another error occurs after the exit routine issues a
CLOSE macro, VSAM doesn't exit to the routine a second time.
If the SYNAD routine returns to VSAM, whether the error was corrected or not,
VSAM drops the request and returns to your processing program at the instruction
following the last executed instruction. Register 15 is reset to indicate that there
was an error, and the feedback field in the RPL identifies it.
Physical errors affect positioning. If a GET was issued that would have positioned
VSAM for a subsequent sequential GET and an error occurs, VSAM is positioned
at the control interval next in key (RPL OPTCD=KEY) or in entry (OPTCD=ADR)
sequence after the control interval involved in the error. The processing program
can therefore ignore the error and proceed with sequential processing. With direct
processing, the likelihood of reencountering the control interval involved in the error
depends on your application.
If the exit routine issues GENCB, MODCB, SHOWCB, or TESTCB and returns to
VSAM, it must provide a save area and restore registers 13 and 14, which are used
by these macros.
See “Example of a SYNAD User-Written Exit Routine” on page 233 for the format
of a physical-error message that can be written by the SYNAD routine.
When your SYNAD exit routine completes processing, return to your main program
as described in “Returning to Your Main Program” on page 219.
If a physical error occurs and no SYNAD routine is provided (or the SYNAD exit
is inactive), VSAM returns codes in register 15 and in the feedback field of
the RPL to identify the error. For a description of these return codes, see
DFSMS/MVS Macro Instructions for Data Sets.
BR 14 Return to VSAM.
.
.
.
ERRCODE DC F'0' RPL reason code from SHOWCB.
PERRMSG DS ðXL128 Physical error message.
If you are executing in cross-memory mode, you must have a UPAD routine and
RPL must specify WAITX. Cross-memory mode is described in “Operating in SRB
or Cross Memory Mode” on page 137. The UPAD routine is optional for
non-cross-memory mode.
Figure 50 describes the conditions in which VSAM calls the UPAD routine for
synchronous requests with shared resources. UPAD routines are only taken for
synchronous requests with shared resources or improved control interval
processing (ICI).
Register Contents
Figure 51 shows the register contents passed by VSAM when the UPAD exit
routine is entered.
Programming Considerations
The UPAD exit routine must be active before the data set is opened. The exit must
not be made inactive during processing. If the UPAD exit is desired and multiple
ACBs are used for processing the data set, the first ACB that is opened must
specify the exit list that identifies the UPAD exit routine.
The contents of the parameter list built by VSAM, pointed to by register 1, can be
examined by the UPAD exit routine (see Figure 52).
If the UPAD exit routine modifies register 14 (for example, by issuing a TESTCB),
the routine must restore register 14 before returning to VSAM. If register 1 is used,
the UPAD exit routine must restore it with the parameter list address before
returning to VSAM.
The UPAD routine must return to VSAM under the same TCB from which it was
called for completion of the request that caused VSAM to exit. The UPAD exit
routine cannot use register 13 as a save area pointer without first obtaining its own
save area.
The UPAD exit routine, when taken before a WAIT during LSR or GSR processing,
might issue other VSAM requests to obtain better processing overlap (similar to
asynchronous processing). However, the UPAD routine must not issue any
synchronous VSAM requests that do not specify WAITX, because a started request
might issue a WAIT for a resource owned by a starting request.
If the UPAD routine starts requests that specify WAITX, the UPAD routine must be
reentrant. After multiple requests have been started, they should be synchronized
by waiting for one ECB out of a group of ECBs to be posted complete rather than
waiting for a specific ECB or for many ECBs to be posted complete. (Posting of
some ECBs in the list might be dependent on the resumption of some of the other
requests that entered the UPAD routine.)
Cross-Memory Mode
If you are executing in cross-memory mode, you must have a UPAD routine and
RPL must specify WAITX. When waiting or posting of an event is required, the
UPAD routine is given control to do wait or post processing (reason code 0 or 4 in
the UPAD parameter list).
Through the access method services DEFINE command with the AUTHORIZATION
parameter you can identify your user-security-verification routine (USVR) and
associate as many as 256 bytes of your own security information with each data
set to be protected. The user-security-authorization record (USAR) is made
available to the USVR when the routine gets control. You can restrict access to the
data set as you choose. For example, you can require that the owner of a data set
give ID when defining the data set and then allow only the owner to gain access to
the data set.
If the USVR is being used by more than one task at a time, you must code the
USVR reentrant or develop another method for handling simultaneous entries.
When your USVR completes processing, it must return to VSAM with a return
code in register 15: 0 for authority granted, or nonzero for authority
withheld. Figure 53 gives the contents of the registers when VSAM gives
control to the USVR.
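A minimal sketch of this return convention follows; the routine name and the
authorization test are hypothetical, and the security check itself is left as
a comment:

```
USVR     CSECT                    Hypothetical user-security routine
         STM   14,12,12(13)       Save caller's registers
         BALR  12,0               Establish base register
         USING *,12
*        (Examine the USAR and caller-supplied information here
*        to decide whether to grant access)
         LA    15,0               Return code 0 = authority granted
*        LA    15,8               (nonzero would withhold authority)
         L     14,12(,13)         Restore return address
         LM    0,12,20(13)        Restore registers 0 through 12
         BR    14                 Return to VSAM
```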
If you specify that control blocks, buffers, or pools can be above the line and
attempt to use LOCATE mode to access records while in 24-bit mode, your
program will program check (ABEND 0C4).
Note: The user cannot specify the location of buffers or control blocks for RLS
processing. RLS ignores the ACB RMODE31= keyword.
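As an illustrative sketch (the ddname MYDATA is an assumption), an ACB
requesting that both control blocks and buffers be obtained above 16
megabytes might be coded:

```
BLKACB   ACB   AM=VSAM,                                                X
               DDNAME=MYDATA,                                          X
               RMODE31=ALL,                                            X
               MACRF=(KEY,SEQ,IN)
```

A program using such an ACB must be in 31-bit addressing mode, or must avoid
LOCATE mode, when accessing records; otherwise it gets a program check, as
noted above.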
“Obtaining Buffers Above 16 Megabytes” on page 153 explains how to create and
access buffers that reside above 16 megabytes. Chapter 13, “Sharing Resources
Among VSAM Data Sets” on page 189, explains how to build multiple LSR pools in
an address space.
Optionally, the DSNAME parameter can be used to specify one of the components
of a VSAM data set. Each VSAM data set is defined as a cluster of one or more
components. Key-sequenced data sets contain a data component and an index
component. Entry-sequenced and linear data sets, and fixed-length RRDSs contain
only a data component. Process a variable-length RRDS as a cluster. Each
alternate index contains a data component and an index component. For further
information on specifying a cluster name, see “Naming a Cluster” on page 90.
Disposition
The disposition (DISP) parameter describes the status of a data set to the system
and tells the system what to do with the data set after the step or job terminates.
With SMS, you can optionally specify a data class that contains RECORG. If your
storage administrator, through the ACS routines, creates a default data class that
contains RECORG, you have the option of taking this default as well.
The following list contains the keywords, including RECORG, used to allocate a
VSAM data set. See OS/390 MVS JCL Reference for a detailed description of
these keywords.
AVGREC specifies the scale value of an average record request on the
SPACE keyword. The system applies the scale value to the primary and
secondary quantities specified in the SPACE keyword. The AVGREC keyword
is ignored if the SPACE keyword specifies anything but an average record
request.
The backup of the data set, which includes frequency of backup, number of
versions, and retention criteria for backup versions.
RECORG specifies the type of data set desired: KS, ES, RR, LS.
KS = key-sequenced data set
ES = entry-sequenced data set
RR = fixed-length relative record data set
LS = linear data set
REFDD specifies that the properties on the JCL statement and from the data
class of a previous DD statement should be used to allocate a new data set.
RETPD specifies the number of days during which a data set cannot be
deleted, except by specifying the PURGE keyword on the access method
services DELETE command. After the retention period, the data set can be
deleted or written over by another data set.
SECMODEL allows specification of the name of a “model” profile which RACF
should use in creating a discrete profile for the data set. For a list of the
information that is copied from the model profile, see OS/390 MVS JCL
Reference.
STORCLAS specifies the name, 1 to 8 characters, of the storage class for a
new, system-managed data set.
Your storage administrator defines the names of the storage classes you can
specify on the STORCLAS parameter. A storage class is assigned when you
specify STORCLAS or an ACS routine selects a storage class for the new data
set.
Use the storage class to specify the storage service level to be used by SMS
for storage of the data set. The storage class replaces the storage attributes
specified on the UNIT and VOLUME parameter for non-system-managed data
sets.
If a guaranteed space storage class is assigned to the data set (cluster) and
volume serial numbers are specified, space is allocated on all specified
volumes if the following conditions are met:
All volumes specified belong to the same storage group.
The storage group to which these volumes belong is in the list of storage
groups selected by the ACS routines for this allocation.
Allocation
You can allocate VSAM temporary data sets by specifying RECORG=KS|ES|LS|RR
in the following ways:
By the RECORG keyword on the DD statement or the dynamic allocation
parameter
By DATACLAS (so long as the selected data class name has the RECORG
attribute)
By the default data class established by your storage administrator (so long as
one exists and it contains the RECORG attribute)
For additional information on temporary data sets, refer to OS/390 MVS JCL
Reference and OS/390 MVS Assembler Services Guide. See “Example 4: Allocate
a Temporary VSAM Data Set” on page 246 for an example of creating a temporary
VSAM data set.
Explanation of Keywords:
DSNAME specifies the data set name.
DISP specifies that a new data set is to be allocated in this step and that the
data set is to be kept on the volume if this step terminates normally. If the data
set is not system-managed, KEEP is the only normal termination disposition
subparameter allowed for a VSAM data set. Non-system-managed VSAM data
sets should not be passed, cataloged, uncataloged, or deleted.
Explanation of Keywords:
DSNAME specifies the data set name.
DISP specifies that a new data set is to be allocated in this step and that the
data set is to be kept on the volume if this step terminates normally. Because a
system-managed data set is being allocated, all dispositions are valid for VSAM
data sets; however, UNCATLG is ignored.
DATACLAS specifies a data class for the new data set. If SMS is not active,
the system syntax checks and then ignores DATACLAS. SMS also ignores the
DATACLAS keyword if you specify it for an existing data set, or a data set that
SMS does not support.
This keyword is optional. If you do not specify DATACLAS for the new data set
and your storage administrator has provided an ACS routine, the ACS routine
can select a data class for the data set.
STORCLAS specifies a storage class for the new data set. If SMS is not active,
the system syntax checks and then ignores STORCLAS. SMS also ignores the
STORCLAS keyword if you specify it for an existing data set.
This keyword is optional. If you do not specify STORCLAS for the new data set
and your storage administrator has provided an ACS routine, the ACS routine
can select a storage class for the data set.
MGMTCLAS specifies a management class for the new data set. If SMS is not
active, the system syntax checks and then ignores MGMTCLAS. SMS also
ignores the MGMTCLAS keyword if you specify it for an existing data set.
This keyword is optional. If you do not specify MGMTCLAS for the new data
set and your storage administrator has provided an ACS routine, the ACS
routine can select a management class for the data set.
Explanation of Keywords:
DSNAME specifies the data set name.
DISP specifies that a new data set is to be allocated in this step and that the
system is to place an entry pointing to the data set in the system or user
catalog.
DATACLAS, STORCLAS, and MGMTCLAS are not required if your storage
administrator has provided ACS routines that will select the SMS classes for
you, and DATACLAS defines RECORG.
Explanation of Keywords:
DSN specifies the data set name. If you specify a data set name for a
temporary data set, it must begin with & or &&. This keyword is optional,
however. If you do not specify a DSN, the system will generate a qualified data
set name for the temporary data set.
DISP specifies that a new data set is to be allocated in this step and that the
data set is to be passed for use by a subsequent step in the same job. If KEEP
or CATLG is specified for a temporary data set, the system changes the
disposition to PASS and deletes the data set at job termination.
RECORG specifies a VSAM entry-sequenced data set.
SPACE specifies an average record length of 1 and a primary quantity of 10.
AVGREC specifies that the primary quantity (10) specified on the SPACE
keyword represents the number of records in megabytes (multiplier of
1048576).
LRECL specifies a record length of 256 bytes.
STORCLAS specifies a storage class for the temporary data set.
This keyword is optional. If you do not specify STORCLAS for the new data set
and your storage administrator has provided an ACS routine, the ACS routine
can select a storage class. The key length for a key-sequenced data set must
be defined in your default DATACLAS; if it is not, this example fails.
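The AVGREC arithmetic described for the SPACE keyword can be sketched as follows. This is an illustrative calculation only, not a DFSMS interface; the function name and structure are invented for the example:

```python
# Illustrative sketch of the AVGREC scale arithmetic (not DFSMS code).
# AVGREC scales the primary and secondary quantities on the SPACE keyword:
# U = single records, K = thousands (1024), M = millions (1048576).
AVGREC_MULTIPLIER = {"U": 1, "K": 1024, "M": 1048576}

def primary_space_bytes(avg_record_length, primary_quantity, avgrec):
    """Approximate bytes requested: the primary quantity, scaled by the
    AVGREC multiplier, times the average record length."""
    records = primary_quantity * AVGREC_MULTIPLIER[avgrec]
    return records * avg_record_length

# SPACE=(1,(10)),AVGREC=M, as in the temporary data set example:
# 10 * 1048576 records of average length 1 byte.
```

For the temporary data set example later in this chapter, this works out to 10,485,760 bytes of primary space.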
If SMS is active, you can pass VSAM data sets within a job. (Note that the system
replaces PASS with KEEP for permanent VSAM data sets. When you refer to the
data set later in the job, the system obtains data set information from the catalog.)
Without SMS you cannot pass VSAM data sets within a job.
Migration Consideration
If you have existing JCL that allocates a VSAM data set with DISP=(OLD,DELETE),
the system ignores DELETE and keeps the data set if SMS is inactive. If SMS is
active, DELETE is valid and the system deletes the data set.
9 With SMS, the AMP, UNIT, and VOLUME parameters are not needed when retrieving an existing VSAM data set.
10 With SMS, the DISP subparameters MOD, NEW, CATLG, KEEP, PASS, and DELETE can be used for VSAM data sets.
Certain JCL keywords should either not be used, or used only with caution when
processing VSAM data sets. See the VSAM data set section in OS/390 MVS JCL
User's Guide for a list of these keywords. Additional descriptions of these keywords
also appear in OS/390 MVS JCL Reference.
VSAM allows you to access the index of a key-sequenced data set to help you
diagnose index problems. This can be useful to you if your index is damaged or if
pointers are lost and you want to know exactly what the index contains. You should
not attempt to duplicate or substitute the index processing done by VSAM during
normal access to data records.
Access using GETIX and PUTIX is direct, by control interval: VSAM requires RPL
OPTCD=(CNV,DIR). The search argument for GETIX is the RBA of a control
interval. The increment from the RBA of one control interval to the next is control
interval size for the index.
GETIX can be issued either for update or not for update. VSAM recognizes
OPTCD=NUP or UPD but interprets OPTCD=NSP as NUP.
RPL OPTCD=MVE or LOC can be specified for GETIX, but only OPTCD=MVE is
valid for PUTIX. If you retrieve with OPTCD=LOC, you must change OPTCD to
MVE to store. With OPTCD=MVE, AREALEN must be at least index control interval
size.
Beyond these restrictions, access to an index through GETIX and PUTIX follows
the rules under Chapter 11, “Processing Control Intervals” on page 163.
You can gain access to index records with addressed access and to index control
intervals with control interval access. The use of these two types of access for
processing an index is identical in every respect with their use for processing a
data component.
Prime Index
A key-sequenced data set always has an index that relates key values to the
relative locations of the logical records in a data set. This index is called the prime
index. The prime index, or simply index, has two uses:
To locate the collating position when inserting records
To locate records for retrieval.
When initially loading a data set, records must be presented to VSAM in key
sequence. The index for a key-sequenced data set is built automatically by VSAM
as the data set is loaded with records. The index is stored in control intervals. An
index control interval contains pointers to index control intervals in the next lower
level, or one entry for each data control interval in a control area.
When a data control interval is completely loaded with logical records, free space,
and control information, VSAM makes an entry in the index. The entry consists of
the highest possible key in the data control interval and a pointer to the beginning
of that control interval. The highest possible key in a data control interval is one
less than the value of the first key in the next sequential data control interval.
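The rule for forming an index entry's key can be sketched as follows; this is an illustrative helper using integer keys, not VSAM code, and the function name is invented:

```python
# Sketch: derive the index entry key for each data control interval.
# The highest possible key in a data CI is one less than the first key
# of the next sequential data CI.
def index_entry_keys(first_keys, last_possible_key):
    """first_keys: the first key in each data control interval, in key
    sequence. Returns the 'highest possible key' recorded in the index
    for each CI; the last CI is capped by the highest key it may hold."""
    entries = []
    for i in range(len(first_keys) - 1):
        entries.append(first_keys[i + 1] - 1)
    entries.append(last_possible_key)
    return entries

# Data CIs whose first keys are 11, 20, and 26 yield index entries
# 19 and 25 for the first two CIs, as in the figure.
```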
Figure 55 illustrates that a single index entry, such as 19, contains all the
information necessary to locate a logical record in a data control interval.
Figure 56 on page 251 illustrates that a single index control interval contains all
the information necessary to locate a record in a single data control area.
Index Levels
A VSAM index can consist of more than one index level. Each level contains a set
of records with entries giving the location of the records in the next lower level.
Figure 57 on page 252 illustrates the levels of a prime index and shows the
relationship between sequence set index records and control areas. The sequence
set shows both the horizontal pointers used for sequential processing, and the
vertical pointers to the data set. Although the values of the keys are actually
compressed in the index, the full key values are shown in the figure.
Figure 57. Levels of a Prime Index
Sequence Set
The index records at the lowest level are the sequence set. There is one
sequence-set index record for each control area in the data set. This sequence
set record gives the location of data control intervals. An entry in a sequence set
record consists of the highest possible key in a control interval of the data
component, paired with a pointer to that control interval.
Index Set
If there is more than one sequence set level record, VSAM automatically builds
another index level. Each entry in the second level index record points to one
sequence set record. The records in all levels of the index above the sequence set
are called the index set. An entry in an index set record consists of the highest
possible key in an index record in the next lower level, and a pointer to the
beginning of that index record. The highest level of the index always contains only
a single record.
When you access records sequentially, VSAM refers only to the sequence set. It
uses a horizontal pointer to get from one sequence set record to the next record in
collating sequence. When you access records directly (not sequentially), VSAM
follows vertical pointers from the highest level of the index down to the sequence
set to find vertical pointers to data.
Index control intervals are not grouped into control areas as are data control
intervals. When a new index record is required, it is stored in a new control interval
at the end of the index data set. As a result, the records of one index level are not
segregated from the records of another level, except when the sequence set is
separate from the index set. The level of each index record is identified by a field in
the index header (see “Header Portion”).
When an index record is replicated on a track, each copy of the record is identical
to the other copies. Replication has no effect on the contents of records.
┌────────────────┬─────────┬───────────┬──────────────────────────┐
│ │ Free CI │ Unused │ │
│ Header │ Ptr List│ Space │ Index - Entry Portion │
└────────────────┴─────────┴───────────┴──────────────────────────┘
Figure 58. General Format of an Index Record
Figure 58 shows the parts of an index record. An index record contains the
following:
A 24-byte header containing control information about the record.
For a sequence-set index record governing a control area that has free control
intervals, there are entries pointing to those free control intervals.
Unused space, if any.
A set of index entries used to locate, for an index-set record, control intervals in
the next lower level of the index, or, for a sequence-set record, used control
intervals in the control area governed by the index record.
Header Portion
The first 24 bytes of an index record are the header, which gives control
information about the index record. Figure 59 shows its format. All lengths and
displacements are in bytes. The discussions in the following two sections amplify
the meaning and use of some of the fields in the header.
The free control interval entries come immediately after the header. They are used from right to left.
The rightmost entry is immediately before the unused space (whose displacement
is given in IXHFSO in the header). When a free control interval gets used, its free
entry is converted to zero, the space becomes part of the unused space, and a
new index entry is created in the position determined by ascending key sequence.
Thus, the free control interval entry portion contracts to the left, and the index entry
portion expands to the left. When all the free control intervals in a control area have
been used, the sequence-set record governing the control area no longer has free
control interval entries, and the number of index entries equals the number of
control intervals in the control area. Note that if the index control interval size was
specified with too small a value, it is possible for the unused space to be used up
for index entries before all the free control intervals have been used, resulting in
control intervals within a data control area that cannot be used.
Figure 60 shows the format of the index entry portion of an index record. To
improve search speed, index entries are grouped into sections, of which there are
approximately as many as the square root of the number of entries. For example,
if there are 100 index entries in an index record, they are grouped into 10 sections
of 10 entries each. (The number of sections does not change, even though the
number of index entries increases as free control intervals get used.)
The sections, and the entries within a section, are arranged from right to left.
IXHLEO in the header gives the displacement from the beginning of the index
record to the control information in the leftmost index entry. IXHSEO gives the
displacement to the control information in the leftmost index entry in the rightmost
section. You calculate the displacement of the control information of the rightmost
index entry in the index record (the entry with the lowest key) by subtracting
IXHFLPLN from IXHLL in the header; that is, by subtracting the length of the
control information in an index entry from the length of the record.
Each section is preceded by a 2-byte field that gives the displacement from the
control information in the leftmost index entry in the section to the control
information in the leftmost index entry in the next section (to the left). The last
(leftmost) section's 2-byte field contains 0's.
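The section arithmetic can be sketched as follows. The exact sizing is internal to VSAM; this illustration simply shows the approximately-square-root grouping described above, with invented names:

```python
import math

# Sketch: index entries are grouped into roughly sqrt(n) sections so a
# search can skip whole sections before scanning individual entries.
def section_layout(num_entries):
    """Return (number of sections, entries per section) for an index
    record holding num_entries index entries."""
    sections = round(math.sqrt(num_entries))
    per_section = math.ceil(num_entries / sections)
    return sections, per_section

# 100 index entries -> 10 sections of 10 entries each, as in the text.
```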
Key Compression
Index entries are variable in length within an index record, because VSAM
compresses keys. That is, it eliminates redundant or unnecessary characters from
the front and back of a key to save space. The number of characters that can be
eliminated from a key depends on the relationship between that key and the
preceding and following keys.
For front compression, VSAM compares a key in the index with the preceding key
in the index and eliminates from the key those leading characters that are the same
as the leading characters in the preceding key. For example, if key 12,356 follows
key 12,345, the characters 123 are eliminated from 12,356 because they are equal
to the first three characters in the preceding key. The lowest key in an index record
has no front compression; there is no preceding key in the index record.
There is an exception for the highest key in a section. For front compression, it is
compared with the highest key in the preceding section, rather than with the
preceding key. The highest key in the rightmost section of an index record has no
front compression; there is no preceding section in the index record.
What is called “rear compression” of keys is actually the process of eliminating the
insignificant values from the end of a key in the index. The values eliminated can
be represented by X'FF'. VSAM compares a key in the index with the following
key in the data and eliminates from the key those characters to the right of the first
character that are unequal to the corresponding character in the following key. For
example, if the key 12,345 (in the index) precedes key 12,356 (in the data), the
character 5 is eliminated from 12,345 because the fourth character in the two keys
is the first unequal pair.
The first of the control information fields gives the number of characters eliminated
from the front of the key, and the second field gives the number of characters that
remain. When the sum of these two numbers is subtracted from the full key length
(available from the catalog when the index is opened), the result is the number of
characters eliminated from the rear. The third field indicates the control interval that
contains a record with the key.
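The front and rear compression rules can be sketched as follows. This is illustrative Python, not VSAM internals; the function names are invented, and keys are treated as character strings:

```python
# Sketch of VSAM key compression arithmetic (illustrative only).

def front_compress(key, preceding_key):
    """Front compression: eliminate leading characters of the key that
    equal the leading characters of the preceding key in the index.
    Returns (F, remaining characters), where F is the count eliminated."""
    f = 0
    while f < min(len(key), len(preceding_key)) and key[f] == preceding_key[f]:
        f += 1
    return f, key[f:]

def rear_compress(key, following_key):
    """Rear compression: keep characters through the first position that
    is unequal to the corresponding character of the following key in
    the data; eliminate everything to the right of it."""
    i = 0
    while i < len(key) and i < len(following_key) and key[i] == following_key[i]:
        i += 1
    return key[: i + 1]

# Per the text: key 12356 after key 12345 loses '123' from the front;
# key 12345 before key 12356 loses '5' from the rear, leaving 1234.
# Control information: F = characters eliminated from the front,
# L = characters remaining; rear-eliminated = full key length - F - L.
```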
The example in Figure 62 on page 258 gives a list of full keys and shows the
contents of the index entries corresponding to the keys that get into the index (the
highest key in each data control interval). A sequence-set record is assumed, with
vertical pointers 1 byte long. The index entries shown in the figure from top to
bottom are arranged from right to left in the assumed index record.
Key 12,345 has no front compression because it is the first key in the index record.
Key 12,356 has no rear compression because, in the comparison between 12,356
and 12,357, there are no characters following 6, which is the first character that is
unequal to the corresponding character in the following key.
You can always figure out what characters have been eliminated from the front of a
key. You cannot figure out the ones eliminated from the rear. Rear compression, in
effect, establishes the key in the entry as a boundary value instead of an exact high
key. That is, an entry does not give the exact value of the highest key in a control
interval, but gives only enough of the key to distinguish it from the lowest key in the
next control interval. For example, in Figure 62 on page 258 the last three index
keys are 12,401, 124, and 134 after rear compression. Data records with key fields
between 12,402 and 124FF are associated with index key 124. Data records with
key fields between 12,500 and 134FF are associated with index key 134.
If the last data record in a control interval is deleted, and if the control interval does
not contain the high key for the control area, then the space is reclaimed as free
space. Space reclamation can be suppressed by setting the RPLNOCIR bit, which
has an equated value of X'20', at offset 43 into the RPL.
The last index entry in an index level indicates the highest possible key value. The
convention for expressing this value is to give none of its characters and indicate
that no characters have been eliminated from the front. The last index entry in the
last record in the sequence set looks like this:
┌─────┬─────┬─────┐
│ F │ L │ P │
├─────┼─────┼─────┤
│ 0 │ 0 │ x │
│ │ │ │
└─────┴─────┴─────┘
Where x is a binary number from 0 to 255, assuming a 1-byte
pointer.
In a search, the two 0's signify the highest possible key value in this way:
The fact that 0 characters have been eliminated from the front implies that the
first character in the key is greater than the first character in the preceding key.
A length of 0 indicates that no character comparison is required to determine
whether the search is successful. That is, when a search finds the last index
entry, a hit has been made.
Legend:
K-Characters left in key after compression
F-Number of characters eliminated from the front
L-Number of characters left in key after compression
P-Vertical pointer
Figure 63 on page 259 illustrates how the control interval is split and the index is
updated when a record with a key of 12 is inserted in the control area shown in
Figure 56 on page 251.
1. A control interval split occurs in data control interval 1, where a record with the
key of 12 must be inserted.
2. Half the records in data control interval 1 are moved by VSAM to the free
space control interval (data control interval 3).
3. An index entry is inserted in key sequence to point to data control interval 3,
that now contains data records moved from data control interval 1.
4. A new index entry is created for data control interval 1, because after the
control interval split, the highest possible key is 14. Because data control
interval 3 now contains data, the pointer to this control interval is removed from
the free list and associated with the new key entry in the index. Note that key
values in the index are in proper ascending sequence, but the data control
intervals are no longer in physical sequence.
Only the last (leftmost) index entry for a spanned record contains the key of the
record. The key is compressed according to the rules described above. All the
other index entries for the record look like this:
┌─────┬─────┬─────┐
│ F │ L │ P │
├─────┼─────┼─────┤
│ y │ 0 │ x │
│ │ │ │
└─────┴─────┴─────┘
A utility program initializes each DASD volume before it is used on the system. The
initialization program generates the volume label and builds the VTOC. For
additional information on direct access labels, see Appendix A, “Using Direct
Access Labels” on page 493.
Track Format
Information is recorded on all DASD volumes in a standard format. For historical
reasons, System/390 hardware manuals and software manuals use inconsistent
terminology when referring to units of data written on DASD. Hardware manuals
call them records. Software manuals call them blocks and use ‘record’ for
something else. The DASD section of this manual uses both terms as appropriate.
Software records are described in Chapter 21, “Record Formats of Non-VSAM
Data Sets” on page 269.
The application program deals with byte streams which the application program
might regard as being grouped in records. The program does not see blocks.
Examples include HFS files and OAM objects.
The first record on each track is a track descriptor record (R0); the records that
follow it on the track are data records. Each of these data records is called a
block. BDAM uses the track descriptor record to contain the number of bytes
following the last user data record on the track. ISAM uses the track descriptor
record for that purpose for variable-length format data sets.
Figure 64 shows that there are two different data formats—count-data and
count-key-data—only one of which can be used for a particular data set. An
exception is partitioned data sets that are not PDSEs. The directory records are
count-key-data and the member records are count-data.
Count-Data Format:
Records are formatted without keys. The key length is 0. The count area contains 8
bytes that identify the location of the block by cylinder, head, and record numbers,
and its data length.
Count-Key-Data Format:
The blocks are written with hardware keys. The key area (1 to 255 bytes) contains
a record key that specifies the data record, such as the part number, account
number, sequence number, or some other identifier.
Only ISAM, BDAM, BSAM, CVOL catalogs, EXCP, and partitioned data set
directories support blocks with hardware keys. IBM recommends that you not use
hardware keys, because they are less efficient than software keys, which are
supported by VSAM.
Track Overflow
MVS no longer supports the track overflow feature. The system ignores any request
for it.
Actual Addresses
When the system returns the actual address of a block on a direct access volume
to your program, it is in the form MBBCCHHR, where:
M is a 1-byte binary number specifying the relative extent number. Each extent is
a set of consecutive tracks allocated for the data set.
BBCCHH
is three 2-byte binary numbers specifying the cell (bin), cylinder, and head
number for the block (its track address). The cylinder and head numbers are
recorded in the count area for each block. All currently supported DASDs
require that the bin number (BB) be zero.
R is a 1-byte binary number specifying the relative block number on the track.
The block number is also recorded in the count area.
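The layout of the 8-byte MBBCCHHR address can be sketched as follows; this is an illustrative packing of the fields described above, not a system service, and the function names are invented:

```python
import struct

# Sketch: pack and unpack the 8-byte MBBCCHHR actual address.
# M  = 1-byte relative extent number
# BB = 2-byte bin (cell) number; must be zero on current DASDs
# CC = 2-byte cylinder number, HH = 2-byte head number
# R  = 1-byte relative block number on the track
def pack_mbbcchhr(m, bb, cc, hh, r):
    return struct.pack(">BHHHB", m, bb, cc, hh, r)

def unpack_mbbcchhr(addr):
    return struct.unpack(">BHHHB", addr)
```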
If your program stores actual addresses in your data set, and you refer to those
addresses, the data set must be treated as unmovable. Data sets that are
unmovable cannot reside on system-managed volumes.
Relative Addresses
DFSMS/MVS supports two kinds of relative addresses: relative block
addresses and relative track addresses. BDAM uses relative block addresses.
BSAM (except with extended format data sets), BPAM, and BDAM use relative
track addresses. ISAM records are accessed by key or sequentially.
The relative block address is a 3-byte binary number that shows the position of the
block starting from the first block of the data set. Allocation of noncontiguous sets
of blocks does not affect the number. The first block of a data set always has a
relative block address of 0.
TT is an unsigned 2-byte binary number specifying the position of the track starting
from the first track allocated for the data set. The TT for the first track is 0.
Allocation of noncontiguous sets of tracks does not affect the TT number.
R is a 1-byte binary number specifying the number of the block starting from the
first block on the track TT. The R value for the first block of data on a track is
1.
Note: With some devices, such as the IBM 3380 Model K, a data set can contain
more than 32767 tracks. Therefore, assembler halfword instructions could
result in invalid data being processed.
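The TT and R arithmetic can be sketched as follows for a data set whose tracks each hold the same number of blocks; this is an illustrative helper, not a system interface:

```python
# Sketch: compute the relative track address (TT, R) of a block.
# TT is the 0-origin relative track; R is the 1-origin block on that track.
def to_ttr(relative_block, blocks_per_track):
    """relative_block is 0-origin across the whole data set, assuming a
    uniform number of blocks per track (an assumption of this sketch)."""
    tt = relative_block // blocks_per_track
    r = relative_block % blocks_per_track + 1  # first block on a track is 1
    return tt, r
```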
Relative Block Addresses for Extended Format Data Sets: For extended
format data sets, addressing capability is provided through the use of block locator
tokens (BLTs). The BLT can be used transparently as if it were a TTR. The NOTE
macro returns a 4-byte value in which the three high order bytes are the BLT value
and the fourth byte is a zero. The value from the NOTE macro is used as input to
the POINT macro which provides positioning within a sequential data set via BSAM.
The BLT is essentially the relative block number (RBN) within each logical volume
of the data set (where the first block has an RBN of 1). For compressed format
data sets, the relative block numbers represent uncompressed simulated blocks,
not the real compressed blocks.
Note: A multi-striped data set is seen by the user as a single logical volume.
Therefore, for a multi-striped data set, the RBN is relative to the beginning
of the data set and incorporates all stripes.
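The relationship between the NOTE value and the BLT can be sketched as follows; the function name is invented, and this only illustrates the byte layout described above:

```python
# Sketch: NOTE returns a fullword whose three high-order bytes are the
# BLT (relative block number, first block = 1) and whose low-order byte
# is zero, for extended format data sets.
def blt_from_note(note_value):
    if note_value & 0xFF != 0:
        raise ValueError("low-order byte of the NOTE value must be zero")
    return note_value >> 8
```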
Relative Track Addresses for PDSEs: For PDSEs, the relative track addresses
(TTRs) do not represent the actual track and record location. Instead, the TTRs are
tokens that define the record's position within the data set. See “Relative Track
Addresses (TTR)” on page 399 for a description of TTRs for PDSE members and
blocks.
For more information about magnetic tape volume labels, see DFSMS/MVS Using
Magnetic Tapes . For more information about nonstandard label processing
routines, see DFSMS/MVS Installation Exits.
Data Conversion
The data on magnetic tape volumes can be in any format. For example it might be
in EBCDIC, ASCII, or binary. ASCII is a 7-bit code consisting of 128 characters. It
permits data on magnetic tape to be transferred from one computer to another,
even though the two computers can be products of different manufacturers.
If you specify volume serial numbers for unlabeled volumes where a data set is to
be stored, the system assigns volume serial numbers to any additional volumes
required. If data sets residing on unlabeled volumes are to be cataloged or passed,
you should specify the volume serial numbers for the volumes required. Specifying
the volume serial numbers ensures that data sets residing on different volumes are
not cataloged or passed under identical volume serial numbers. Retrieving such
data sets can give unpredictable errors.
Tape Marks
Each data set and data set label group must be followed by a tape mark.
Tape marks cannot exist within a data set. When a program writes data on a
standard labeled or unlabeled tape, the system automatically reads and writes
labels and tape marks. Two tape marks follow the last trailer label group on a
standard-label volume. On an unlabeled volume, the two tape marks appear after
the last data set.
When a program writes data on a non-standard labeled tape, the installation must
supply routines to process labels and tape marks, and position the tape. If you
want the system to retrieve a data set, the installation routine that creates
nonstandard-labels must write tape marks. Otherwise, tape marks are not required
after nonstandard labels, because positioning of the tape volumes must be handled
by installation routines.
Format Selection
Before selecting a record format, you should consider:
The data type (for example, EBCDIC) your program can receive and the type of
output it can produce
The I/O devices that contain the data set
The access method you use to read and write the records
Whether the records can be blocked.
Blocking is the process of grouping records into blocks before they are written on a
volume. A block consists of one or more logical records. Each block is written
between consecutive interblock gaps. Blocking conserves storage space on a
volume by reducing the number of interblock gaps in the data set, and increases
processing efficiency by reducing the number of I/O operations required to process
the data set.
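The I/O savings from blocking can be sketched as follows; this is an illustrative calculation with an invented function name, not an access method interface:

```python
import math

# Sketch: blocking several logical records per block reduces the number
# of blocks written, and with it the I/O operations and interblock gaps.
def blocks_needed(record_count, records_per_block):
    return math.ceil(record_count / records_per_block)

# 1000 records unblocked need 1000 blocks; blocked 40 to a block, 25.
```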
If you do not specify a block size, the system generally determines a block size that
is optimum for the device to which your data set is allocated. See
“System-Determined Block Size” on page 298.
You select your record format in the data control block (DCB) using the options in
the DCB macro, the DD statement, dynamic allocation, automatic class selection
routines or the data set label. Before executing your program, you must supply the
operating system with the record format (RECFM) and device-dependent
information in data class, a DCB macro, a DD statement, or a data set label. A
complete description of the DD statement keywords and a glossary of DCB
subparameters are contained in OS/390 MVS JCL Reference.
(Figure: blocked records place several records, Record A through Record F, in
each block; unblocked records place one record, Record A through Record D, in
each block. Each record can begin with an optional control character (a) and an
optional table reference character (b).)
The records can be blocked or unblocked. If the data set contains unblocked
format-F records, one record constitutes one block. If the data set contains blocked
format-F records, the number of records within a block typically is constant for
every block in the data set. The data set can contain truncated (short) blocks. The
system automatically checks the length (except for card readers) on blocked or
unblocked format-F records. Allowances are made for truncated blocks.
The optional control character (a), used for stacker selection or carriage control,
can be included in each record to be printed or punched. The optional table
reference character (b) is a code to select the font to print the record on a page
printer. See “Using Optional Control Characters” on page 287 and “Table
Reference Character” on page 290. See IBM 3800 Printing Subsystem
Programmer’s Guide or IBM 3800 Printing Subsystem Models 3 and 8
Programmer’s Guide.
Standard Format
During creation of a sequential data set (to be processed by BSAM or QSAM) with
fixed-length records, the RECFM subparameter of the DCB macro can specify a
standard format (RECFM=FS or FBS). A sequential data set with standard format
records (format-FS or -FBS) sometimes can be read more efficiently than a data
set with format-F or -FB records. This efficiency is possible because the system is
able to determine the address of each record to be read, because each track
contains the same number of blocks.
Restrictions
If the last block is truncated, you should never extend a standard-format data set by
coding:
EXTEND or OUTINX on the OPEN macro
OUTPUT, OUTIN, or INOUT on the OPEN macro with DISP=MOD on the
allocation
CLOSE LEAVE, TYPE=T, and then a WRITE
POINT to after the last block and then a WRITE
CNTRL on tape to after the last block, and then a WRITE.
If the data set is extended, it contains a truncated block that is not the last block.
Reading an extended data set with this condition results in a premature end-of-data
condition when the truncated block is read, giving the appearance that the blocks
following this truncated block do not exist. This type of data set on magnetic tape
should not be read backward, because the data set would begin with a truncated
block. Therefore, you might not want to use this type of data set with magnetic tape.
A format-F data set will not meet the requirements of a standard-format data set if
you do the following:
Extend a fixed-length, blocked standard data set when the last block was
truncated.
Use the POINT macro in a way that prevents BSAM from filling a track other
than the last one, or skip a track when writing to the data set.
Standard format should not be used to read records from a data set that was
created using a record format other than standard, because other record formats
might not create the precise format required by standard.
If the characteristics of your data set are altered from the specifications described
above at any time, the data set should no longer be processed with the standard
format specification.
Format-V Records
Figure 66 shows blocked and unblocked variable-length (format-V) records without
spanning. A block in a data set containing unblocked records is in the same format
as a block in a data set containing blocked records. The only difference is that with
blocked records each block can contain multiple records.
[Figure 66 not reproduced. Each block begins with a 4-byte block descriptor word
(BDW): a 2-byte block length (LL) and 2 reserved bytes. Each record begins with a
4-byte record descriptor word (RDW): a 2-byte record length (LL) and 2 reserved
bytes (shown as LL 00), optionally followed by a control character (a) and a table
reference character (b), then the data.]
The system uses the record or segment length information in blocking and
unblocking. The first 4 bytes of each record, record segment, or block make up a
descriptor word containing control information. You must allow for these additional 4
bytes in both your input and output buffers.
For output, you must provide the RDW, except in data mode for spanned records
(described under “Controlling Buffers” on page 318). For output in data mode, you
must provide the total data length in the physical record length field (DCBPRECL)
of the DCB.
For input, the operating system provides the RDW, except in data mode. In data
mode, the system passes the record length to your program in the logical record
length field (DCBLRECL) of the DCB.
The optional control character (a) can be specified as the fifth byte of each record.
The first byte of data is a table reference character (b) if OPTCD=J has been
specified. The RDW, the optional control character, and the optional table reference
character are not punched or printed.
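As a sketch of the descriptor words described above, the following Python routine unblocks a format-VB block. Big-endian length fields are assumed, and control and table reference characters are ignored for simplicity:

```python
def unblock_v(block: bytes):
    """Split a format-VB block into its logical records.

    The block starts with a 4-byte BDW: a 2-byte block length LL (which
    includes the BDW itself) and 2 reserved bytes.  Each record starts
    with a 4-byte RDW in the same layout, whose LL includes the RDW.
    """
    block_len = int.from_bytes(block[0:2], "big")
    records, offset = [], 4
    while offset < block_len:
        record_len = int.from_bytes(block[offset:offset + 2], "big")
        records.append(block[offset + 4:offset + record_len])
        offset += record_len
    return records
```

This mirrors the rule that you must allow for the extra 4 bytes of each descriptor word in your buffers: the data of each record begins 4 bytes past its RDW.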
When spanning is specified for blocked records, the system tries to fill all blocks.
For unblocked records, a record larger than the block size is split and written in two
or more blocks, each block containing only one record or record segment. Thus,
the block size can be set to the best block size for a given device or processing
situation. It is not restricted by the maximum record length of a data set. A record
can, therefore, span several blocks, and can even span volumes.
If you issue the FEOV macro when reading a data set that spans volumes, or if a
spanned multivolume data set is opened to other than the first volume, make sure
that each volume begins with the first (or only) segment of a logical record. Input
routines cannot begin reading in the middle of a logical record.
Notes:
1. A logical record spanning three or more volumes cannot be processed when
the data set is opened for update.
2. When unit record devices are used with spanned records, the system assumes
that unblocked records are being processed and the block size must be
equivalent to the length of one print line or one card. Records that span blocks
are written one segment at a time.
3. Spanned variable-length records cannot be specified for a SYSIN data set.
When QSAM opens a spanned record data set in UPDAT mode, it uses the logical
record interface (LRI) to assemble all segments of the spanned record into a single,
logical input record, and to disassemble a single logical record into multiple
segments for output data blocks. A record area must be provided by using the
BUILDRCD macro or by specifying BFTEK=A in the DCB.
Note: When you specify BFTEK=A, the Open routine provides a record area equal
to the LRECL specification, which should be the maximum length in bytes.
(An LRECL=0 is not valid.)
The remaining bits of the third byte and all of the fourth byte are reserved for
possible future system use and must be 0.
The SDW for the first segment replaces the RDW for the record after the record is
segmented. You or the operating system can build the SDW, depending on which
access method is used.
In the basic sequential access method, you must create and interpret the
spanned records yourself.
In the queued sequential access method move mode, complete logical records,
including the RDW, are processed in your work area. GET consolidates
segments into logical records and creates the RDW. PUT forms segments as
required and creates the SDW for each segment.
Data mode is similar to move mode, but allows reference only to the data
portion of the logical record (that is, to one segment) in your work area. The
logical record length is passed to you through the DCBLRECL field of the data
control block.
In locate mode, both GET and PUT process one segment at a time. However,
in locate mode, if you provide your own record area using the BUILDRCD
macro, or if you ask the system to provide a record area by specifying
BFTEK=A, then GET, PUT, and PUTX process one logical record at a time.
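As an illustration of how PUT forms segments and creates an SDW for each, this sketch splits one logical record's data into spanned segments. It assumes the common encoding in which the two low-order bits of the SDW flag byte give the segment position (00 complete, 01 first, 10 last, 11 intermediate):

```python
SEG_COMPLETE, SEG_FIRST, SEG_LAST, SEG_MIDDLE = 0b00, 0b01, 0b10, 0b11

def segment_record(data: bytes, max_seg_data: int):
    """Split one logical record into spanned segments, each led by a
    4-byte SDW: a 2-byte segment length (including the SDW), a flag
    byte giving the segment position, and a reserved zero byte."""
    chunks = [data[i:i + max_seg_data]
              for i in range(0, len(data), max_seg_data)] or [b""]
    segments = []
    for i, chunk in enumerate(chunks):
        if len(chunks) == 1:
            code = SEG_COMPLETE
        elif i == 0:
            code = SEG_FIRST
        elif i == len(chunks) - 1:
            code = SEG_LAST
        else:
            code = SEG_MIDDLE
        sdw = (len(chunk) + 4).to_bytes(2, "big") + bytes([code, 0])
        segments.append(sdw + chunk)
    return segments
```

A 10-byte record with 4-byte segments produces a first, an intermediate, and a last segment; a record that fits in one segment is marked complete.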
Null Segments
A 1 in bit position 0 of the SDW indicates a null segment. A null segment means
that there are no more segments in the block. Bits 1 to 7 of the SDW and the
remainder of the block must be binary zeros. A null segment is not an
end-of-logical-record delimiter. (You do not have to be concerned about null
segments unless you have created a data set using null segments.)
Note: Null segments are not recreated in PDSEs. See “Processing Restrictions for
PDSEs” on page 402 for more information.
[Figure not reproduced. It shows a spanned block: the BDW (2-byte block
length LL and 2 reserved bytes) followed by SDW/data pairs for the first,
intermediate, and last segments of a logical record.]
When you specify spanned, unblocked record format for the basic direct access
method, and when a complete logical record cannot fit on the track, the system
tries to fill the track with a record segment. Thus, the maximum record length of a
data set is not restricted by track capacity. Segmenting records allows a record to
span several tracks, with each segment of the record on a different track. However,
because the system does not allow a record to span volumes, all segments of a
logical record in a direct data set are on the same volume.
Note: Use of the basic direct access method (BDAM) is not recommended.
[Figure not reproduced. It shows format-U records, one record per block: an
optional control character (a, 1 byte), an optional table reference character
(b, 1 byte), and the data.]
For format-U records, you must specify the record length when issuing the WRITE,
PUT, or PUTX macro. No length checking is performed by the system, so no error
indication will be given if the specified length does not match the buffer size or
physical record size.
In update mode, you must issue a GET or READ macro before you issue a PUTX
or WRITE macro to a data set on a direct access storage device. If you change
the record length when issuing the PUTX or WRITE macro, the record will be
padded with zeros or truncated to match the length of the record received when the
GET or READ macro was issued. No error indication will be given.
ISO/ANSI Tapes
ISO/ANSI tape records are written in format-F, -D, -S, or -U.
| – Supervisor state callers are not supported for any data management calls
| and will result in an error.
| Default Character Conversion
| Data management provides conversion from ASCII to EBCDIC on input, and
| EBCDIC to ASCII for output in any of the following cases (see “Tables for
| Default Conversion Codes” on page 573):
| – ISO/ANSI V1 and V3 tapes
| – ISO/ANSI V4 tapes without CCSID
| – Unlabeled tapes with OPTCD=Q.
| Note: As explained in DFSMS/MVS Using Magnetic Tapes, conversion
| routines supplied by the system for this type of conversion convert to
| ASCII 7-bit code.
| When converting from EBCDIC to ASCII, if the character to be converted
| has a 1 in its high-order bit position, the 7-bit conversion does not produce
| an equivalent character. Instead, it produces a substitute character to note the
| loss in conversion. This means, for example, that random binary data (such as
| a dump) cannot be recorded in ISO/ANSI 7-bit code.
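The loss can be modeled in Python with the cp037 EBCDIC code page. The choice of ASCII SUB (X'1A') as the substitute character is an assumption for illustration; the real conversion routines define their own substitute:

```python
SUB = 0x1A  # assumed substitute character (ASCII SUB)

def ebcdic_to_ascii_7bit(data: bytes) -> bytes:
    """Convert EBCDIC (code page 037) to 7-bit ASCII, substituting for
    any character that has no 7-bit equivalent."""
    out = bytearray()
    for b in data:
        code = ord(bytes([b]).decode("cp037"))   # EBCDIC byte -> Unicode
        out.append(code if code < 0x80 else SUB)  # keep only 7-bit codes
    return bytes(out)
```

Plain uppercase text converts cleanly, while a character outside the 7-bit repertoire (for example, an accented letter) comes out as the substitute, which is why arbitrary binary data does not survive the round trip.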
| Note: An existing data set created using Default Character conversion cannot
| be read or written (unless DISP=OLD) using CCSID conversion.
| The closest equivalent to Default Character conversion, when using CCSIDs, is
| between CCSID of 367 representing 7-bit ASCII and CCSID of 500
| representing International EBCDIC.
Format-F Records
For ISO/ANSI tapes, format-F records are the same as described in “Fixed-Length
Record Formats” on page 270, with three exceptions:
Control Characters:
Control characters, when present, must be ISO/ANSI control characters. For more
information about control characters, see DFSMS/MVS Macro Instructions for Data
Sets.
Block Prefix:
Record blocks can contain block prefixes. The block prefix can vary from 0 to 99
bytes, but the length must be constant for the data set being processed. For
blocked records, the block prefix precedes the first logical record. For unblocked
records, the block prefix precedes each logical record.
Using QSAM and BSAM to read records with block prefixes requires that you
specify the BUFOFF parameter in the DCB. When using QSAM, you do not have
access to the block prefix on input. When using BSAM, you must account for the
block prefix on both input and output. When using either QSAM or BSAM, you
must account for the length of the block prefix in the BLKSIZE and BUFL
parameters of the DCB.
When you use BSAM on output records, the operating system does not recognize a
block prefix. Therefore, if you want a block prefix, it must be part of your record.
Note that you cannot include block prefixes in QSAM output records.
| The block prefix can only contain EBCDIC characters that correspond to the 128,
| seven-bit ASCII characters. Thus, you must avoid using data types such as binary,
| packed decimal, and floating point that cannot always be converted into ASCII. This
| is also true when CCSIDs are used when writing to ISO/ANSI V4 tapes.
| Note: As explained in DFSMS/MVS Using Magnetic Tapes, conversion routines
| supplied by the system convert to ASCII 7-bit code.
Figure 71 shows the format of fixed-length records for ISO/ANSI tapes and where
control characters and block prefixes are positioned if they exist.
[Figure 71 not reproduced. It shows blocked fixed-length records for ISO/ANSI
tapes: each block begins with an optional block prefix, followed by its records
(A through C, D through F); each record consists of an optional control
character (a, 1 byte) followed by the data.]
Circumflex Characters:
The GET routine tests each record (except the first) for all circumflex characters
(X'5E'). If a record completely filled with circumflex characters is detected, QSAM
ignores that record and the rest of the block. A fixed-length record must not consist
of only circumflex characters. This restriction is necessary because circumflex
characters are used to pad out a block of records when fewer than the maximum
number of records are included in a block, and the block is not truncated.
Format-D Records
Format-D, -DS, and -DBS records are used for ISO/ANSI tape data sets. ISO/ANSI
records are the same as format-V records, with these exceptions, which are
discussed in the following sections:
Block prefix
Block size
Control characters.
Block Prefix:
A record block can contain a block prefix. To specify a block prefix, code BUFOFF
in the DCB macro. The block prefix can vary in length from 0 to 99 bytes, but its
length must remain constant for all records in the data set being processed. For
blocked records, the block prefix precedes the RDW for the first or only logical
record in each block. For unblocked records, the block prefix precedes the RDW for
each logical record.
To specify that the block prefix is to be treated as a BDW by data management for
format-D or -DS records on output, code BUFOFF=L as a DCB parameter. Your
block prefix must be 4 bytes long, and it must contain the length of the block,
including the block prefix. The maximum length of a format-D or -DS, BUFOFF=L
block is 9999, because the length (stated in binary numbers by the user) is
converted to a 4-byte ASCII character decimal field on the ISO/ANSI tape when the
block is written. It is converted back to a 2-byte length field in binary followed by
two bytes of zeros when the block is read.
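The BUFOFF=L conversions described above can be sketched directly from the text (a hedged model, not data management's actual code):

```python
def bdw_to_ascii(block_len: int) -> bytes:
    """Output direction: the binary block length (including the 4-byte
    prefix) becomes a 4-character ASCII decimal field on the tape."""
    if not 4 <= block_len <= 9999:
        raise ValueError("BUFOFF=L block length must fit in 4 decimal digits")
    return f"{block_len:04d}".encode("ascii")

def ascii_to_bdw(prefix: bytes) -> bytes:
    """Input direction: the 4 ASCII digits are converted back to a 2-byte
    binary length field followed by two bytes of zeros."""
    length = int(prefix.decode("ascii"))
    return length.to_bytes(2, "big") + b"\x00\x00"
```

The 9999 maximum falls out of the 4-digit ASCII field: no larger length can be represented on the tape.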
If you use QSAM to write records, data management fills in the block prefix for you.
If you use BSAM to write records, you must fill in the block prefix yourself. If you
are using chained scheduling to read blocked DB or DBS records, you cannot code
BUFOFF=absolute expression in the DCB. Instead, BUFOFF=L is required,
because the access method needs binary RDWs and valid block lengths to unblock
the records.
When you use QSAM, you cannot read the block prefix into your record area on
input. When using BSAM, you must account for the block prefix on both input and
output. When using either QSAM or BSAM, you must account for the length of the
block prefix in the BLKSIZE and BUFL parameters.
| When using QSAM to access DB or DBS records, and BUFOFF=0 is specified, the
| value of BUFL, if specified, must be increased by 4. If BUFL is not specified, then
| BLKSIZE must be increased by 4. This allows for a 4-byte QSAM internal
| processing area to be included when the system acquires the buffers. These 4
| bytes do not become part of the user's block.
When you use BSAM on output records, the operating system does not recognize
the block prefix. Therefore, if you want a block prefix, it must be part of your record.
The block prefix can only contain EBCDIC characters that correspond to the 128,
seven-bit ASCII characters. Thus, you must avoid using data types (such as binary,
packed decimal, and floating point), that cannot always be converted into ASCII.
For DB and DBS records, the only time the block prefix can contain binary data is
when you have coded BUFOFF=L, which tells data management that the prefix is a
BDW. Unlike the block prefix, the RDW must always be binary.
| Note: The above is true whether conversion or no conversion is specified with
| CCSID for Version 4 tapes.
Block Size:
| Version 3 tapes have a maximum block size of 2048. This limit can be overridden
| by a label validation installation exit. For Version 4 tapes, the maximum size is
| 32,760.
If you specify a maximum data set block size of 18 or greater when creating
variable-length blocks, then individual blocks can be shorter than 18 bytes. In
those cases data management pads each one to 18 bytes when the blocks are
written onto an ISO/ANSI tape. The padding character used is the ASCII circumflex
character, which is X'5E'.
Control Characters:
[Figure not reproduced. It shows blocked format-D records for ISO/ANSI tapes:
an optional block prefix before each block, and each record led by its RDW (LL)
and an optional control character (a).]
Figure 73 on page 284 shows what spanned variable-length records for ISO/ANSI
tapes look like.
[Figure 73 not reproduced. It shows blocked spanned records with optional block
prefixes; each segment is led by an SDW consisting of a 2-byte segment length
(LL), a 1-byte segment position indicator, and a reserved byte (the field
expansion byte).]
The figure shows the segment descriptor word (SDW), where the record descriptor
word (RDW) is located, and where block prefixes must be placed when they are
used. If you are not using IBM access methods, see DFSMS/MVS Macro
Instructions for Data Sets for a description of ISO/ANSI record control words and
segment control words.
QSAM or BSAM convert between ISO/ANSI segment control word (SCW) format
and IBM segment descriptor word (SDW) format. On output, the binary SDW LL
value (provided by you when using BSAM and by the access method when using
QSAM), is increased by 1 for the extra byte and converted to four ASCII numeric
characters. Because the binary SDW LL value will result in four numeric characters,
the binary value must not be greater than 9998. The fifth character is used to
designate which segment type (complete logical record, first segment, last segment,
or intermediate segment) is being processed.
On input, the four numeric characters designating the segment length are converted
to two binary SDW LL bytes and decreased by one for the unused byte. The
ISO/ANSI segment control character maps to the DS/DBS SDW control flags. This
conversion leaves an unused byte at the beginning of each SDW. It is set to X'00'.
Refer to DFSMS/MVS Macro Instructions for Data Sets for more details on this
process.
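The LL arithmetic described above can be sketched as follows. The mapping of segment types to the fifth character is an assumption for illustration; see DFSMS/MVS Macro Instructions for Data Sets for the actual ISO/ANSI segment control character values:

```python
SEG_CODES = {"complete": "0", "first": "1",
             "last": "2", "intermediate": "3"}  # assumed mapping

def sdw_to_scw(sdw_ll: int, seg_type: str) -> bytes:
    """Output direction: the binary SDW LL is increased by 1 for the
    extra byte and written as four ASCII digits, followed by a fifth
    character designating the segment type."""
    if sdw_ll > 9998:
        raise ValueError("binary SDW LL must not exceed 9998")
    return f"{sdw_ll + 1:04d}{SEG_CODES[seg_type]}".encode("ascii")

def scw_to_sdw_ll(scw: bytes) -> int:
    """Input direction: the four ASCII digits are converted to binary
    and decreased by one for the unused byte."""
    return int(scw[:4].decode("ascii")) - 1
```

The 9998 limit follows from the arithmetic: after the +1 adjustment the value must still fit in four decimal digits.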
DS/DBS records with a record length of over 32,760 bytes can be processed using
XLRI. (XLRI is supported only in QSAM locate mode for ISO/ANSI tapes.) Using
LRECL=X for ISO/ANSI causes an 013-DC abend.
LRECL=0K in the DCB macro specifies that the LRECL value will come from the
file label or JCL. When LRECL is from the label, the file must be opened as an
input file. The label (HDR2) value for LRECL will be converted to kilobytes and
rounded up when XLRI is in effect. When the ISO/ANSI label value for LRECL is
00000 to show that the maximum record length can be greater than 99999, you
must use LRECL=nK in the JCL or in the DCB to specify the maximum record
length.
| When the DCB specifies XLRI, the LRECL from JCL can be expressed in
| absolute form or with the K notation. Absolute values, permissible only
| from 5 to 32,760, are converted to kilobytes by rounding up to an integral
| multiple of 1024.
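The rounding rule for absolute LRECL values can be expressed as a one-line sketch:

```python
def lrecl_kilobytes(lrecl: int) -> int:
    """Convert an absolute LRECL (5 to 32,760) to kilobytes, rounding
    up to an integral multiple of 1024."""
    if not 5 <= lrecl <= 32760:
        raise ValueError("absolute LRECL must be between 5 and 32,760")
    return -(-lrecl // 1024)  # ceiling division
```

So an LRECL of 1025 becomes 2K, and the maximum of 32,760 becomes 32K.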
To show the record area size in the DD statement, code LRECL=nK, or specify a
data class that has the LRECL attribute you need. The value nK can range from 1K
to 16,383K (expressed in 1024-byte multiples). However, depending on the buffer
space available, the value you can specify in most systems will be much smaller
than 16,383K bytes. This value is used to determine the size of the record area
required to contain the largest logical record of the spanned format file.
When using XLRI, the exact LRECL size is communicated in the three low-order
bytes of the RDW in the record area. This special RDW format exists only in the
record area to communicate the length of the logical record (including the 4-byte
RDW) to be written or read. (See the XLRI format of the RDW in Figure 73 on
page 284.) DCB LRECL shows the 1024-multiple size of the record area (rounded
up to the next nearest kilobyte). The normal DS/DBS SDW format is used at all
other times before conversion.
Format-U Records
Data can only be in format-U for ISO/ANSI Version 1 tapes (ISO 1001-1969 and
ANSI X3.27-1969). These records can be used for input only. They are the same
as the format-U records described in “Undefined-Length Record Format” on
page 278 except the control characters must be ISO/ANSI control characters, and
block prefixes can be used.
| Format-U records are not supported for Version 3 or Version 4 ISO/ANSI tapes. An
| attempt to process a format-U record from a Version 3 or Version 4 tape results in
| entering the label validation installation exit.
11 For more information, see Chapter 25, “Spooling and Scheduling Data Sets” on page 343.
The device-dependent (DEVD) parameter of the DCB macro specifies the type of
device where the data set's volume resides:
TA Magnetic tape
RD Card reader
PC Card punch
PR Printer
DA Direct access storage devices
Note: Because the DEVD option affects only the DCB macro expansion, you
are guaranteed the maximum device flexibility by letting it default to
DEVD=DA and not coding any device-dependent parameter.
If the immediate destination of the data set is a device, such as a disk or tape,
which does not recognize the control character, the system assumes that the
control character is the first byte of the data portion of the record. If the destination
of the data set is a printer or punch and you have not indicated the presence of a
control character, the system regards the control character as the first byte of data.
If the destination of the data set is SYSOUT, the effect of the control characters is
determined at the ultimate destination of the data set. Refer to DFSMS/MVS Macro
Instructions for Data Sets for a list of the control characters.
The optional control character must be in the first byte of format-F and format-U
records, and in the fifth byte of format-V records and format-D records where
BUFOFF=L. If the immediate destination of the data set is a sequential DASD data
set or an IBM standard or ISO/ANSI standard labelled tape, OPEN records the
presence and type of control characters in the data set label. This is so that a
program that copies the data set to a print, punch, or SYSOUT data set can
propagate RECFM and therefore control the type of control character.
When you create a tape data set with variable-length record format-V or -D, the
control program pads any data block shorter than 18 bytes. For format-V records, it
pads to the right with binary zeros so that the data block length equals 18 bytes.
For format-D (ASCII) records, the padding consists of ASCII circumflex characters,
which are equivalent to X'5E's.
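The padding rule can be sketched as follows (an illustrative model of the control program's behavior, not its actual code):

```python
CIRCUMFLEX = 0x5E  # ASCII circumflex, the pad byte for format-D

def pad_short_block(block: bytes, recfm: str) -> bytes:
    """Pad a tape data block shorter than 18 bytes: binary zeros for
    format-V records, ASCII circumflex (X'5E') for format-D records."""
    if len(block) >= 18:
        return block
    pad = b"\x00" if recfm == "V" else bytes([CIRCUMFLEX])
    return block + pad * (18 - len(block))
```

Blocks of 18 bytes or more pass through unchanged; only short blocks are extended to the 18-byte minimum.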
Figure 74 shows how the tape density (DEN) specifies the recording density in bits
per inch per track.
When DEN is not specified, the highest density of which the unit is capable is used.
The DEN parameter has no effect on an 18-track or 36-track tape cartridge.
The track recording technique (TRTCH) for 7-track tape can be specified as:
C Data conversion is to be used. Data conversion makes it possible to write 8
binary bits of data on 7 tracks. Otherwise, only 6 bits of an 8-bit byte are
recorded. The length field of format-V records contains binary data and is not
recorded correctly without data conversion.
E Even parity is to be used. If E is omitted, odd parity is assumed.
T BCDIC to EBCDIC conversion is required.
The track recording technique (TRTCH) for magnetic tape drives with Improved
Data Recording Capability can be specified as:
COMP Data is written in compacted format.
NOCOMP Data is written in standard format.
The following symbols show the code in which the data was punched. If this
information is omitted, I is assumed.
I IBM BCD perforated tape and transmission code (8 tracks)
F Friden (8 tracks)
B Burroughs (7 tracks)
C National Cash Register (8 tracks)
A ASCII (8 tracks)
T Teletype12 (5 tracks)
N No conversion
A record size of 80 bytes is called EBCDIC mode (E) and a record size of 160
bytes is called column binary mode (C). Each punched card corresponds to one
physical record. Therefore, you should restrict the maximum record size to
EBCDIC mode (80 bytes) or column binary mode (160 bytes). When column binary
mode is used for the card punch, BLKSIZE must be 160 unless you are using PUT.
Then you can specify BLKSIZE as 160 or a multiple of 160, and the system
handles this as described under “PUT—Write a Record” on page 331. You can
specify the read/punch mode of operation (MODE) parameter as either card image
column binary mode (C) or EBCDIC mode (E). If this information is omitted, E is
assumed. The stacker selection parameter (STACK) can be specified as either 1 or
2 to show which bin is to receive the card. If STACK is not specified, 1 is assumed.
For all QSAM RECFM=FB card punch data sets, the block size in the DCB is
adjusted by the system to equal the logical record length. This data set is treated
as RECFM=F. If the system builds the buffers for this data set, the buffer length is
determined by the BUFL parameter. If the BUFL parameter was not specified, the
adjusted block size is used for the buffer length.
If the DCB is to be reused with a block size larger than the logical record length,
you must reset DCBBLKSI in the DCB and ensure that the buffers are large
enough to contain the largest block size expected. You can ensure the buffer size
by specifying the BUFL parameter before the first time the data set is opened, or by
issuing the FREEPOOL macro after each CLOSE macro so the system will build a
new buffer pool of the correct size each time the data set is opened.
Punch error correction on the IBM 2540 Card Read Punch is not performed.
The IBM 3525 Card Punch accepts only format-F records for print and associated
data sets. Other record formats are allowed for the read data set, the punch data
set, and the interpret punch data set. For more information on programming for the
3525 Card Punch, see Programming Support for the IBM 3505 Card Reader and
the IBM 3525 Card Punch.
A numeric table reference character (such as 0) selects the font to which the
character corresponds. The characters' number values represent the order in which
the font names have been specified with the CHARS parameter. In addition to
using table reference characters to correspond to font names specified on the
CHARS parameter, you can also code table reference characters that correspond
to font names specified in the PAGEDEF control structure. With CHARS, valid table
reference characters range from 0 to 3. With PAGEDEF, they range
from 0 to 126. Table reference characters with values greater than the limit
are treated as 0 (zero). For additional information, see IBM 3800 Printing
Subsystem Programmer’s Guide.
The system processes table reference characters on printers such as the IBM 3800
and IBM 3900 that support the CHARS and PAGEDEF parameters on the DD
statement. If the device is a printer that does not support CHARS or PAGEDEF, the
system discards the table reference character. This is true both for printers that are
allocated directly to the job step and for SYSOUT data sets. This makes it
unnecessary for your program to know whether the printer supports table reference
characters.
If the immediate destination of the data set for which OPTCD=J was specified is
DASD, the system treats the table reference characters as part of the data. The
system also records the OPTCD value in the data set label. If the immediate
destination is tape, the system does not record the OPTCD value in the data set
label.
Record Formats
Records of format-F, -V, and -U are acceptable to the printer. The first 4 bytes
(record descriptor word or segment descriptor word) of format-V records or record
segments are not printed. For format-V records, at least 1 byte of data must follow
the record or segment descriptor word or the carriage control character. The
carriage control character, if specified in the RECFM parameter, is not printed. The
system does not position the printer to channel 1 for the first record unless
specified by a carriage control character.
Because each line of print corresponds to one record, the record length should not
exceed the length of one line on the printer. For variable-length spanned records,
each line corresponds to one record segment, and block size should not exceed
the length of one line on the printer.
If carriage control characters are not specified, you can show printer spacing
(PRTSP) as 0, 1, 2, or 3. If it is not specified, 1 is assumed.
For all QSAM RECFM=FB printer data sets, the block size in the DCB will be
adjusted by the system to equal the logical record length. This data set will be
treated as RECFM=F. If the system builds the buffers for this data set, the buffer
length will be determined by the BUFL parameter. If the BUFL parameter was not
specified, the adjusted block size is used for the buffer length.
If the DCB is to be reused with a block size larger than the logical record length,
you must reset DCBBLKSI in the DCB, and ensure that the buffers are large
enough to contain the largest block size expected. You can ensure the buffer size
by specifying the BUFL parameter before the first time the data set is opened, or by
issuing the FREEPOOL macro after each CLOSE macro, so the system will build a
new buffer pool of the correct size each time the data set is opened.
Primary sources of information to be placed in the data control block are a DCB
macro, a data definition (DD) statement, a dynamic allocation SVC 99 parameter
list, a data class, and a data set label. A data class can be used to specify all of
your data set's attributes except data set name and disposition. Also, you can
provide or change some of the information during execution by storing the
applicable data in the appropriate field of the data control block.
The data control block is filled in with information from the DCB macro, the JFCB,
or an existing data set label. If more than one source specifies information for a
particular field, only one source is used. A DD statement takes priority over a data
set label, and a DCB macro over both.
You can change most data control block fields either before the data set is opened
or when the operating system returns control to your program (at the DCB OPEN
user exit). Some fields can be changed during processing. Do not try to change a
data control block field, such as data set organization, from one that allowed the
data set to be allocated to a system-managed volume, to one that makes the data
set ineligible to be system-managed. For example, do not specify a data set
organization in the DD statement as physical sequential and, after the data set has
been allocated to a system-managed volume, try to open the data set with a data
control block that specifies the data set as physical sequential unmovable. The
types of data sets that cannot be system-managed are listed in “Storage
Management Subsystem Restrictions” on page 23.
Figure 75. Sources and Sequence of Operations for Completing the Data Control Block
not to exist or to apply to the current data set unless you specify DISP=MOD or
open for EXTEND or OUTINX and a volume serial number in the volume
parameter of the DD statement (or the data set is cataloged).
Note: A data set label to JFCB merge is not performed for concatenated data
sets at the end-of-volume time. If you want a merge, turn on the unlike
attribute bit (DCBOFPPC) in the DCB. The unlike attribute forces the
system through OPEN for each data set in the concatenation, where a
label to JFCB merge takes place. For more information, see
“Concatenating Unlike Data Sets” on page 354.
4. Any field not completed in the DCB is filled in from the JFCB.
5. Certain fields in the DCB can then be completed or changed by your own DCB
user exit routine or JFCBE exit routine. The DCB macro fields are described in
DFSMS/MVS Macro Instructions for Data Sets. These exits are described in
“DCB Open Exit” on page 475 and “JFCB Exit” on page 479.
Note:
System Determined Block Size (SDB) will have a different effect on
your exit depending upon whether it is for tape or for DASD.
For tape, the label is built by OPEN, and the problem program's
OPEN exit is given control before SDB is invoked. This means that
the problem program's OPEN exit would find a block size of zero if
that is what was specified or defaulted up to that point in the
completion of the DCB.
| For DASD, the label is built by DADSM when the data set is
| allocated, so by the time the problem program's OPEN exit is called
| at OPEN time, the system has already calculated the block size if
| LRECL, RECFM and DSORG are available. The problem program's
| OPEN exit would then find the non-zero block size already
| calculated by SDB.
6. After OPEN calls the user's optional DCB OPEN exit or JFCBE routine, it calls
the installation's optional OPEN exit routine. Either type of exit or both can
make certain changes to the DCB and DCBE.
| After possibly calling these exits, OPEN tests whether the block size field is
| zero or an exit changed LRECL or RECFM after the system calculated a block
| size when the DASD data set space was allocated. In either case, OPEN
| calculates an optimal block size.
7. Then all DCB fields are unconditionally merged into corresponding JFCB fields
if your data set is opened for output. Merging the DCB fields is done by
specifying OUTPUT, OUTIN, EXTEND, or OUTINX in the OPEN macro.
The DSORG field is only merged when it contains zeros in the JFCB. If your
data set is opened for input (INPUT, INOUT, RDBACK, or UPDAT is specified
in the OPEN macro), the DCB fields are not merged unless the corresponding
JFCB fields contain zeros.
8. The open and close routines use the updated JFCB to write the data set labels
if the data set was open for OUTPUT, OUTIN, OUTINX, or EXTEND. For
OPEN option INOUT, CLOSE uses the JFCB for information if CLOSE writes
trailer labels on tape. If the data set is not closed when your program ends, the
operating system will close it automatically.
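The step-7 merge rules above can be sketched as follows. This is an illustrative model only, not actual system code; in this sketch a zero value stands for "not specified," and the field names are simplified:

```python
# Illustrative model (not system code) of the step-7 merge of DCB fields
# into the JFCB: unconditional for output opens, fill-only-zeros for
# input opens, and DSORG merged only when the JFCB copy is zero.
def merge_dcb_into_jfcb(dcb, jfcb, open_option):
    output = open_option in ("OUTPUT", "OUTIN", "EXTEND", "OUTINX")
    merged = dict(jfcb)
    for field, value in dcb.items():
        if field == "DSORG":
            if merged.get(field, 0) == 0:   # DSORG: only if JFCB has zeros
                merged[field] = value
        elif output:
            merged[field] = value           # output: unconditional merge
        elif merged.get(field, 0) == 0:
            merged[field] = value           # input: only fill zero fields
    return merged

jfcb = {"LRECL": 0, "BLKSIZE": 6160, "DSORG": "PS"}
dcb = {"LRECL": 80, "BLKSIZE": 8000, "DSORG": "PO"}
print(merge_dcb_into_jfcb(dcb, jfcb, "OUTPUT"))
# {'LRECL': 80, 'BLKSIZE': 8000, 'DSORG': 'PS'}
print(merge_dcb_into_jfcb(dcb, jfcb, "INPUT"))
# {'LRECL': 80, 'BLKSIZE': 6160, 'DSORG': 'PS'}
```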
When the data set is closed, the data control block is restored to the condition it
had before the data set was opened (except that the buffer pool is not freed) unless
you coded RMODE31=BUFF and OPEN accepted it.
Because macros are expanded during the assembly of your program, you must
supply the macro forms to be used in processing each data set in the associated
DCB macro. You can supply buffering requirements and related information in the
DCB macro, the DD statement, or by storing the applicable data in the appropriate
field of the data control block before the end of your DCB exit routine. If the
addresses of special processing routines (EODAD, SYNAD, or user exits) are
omitted from the DCB macro, you must complete them in the DCB before they are
required.
Processing Methods
You can process a data set as input, output, or update by coding the processing
method in the OPEN macro. If the processing method parameter is omitted from
the OPEN macro, INPUT is assumed. BISAM and QISAM scan mode ignore all
the OPEN processing options. You can use OUTPUT or EXTEND when using
QISAM load mode with an indexed sequential data set.
If the data set resides on a direct access volume, you can code UPDAT in the
processing method parameter to show that records can be updated.
RDBACK is supported only for magnetic tape. By coding RDBACK, you can specify
that a magnetic tape volume containing format-F or format-U records is to be read
backward. (Variable-length records cannot be read backward.)
Note: When you read backward a tape that was recorded in IDRC (Improved Data
Recording Capability) mode, performance is severely degraded.
You can override the INOUT, OUTIN, UPDAT, or OUTINX at execution time by
using the IN or OUT options of the LABEL parameter of the DD statement, as
discussed in OS/390 MVS JCL Reference. The IN option indicates that a BSAM
data set opened for INOUT or a direct data set opened for UPDAT is to be read
only. The OUT option indicates that a BSAM data set opened for OUTIN or
OUTINX is to be used for output only.
Note: Unless allowed by the label validation installation exit, OPEN for MOD (OLD
OUTPUT/OUTIN), INOUT, EXTEND, or OUTINX cannot be processed for
ISO/ANSI Version 3 tapes, because this kind of processing updates only the
closing label of the file, causing a label symmetry conflict: a mismatched label
must not frame the other end of the file. The label validation
installation exit is described in DFSMS/MVS Installation Exits.
Processing PDSEs:
For PDSEs, INOUT is treated as INPUT. OUTIN, EXTEND, and OUTINX are
treated as OUTPUT.
In Figure 77, the data sets associated with three DCBs are to be opened
simultaneously.
OPEN (TEXTDCB,,CONVDCB,(OUTPUT),PRINTDCB, X
(OUTPUT))
Figure 77. Opening Three Data Sets at the Same Time
DCB Parameters
After you have specified the data set characteristics in the DCB macro, you can
change them only by changing the DCB during execution. The fields of the DCB
discussed below are common to most data organizations and access methods. (For
more information about the DCB fields, see DFSMS/MVS Macro Instructions for
Data Sets.)
The system can derive the best block size for DASD, tape, and spooled data
sets. The system does not derive a block size for BDAM, old, or unmovable data
sets, or when the RECFM is U. See “System-Determined Block Size” for more
information on system-determined block sizes for DASD and tape data sets.
If you specify a block size other than zero, note that there is no minimum
requirement for block size except that format-V blocks have a minimum size of 8.
However, if a data check occurs on a magnetic tape device, any block shorter than
12 bytes in a read operation, or 18 bytes in a write operation, is treated as a noise
record and lost. No check for noise is made unless a data check occurs. The
maximum block size for an ISO/ANSI Version 3 tape is 2048 bytes. The 2048-byte
limit can be overridden by a label validation installation exit. (See DFSMS/MVS
Installation Exits.)
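The noise-record rule above can be sketched as follows. This is an illustrative model only, not system code; the function name is hypothetical:

```python
# Illustrative model (not system code) of the noise-record rule: a check
# for noise is made only after a data check, and only blocks shorter than
# 12 bytes (read) or 18 bytes (write) are treated as noise and lost.
def is_noise_record(block_length, operation, data_check_occurred):
    if not data_check_occurred:
        return False              # no noise check without a data check
    limit = 12 if operation == "read" else 18
    return block_length < limit

print(is_noise_record(10, "read", True))    # True: lost as noise
print(is_noise_record(10, "read", False))   # False: no data check occurred
```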
IBM recommends that you take advantage of this feature for these reasons:
The program can write to DASD, tape or sysout without you or the program
calculating the optimal block size. DASD track capacity calculations are
complicated.
If the data set later is moved to a different DASD type, such as by DFSMShsm,
the system will recalculate an appropriate block size and reblock the data.
DASD Data Sets: When you create (allocate space for) a new DASD data set, the
system derives the optimum block size and saves it in the data set label if all of the
following are true:
Block size is not available or specified from any source. BLKSIZE=0 can be
specified.
| You specify LRECL or it is in the data class. The data set does not have to be
| system-managed.
You specify RECFM or it is in the data class. It must be fixed or variable.
| The above occurs before an application program opens the data set. Your DCB
| OPEN exit can examine the calculated block size in the DCB if no other source
| supplied it.
When a program opens a DASD data set for writing the first time since it was
created, OPEN derives the optimum block size after calling the optional DCB OPEN
exit and all the following are true:
Either the block size in the DCB is zero, or the system determined the block
size when the data set was created and the RECFM or LRECL in the DCB
differs from the original value.
LRECL is in the DCB.
RECFM is in the DCB and it is fixed or variable.
The access method is BSAM, BPAM, or QSAM.
For sequential or partitioned data sets, the system-determined block size returned
is optimal in terms of DASD space utilization. For PDSEs, the system-determined
block size is optimal in terms of I/O buffer size, because the PDSE physical block
size on the DASD is fixed and determined by PDSE.
For a compressed format data set, the system does not consider track length. The
access method simulates blocks whose length is independent of the real physical
block size. The system-determined block size is optimal in terms of I/O buffer size.
The system chooses a value for the BLKSIZE parameter as it would for an IBM
standard labeled tape as in Figure 78 on page 300. This value is stored in
DCBBLKSI and DS1BLKL in the DSCB. However, regardless of the block size found in
the DCB and DSCB, the actual size of the physical blocks written to DASD is
calculated by the system to be optimal for the device.
The system does not determine the block size for the following types of data sets:
Unmovable data sets
Data sets with a record format of U
Old data sets
Direct data sets.
| Tape Data Sets: The system can determine the optimum block size for tape data
| sets. The system sets the block size at OPEN on return from the DCB OPEN exit
| and installation DCB OPEN exit if:
| The block size in DCBBLKSI is zero
| The record length is not zero
| The record format is fixed or variable
| The tape data set is open for OUTPUT or OUTIN
| The access method is BSAM or QSAM.
Note: For programming languages, the program must specify that the file is
blocked in order to get tape system-determined block size. For example,
with COBOL, the program should specify “BLOCK CONTAINS 0
RECORDS.”
The system-determined block size depends on the record format of the tape data
set. Figure 78 on page 300 shows the block sizes that are set for tape data sets:
| Figure 78. Rules for Setting Block Sizes for Tape Data Sets or Compressed Format Data
| Sets
|
| RECFM                                     Block Size Set
| F or FS                                   DCBBLKSI = LRECL
| FB or FBS (Label type=AL Version 3)       DCBBLKSI = highest possible multiple of LRECL that is ≤ 2048 if LRECL ≤ 2048
|                                           DCBBLKSI = highest possible multiple of LRECL that is ≤ 32,760 if LRECL > 2048
| FB or FBS (Label type=AL Version 4        DCBBLKSI = highest possible multiple of LRECL that is ≤ 32,760
|   or not AL)
| V (Label type=AL Version 4 or not AL)     DCBBLKSI = LRECL + 4 (LRECL must be ≤ 32,756)
| VS (Label type=AL Version 4 or not AL)    DCBBLKSI = LRECL + 4 if LRECL ≤ 32,756
|                                           DCBBLKSI = 32,760 if LRECL > 32,756
| VB or VBS (Label type=AL Version 4        DCBBLKSI = 32,760
|   or not AL)
| D (Label type=AL)                         DCBBLKSI = LRECL + 4 (LRECL must be ≤ 32,756)
| DBS or DS (Label type=AL Version 3)       DCBBLKSI = 2048 (the maximum block size allowed)
| D or DS (Label type NL or NSL)            DCBBLKSI = LRECL + 4 (LRECL must be ≤ 32,756)
| DB or DBS (Label type NL or NSL           DCBBLKSI = 32,760
|   or AL Version 4)
| DB not spanned (Label type=AL Version 3)  DCBBLKSI = 2048 if LRECL ≤ 2044
|                                           DCBBLKSI = 32,760 if LRECL > 2044 (for AL Version 3, you
|                                           can accept this block size in the label validation installation exit)
|
| Label Types:
|   AL = ISO/ANSI labels
|   NL = no labels
|   NSL = nonstandard labels
|   SL = IBM standard labels
|   not AL = NL, NSL, or SL labels
Notes:
1. RECFM=D is not allowed for SL tapes.
2. RECFM=V is not allowed for AL tapes.
Notes:
1. For fixed-length unblocked records, LRECL must equal BLKSIZE.
2. For PDSEs with fixed-length blocked records, LRECL must be specified when
the PDSE is opened for output.
3. For the extended logical record interface (XLRI) for ISO/ANSI variable spanned
records, LRECL must be specified as LRECL=0K or LRECL=nK.
4. For compressed format data sets with fixed-length blocked records, LRECL
must be specified when the data set is opened for output.
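A few rows of Figure 78 can be sketched as follows. This is an illustrative model only, not system code, and it covers only the common case where the label type is AL Version 4 or not AL; the function name is hypothetical:

```python
# Illustrative model (not system code) of selected Figure 78 rows:
# how DCBBLKSI is set for common RECFM values on tapes whose label
# type is AL Version 4 or not AL.
def tape_blksize(recfm, lrecl):
    if recfm in ("F", "FS"):
        return lrecl                       # unblocked fixed: one record per block
    if recfm in ("FB", "FBS"):
        # highest possible multiple of LRECL that is <= 32,760
        return (32760 // lrecl) * lrecl
    if recfm == "V":
        assert lrecl <= 32756              # LRECL must be <= 32,756 for V
        return lrecl + 4
    if recfm == "VS":
        return lrecl + 4 if lrecl <= 32756 else 32760
    if recfm in ("VB", "VBS"):
        return 32760
    raise ValueError("row not modeled in this sketch")

print(tape_blksize("FB", 80))   # 32720, the highest multiple of 80 <= 32,760
print(tape_blksize("V", 1024))  # 1028
```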
For buffered tape devices, the write validity check option delays the device end
interrupt until the data is physically on tape. When you use the write validity check
option, you get none of the performance benefits of buffering and the average data
transfer rate is much less.
Note: OPTCD=W is ignored for PDSEs and for extended format data sets.
DD Statement Parameters
Each of the data set description fields of the data control block, except for direct
data sets, can be specified when your job is to be executed. Also, data set
identification and disposition, and device characteristics, can be specified at that
time. To allocate a data set, you must specify the data set name and disposition in
the DD statement. In the DD statement, you can specify a data class, storage
class, and management class, and other JCL keywords. You can specify the
classes using the JCL keywords DATACLAS, STORCLAS, and MGMTCLAS. If you
do not specify a data class, storage class, or management class, the ACS routines
assign classes based on the defaults defined by your storage administrator.
Storage class and management class can be assigned only to data sets that are to
be system-managed.
Your storage administrator uses the ACS routines to determine which data sets are
to be system-managed. The valid classes that can either be specified in your DD
statement or assigned by the ACS routines are defined in the SMS configuration by
your storage administrator. The ACS routines analyze your JCL, and if you specify
a class that you are not authorized to use or a class that does not exist, your
allocation fails. For more information on how to specify data class, storage class,
and management class in your DD statement, see OS/390 MVS JCL User's Guide.
Data Class
Data class can be specified for both system-managed and non-system-managed
data sets. It can be specified for both DASD and tape data sets. You can use data
class together with the JCL keyword LIKE for tape data sets. This simplifies
migration to and from system-managed storage. When you allocate a data set, the
ACS routines assign a data class to the data set, either the data class you specify
in your DD statement, or the data class defined as the default by your storage
administrator. The data set is allocated using the information contained in the
assigned data class. See your storage administrator for information on the data
classes available to your installation and DFSMS/MVS DFSMSdfp Storage
You can override any of the information contained in a data class by specifying the
values you want in your DD statement or dynamic allocation. A data class can
contain any of the following information:
For more information on the JCL keywords that override data class information, see
OS/390 MVS JCL User's Guide and OS/390 MVS JCL Reference.
Note: On a DD statement or dynamic allocation call you cannot directly specify via
the DSNTYPE parameter that the data set is to be an extended format data
set.
The easiest data set allocation is one that uses the data class, storage class, and
management class defaults defined by your storage administrator. The following
example shows how to allocate a system-managed data set:
//ddname DD DSNAME=NEW.PLI,DISP=(NEW,KEEP)
Note: You cannot specify the keyword DSNTYPE with the keyword RECORG in
the JCL DD statement. They are mutually exclusive.
You should not attempt to change the data set characteristics of a system-managed
data set to characteristics that make it ineligible to be system-managed. For
example, do not specify a data set organization in the DD statement as PS and,
after the data set has been allocated to a system-managed volume, change the
DCB to specify DSORG=PSU. Doing so causes your program to end abnormally.
The DCBD macro generates a dummy control section (DSECT) named IHADCB.
Each field name symbol consists of DCB followed by the first 5 letters of the
keyword subparameter for the DCB macro. For example, the symbolic name of the
block size parameter field is DCBBLKSI. (For other DCB field names, see
DFSMS/MVS Macro Instructions for Data Sets.)
The attributes of each DCB field are defined in the dummy control section. Use the
DCB macro's assembler listing to determine the length attribute and the alignment
of each DCB field.
...
OPEN (TEXTDCB,INOUT),MODE=31
...
EOFEXIT CLOSE (TEXTDCB,REREAD),MODE=31,TYPE=T
LA 10,TEXTDCB
USING IHADCB,10
MVC DCBSYNAD+1(3),=AL3(OUTERROR)
B OUTPUT
INERROR STM 14,12,SYNADSA+12
...
OUTERROR STM 14,12,SYNADSA+12
...
TEXTDCB DCB DSORG=PS,MACRF=(R,W),DDNAME=TEXTTAPE, C
EODAD=EOFEXIT,SYNAD=INERROR
DCBD DSORG=PS
...
The data set defined by the data control block TEXTDCB is opened for both input
and output. When the problem program no longer needs it for input, the EODAD
routine closes the data set temporarily to reposition the volume for output. The
EODAD routine then uses the dummy control section IHADCB to change the error
exit address (SYNAD) from INERROR to OUTERROR.
The EODAD routine loads the address TEXTDCB into register 10, the base register
for IHADCB. Then it moves the address OUTERROR into the DCBSYNAD field of
the DCB. Even though DCBSYNAD is a fullword field and contains important
information in the high-order byte, change only the 3 low-order bytes in the field.
All unused address fields in the DCB, except DCBEXLST, are set to 1 when the
DCB macro is expanded. Many system routines interpret a value of 1 in an address
field as meaning no address was specified, so use it to dynamically reset any field
you do not need.
The IHADCBE macro generates a dummy control section (DSECT) named DCBE.
For the symbols generated see DFSMS/MVS Macro Instructions for Data Sets.
All address fields in the DCBE are four bytes. All undefined addresses are set to 0.
In the following example, the data sets associated with three DCBs are to be
closed simultaneously. Because no volume positioning parameters (LEAVE,
REWIND) are specified, the position indicated by the DD statement DISP
parameter is used.
CLOSE (TEXTDCB,,CONVDCB,,PRINTDCB)
The TYPE=T parameter causes the system control program to process labels,
modify some of the fields in the system control blocks for that data set, and
reposition the volume (or current volume for multivolume data sets) in much the
same way that the normal CLOSE macro does. When you code TYPE=T, you can
specify that the volume is either to be positioned at the end of data (the LEAVE
option) or to be repositioned at the beginning of data (the REREAD option).
Magnetic tape volumes are repositioned either immediately before the first data
record or immediately after the last data record. The presence of tape labels has no
effect on repositioning.
Note: When a data control block is shared among multiple tasks, only the task
that opened the data set can close it unless TYPE=T is specified.
Figure 81, which assumes a sample data set containing 1000 records, illustrates
the relationship between each positioning option and the point where you resume
processing the data set after issuing the temporary close.
Figure 81. Record Processed When LEAVE or REREAD Is Specified for CLOSE
TYPE=T. For example, when a tape data set is open for read backward, REREAD
positions the volume so that processing resumes immediately after record 1000.
Releasing Space
The close function attempts to release unused tracks or cylinders for a data set if
all of the following are true:
The SMS management class specifies YI or CI for the partial release attribute
or you specified RLSE for the SPACE parameter on the DD statement or
RELEASE on the TSO/E ALLOCATE command.
You did not specify TYPE=T on the CLOSE macro.
The DCB was opened with the OUTPUT, OUTIN, OUTINX, INOUT or EXTEND
option and the last operation before CLOSE was WRITE (and CHECK), STOW
or PUT.
No other DCB for this data set in the address space was open.
No other address space in any system is allocated to the data set.
The data set is sequential or partitioned.
Certain functions of dynamic allocation are not currently executing in the
address space.
For a multivolume data set that is not in extended format, CLOSE releases space
only on the current volume.
Space is released on a track boundary if the extent containing the last record was
allocated in units of tracks or in units of average record or block lengths with
ROUND not specified. Space is released on a cylinder boundary if the extent
containing the last record was allocated in units of cylinders or in units of average
block lengths with ROUND specified. However, a cylinder boundary extent could be
released on a track boundary if:
The DD statement used to access the data set contains a space parameter
specifying units of tracks or units of average block lengths with ROUND not
specified, or
No space parameter is supplied in the DD statement and no secondary space
value has been saved in the data set label for the data set.
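The boundary rules above can be sketched as follows. This is an illustrative model only, not system code; "AVG" stands for allocation in units of average record or block lengths, and the parameter names are hypothetical:

```python
# Illustrative model (not system code) of the boundary on which CLOSE
# releases unused space, following the rules above.
def release_boundary(alloc_units, alloc_round,
                     dd_space_units=None, dd_round=False,
                     secondary_in_label=True):
    on_cylinder = alloc_units == "CYL" or (alloc_units == "AVG" and alloc_round)
    if not on_cylinder:
        return "track"
    # A cylinder-boundary extent can still be released on a track boundary:
    if dd_space_units == "TRK" or (dd_space_units == "AVG" and not dd_round):
        return "track"                      # DD space parameter says tracks
    if dd_space_units is None and not secondary_in_label:
        return "track"                      # no SPACE and no saved secondary
    return "cylinder"

print(release_boundary("CYL", False))                        # cylinder
print(release_boundary("CYL", False, dd_space_units="TRK"))  # track
```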
After the data set has been closed, the DCB can be used for another data set. If
you do not close the data set before a task completes, the operating system closes
it automatically. If the DCB is not available to the system at that time, the operating
system abnormally ends the task, and data results can be unpredictable. Note,
however, that the operating system cannot automatically close any open data sets
after the normal end of a program that was brought into virtual storage by the
loader. Therefore, loaded programs must include CLOSE macros for all open data
sets.
The short form parameter list must reside below 16 MB, but the calling program
can be above 16 MB.
The long form parameter list can reside above or below 16 MB. Although you can
code MODE=31 on the OPEN or CLOSE call for a DCB, the DCB must reside
below 16 MB. All non-VSAM and non-VTAM (Virtual Telecommunications Access
Method) access control blocks (ACBs) must also reside below 16 MB. Therefore,
the leading byte of the 4-byte ACB or DCB address must contain zeros. If the byte
contains something other than zeros, an error message is issued. If an OPEN was
attempted, the data set is not opened. If a CLOSE was attempted, the data set is
not closed.
You need to keep the mode specified in the MF=L and MF=E versions of the
OPEN and CLOSE macros consistent. If MODE=31 is specified in the MF=L
version of the OPEN or CLOSE macro, MODE=31 must also be coded in the
corresponding MF=E version of the macro. Unpredictable results occur if the mode
specified is not consistent.
For data sets other than system-managed with DSORG=PS or null, the program
will receive unpredictable results such as reading residual data from a prior user,
getting an I/O error or getting an ABEND. Reading residual data can cause your
program to appear to run correctly, but you can get unexpected output from the
residual data. You can use one of the following methods to make the data set
appear empty:
1. At allocation time, specify a primary allocation value of zero; such as
SPACE=(TRK,(0,10)) or SPACE=(CYL,(0,50)).
2. After allocation time, to put an end-of-file mark at the beginning of the data set,
run a program that opens the data set for output and closes it without writing
anything.
After you delete your data set containing confidential data, you can be certain
another user cannot read your residual data if you use the erase feature described
in “Erasing Residual Data” on page 48.
Open/Close/EOV Errors
| There are two classes of errors that can occur during open, close, and
| end-of-volume processing: determinate and indeterminate errors. Determinate
| errors are errors associated with an ABEND issued by OPEN, CLOSE, or EOV. For
| example, a condition associated with the 213 completion code with a return code of
| 04 might be detected during open processing, indicating that the data set label
| could not be found for a data set being opened. In general, the OPEN, CLOSE
| and other system functions attempt to react to errors with return codes and
| determinate abends; however, in some cases, the result is indeterminate errors,
| such as program checks. In such cases, you should examine the last action taken
| by your program. Pay particular attention to bad addresses supplied by your
| program or overlaid storage.
To determine the status of any DCB after an error, check the OPEN (CLOSE)
return code in register 15. See DFSMS/MVS Macro Instructions for Data Sets for
the OPEN and CLOSE return codes.
During task termination, the system issues a CLOSE macro for each data set that
is still open. If the task is terminating abnormally due to a determinate system
ABEND for an output QSAM data set, the close routines that would normally finish
processing buffers are bypassed. Any outstanding I/O requests are purged. Thus,
your last data records might be lost for a QSAM output data set.
You should close an indexed sequential data set before task termination because, if
an I/O error is detected, the ISAM close routines cannot return the problem
program registers to the SYNAD routine, causing unpredictable results.
Installation Exits
| Four installation exit routines are provided for abnormal end with ISO/ANSI Version
| 3 or Version 4 tapes.
| The label validation exit is entered during OPEN/EOV if an invalid label
| condition is detected and label validation has not been suppressed. Invalid
| conditions include incorrect alphanumeric fields, nonstandard values (for
| example, RECFM=U, block size greater than 2048, or a zero generation
| number), invalid label sequence, nonsymmetrical labels, invalid expiration date
| sequence, and duplicate data set names. However, Version 4 tapes allow block
| size greater than 2048, invalid expiration date sequence, and duplicate data set
| names.
| The validation suppression exit is entered during OPEN/EOV if volume security
| checking has been suppressed, if the volume label accessibility field contains
| an ASCII space character, or if RACF accepts a volume and the accessibility
| field does not contain an uppercase A through Z (see note below).
| The volume access exit is entered during OPEN/EOV if a volume is not RACF
| protected and the accessibility field in the volume label contains an ASCII
| uppercase A through Z (see note below).
| The file access exit is entered after locating a requested data set if the
| accessibility field in the HDR1 label contains an ASCII uppercase A through Z
| (see note below).
| Note: ISO/ANSI Version 4 tapes also allow the special characters
| !*"%&'()+,-./:;<=>?_ and the numerics 0–9.
Positioning Volumes
Volume positioning means releasing the DASD or tape volume, or rotating the tape
volume so that the read/write head is at a particular point on the tape. The
following sections discuss the steps in volume positioning: releasing the volume,
processing end-of-volume, and positioning the volume.
There are two ways to code the CLOSE macro that can result in releasing a data
set and the volume on which it resides at the time the data set is closed:
1. Together with the FREE=CLOSE parameter of the DD statement, you can
code:
CLOSE (DCB1,DISP) or
CLOSE (DCB1,REWIND)
See OS/390 MVS JCL Reference for information about how to use and code
the FREE=CLOSE parameter of the DD statement.
2. If you do not code FREE=CLOSE on the DD statement, you can code:
CLOSE (DCB1,FREE)
In either case, tape data sets and volumes are freed for use by another job step.
Data sets on direct access storage devices are freed and the volumes on which
they reside are freed if no other data sets on the volume are open. Additional
information on volume disposition is provided in OS/390 MVS JCL User's Guide .
If you issue a CLOSE macro with the TYPE=T parameter, the system does not
release the data set or volume. They can be released via a subsequent CLOSE
without TYPE=T or by the unallocation of the data set.
For additional information and coding restrictions on the CLOSE macro, see
DFSMS/MVS Macro Instructions for Data Sets.
Processing End-of-Volume
The access methods pass control to the data management end-of-volume routine
when any of the following conditions is detected:
Tape mark (input tape volume)
File mark or end of last extent (input direct access volume)
End-of-data indicator (input device other than magnetic tape or direct access
volume). An example of this would be the last card read on a card reader.
End of reel or cartridge (output tape volume)
End of extent (output direct access volume).
You can issue a force end-of-volume (FEOV) macro before the end-of-volume
condition is detected.
If the LABEL parameter of the associated DD statement shows standard labels, the
end-of-volume routine checks or creates standard trailer labels. If SUL or AUL is
specified, control is passed to the appropriate user label routine if it is specified in
your exit list.
If multiple volume data sets are specified in your DD statement, automatic volume
switching is accomplished by the end-of-volume routine. When an end-of-volume
If you perform multiple opens and closes without writing any user data in the area
of the end-of-tape reflective marker, then header and trailer labels can be written
past the marker. Access methods detect the marker. Because the creation of empty
data sets does not involve access methods, the end-of-tape marker is not detected,
which can cause the tape to run off the end of the reel.
LEAVE
positions a labeled tape to the point following the tape mark that follows the
data set trailer label group. Positions an unlabeled volume to the point following
the tape mark that follows the last block of the data set.
REREAD
positions a labeled tape to the point preceding the data set header label group.
Positions an unlabeled tape to the point preceding the first block of the data
set.
LEAVE
positions a labeled tape to the point preceding the data set header label group,
and positions an unlabeled tape to the point preceding the first block of the
data set.
REREAD
positions a labeled tape to the point following the tape mark that follows the
data set trailer label group. Positions an unlabeled tape to the point following
the tape mark that follows the last block of the data set.
DISP
specifies that a tape volume is to be disposed of in the manner implied by the
DD statement associated with the data set. Direct access volume positioning
and disposition are not affected by this parameter of the OPEN macro. There
are several dispositions that can be specified in the DISP parameter of the DD
statement; DISP can be PASS, DELETE, KEEP, CATLG, or UNCATLG.
The resultant action when an end-of-volume condition arises depends on (1)
how many tape units are allocated to the data set, and (2) how many volumes
are specified for the data set in the DD statement. The UNIT and VOLUME
parameters of the DD statement associated with the data set determine the
number of tape units allocated and the number of volumes specified. If the
number of volumes is greater than the number of units allocated, the current
volume will be rewound and unloaded. If the number of volumes is less than or
equal to the number of units, the current volume is merely rewound.
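The decision above can be sketched as follows. This is an illustrative model only, not system code; the function name is hypothetical:

```python
# Illustrative model (not system code) of the end-of-volume action under
# DISP: when more volumes are specified than tape units are allocated,
# the current volume must be rewound and unloaded so the next volume can
# be mounted on the same unit; otherwise it is merely rewound.
def eov_tape_action(volumes_specified, units_allocated):
    if volumes_specified > units_allocated:
        return "rewind and unload"
    return "rewind"

print(eov_tape_action(3, 1))  # rewind and unload
print(eov_tape_action(2, 2))  # rewind
```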
For magnetic tape volumes that are not being unloaded, positioning varies
according to the direction of the last input operation and the existence of tape
labels.
FEOV—Force End-of-Volume
The FEOV macro directs the operating system to start the end-of-volume
processing before the physical end of the current volume is reached. If another
volume has been specified for the data set, volume switching takes place
automatically. The REWIND and LEAVE volume positioning options are available.
If an FEOV macro is issued for a spanned multivolume data set that is being read
using QSAM, errors can occur when the next GET macro is issued. Make sure that
each volume begins with the first (or only) segment of a logical record. Input
routines cannot begin reading in the middle of a logical record.
The FEOV macro can be used only with BSAM or QSAM. FEOV is ignored if
issued for a SYSIN or SYSOUT data set.
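As a sketch, forcing end-of-volume on an output tape data set might look like this (OUTDCB is an assumed DCB name); the LEAVE positioning option coded on the OPEN macro controls how the current volume is positioned:

```hlasm
         OPEN  (OUTDCB,(OUTPUT,LEAVE))   Open with LEAVE positioning
         ...                             Write some blocks or records
         FEOV  OUTDCB                    Start end-of-volume processing now;
*                                        volume switching is automatic if
*                                        another volume was specified
```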
You can assign more than one buffer to a data set by associating the buffer with a
buffer pool. A buffer pool must be constructed in a virtual storage area allocated for
a given number of buffers of a given length.
The number of buffers you assign to a data set should be a trade-off against the
frequency with which you refer to each buffer. A buffer that is not referred to for a
fairly long period could be paged out. If much of this were allowed, throughput
could decrease.
Using QSAM, buffer segments and buffers within the buffer pool are controlled
automatically by the system. However, you can tell the system you are finished
processing the data in a buffer by issuing a release (RELSE) macro for input, or a
truncate (TRUNC) macro for output. This simple buffering technique can be used
to process a sequential or an indexed sequential data set.
If you use the basic access methods, you can use buffers as work areas rather
than as intermediate storage areas. You can control the buffers directly by using
the GETBUF and FREEBUF macros.
IBM recommends that, for QSAM, you let the system build the buffer pool
automatically during OPEN and that you omit the BUFL parameter. This simplifies
your program and allows concatenation of data sets in any order of block size. If
you code RMODE31=BUFF on the DCBE macro, the system attempts to get
buffers above the line. IBM also recommends not using the RELSE or QSAM
TRUNC macros, because they tend to make your program dependent on the size
of each block.
IBM recommends that, for BSAM, you allocate data areas or buffers with the
GETMAIN, STORAGE, or CPOOL macro, rather than with the BUILD or
GETPOOL macro or by the system during OPEN. Areas that you allocate yourself
can reside above the line and can be better integrated with your program's other
storage areas.
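For example, a BSAM program might obtain a data area above the line with the STORAGE macro. This is only a sketch; BUFSIZE and the register usage are assumptions:

```hlasm
BUFSIZE  EQU   32760                                    Assumed block size
         STORAGE OBTAIN,LENGTH=BUFSIZE,ADDR=(2),LOC=31  Get area above the line
         ...                                            Use it with READ/WRITE
         STORAGE RELEASE,LENGTH=BUFSIZE,ADDR=(2)        Free it when done
```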
If QSAM simple buffering is used, the buffers are automatically returned to the pool
when the data set is closed. If the buffer pool was constructed explicitly or
automatically (except when the buffers reside above the 16 MB line), you should
use the FREEPOOL macro to return the virtual storage area to the system. If you
code RMODE31=BUFF on a DCBE macro, FREEPOOL has no effect and is
optional; the system automatically frees the buffer pool. For data areas and
buffers, whether their virtual addresses are above or below 16 MB, the real
addresses can exceed 16 MB.
Note that buffer alignment provides alignment for only the buffer. If records from
ASCII magnetic tape are read and the records use the block prefix, the boundary
alignment of logical records within the buffer depends on the length of the block
prefix. If the length is 4, logical records are on fullword boundaries. If the length is
8, logical records are on doubleword boundaries.
If you use the BUILD macro to construct the buffer pool, alignment depends on the
alignment of the first byte of the reserved storage area.
A BUILD macro, issued during execution of your program, uses the reserved
storage area to build a buffer pool. The address of the buffer pool must be the
same as that specified for the buffer pool control block (BUFCB) in your DCB. The
BUFCB parameter cannot refer to an area that resides above the 16 MB line. The
buffer pool control block is an 8-byte field preceding the buffers in the buffer pool.
The number (BUFNO) and length (BUFL) of the buffers must also be specified.
The length of BUFL must be at least the block size.
When the data set using the buffer pool is closed, you can reuse the area as
required. You can also reissue the BUILD macro to reconstruct the area into a new
buffer pool to be used by another data set.
You can assign the buffer pool to two or more data sets that require buffers of the
same length. To do this, you must construct an area large enough to accommodate
the total number of buffers required at any one time during execution. That is, if
each of two data sets requires 5 buffers (BUFNO=5), the BUILD macro should
specify 10 buffers. The area must also be large enough to contain the 8-byte buffer
pool control block.
You can issue the BUILD macro in 31-bit mode, but the buffer area cannot reside
above the line and be associated with a DCB.
To access variable-length, spanned records as logical records, you must be
processing either VS/VBS or DS/DBS records with QSAM in locate mode. If you
issue the BUILDRCD macro before the data set is opened, or during your DCB exit
routine, you automatically get logical records rather than segments of spanned
records.
Only one logical record storage area is built, no matter how many buffers are
specified; therefore, you cannot share the buffer pool with other data sets that
might be open at the same time.
You can issue the BUILDRCD macro in 31-bit mode, but the buffer area cannot
reside above the line and be associated with a DCB.
The GETPOOL macro causes the system to allocate a virtual storage area to a
buffer pool. The system builds a buffer pool control block and stores its address in
the data set's DCB. The GETPOOL macro should be issued either before opening
of the data set or during your DCB's OPEN exit routine.
When using GETPOOL with QSAM, specify a buffer length (BUFL) at least as large
as the block size or omit the BUFL parameter.
If all of your GET, PUT, PUTX, RELSE, and TRUNC macros for a particular DCB
are issued in 31-bit mode, then you should consider supplying a DCBE macro with
RMODE31=BUFF.
Because a buffer pool obtained automatically is not freed automatically when you
issue a CLOSE macro (unless the system recognized your specification of
RMODE31=BUFF on the DCBE macro), you should also issue a FREEPOOL or
FREEMAIN macro (discussed in “FREEPOOL—Free a Buffer Pool”).
If the OPEN macro was issued while running under a protect key of zero, a buffer
pool that was obtained by OPEN should be released by issuing the FREEMAIN
macro instead of the FREEPOOL macro. This is necessary because the buffer pool
acquired under these conditions will be in storage assigned to subpool 252 (in user
key storage).
In Figure 82, a static storage area named INPOOL is allocated during program
assembly.
... Processing
BUILD INPOOL,10,52 Structure a buffer pool
OPEN (INDCB,,OUTDCB,(OUTPUT))
... Processing
ENDJOB CLOSE (INDCB,,OUTDCB)
... Processing
RETURN Return to system control
INDCB DCB BUFNO=5,BUFCB=INPOOL,EODAD=ENDJOB,---
OUTDCB DCB BUFNO=5,BUFCB=INPOOL,---
CNOP 0,8 Force boundary alignment
INPOOL DS CL528 Buffer pool
...
The BUILD macro, issued during execution, arranges the buffer pool into 10
buffers, each 52 bytes long. Five buffers are assigned to INDCB and 5 to
OUTDCB, as specified in the DCB macro for each. The two data sets share the
buffer pool because both specify INPOOL as the buffer pool control block. Notice
that an additional 8 bytes have been allocated for the buffer pool to contain the
buffer pool control block. The 4-byte chain pointer that occupies the first 4 bytes of
the buffer is included in the buffer, so no allowance need be made for that field.
In Figure 83, two buffer pools are constructed explicitly by the GETPOOL macros.
...
GETPOOL INDCB,10,52 Construct a 10-buffer pool
GETPOOL OUTDCB,5,112 Construct a 5-buffer pool
OPEN (INDCB,,OUTDCB,(OUTPUT))
...
ENDJOB CLOSE (INDCB,,OUTDCB)
FREEPOOL INDCB Release buffer pools after all
I/O is complete
FREEPOOL OUTDCB
...
RETURN Return to system control
INDCB DCB DSORG=PS,BFALN=F,LRECL=52,RECFM=F,EODAD=ENDJOB,---
OUTDCB DCB DSORG=IS,BFALN=D,LRECL=52,KEYLEN=10,BLKSIZE=104, C
RKP=0,RECFM=FB,---
Ten input buffers are provided, each 52 bytes long, to contain one fixed-length
record. Five output buffers are provided, each 112 bytes long, to contain two
blocked records plus an 8-byte count field (required by ISAM). Notice that both data sets
are closed before the buffer pools are released by the FREEPOOL macros. The
same procedure should be used if the buffer pools were constructed automatically
by the OPEN macro.
Controlling Buffers
You can use several techniques to control which buffers are used by your program.
The advantages of each depend to a great extent on the type of job you are doing.
The queued access methods allow simple buffering. The basic access methods
allow either direct or dynamic buffer control.
Although only simple buffering can be used to process an indexed sequential data
set, buffer segments and buffers within a buffer pool are controlled automatically by
the operating system.
Move Mode
The system moves the record from a system input buffer to your work area, or from
your work area to an output buffer.
Locate Mode
The system does not move the record. Instead, the access method macro places
the address of the next input or output buffer in register 1. For QSAM format-V
spanned records, if you have specified logical records by specifying BFTEK=A or
by issuing the BUILDRCD macro, the address returned in register 1 points to a
record area where the spanned record is assembled or segmented.
PUT-Locate Mode: The PUT-locate routine uses the value in the DCBLRECL field
to determine whether another record will fit into your buffer. Therefore, when you
write a short record, you can get the largest number of records per block by
modifying the DCBLRECL field before you issue a PUT-locate to get a buffer
segment for the short record. Perform the following steps:
1. Record the length of the next (short) record into DCBLRECL.
2. Issue PUT-locate.
3. Move the short record into the buffer segment.
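These steps might be sketched as follows, assuming OUTDCB is the output DCB, SHORTLEN is a halfword holding the short record's length, and the DCBD mapping macro has been expanded:

```hlasm
         LH    3,SHORTLEN               Length of the next (short) record
         STH   3,DCBLRECL-IHADCB+OUTDCB Step 1: record it in DCBLRECL
         PUT   OUTDCB                   Step 2: R1 -> buffer segment
         LR    2,1                      Save the segment address
*        Step 3: move the short record into the buffer segment,
*        for example with an EX of an MVC using the length in R3
```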
GET-Locate Mode: Two processing modes of the PUTX macro can be used with
a GET-locate macro. The update mode returns an updated record to the data set
from which it was read. The output mode transfers an updated record to an output
data set.
Simple Buffering
The term simple buffering refers to the relationship of segments within the buffer.
All segments in a simple buffer are together in storage and are always associated
with the same data set. Each record must be physically moved from an input buffer
segment to an output buffer segment. The record can be processed within either
segment or in a work area.
If you use simple buffering, records of any format can be processed. New records
can be inserted and old records deleted as required to create a new data set. A
record can be moved and processed as follows:
GET-move, PUT-locate
Moved from an input buffer to an output buffer where it can be processed
GET-move, PUT-move
Moved from an input buffer to a work area where it can be processed and then
moved to an output buffer
GET-locate, PUTX-update
Processed in an input buffer and returned to the same data set
The GET macro locates the next input record; its address is returned in register 1
by the system. The address is passed to the PUT macro in register 0.
The PUT macro (step B, Figure 84) specifies the address of the record in register
0. The system then moves the record to the next output buffer.
Note: The PUTX-output macro can be used in place of the PUT-move macro.
However, processing will be as described in “Exchange Buffering” on
page 322 (see PUT-substitute).
The GET macro specifies the address of the work area into which the system
moves the next input record.
A filled output buffer is not written until the next PUT macro is issued.
The PUT macro (step B, Figure 85) specifies the address of a work area from
which the system moves the record into the next output buffer.
The PUT macro (step B, Figure 86) locates the address of the next available
output buffer. Its address is returned in register 1. You must then move the record
from the input buffer to the output buffer (step C, Figure 86). Processing can be
done either before or after the move operation.
Note: A filled output buffer is not written until the next PUT macro is issued. The
CLOSE and FEOV macros write the last record of your data set by issuing
TRUNC and PUT macros.
Be careful not to issue an extra PUT before issuing CLOSE or FEOV. Otherwise,
when the CLOSE or FEOV macro tries to write your last record, the extra PUT will
write a meaningless record or produce a sequence error.
With GET-locate and PUTX-update, no movement of data takes place.
The GET macro locates the next input record to be processed and its address is
returned in register 1 by the system. You can update the record and issue a PUTX
macro that will cause the block to be written back in its original location in the data
set after all the logical records in that block have been processed.
Note that if you modify the contents of a buffer but do not issue a PUTX macro for
that record, the system can still write the modified record block to the data set. This
happens with blocked records when you issue a PUTX macro for one or more other
records in the buffer.
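A minimal update-in-place loop using GET-locate and PUTX-update might look like this sketch (UPDDCB and the label are assumed names; the DCB's EODAD routine ends the loop):

```hlasm
NEXTREC  GET   UPDDCB                   Locate mode: R1 -> next record
*        ...update the record in the buffer...
         PUTX  UPDDCB                   Flag the record for rewrite in place
         B     NEXTREC                  Continue until EODAD is entered
```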
Exchange Buffering
Exchange buffering is no longer supported. A request for it is ignored by the
system, and move mode is used instead.
RELSE—Release an Input Buffer
If you are using move mode, the buffer is made available for refilling when the
RELSE macro is issued. When you are using locate mode, the system does not
refill the buffer until the next GET macro is issued. If a PUTX macro has been
used, the block is written before the buffer is refilled.
TRUNC—Truncate an Output Buffer
If the locate mode is being used, the system assumes that a record has been
placed in the buffer segment pointed to by the last PUT macro.
The last block of a data set is truncated by the close routine. Note that a data set
containing format-F records with truncated blocks cannot be read from direct
access storage as efficiently as a standard format-F data set.
Note: A TRUNC macro issued against a PDSE does not create a short block
because the block boundaries are not saved on output. On input, the
system uses the block size specified in DCBBLKSI for reading the PDSE.
Logical records are packed into the user buffer without respect to the block
size specified when the PDSE member was created.
To help the storage administrator find programs that issue a QSAM TRUNC
macro for PDSEs, the SMF type 15 record contains an indicator that the program
did so. This record is described in OS/390 MVS System Management Facilities
(SMF).
IBM Recommends: Avoid using the QSAM TRUNC macro. Many data set copying
and backup programs reblock the records, which means they do not preserve the
block boundaries that your program might have set.
The READ and WRITE macros process blocks, not records. Thus, you must block
and unblock records. Buffers, allocated by either you or the operating system, are
filled or emptied individually each time a READ or WRITE macro is issued. The
READ and WRITE macros only start I/O operations. To ensure that the operation is
completed successfully, you must issue a CHECK, WAIT, or EVENTS macro to test
the data event control block (DECB). (The only exception is that, when the SYNAD
or EODAD routine is entered, do not issue a CHECK, WAIT, or EVENTS macro for
outstanding READ or WRITE requests.)
This technique is illustrated in Figure 107 on page 389, except for the FIND macro.
You can easily adapt this technique to use WRITE instead of READ.
READ—Read a Block
The READ macro retrieves a data block from an input data set and places it in a
designated area of virtual storage. To allow overlap of the input operation with
processing, the system returns control to your program before the read operation is
completed. You must test the DECB created for the read operation for successful
completion before the block is processed or the DECB is reused.
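A sketch of this basic pattern, with INDCB, INAREA, and DECB1 as assumed names ('S' takes the length from the DCB):

```hlasm
         READ  DECB1,SF,INDCB,INAREA,'S'  Start reading a block into INAREA
         ...                              Other processing can overlap the I/O
         CHECK DECB1                      Test completion before using INAREA
*                                         or reusing DECB1
```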
If an indexed sequential data set is being read, the block is brought into virtual
storage and the address of the record is returned to you in the DECB.
When you use the READ macro for BSAM to read a direct data set with spanned
records and keys, and you specify BFTEK=R in your DCB, the data management
routines displace record segments after the first in a record by key length. This
procedure is called offset reading. You can expect the block descriptor word and
the segment descriptor word at the same locations in your buffer or buffers,
regardless of whether you read the first segment of a record (preceded in the
buffer by its key) or a subsequent segment (which does not have a key).
You can specify variations of the READ macro according to the organization of the
data set being processed and the type of processing to be done by the system as
follows:
Indexed Sequential
K Read the data set.
KU Read for update. The system maintains the device address of the record;
thus, when a WRITE macro returns the record, no index search is required.
Direct
D Use the direct access method.
I Locate the block using a block identification.
K Locate the block using a key.
F Provide device position feedback.
X Maintain exclusive control of the block.
R Provide next address feedback.
U Next address can be a capacity record or logical record, whichever occurred
first.
WRITE—Write a Block
| The WRITE macro places a data block in an output data set from a designated
| area of virtual storage. The WRITE macro can also be used to return an updated
| record to a data set. To allow overlap of output operations with processing, the
| system returns control to your program before the write operation is completed. The
| DECB created for the write operation must be tested for successful completion
| before the DECB can be reused. For ASCII tape data sets, do not issue more than
| one WRITE on the same block, because the WRITE macro causes the data in the
| record area to be converted from EBCDIC to ASCII or, if CCSIDs are specified for
| ISO/ANSI V4 tapes, from the CCSID specified for the problem program to the
| CCSID of the data records on tape.
As with the READ macro, you can specify variations of the WRITE macro according
to the organization of the data set and the type of processing to be done by the
system as follows:
Sequential
SF Write the data set sequentially.
Indexed Sequential
K Write a block containing an updated record, or replace a record with a fixed,
unblocked record having the same key. The record to be replaced need not
have been read into virtual storage.
KN Write a new record or change the length of a variable-length record.
Direct
SD Write a dummy fixed-length record. (BDAM load mode)
SZ Write a capacity record (R0). The system supplies the data, writes the
capacity record, and advances to the next track. (BDAM load mode)
SFR Write the data set sequentially with next-address feedback. (BDAM load
mode, variable spanned)
D Use the direct access method.
I Search argument identifies a block.
K Search argument is a key.
A Add a new block.
F Provide record location data (feedback).
X Release exclusive control.
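A sequential WRITE followed by its completion test might be sketched like this (OUTDCB, OUTAREA, and DECB2 are assumed names; 'S' takes the length from the DCB):

```hlasm
         WRITE DECB2,SF,OUTDCB,OUTAREA,'S'  Start writing a block from OUTAREA
         ...                                Processing can overlap the output
         CHECK DECB2                        Test completion before reusing
*                                           OUTAREA or DECB2
```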
The DECB is examined by the check routine when the I/O operation is completed
to determine if an uncorrectable error or exceptional condition exists. If it does,
control is passed to your SYNAD routine. If you have no SYNAD routine, the task is
abnormally ended.
The check routine passes control to the appropriate exit routines specified in the
DCB or DCBE for error analysis (SYNAD) or, for sequential or partitioned data sets,
end-of-data (EODAD). It also automatically starts the end-of-volume procedures
(volume switching or extending output data sets).
| If you specify OPTCD=Q in the DCB, CHECK causes input data to be converted
| from ASCII to EBCDIC or, if CCSIDs are specified for ISO/ANSI V4 tapes, from the
| CCSID of the data records on tape to the CCSID specified for the problem
| program.
If the system calls your SYNAD or EODAD routine, then all other I/O requests for
that DCB have been terminated, although they have not necessarily been posted.
There is no need to test them for completion or issue CHECK for them.
If you use overlapped BSAM or BPAM READ or WRITE macros, your program
might run faster if you use the MULTACC parameter on the DCBE macro. If you do
that and use WAIT or EVENTS for the DCB, then you must also use the TRUNC
macro. See TRUNC information mentioned earlier and “Considering DASD and
Tape Performance” on page 360.
For BDAM and BISAM, a WAIT macro must be issued for each READ or WRITE
macro if MACRF=C is not coded in the associated DCB. When MACRF=C is
coded, a CHECK macro must be issued for each READ or WRITE macro. Because
the CHECK macro incorporates the function of the WAIT macro, a WAIT is
normally unnecessary. The EVENTS macro or the ECBLIST form of the WAIT
macro might be useful, though, in selecting which of several outstanding events
should be checked first. Each operation must then be checked or tested separately.
Example
1. You have opened an input DCB for BSAM with NCP=2, and an output DCB for
BISAM with NCP=1 and without specifying MACRF=C.
2. You have issued two BSAM READ macros and one BISAM WRITE macro.
3. You now issue the WAIT macro with ECBLIST pointing to the BISAM DECB
and the first BSAM DECB. (Because BSAM requests are serialized, the first
request must execute before the second.)
4. When you regain control, inspect the DECBs to see which has completed
(second bit on).
a. If it was BISAM, issue another WRITE macro.
b. If it was BSAM, issue a CHECK macro and then another READ macro.
Most programs do not care how much of a data set's data is on each volume
and, if there is a failure, they do not care what the failure was. A person is more
likely to want to know the cause of the failure.
In some cases you might want to take special actions before some of the system's
normal processing of an exceptional condition. One such exceptional condition is
reading a tape mark; another is writing at the end of the tape.
With BSAM, your program can detect when it has reached the end of a magnetic
tape and do some processing before BSAM's normal processing to go to another
volume. To do that, follow these steps:
1. Instead of issuing the CHECK macro, issue the WAIT or EVENTS macro. You
use the ECB, which is the first word in the DECB. The first byte is called the
post code. As a minor performance enhancement, you can skip all three
macros if the second bit of the post code already is 1.
2. Inspect the post code. Do one of the following:
a. Post code is X'7F': The READ or WRITE is successful. If you are reading
and either the tape label type is AL or OPTCD=Q is in effect, then you
must issue the CHECK macro in order to convert between ASCII and
EBCDIC. Otherwise, the CHECK is optional and you can continue normal
processing as if your program had issued the CHECK macro.
b. Post code is not X'7F': Issuing another READ or WRITE will be
unsuccessful unless you take one of the actions described below. All later
READs or WRITEs for the DCB have post codes that you cannot predict
but they are guaranteed not to have been started. If your only purpose of
issuing WAIT or EVENTS was to wait for multiple events, then issue
CHECK to handle the exceptional condition. If part or all of your purpose
was to handle a certain exceptional condition such as the volume being full,
then you can do one of these:
If the post code is X'41' and the status indicators show unit exception
and an error status, then you read a tape mark or wrote at the end of
the volume but you coincidentally got an I/O error that prevented the
block from being read or written. You can still take one of the above
actions. Issuing CHECK will cause your SYNAD routine to be entered
or an ABEND to be issued. Issuing CLOSE will bypass the SYNAD
routine but CLOSE might detect another I/O error and issue ABEND.
If the post code and status differ from the above, then the data block
probably was not read or written. Issue CHECK or CLOSE. Issuing
CHECK will cause your SYNAD routine to be entered or an ABEND to
be issued. All other WRITEs have been and will be discarded with an
unpredictable post code.
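Steps 1 and 2 might be sketched as follows (DECB1 and the labels are assumed names; the ECB is the first word of the DECB and the post code is its first byte):

```hlasm
         TM    DECB1,X'40'              Second bit of post code already on?
         BO    POSTED                   Yes: skip the WAIT
         WAIT  ECB=DECB1                Step 1: wait instead of CHECK
POSTED   CLI   DECB1,X'7F'              Step 2: inspect the post code
         BE    GOOD                     X'7F': successful READ or WRITE
*        ...otherwise handle the exceptional condition, for example by
*        issuing CHECK or CLOSE as described above
```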
Because the operating system controls buffer processing, you can use as many I/O
buffers as needed without reissuing GET or PUT macros to fill or empty buffers.
Usually, more than one input block is in storage at a time, so I/O operations do not
delay record processing.
Because the operating system overlaps I/O with processing, you need not test for
completion, errors, or exceptional conditions. After a GET or PUT macro is issued,
control is not returned to your program until an input area is filled or an output area
is available. Exits to error analysis (SYNAD) and end-of-volume or end-of-data
(EODAD) routines are automatically taken when necessary.
GET—Retrieve a Record
The GET macro is used to obtain a record from an input data set. It operates in a
logical-sequential and device-independent manner. The GET macro schedules the
filling of input buffers, unblocks records, and directs input error recovery
procedures. For spanned–record data sets, it also merges record segments into
logical records.
After all records have been processed and the GET macro detects an end-of-data
indication, the system automatically checks labels on sequential data sets and
passes control to your end-of-data (EODAD) routine. If an end-of-volume condition
is detected for a sequential data set, the system automatically switches volumes if
the data set extends across several volumes, or if concatenated data sets are
being processed.
| If you specify OPTCD=Q in the DCB or the LABEL parameter on the DD statement
| specifies ISO/ANSI labels, GET converts input data from ASCII to EBCDIC or, if
| CCSIDs are specified for ISO/ANSI V4 tapes, from the CCSID of the data records
| on tape to the CCSID specified for the problem program. This parameter is
| supported only for a magnetic tape that does not have IBM standard labels.
PUT—Write a Record
The PUT macro is used to write a record into an output data set. Like the GET
macro, it operates in a logical-sequential and device-independent manner. As
required, the PUT macro blocks records, schedules the emptying of output buffers,
and handles output error correction procedures. For sequential data sets, it also
starts automatic volume switching and label creation, and also segments records
for spanning.
| If you specify OPTCD=Q in the DCB or the LABEL parameter on the DD statement
| specifies ISO/ANSI labels, PUT causes output to be converted from EBCDIC to
| ASCII or, if CCSIDs are specified for ISO/ANSI V4 tapes, from the CCSID specified
| for the problem program to the CCSID of the data records on tape. This parameter
| is supported only for a magnetic tape that does not have IBM standard labels. If the
| tape has ISO/ANSI labels (LABEL=(,AL)), the system assumes OPTCD=Q.
If the PUT macro is directed to a card punch or printer, the system automatically
adjusts the number of records or record segments per block of format-F or format-V
blocks to 1. Thus, you can specify a record length (LRECL) and block size
(BLKSIZE) to provide an optimum block size if the records are temporarily placed
on magnetic tape or a direct access volume.
For spanned variable-length records, the block size must be equivalent to the
length of one card or one print line. Record size might be greater than block size in
this case.
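A simple move-mode copy loop with GET and PUT might be sketched like this (INDCB, OUTDCB, and RECAREA are assumed names; the loop ends when the EODAD routine gains control):

```hlasm
COPYLOOP GET   INDCB,RECAREA            Move the next input record to RECAREA
         PUT   OUTDCB,RECAREA           Write the record to the output data set
         B     COPYLOOP                 Repeat until end-of-data (EODAD)
```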
When you use the PUTX macro to update, each record is returned to the data set
referred to by a previous locate mode GET macro. The buffer containing the
updated record is flagged and written back to the same location on the direct
access storage device where it was read. The block is not written until a GET
macro is issued for the next buffer, except when a spanned record is to be
updated. In that case, the block is written with the next GET macro.
When the PUTX macro is used to write a new output data set, you can add new
records by using the PUT macro. As required, the PUTX macro blocks records,
schedules the writing of output buffers, and handles output error correction
procedures.
Parallel input processing provides a logical input record from a queue of data sets
with equal priority. The function supports QSAM with input processing, simple
buffering, locate or move mode, and fixed-, variable-, or undefined-length records.
Spanned records, track-overflow records, dummy data sets, and SYSIN data sets
are not supported.
Parallel input processing can be interrupted at any time to retrieve records from a
specific data set, or to issue control instructions to a specific data set. When the
retrieval process has been completed, parallel input processing can be resumed.
Data sets can be added to or deleted from the data set queue at any time. You
should note, however, that, as each data set reaches an end-of-data condition, the
data set must be removed from the queue with the CLOSE macro before a
subsequent GET macro is issued for the queue. Otherwise, the task could be
ended abnormally.
Use the PDAB macro to create and format a work area that identifies the maximum
number of DCBs that can be processed at any one time. If you exceed the
maximum number of entries specified in the PDAB macro when adding a DCB to
the queue with the OPEN macro, the data set will not be available for parallel input
processing. However, it will be available for sequential processing.
When issuing a parallel GET macro, register 1 must always point to a PDAB. You
can load the register or let the GET macro do it for you. When control is returned to
you, register 1 contains the address of a logical record from one of the data sets in
the queue. Registers 2 through 13 contain their original contents at the time the
GET macro was issued. Registers 14, 15, and 0 are changed.
Through the PDAB, you can find the data set from which the record was retrieved.
A fullword address in the PDAB (PDADCBEP) points to the address of the DCB.
Note that this pointer could be invalid from the time a CLOSE macro is issued
until the next parallel GET macro is issued.
In Figure 89 on page 334, not more than three data sets (MAXDCB=3 in the
PDAB macro) are open for parallel processing at a time.
...
OPEN (DATASET1,(INPUT),DATASET2,(INPUT),DATASET3, X
(INPUT),DATASET4,(OUTPUT))
TM DATASET1+DCBQSWS-IHADCB,DCBPOPEN Opened for
\ parallel processing
BZ SEQRTN Branch on no to
\ sequential routine
TM DATASET2+DCBQSWS-IHADCB,DCBPOPEN
BZ SEQRTN
TM DATASET3+DCBQSWS-IHADCB,DCBPOPEN
BZ SEQRTN
GETRTN GET DCBQUEUE,BUFFERAD,TYPE=P
LR 10,1 Save record pointer
...
... Record updated in place
...
PUT DATASET4,(1ð)
B GETRTN
EODRTN L 2,DCBQUEUE+PDADCBEP-IHAPDAB
L 2,0(0,2)
CLOSE ((2))
CLC ZEROS(2),DCBQUEUE+PDANODCB-IHAPDAB Any DCBs left?
BL GETRTN Branch if yes
...
DATASET1 DCB DDNAME=DDNAME1,DSORG=PS,MACRF=GL,RECFM=FB, X
LRECL=80,EODAD=EODRTN,EXLST=SET3XLST
DATASET2 DCB DDNAME=DDNAME2,DSORG=PS,MACRF=GL,RECFM=FB, X
LRECL=80,EODAD=EODRTN,EXLST=SET3XLST
DATASET3 DCB DDNAME=DDNAME3,DSORG=PS,MACRF=GMC,RECFM=FB, X
LRECL=80,EODAD=EODRTN,EXLST=SET3XLST
DATASET4 DCB DDNAME=DDNAME4,DSORG=PS,MACRF=PM,RECFM=FB, X
LRECL=80
DCBQUEUE PDAB MAXDCB=3
SET3XLST DC 0F'0',X'92',AL3(DCBQUEUE)
ZEROS DC X'0000'
DCBD DSORG=QS
PDABD
...
Note: The number of bytes required for the PDAB is equal to 24 + 8n, where n is
the value of the MAXDCB keyword.
If data definition statements and data sets are supplied, DATASET1, DATASET2,
and DATASET3 are opened for parallel input processing as specified in the input
processing OPEN macro. Other attributes of each data set are QSAM (MACRF=G),
simple buffering by default, locate or move mode (MACRF=L or M), fixed-length
records (RECFM=F), and exit list entry for a PDAB (X'92'). Note that both locate
and move modes can be used in the same data set queue. The mapping macros,
DCBD and PDABD, are used to refer to the DCBs and the PDAB respectively.
Following the OPEN macro, tests are made to determine whether the DCBs were
opened for parallel processing. If not, the sequential processing routine is given
control.
When one or more data sets are opened for parallel processing, the get routine
retrieves a record, saves the pointer in register 10, processes the record, and
writes it to DATASET4. This process continues until an end-of-data condition is
detected on one of the input data sets. The end-of-data routine locates the
completed input data set and removes it from the queue with the CLOSE macro. A
test is then made to determine whether any data sets remain on the queue.
Processing continues in this manner until the queue is empty.
The SYNADAF message might come in two parts, with each message being an
unblocked variable-length record. If the data set being analyzed is not a PDSE or
an extended format data set, only the first message is filled in. If the data set is a
PDSE, or an extended format data set, both messages are filled in. An 'S' in the
last byte of the first message means a second message exists. This second
message is located 8 bytes past the end of the first message.
The text of the first message is 120 characters long, and begins with a field of
either 36 or 42 blanks. You can use the blank field to add your own remarks to the
message. The text of the second message is 128 characters long and ends with a
field of 76 blanks (reserved for later use). This second message begins in the 5th
byte in the message buffer.
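The offsets just described can be expressed directly. A sketch in Python (the helper names are ours; the real buffer is built by the SYNADAF macro in assembler):

```python
FIRST_MSG_LEN = 120  # length of the first message text

def has_second_message(first_msg: str) -> bool:
    # An 'S' in the last byte of the first message means a
    # second message was filled in.
    return len(first_msg) == FIRST_MSG_LEN and first_msg[-1] == "S"

def second_message_offset(first_msg_offset: int) -> int:
    # The second message is located 8 bytes past the end of the first.
    return first_msg_offset + FIRST_MSG_LEN + 8
```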
Following is a typical message for a tape data set with the blank field omitted:
,TESTJOBb,STEP2bbb,283,TA,MASTERbb,READb,DATA CHECKbbbbb,0000015,BSAMb
Note: In the above example, 'b' means a blank.
That message shows that a data check occurred during reading of the 15th block of
a data set being processed with BSAM. The data set was identified by a DD
statement named MASTER, and was on a magnetic tape volume on unit 283. The
name of the job was TESTJOB; the name of the job step was STEP2.
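Because the message fields are comma-delimited, they can be picked apart mechanically. An illustrative sketch (the variable names are ours, not from the macro; the 'b' placeholders are replaced with real blanks):

```python
msg = ",TESTJOB ,STEP2   ,283,TA,MASTER  ,READ ,DATA CHECK     ,0000015,BSAM "

# The leading comma yields an empty first element, so skip it.
fields = [f.strip() for f in msg.split(",")[1:]]
jobname, stepname, unit, device, ddname, operation, errdesc, blockcount, access = fields

print(jobname, ddname, errdesc, access)  # TESTJOB MASTER DATA CHECK BSAM
```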
Following are two typical messages for a PDSE with the blank fields omitted:
,PDSEJOBb,STEP2bbb,283,DA,PDSEDDbb,READb,DATA CHECKbbbbb,
00000000100002,BSAMS
That message shows that a data check occurred during reading of a block referred
to by a BBCCHHR of X'00000000100002' of a PDSE being processed by BSAM.
The data set was identified by a DD statement named PDSEDD, and was on a
DASD on unit 283. The name of the job was PDSEJOB. The name of the job step
was STEP2. The 'S' following the access method 'BSAM' means that a second
message has been filled in. The second message identifies the record in which the
error occurred. The concatenation number of the data set is 3 (the third data set in
a concatenation), the TTR of the member is X'000005', and the relative record
number is 2. The SMS return and reason codes are zero, meaning that no error
occurred in SMS.
If the error analysis routine is entered because of an input error, the first 6 bytes of
the first message (bytes 8 to 13) contain binary information. If no data was
transmitted, or if the access method is QISAM, these first 6 bytes are blanks. If the
error did not prevent data transmission, these first 6 bytes contain the address of
the input buffer and the number of bytes read. You can use this information to
process records from the block. For example, you might print each record after
printing the error message. Before printing the message, however, you should
replace the binary information with EBCDIC characters.
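Replacing the binary bytes with printable characters can be as simple as converting them to hexadecimal digits. A hypothetical sketch (a real SYNAD routine would do this in assembler, for example with UNPK and TR):

```python
def printable_hex(binary6: bytes) -> str:
    # Convert the 6 binary bytes (buffer address and byte count)
    # to printable hexadecimal digits before printing the message.
    return binary6.hex().upper()

print(printable_hex(bytes([0x00, 0x01, 0x2C, 0x00, 0x00, 0x50])))  # 00012C000050
```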
The SYNADAF macro provides its own save area and makes this area available to
your error analysis routine. When used at the entry point of a SYNAD routine, it
fulfills the routine's responsibility for providing a save area. See DFSMS/MVS
Macro Instructions for Data Sets for more information on the SYNADAF macro.
There are two conditions under which a data set on a direct access device can be
shared by two or more tasks:
Two or more DCBs are opened and used concurrently by the tasks to refer to
the same, shared data set (multiple DCBs).
Only one DCB is opened and used concurrently by multiple tasks in a single
job step (a single, shared DCB).
Except for PDSEs, the system does not protect data integrity when multiple DCBs
open for output access a data set within the same job step. The system ensures
that a partitioned data set (PDS) cannot be open with the OUTPUT option by more
than one program in the parallel sysplex even if DISP=SHR is specified. If a
second program issues OPEN with the OUTPUT option for the PDS with
DISP=SHR while a DCB is still open with the OUTPUT option, the second program
will get a 213-30 ABEND. This does not apply to two programs in one address space
with DISP=OLD or MOD; such programs can overlay each other's
data. This 213-30 enforcement mechanism does not apply when OPEN is issued
with the UPDAT option. Therefore programs that issue OPEN with UPDAT and
DISP=SHR can corrupt the PDS directory. Use DISP=OLD to avoid the possibility
of abending when processing a PDS for output or of corrupting the directory when
open for update.
The DCBE must not be shared by multiple DCBs that are open. After the DCB is
successfully closed, you may open a different DCB pointing to the same DCBE.
Job control language (JCL) statements and macros that help you ensure the
integrity of the data sets you want to share among the tasks that process them are
provided in the operating system. Figure 90 on page 338 and Figure 91 on
page 339 show which JCL and macros you should use, depending on the access
method your task is using and the mode of access (input, output, or update).
Figure 90 describes the processing procedures you should use if more than one
DCB has been opened to the shared data set. The DCBs can be used by tasks in
the same or different job steps.
The purpose of the RLSE value on the SPACE keyword of the DD statement is to
cause CLOSE to free unused space when the data set is closed. The
system does not perform this function if the DD has DISP=SHR or if more than one
DCB is open to the data set.
MULTIPLE DCBs
┌───────────┬──────────────────────────────────────────────────────────────┐
│ │ ACCESS METHOD │
│ACCESS MODE├───────────┬─────────────┬──────────┬────────────┬────────────┤
│ │BSAM,BPAM │ QSAM │ BDAM │ QISAM │ BISAM │
│ │ │ │ │ │ │
├───────────┼───────────┼─────────────┼──────────┼────────────┼────────────┤
│ Input │DISP = SHR │DISP = SHR │DISP = SHR│DISP = SHR │DISP = SHR │
├───────────┼───────────┼─────────────┼──────────┼────────────┼────────────┤
│ │No facility│No facility │DISP = SHR│No facility │DISP = SHR │
│ Output │(not PDSE) │(not PDSE) │ │ │and ENQ on │
│ │DISP = SHR │DISP = SHR │ │ │Data Set │
│ │ (PDSE) │ (PDSE) │ │ │ │
├───────────┼───────────┼─────────────┼──────────┼────────────┼────────────┤
│ │DISP = SHR │DISP = SHR │DISP = SHR│DISP = SHR │DISP = SHR │
│ │user must │and guarantee│BDAM will │and ENQ on │and ENQ on │
│ │ENQ on │discrete │ENQ on │data set and│data set and│
│ Update │block │blocks │block │guarantee │guarantee │
│ │ │ │ │discrete │discrete │
│ │ │ │ │blocks │blocks │
└───────────┴───────────┴─────────────┴──────────┴────────────┴────────────┘
DISP=SHR:
Each job step sharing an existing data set must code SHR as the subparameter of the
DISP parameter on the DD statement for the shared data set to allow the steps to
execute concurrently. For additional information about ensuring data set integrity, see
OS/390 MVS JCL User's Guide. For additional information about sharing PDSEs, see
“Sharing PDSEs” on page 426. If the tasks are in the same job step, DISP=SHR is not
required.
No facility:
There are no facilities in the operating system for sharing a data set under these
conditions.
ENQ on block:
If you are updating a shared data set (specified by coding DISP=SHR on the DD
statement) using BSAM or BPAM, your task and all other tasks must serialize processing
of each block of records by issuing an ENQ macro before the READ macro and a DEQ
macro after the CHECK macro that follows the WRITE macro you issued to update the
record. If you are using BDAM, it provides for enqueuing on a block using the READ
exclusive option that is requested by coding MACRF=X in the DCB and an X in the type
operand of the READ and WRITE macros. (For an example of the use of the BDAM
macros, see “Exclusive Control for Updating” on page 504.)
Figure 90. JCL, Macros, and Procedures Required to Share a Data Set Using Multiple
DCBs
Figure 91 on page 339 describes the macros you can use to serialize processing
of a shared data set when a single DCB is being shared by several tasks in a job
step.
ENQ:
When a data set is being shared by two or more tasks in the same job step (all that use
the same DCB), each task processing the data set must issue an ENQ macro on a
predefined resource name before issuing the macro or macros that begin the I/O
operation. Each task must also release exclusive control by issuing the DEQ macro at
the next sequential instruction following the I/O operation. If, however, you are
processing an indexed sequential data set sequentially using the SETL and ESETL
macros, you must issue the ENQ macro before the SETL macro and the DEQ macro
after the ESETL macro. Note also that if two tasks are writing different members of a
partitioned data set, each task should issue the ENQ macro before the FIND macro and
issue the DEQ macro after the STOW macro that completes processing of the member.
Refer to OS/390 MVS Assembler Services Reference, for more information on the ENQ
and DEQ macros. For an example of the use of ENQ and DEQ macros with BISAM, see
Figure 162 on page 502.
No action required:
See “Sharing DCBs” on page 508.
ENQ on block:
When updating a shared direct data set, every task must use the BDAM exclusive
control option that is requested by coding MACRF=X in the DCB macro and an X in the
type operand of the READ and WRITE macros. See “Exclusive Control for Updating” on
page 504 for an example of the use of BDAM macros. Note that all tasks sharing a data
set must share subpool 0 (see the ATTACH macro description in OS/390 MVS
Assembler Services Reference).
Key sequence:
Tasks sharing a QISAM load mode DCB must ensure that the records to be written are
presented in ascending key sequence; otherwise, a sequence check will result in (1)
control being passed to the SYNAD routine identified by the DCB, or (2) if there is no
SYNAD routine, termination of the task.
Figure 91. Macros and Procedures Required to Share a Data Set Using a Single DCB
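The ENQ/DEQ bracketing described above is the same discipline as acquiring and releasing a lock around a complete I/O sequence. The following is a loose analogy only (the real mechanism is the ENQ and DEQ macros on a predefined resource name; the Python lock below is purely illustrative):

```python
import threading

# Stand-in for the predefined ENQ resource name; purely illustrative.
member_lock = threading.Lock()

def update_member(do_io):
    member_lock.acquire()      # analogous to ENQ before FIND
    try:
        do_io()                # analogous to FIND ... WRITE ... STOW
    finally:
        member_lock.release()  # analogous to DEQ after STOW
```

Each task brackets its whole FIND-through-STOW sequence, so no two tasks update a member concurrently.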
Data sets can also be shared both ways at the same time. More than one DCB can
be opened for a shared data set, while more than one task can be sharing one of
the DCBs. Under this condition, the serialization techniques specified for indexed
sequential and direct data sets in Figure 90 on page 338 satisfy the requirement.
For sequential and partitioned data sets, the techniques specified in Figure 90 and
Figure 91 must be used.
Open and Close of Data Sets Shared by More than One Task:
When more than one task is sharing a data set, the following restrictions must be
recognized. Failure to comply with these restrictions endangers the integrity of the
shared data set.
All tasks sharing a DCB must be in the job step that opened the DCB (see
Chapter 24, “Sharing Non-VSAM Data Sets” on page 337).
Any task that shares a DCB and starts any input or output operations using that
DCB must ensure that all those operations are complete before terminating the
task. A CLOSE macro issued for the DCB ends all input and output operations.
A DCB can be closed only by the task that opened it.
At some installations, direct access storage devices are shared by two or more
independent computing systems. Tasks executed on these systems can share data
sets stored on the device. Accessing a shared data set or the same storage area
on shared devices by multiple independent systems requires careful planning.
Without proper intersystem communication, data integrity could be endangered. For
details, see OS/390 MVS Authorized Assembler Services Guide.
To ensure data integrity in a shared DASD environment, your system must have
Global Resource Serialization (GRS) active or a functionally equivalent global
serialization method. Refer to OS/390 MVS Planning: Global Resource Serialization
for details on GRS.
PDSEs
See “Sharing PDSEs” on page 426 for more information on sharing PDSEs.
DD statement. This is true for both input and output, and is especially important
when you are using more than one access method. Any action on one DCB
that alters the JFCB affects the other DCBs and thus can cause unpredictable
results. Therefore, unless the parameters of all DCBs using one DD statement
are the same, you should use separate DD statements.
Associated data sets for the IBM 3525 Card Punch can be opened in any
order, but all data sets must be opened before any processing can begin.
Associated data sets can be closed in any order, but, after a data set has been
closed, I/O operations cannot be performed on any of the associated data sets.
See Programming Support for the IBM 3505 Card Reader and the IBM 3525
Card Punch for more information.
The OPEN macro gets user control blocks and user storage in the protection
key in which the OPEN macro is issued. Therefore, any task that processes the
DCB (such as Open, Close, or EOV) must be in the same protection key.
On systems that assure data set integrity across multiple systems, you may be
authorized to create checkpoints on shared DASD via the RACF facility class
“IHJ.CHKPT.volser,” where “volser” is the volume serial of the volume to contain
the checkpoint data set. Data set integrity across multiple systems is provided when
enqueues on the major name “SYSDSN,” minor name “data set name” are treated
as global resources (propagated across all systems in the complex) using
multisystem global resource serialization (GRS) or an equivalent function.
If the system programmer cannot ensure data set integrity on any shared DASD
volumes, the system programmer need not take any further action (for instance, do
not define any profile to RACF which would cover IHJ.CHKPT.volser). You cannot
take checkpoints on shared DASD volumes.
If data set integrity is assured on all shared DASD volumes and the system
programmer wants to allow checkpointing on any of these volumes, build a facility
class generic profile with a name of IHJ.CHKPT.* with UACC of READ.
If data set integrity cannot be assured on some of the volumes, build discrete
profiles for each of these volumes with profile names of IHJ.CHKPT.volser with
UACC of NONE. These “volume-specific” profiles are in addition to the generic
profiles described above to allow checkpoints on shared DASD volumes for which
data set integrity is assured.
If the system programmer wants to allow some, but not all, users to create
checkpoints on the volumes, build the generic profiles with UACC of NONE and
permit READ access only to those specific users or groups of users.
Refer to OS/390 Security Server (RACF) Command Language Reference for details
on the RACF commands and to OS/390 Security Server (RACF) Security
Administrator's Guide for use of and planning for RACF options.
If you do not have RACF or an equivalent product, the system programmer may
write an MVS Router Exit which will be invoked by SAF and may be used to
achieve the above functions. Refer to the OS/390 MVS Authorized Assembler
Services Guide for details on how to write such an exit.
When sharing data sets, you must consider the restrictions of search direct. Search
direct can cause unpredictable results when multiple DCBs are open and the data
sets are being shared, and one of the applications is adding records. You might get
the wrong record. Also, you might receive unpredictable results if your application
has a dependency that is incompatible with the use of search direct.
With spooling, unit record devices are used at full speed if enough buffers are
available. They are used only for the time needed to read, print, or punch the data.
Without spooling, the device is occupied for the entire time it takes the job to
process. Also, because data is stored instead of being transmitted directly, output
can be queued in any order and scheduled by class and by priority within each
class.
Scheduling provides the highest degree of system availability through the orderly
use of system resources that are the objects of contention.
A SYSIN data set cannot be opened by more than one DCB at the same time; that
would result in an S013 abend.
If no record format is specified for the SYSIN data set, a record format of fixed is
supplied. Spanned records (RECFM=VS or VBS) cannot be specified for SYSIN.
The minimum record length for SYSIN is 80 bytes. For undefined records, the
entire 80-byte image is treated as a record. Therefore, a read of less than 80 bytes
results in the transfer of the entire 80-byte image to the record area specified in the
READ macro. For fixed and variable-length records, an abend results if the LRECL
is less than 80 bytes.
The logical record length value of SYSIN (JFCLRECL field in the JFCB) is filled in
with the logical record length value of the input data set. This logical record length
value is increased by 4 if the record format is variable (RECFM=V or VB).
The logical record length can be a size other than the size of the input device, if the
SYSIN input stream is supplied by an internal reader. JES supplies a value in the
JFCLRECL field of the JFCB if that field is found to be zero.
The block size value (the JFCBLKSI field in the JFCB) is filled in with the block size
value of the input data set. This block size value is increased by 4 if the record
format is variable (RECFM=V or VB). JES supplies a value in the JFCBLKSI field of
the JFCB if that field is found to be 0.
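The 4-byte adjustment for variable-format records reflects the record and block descriptor words. As an arithmetic sketch (the function names are ours):

```python
def jfcb_lrecl(input_lrecl: int, recfm_variable: bool) -> int:
    # JFCLRECL is the input data set's LRECL, increased by 4
    # when the record format is variable (RECFM=V or VB).
    return input_lrecl + (4 if recfm_variable else 0)

def jfcb_blksize(input_blksize: int, recfm_variable: bool) -> int:
    # JFCBLKSI is likewise increased by 4 for variable format.
    return input_blksize + (4 if recfm_variable else 0)

print(jfcb_lrecl(80, True))     # 84
print(jfcb_blksize(800, True))  # 804
```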
JES allows multiple opens to the same SYSOUT data set, and the records are
interspersed. However, you need to ensure that your application serializes the data
set. For more information on serialization, see Chapter 24, “Sharing Non-VSAM
Data Sets” on page 337.
From open to close of a particular data control block, you should not change the
DCB indicators of the presence or type of control characters. When directed to disk
or tape, all the DCBs for a particular data set should have the same type of control
characters. For a SYSOUT data set, the DCBs can have either type of control
character or none. The result depends on the ultimate destination of the data set.
For local printers and punches, each record is processed according to its control
character.
When you use QSAM with fixed-length blocked records or BSAM, the DCB block
size parameter does not have to be a multiple of logical record length (LRECL) if
the block size is specified in the SYSOUT DD statement. Under these conditions, if
block size is greater than, but not a multiple of, LRECL, the block size is reduced to
the nearest lower multiple of LRECL when the data set is opened.
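The block-size reduction described above is simple arithmetic; a sketch (the function name is ours):

```python
def adjusted_blksize(blksize: int, lrecl: int) -> int:
    # If the block size is greater than, but not a multiple of,
    # LRECL, OPEN reduces it to the nearest lower multiple of LRECL.
    if blksize > lrecl and blksize % lrecl != 0:
        return (blksize // lrecl) * lrecl
    return blksize

print(adjusted_blksize(500, 80))  # reduced to 480
```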
You can specify blocking for SYSOUT data sets, even though your LRECL is not
known to the system until execution. Therefore, the SYSOUT DD statement of the
go step of a compile-load-go procedure can specify a block size without the block
size being a multiple of LRECL.
You should omit the DEVD parameter in the DCB macro, or you should code
DEVD=DA.
You can use the SETPRT macro to affect the attributes and scheduling of a
SYSOUT data set.
You can supply table reference characters (TRCs) for SYSOUT data sets by
specifying OPTCD=J in the DCB. When the data set is printed, if the printer does
not support TRCs, the system discards them.
This chapter discusses the following aspects of working with sequential data sets:
Creating
Retrieving and concatenating
Modifying
Achieving device independence
Improving performance
Determining the length of a block
Writing a short block
Buffering techniques and macros
Using Hiperbatch
Characteristics of extended format data sets
Accessing HFS files.
Creating
Use either the queued or the basic sequential access method to store and retrieve
the records of a sequential data set. To create a sequential data set on magnetic
tape or DASD, you must:
Code DSORG=PS or PSU in the DCB macro.
Code a DD statement to describe the data set. (See OS/390 MVS JCL
Reference.) If SMS is implemented on your system, specify a data class in the
DD statement or allow the ACS routines to assign a data class.
Or, create the data set using the TSO or access method services ALLOCATE
command. (See DFSMS/MVS Access Method Services for ICF.) If SMS is
implemented on your system, specify the DATACLAS parameter or allow the
ACS routines to assign a data class.
Or, call dynamic allocation (SVC 99) from your program. (See OS/390 MVS
Authorized Assembler Services Guide.) If SMS is implemented on your system,
specify the data class text unit or allow the ACS routines to assign a data class.
Process the data set with an OPEN macro (the data set is opened for
OUTPUT, OUTIN, OUTINX, or EXTEND), a series of PUT or WRITE and
CHECK macros, and then a CLOSE macro.
You can take advantage of a data class for data sets that are system-managed or
not system-managed.
OPEN (INDATA,,OUTDATA,(OUTPUT))
NEXTREC GET INDATA,WORKAREA Move mode
AP NUMBER,=P'1'
UNPK COUNT,NUMBER Record count adds 6
PUT OUTDATA,COUNT bytes to each record
B NEXTREC
ENDJOB CLOSE (INDATA,,OUTDATA)
...
COUNT DS CL6
WORKAREA DS CL50
NUMBER DC PL4'0'
SAVE14 DS F
INDATA DCB DDNAME=INPUTDD,DSORG=PS,MACRF=(GM),EODAD=ENDJOB
OUTDATA DCB DDNAME=OUTPUTDD,DSORG=PS,MACRF=(PM)
...
If the record length (LRECL) does not change during processing, however, only one
move is necessary; you can process the record in the input buffer segment. A
GET-locate provides a pointer to the current segment.
Retrieving
In retrieving a sequential data set on magnetic tape or direct access storage
devices, you must:
Code DSORG=PS or PSU in the DCB macro.
Tell the system where your data set is located (by coding a DD statement, or
by calling dynamic allocation (SVC 99)).
Process the data set with an OPEN macro (data set is opened for input,
INOUT, RDBACK, or UPDAT), a series of GET or READ macros, and then a
CLOSE macro.
...
OPEN (INDATA,,OUTDATA,(OUTPUT),ERRORDCB,(OUTPUT))
NEXTREC GET INDATA Locate mode
LR 2,1 Save pointer
AP NUMBER,=P'1'
UNPK 0(6,2),NUMBER Process in input area
PUT OUTDATA Locate mode
MVC 0(50,1),0(2) Move record to output buffer
B NEXTREC
ENDJOB CLOSE (INDATA,,OUTDATA,,ERRORDCB)
...
NUMBER DC PL4'0'
INDATA DCB DDNAME=INPUTDD,DSORG=PS,MACRF=(GL),EODAD=ENDJOB
OUTDATA DCB DDNAME=OUTPUTDD,DSORG=PS,MACRF=(PL)
ERRORDCB DCB DDNAME=SYSOUTDD,DSORG=PS,MACRF=(PM),RECFM=V, C
BLKSIZE=128,LRECL=124
SAVE2 DS F
...
When the change from one data set to another is made, label exits are taken as
required; automatic volume switching is also performed for multiple volume data
sets. When your program reads past the end of a data set, control passes to your
end-of-data-set (EODAD) routine only if the last data set in the concatenation has
been processed.
To save time when processing two consecutive sequential data sets on a single
tape volume, specify LEAVE in your OPEN macro, or DISP=(OLD,PASS) on the
DD statement even if you otherwise would code DISP=(OLD,KEEP).
example, if all the format-V records are the same length, you can specify format-F
when reading. If you specify format-U, you can read any format.
Your program indicates whether the system is to treat the data sets as “like” or
“unlike” by setting the bit DCBOFPPC, described in “Concatenating Unlike Data
Sets” on page 354. The DCB macro assembles this bit as 0, which indicates “like”
data sets.
To be a “like” data set, a sequential data set must meet all the following conditions:
All the data sets in a concatenation should have compatible record formats.
They are all processed with the record format of the first data set in the
concatenation (as noted in “Persistence of DCB Fields” on page 351). For
example a data set with unblocked records can be treated as having short
blocked records. A data set with fixed-blocked-standard records (format-FBS)
can be treated as having just fixed-blocked records (format-FB), but the reverse
cannot work.
Having compatible record formats does not ensure that “like” processing is
successful. For example, if the record format of the first data set in the
concatenation is fixed (format-F) and a concatenated data set has fixed-blocked
records (format-FB), then unpredictable results, such as I/O errors or open
abends, can occur, but the reverse should work.
The results of concatenating data sets of different characteristics can also be
dependent upon the actual data record size and whether or not they are SMS
managed. For example, if the first data set is format-F with a BLKSIZE and
LRECL of 80 and the concatenated data set is format-FB, not SMS managed
with a BLKSIZE of 800 and LRECL of 80, then if the actual data size of all the
records of the concatenated data set is 80 bytes, these data sets can be
processed successfully. However, if the actual data size of a record is greater
than 80 bytes, an I/O error occurs reading that record from the concatenated
data set. If, on the other hand, the concatenated data set is SMS managed, the
data from the first data set is processed but an open failure (ABEND013-60)
occurs when EOV switches to the concatenated data set, even though the
actual data size of all the records can be compatible.
If incompatible record formats are detected in the concatenation and BSAM is
being used, the system issues a warning message; see “BSAM Block Size with
Like Concatenation” on page 352.
The LRECL is the same as the LRECL of the preceding data set. With format-V or -VB
records, the new data set can have a smaller LRECL than is in the DCB.
All the data set block sizes should be compatible. For format-F or -FB records,
each block size should be an integral multiple of the same LRECL value.
If you code the BLKSIZE parameter in the DCB macro or on the first DD
statement, then the block size of each data set must be less than or equal to
that block size. Note that if you specify DCB parameters such as BLKSIZE,
LRECL, or BUFNO when allocating a data set after the first one, they have no
effect when your program uses “like” concatenation except as noted in “BSAM
Block Size with Like Concatenation” on page 352.
If you wish, you can specify a large BLKSIZE for the first data set to
accommodate a later data set with blocks of that size.
You can concatenate DASD data sets that are accessed by QSAM or BSAM in
any order of block size. If you are using QSAM, you must use
system-created buffers for the data set. The size of each system-created buffer
equals the block size rounded up to a multiple of 8. The system-created
buffers are used to process all data sets in the concatenation unless the next
data set's BLKSIZE is larger than the BLKSIZE of the data set just read. In that
case, the buffers are freed by end-of-volume processing and new
system-created buffers are obtained. This also means the buffer address
returned by GET is only guaranteed valid until the next GET or FEOV macro is
issued because the buffer pool can have been freed and a new system-created
buffer pool obtained during end-of-volume concatenation processing. For SMS
managed data set processing, see “SMS Managed Data Sets with Like
Concatenation” on page 352. For BSAM processing, see “BSAM Block Size
with Like Concatenation” on page 352.
For QSAM, if a data set after the first one is on magnetic tape and has a block
size larger than all prior specifications, the volume must have standard or ANSI
tape labels.
The device is a direct access storage device, tape, or SYSIN device, as is the
device of the preceding data set. You can concatenate a tape data set to a
DASD data set, or a DASD data set to a tape data set, for example. However,
you cannot concatenate a tape data set to a card reader.
Note: An extended format data set can be regarded as having the same
characteristics as a physical sequential data set.
If mixed tape and DASD, the POINT or CNTRL macros are not used—neither P
nor C was coded in the DCB MACRF parameter.
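One arithmetic detail from the list above — each system-created QSAM buffer is the block size rounded up to a multiple of 8 — can be sketched as follows (the function name is ours):

```python
def qsam_buffer_size(blksize: int) -> int:
    # Round the block size up to the next multiple of 8 bytes.
    return (blksize + 7) // 8 * 8

print(qsam_buffer_size(121))  # 128
```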
With “like” concatenation the system can change the following when going to
another data set:
BLKSIZE and BUFL for QSAM
Field DCBDEVT in the DCB (device type)
TRTCH (tape recording technique)
With or without concatenation the system sets LRECL in the DCB for each QSAM
GET macro when reading format-V, -D, or -U records, except with XLRI. GET
causes an abend if it encounters a record that is longer than the LRECL that was in
effect at the completion of OPEN.
Note: If your program indicates “like” concatenation (by taking no special action
regarding DCBOFPPC) and one of the “like” concatenation rules is broken,
the results are unpredictable. A typical result is an I/O error, resulting in an
abend, or entry to the SYNAD routine. The program might even appear to
run correctly.
If the buffer pool was obtained automatically by the open routine for QSAM, the
data set transition process may free the buffer pool and obtain a new one for the
next concatenated data set. This means the buffer address returned by GET is only
guaranteed valid until the next GET or FEOV macro is issued because the buffer
pool is freed and a new system-created buffer pool obtained during end-of-volume
concatenation processing. The procedure does not free the buffer pool for the last
concatenated data set. It is also recommended that you free the system-created
buffer pool before attempting to reopen the DCB.
If you have enabled a larger block size, OPEN will search later concatenated data
sets for the largest acceptable block size and store it in the DCB. A block size is
acceptable if it comes from a source that does not also have a RECFM or LRECL
inconsistent with the RECFM or LRECL already in the DCB.
For format-F records if a data set has an LRECL that differs from the value in the
DCB, then the block size for that data set will not be considered during OPEN.
For format-V records if a data set has an LRECL that is larger than the value in the
DCB, then the block size for that data set will not be considered during OPEN.
A RECFM value of U in the DCB is consistent with any other RECFM value.
If OPEN finds an inconsistent RECFM, it will issue a warning message. OPEN does
not examine DSORG when testing consistency. It does not issue an ABEND,
because you might not read as far as that data set or you might later turn on the
DCB unlike-attributes bit.
Note: Even though RECFMs of concatenated data sets can be considered
compatible by BSAM (and you do not receive the expected warning
message) that does not guarantee they can be successfully processed. It
still can be necessary to treat them as “unlike.”
OPEN will test the JFCB for each data set after the one being opened. The JFCB
contains information coded when the data set was allocated and information that
OPEN might have stored there before the data set was dynamically reconcatenated.
All of the above processing occurs for any data set that is acceptable to BSAM.
The OPEN that you issue will not read tape labels for data sets after the first. The
system later will read those tape labels but it would be too late for the system to
discover a larger block size then.
For each data set whose JFCB contains a block size of 0 and the data set is on
permanently-resident DASD, OPEN will obtain the data set characteristics from the
data set label (DSCB). If they are acceptable and the block size is larger, OPEN
will copy the block size to the DCB.
For each JFCB or DSCB that this function of OPEN examines, if the block size
differs from the DCB block size and the DCB has fixed-standard, then OPEN will
turn off the DCB's standard bit.
If DCBBUFL, either from the DCB macro or the first DD statement, is nonzero, then
that value will be an upper limit for BLKSIZE from another data set. No block size
from a later DD statement or DSCB will be used during OPEN if it is larger than
that DCBBUFL value. OPEN will ignore that larger block size on the assumption
that you will turn on the unlike-attributes bit later, will not read to that data set, or
the data set does not actually have blocks that large.
When OPEN finds an inconsistent record format, it will issue this message:
IEC034I INCONSISTENT RECORD FORMATS rrr AND iii,ddname+cccc,dsname
where:
rrr specifies the record format established at OPEN.
iii specifies the record format found to be inconsistent. It is in a JFCB that has a
nonzero BLKSIZE or in a DSCB.
cccc specifies the number of the DD statement after the first one, where +1 means
the second data set in the concatenation.
Unless you have some way of determining the characteristics of the next data set
before it is opened, you should not reset the DCBOFLGS field to indicate “like”
attributes during processing. When you concatenate data sets with unlike attributes
(that is, turn on the DCBOFPPC bit of the DCBOFLGS field), the EOV exit is not
taken for the first volume of any data set. If the program has a DCB OPEN exit it is
called at the beginning of every data set in the concatenation.
When a new data set is reached and DCBOFPPC is on, you must reissue the GET
or READ macro that detected the end of the data set. This is because with QSAM
the new data set can have a longer record length or with BSAM the new data set
can have a larger block size. You might have to allocate larger buffers. Figure 94
on page 355 illustrates a possible routine for determining when a GET or READ
must be reissued.
[The original figure is a flowchart with two columns. The problem program
(PROBPROG): open the DCB; set the reread switch off; issue READ and CHECK,
or GET; if the reread switch is on, turn it off and reissue the request. The DCB
open exit (DCBEXIT): set the reread switch on; set bit 4 of OFLGS to 1; return
to OPEN.]
Figure 94. Reissuing a READ or GET for Unlike Concatenated Data Sets
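The reread logic of Figure 94 might be coded along the following lines. This is a
minimal sketch, not the exact routine from the figure: the one-byte switch
REREADSW and all labels are invented for illustration, and standard DCB open
exit linkage is assumed (register 1 points to the DCB; return through register 14).

```hlasm
* Sketch of the Figure 94 technique (labels are illustrative).
         OPEN  (INDCB,(INPUT))    Open exit runs for data set 1
         NI    REREADSW,X'FE'     Clear switch set during OPEN
NEXTREC  GET   INDCB,RECAREA      Read the next record
         TM    REREADSW,X'01'     Did a new data set just open?
         BZ    PROCESS            No - the record is valid
         NI    REREADSW,X'FE'     Yes - turn the switch off and
         B     NEXTREC            reissue the GET (buffers might
*                                 need to be larger by now)
PROCESS  DS    0H                 Process the record
         ...
         B     NEXTREC
* DCB open exit: entered at the start of every data set in the
* concatenation when DCBOFPPC is on.
OPENEXIT OI    REREADSW,X'01'     Set the reread switch on
         OI    DCBOFLGS-IHADCB(1),X'08'  Bit 4 of OFLGS (DCBOFPPC)
         BR    14                 Return to OPEN
REREADSW DC    X'00'              Reread switch
```

The switch is cleared immediately after OPEN because the open exit also runs for
the first data set, when no reread is needed.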
You might need to take special precautions if the program issues multiple READ
macros without intervening CHECK or WAIT macros for those READs. Do not
issue WAIT or CHECK macros for READ requests that were issued after the READ
that detected end-of-data. These restrictions do not apply to data set to data set
transition of “like” data sets, because no OPEN or CLOSE operation is necessary
between data sets.
Modifying
You can modify a sequential data set in three ways:
Changing the data in existing records (updates in place).
Adding new records to the end of the data set (extends the data set).
Opening for OUTPUT or OUTIN without DISP=MOD (replaces the data set's
contents). The effect of this is the same as when creating the data set, as
described in “Creating” on page 347.
Updating in Place
When you update a data set in place, you read, process, and write records back to
their original positions without destroying the remaining records on the track. The
following rules apply:
You must specify the UPDAT option in the OPEN macro to update the data set.
To perform the update, you can use only the READ, WRITE, CHECK, NOTE, and
POINT macros, or only the GET and PUTX macros. To use PUTX, code
MACRF=(GL,PL) on the DCB macro.
You cannot delete any record or change its length
The READ and WRITE macros must be execute forms that refer to the same data
event control block (DECB). The DECB must be provided by the list forms of the
READ or WRITE macros. (The execute and list forms of the READ and WRITE
macros are described in DFSMS/MVS Macro Instructions for Data Sets.)
Note: Compressed format data sets cannot be opened with the UPDAT option;
therefore, updates in place are not allowed.
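Following the rules above, the GET/PUTX form of update in place might be
sketched as shown below. The ddname, labels, and processing logic are
illustrative assumptions, not taken from this book.

```hlasm
         OPEN  (UPDDCB,(UPDAT))   Open for update in place
LOOP     GET   UPDDCB             Locate mode: R1 -> record
         LR    2,1                Save the record address
         ...                      Change data in the record
*                                 (its length may not change)
         PUTX  UPDDCB             Write it back in place
         B     LOOP
ENDDATA  CLOSE (UPDDCB)           EODAD routine gets control here
         ...
UPDDCB   DCB   DDNAME=UPDDD,DSORG=PS,MACRF=(GL,PL),EODAD=ENDDATA
```

MACRF=(GL,PL) requests locate-mode GET and PUTX, as the rules above require.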
See Figure 108 on page 391 for an example of an overlap achieved by having a
read or write request outstanding while each record is being processed.
Extending
If you want to add records at the end of your data set, you must open the data set
for output with DISP=MOD specified in the DD statement, or specify the EXTEND
or OUTINX option of the OPEN macro. You can then issue PUT or WRITE macros
to the data set.
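As a sketch of extending with the OPEN EXTEND option (the ddname and labels
here are illustrative assumptions):

```hlasm
         OPEN  (OUTDCB,(EXTEND))  Position after the last record
ADDLOOP  PUT   OUTDCB,OUTREC      Add a record at the end
         ...
         CLOSE (OUTDCB)           Update the data set labels
         ...
OUTDCB   DCB   DDNAME=OUTDD,DSORG=PS,MACRF=PM
```

Coding DISP=MOD on the DD statement and opening for OUTPUT has the same
effect as the EXTEND option shown here.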
The system ensures that the data set labels on prior volumes do not have the
last-volume indicator on. The volume with the last-volume bit on is not necessarily
the last volume containing space for the data set or indicated in the catalog.
When you later extend the data set with DISP=MOD or OPEN with EXTEND or
OUTINX, OPEN must determine the volume containing the last user data.
With a system-managed data set, OPEN tests each volume from the first to the
last until it finds the last-used volume.
With a non-system-managed data set, the system follows a procedure that is less
logical. First OPEN tests the data set label on the last volume identified in the
JFCB or JFCB extension. If the last-volume indicator is on, OPEN assumes it to be
the last-used volume. If the indicator is not on, OPEN searches the volumes from
the first to the second to last. It stops when it finds a data set label with the
last-volume indicator on. This algorithm is for compatibility with older MVS levels
that supported mountable DASD volumes.
An extended format data set allocated with a single stripe can be extended to
additional volumes and become a multi-volume data set. All the rules that
apply to physical sequential multi-volume data sets also apply to single-striped
multi-volume extended format data sets.
Device-Dependent Macros
The following list of macros tells you which macros and macro parameters are
device-dependent. Consider only the logical layout of your data record without
regard for the type of device used. Even if your data is on a direct access volume,
treat it as if it were on a magnetic tape. For example, when updating, you must
create a new data set rather than attempt to update the existing data set.
OPEN
Specify INPUT, OUTPUT, INOUT, OUTIN, OUTINX, or EXTEND. The
parameters RDBACK and UPDAT are device-dependent and cause an
abnormal end if directed to a device of the wrong type.
READ
Specify forward reading (SF) only.
WRITE
Specify forward writing (SF) only; use only to create new records or modify
existing records.
NOTE/POINT
These macros are valid for both magnetic tape and direct access volumes. To
maintain independence of the device type and of the type of data set (physical
sequential, extended format, PDSE, and so forth), do not test or modify the
word returned by NOTE or calculate a word to pass to POINT.
BSP
This macro is valid for magnetic tape or direct access volumes. However, its
use would be an attempt to perform device-dependent action.
CNTRL/PRTOV
These are device-dependent macros.
DCB Subparameters
Coding MODE, CODE, TRTCH, KEYLEN, or PRTSP in the DCB macro makes the
program device-dependent. However, they can be specified in the DD statement.
MACRF
Specify R/W or G/P. Processing mode can also be specified.
DEVD
Specify DA if any direct access storage devices might be used. Magnetic tape
and unit-record equipment DCBs will fit in the area provided during assembly.
Specify unit-record devices only if you expect never to change to tape or direct
access storage devices.
KEYLEN
Can be specified on the DD statement if necessary.
DSORG
Specify sequential organization (PS or PSU) to get the full DCB expansion.
OPTCD
This parameter is device-dependent; specify it in the DD statement.
SYNAD
Any device-dependent error checking is automatic. Generalize your routine so
that no device-dependent information is required.
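Following the rules in this list, a device-independent DCB might be coded as
below. This is a sketch; the ddname is an illustrative assumption, and the
device-dependent options (KEYLEN, OPTCD, and so forth) are left for the DD
statement.

```hlasm
* Device-independent DCB: DSORG=PS for the full expansion,
* DEVD=DA so the DCB area also fits tape and unit-record
* devices, MACRF=(R,W) for BSAM forward read and write.
SEQDCB   DCB   DDNAME=SEQDD,DSORG=PS,DEVD=DA,MACRF=(R,W)
```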
The I/O performance is improved by reducing both the processor time and the
channel start/stop time required to transfer data to or from virtual storage. Some
factors that affect performance are:
Address space type (real or virtual)
| Block size
BUFNO for QSAM
| The number of overlapped requests for BSAM (NCP=number of channel
| programs) and whether the DCB points to a DCBE that has MULTACC coded
Other activity on the processor and channel
| Device class (DASD, tape, etc.) and type (IBM 3390, 3490, etc.)
| Data set type (for example, PDSE, HFS, extended format)
| Number of stripes if extended format.
The system defaults to chained scheduling for nondirect access storage devices,
except for printers and format-U records, and for those cases in which it is not
allowed.
Chained scheduling is most valuable for programs that require extensive input and
output operations. Because a data set using chained scheduling can monopolize
available time on a channel in a V=R region, separate channels should be
assigned, if possible, when more than one data set is to be processed.
A request for exchange buffering is not honored, but defaults to move mode
and, therefore, has no effect on either a request for chained scheduling or a
default to chained scheduling. Exchange buffering is an obsolete DCB option.
A request for chained scheduling is ignored and normal scheduling used if any
of the following are met when the data set is opened:
– CNTRL macro is to be used.
– Embedded VSE checkpoint records on tape input are bypassed
(OPTCD=H).
– Data set is spooled (SYSIN or SYSOUT).
– NCP=1 with BSAM or BUFNO=1 with QSAM
– It is a print data set, or any associated data set for the 3525 Card Punch.
(For more information about programming the 3525, see Programming
Support for the IBM 3505 Card Reader and the IBM 3525 Card Punch.)
The number of channel program segments that can be chained is limited to the
value specified in the NCP parameter of BSAM DCBs, and to the value
specified in the BUFNO parameter of QSAM DCBs.
When the data set is a printer, chained scheduling is not supported when
channel 9 or channel 12 is in the carriage control tape or FCB.
When chained scheduling is being used, the automatic skip feature of the
PRTOV macro for the printer will not function. Format control must be achieved
by ANSI or machine control characters. (Control characters are discussed in
more detail under “Using Optional Control Characters” on page 287, and in
DFSMS/MVS Macro Instructions for Data Sets.)
When you are using QSAM under chained scheduling to read variable-length,
blocked, ASCII tape records (format-DB), you must code BUFOFF=L in the
DCB for that data set.
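A DCB for such a data set might be sketched as follows; the ddname is an
illustrative assumption. OPTCD=C requests chained scheduling, and BUFOFF=L
indicates that the block prefix is the 4-byte block length.

```hlasm
* QSAM input DCB for format-DB ASCII tape records read under
* chained scheduling; BUFOFF=L is required in this case.
TAPEDCB  DCB   DDNAME=TAPEIN,DSORG=PS,MACRF=GM,RECFM=DB,          X
               BUFOFF=L,OPTCD=C
```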
| If you are using BSAM with the chained scheduling option to read format-DB
| records, and have coded a value for the BUFOFF parameter other than
| BUFOFF=L, the input buffers will be converted from ASCII to EBCDIC for
| Version 3 (or to the specified character set (CCSID) for Version 4) as usual, but
| the record length returned to the DCBLRECL field will equal the block size, not
| the actual length of the record read in. The record descriptor word (RDW), if
| present, will not have been converted from ASCII to binary.
| In QSAM, the value of BUFNO determines how many buffers will be chained
| together before I/O is initiated. The default value of BUFNO is described in
| “Constructing a Buffer Pool Automatically” on page 316. When enough buffers are
| available for reading ahead or writing behind, QSAM attempts to read or write those
| buffers in successive revolutions of the disk.
In BSAM, the first READ or WRITE instruction initiates I/O unless the system is
honoring your MULTACC specification on the DCBE macro. Subsequent I/O
requests (without an associated CHECK or WAIT instruction) are put in a queue.
When the first I/O request completes normally, the queue is checked for pending
I/O requests and the system builds a channel program for as many of the pending
I/O requests as possible. The number of I/O requests that can be chained together
is limited to the number of requests that can be handled in one I/O event. This limit
is less than or equal to the NCP value.
For better performance with BSAM, use the technique described in “Using
Overlapped I/O with BSAM” on page 325 and Figure 107 on page 389. For
maximum performance, also use the MULTACC parameter on the DCBE macro.
For BSAM sequential and partitioned data sets, specifying a nonzero MULTACC
value on a DCBE macro can result in more efficient channel programs. You can
also choose to code a nonzero MULTSDN value. If MULTSDN is nonzero and
DCBNCP is zero, OPEN will determine a value for NCP and store that value in
DCBNCP before giving control to the DCB open exit. If MULTACC is nonzero and
your program uses the WAIT or EVENTS macro on a DECB or depends on a
POST exit for a DECB, then you must precede that macro or dependence by a
CHECK or TRUNC macro.
Note: For compressed format data sets, MULTACC is ignored since all buffering
is handled internally by the system.
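The DCB and DCBE coding described above might be sketched as follows. The
ddname and the MULTSDN and MULTACC values are illustrative assumptions;
choose values appropriate to your device and workload.

```hlasm
* BSAM DCB pointing to a DCBE that requests a system-chosen
* NCP (MULTSDN) and channel program accumulation (MULTACC).
INDCB    DCB   DDNAME=INDD,DSORG=PS,MACRF=R,DCBE=INDCBE
INDCBE   DCBE  MULTSDN=2,MULTACC=1
* Because NCP is omitted (zero), OPEN computes an NCP value
* from MULTSDN and stores it in DCBNCP before the DCB open
* exit gets control. Because MULTACC is nonzero, precede any
* WAIT or EVENTS on a DECB (or reliance on a POST exit) with
* a CHECK or TRUNC macro.
```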
b. Obtain the residual count that has been stored in the status area.
The residual count is in the halfword, 14 bytes from the start of the
status area.
c. Subtract this residual count from the number of data bytes
requested to be read by the READ macro. If 'S' was coded as the
length parameter of the READ macro, the number of bytes
requested is the value of DCBBLKSI at the time the READ was
issued. If the length was coded in the READ macro, this value is
the number of data bytes and it is contained in the halfword 6 bytes
from the beginning of the DECB. The result of the subtraction is the
length of the block read.
4. For undefined-length records, the actual length of the record that was
read is returned in the DCBLRECL field of the data control block.
Because of this use of DCBLRECL, you should omit LRECL. You can
use this method only with BSAM or BPAM, or after issuing a QSAM
GET macro.
Figure 95 gives an example of determining the length of the record when using
BSAM to read undefined-length records.
...
OPEN (DCB,(INPUT))
LA DCBR,DCB
USING IHADCB,DCBR
...
READ DECB1,SF,DCB,AREA1,'S'
READ DECB2,SF,DCB,AREA2,50
...
CHECK DECB1
LH WORK1,DCBBLKSI Block size at time of READ
L WORK2,DECB1+16 Status area address
SH WORK1,14(WORK2) WORK1 has block length
...
CHECK DECB2
LH WORK1,DECB2+6 Length requested
L WORK2,DECB2+16 Status area address
SH WORK1,14(WORK2) WORK1 has block length
...
MVC DCBBLKSI,LENGTH3 Length to be read
READ DECB3,SF,DCB,AREA3
...
CHECK DECB3
LH WORK1,LENGTH3 Block size at time of READ
L WORK2,DECB3+16 Status area address
SH WORK1,14(WORK2) WORK1 has block length
...
DCB DCB ...RECFM=U,NCP=2,...
DCBD
...
Figure 95. One Method of Determining the Length of the Record When Using BSAM to
Read Undefined-Length or Blocked Records
DCBBLKSI must be changed before issuing the WRITE macro, and must be a
multiple of the LRECL parameter in the DCB. Once this is done, any subsequent
WRITE macros issued will write records with the new block length until the block
size is changed again.
This technique works for all data sets supported by BSAM. With extended format
data sets, the system actually writes all blocks in the data set as the same size, but
returns on a READ the length specified on the WRITE for the block.
Note: You can create short blocks for PDSEs but their block boundaries are not
saved when the data set is written to DASD. Therefore, if your program is
dependent on short blocks, do not use a PDSE. See “Processing
Restrictions for PDSEs” on page 402 for more information on using short
blocks with PDSEs.
Using Hiperbatch
Hiperbatch is an extension to QSAM designed to improve performance in specific
situations. Hiperbatch uses the data lookaside facility (DLF) services to provide an
alternate fast path method of making data available to many batch jobs. Through
Hiperbatch, applications can take advantage of the performance benefits of
MVS/ESA without changing existing application programs or the JCL used to run
them. For further information on using Hiperbatch, see MVS Hiperbatch Guide.
Refer to OS/390 MVS System Commands for the data lookaside facility
commands.
Hiperbatch                                 Striping
─────────────────────────────────────────  ─────────────────────────────────────────
Uses Hiperspace                            Requires certain hardware
Improved performance requires multiple     Performance is best with only one
reading programs                           program at a time
Relatively few data sets in the system     Larger number of data sets can be
can use it at once                         used at once
QSAM only                                  QSAM and BSAM
Large data sets with high I/O activity are the best candidates for striped data sets.
Data sets defined as extended format sequential must be accessed by means of
BSAM or QSAM and not EXCP or BDAM.
Extended format data sets have a maximum of 123 extents on each volume.
(Physical sequential data sets have a maximum of 16 extents on each volume.)
On a volume that has more than 65,535 tracks, a physical sequential data set
cannot occupy more than 65,535 tracks. An extended format data set can
occupy any number of tracks.
Extended format data sets provide improved data integrity by detecting control
unit padding. This type of data padding can occur when the processor loses
electrical power while writing a block; it can be due to an operator CANCEL
command or a time-out, or it can occur as a result of an ABEND when
PURGE=QUIESCE was not specified on the active ESTAE macro. On input,
SAM provides an I/O error instead of returning bad data when it detects an
error due to control unit padding.
The system can detect an empty extended format data set. If an extended
format data set is opened for input and that data set has never been written to,
the first read detects end-of-data and the EODAD routine is entered. You can
override this with the PASTEOD parameter on the DCBE macro.
If you specify SUL in the LABEL value when creating an extended format data
set, the system will treat it as SL. No space for user labels will be allocated.
| All physical blocks in an extended format data set are the same size but when
| a program reads a block, the access method returns the length that the writing
| program wrote. The block size of the data set is determined to be DCBBLKSI
| set at OPEN for QSAM, or the maximum of DCBBLKSI at OPEN and
| DCBBLKSI at first WRITE for BSAM (for RECFM=U where the length is coded
| in the DECB, the length is taken from the DECB instead of DCBBLKSI at first
| WRITE). The system pads short blocks passed by the user so that full blocks
| are written. See “Reading a Short Block in an Extended Format Data Set” on
| page 363 for more information. However, an attempt to write a block with a
| larger value for DCBBLKSI (or a larger length in the DECB for RECFM=U) will
| fail with abend 002-68.
Extended format data sets allocated with more than one stripe cannot be
extended to another set of volumes; they cannot become multi-volume data
sets. The term multi-volume should not be used for a multi-striped data set,
since the volumes comprising the multiple stripes appear to the user to be one
logical volume.
An extended format data set allocated with a single stripe can be extended to
additional volumes and become a multi-volume data set. All the rules
that apply to physical sequential multi-volume data sets also apply to
single-stripe multi-volume extended format data sets.
| Types of Compression
| Two compression techniques are available for sequential extended format data sets
| allocated in the compressed format. They are DBB based compression and tailored
| compression. These techniques determine the method used to derive a
| compression dictionary for the data sets.
| Specify the Type of Compression You Want to Use: The form of compression
| the system is to use for newly created sequential compressed format data sets can
| be specified at the data set level and/or at the installation level. At the data set
| level, you can specify TAILORED or GENERIC on the COMPACTION option in the
| data class. When you do not specify at the data set level, then it is based on the
| COMPRESS(TAILORED|GENERIC) parameter which may be found in the
| IGDSMSxx member of SYS1.PARMLIB. A specification at the data set level takes
| precedence over the one in SYS1.PARMLIB.
| COMPRESS(GENERIC) refers to generic DBB based compression. This is the
| default. COMPRESS(TAILORED) refers to tailored compression. When this
| member is activated via SET SMS=xx or IPL, new compressed data sets will be
| created in the form specified. The COMPRESS parameter in PARMLIB is ignored
| for VSAM KSDSs. For a complete description of this parameter, refer to
| DFSMS/MVS DFSMSdfp Storage Administration Reference.
Unless otherwise stated, all characteristics which apply to extended format data
sets continue to apply to compressed format data sets. However, due to the
differences in data format, there are some differences in characteristics.
When issued for a compressed format data set, the BSAM CHECK macro does
not ensure that data is written to DASD. However, it does ensure that the data
in the buffer has been moved to an internal system buffer, and that the user
buffer is available to be reused.
The block locator token (BLT) returned by NOTE and used as input to POINT
continues to be the relative block number (RBN) within each logical volume of
the data set. A multi-striped data set is seen by the user as a single logical
volume. Therefore, for a multi-striped data set the RBN is relative to the
beginning of the data set and incorporates all stripes. To provide compatibility,
this RBN will refer to the logical user blocks within the data set as opposed to
the physical blocks of the data set.
However, due to the NOTE/POINT limitation of the 3-byte token, an attempt to
READ or WRITE a logical block whose RBN value exceeds 3 bytes will result
in an abend if the DCB specifies NOTE/POINT (MACRF=P).
When the data set is created, the system attempts to derive a compression
token once enough data is written to the data set (between 8K-64K for DBB
compression and much more for tailored compression). If the system is
successful in deriving a compression token, an attempt is made to compress
any additional records written to the data set. However, if an efficient
compression token could not be derived, then the data set is marked as
non-compressible and there will be no attempt to compress any records written
to the data set. However, if created with tailored compression, it is still possible
to have system blocks imbedded within the data set although a tailored
dictionary could not be derived.
If the compressed format data set is closed before the system is able to derive
a compression token, then the data set is marked as non-compressible.
Additional OPENs for output will not attempt to generate a compression token
once the data set has been marked as non-compressible.
A compressed data set can be created using the LIKE keyword and not just via
a DATACLAS.
For a partial release request on an extended sequential data set, CLOSE will do a
partial release on each stripe. After the partial release, the size of some stripes can
differ slightly from others. This difference will be, at most, only one track or cylinder.
If you use BSAM, you can set a larger NCP value or allow the system to calculate
an NCP value by means of the DCBE macro MULTSDN parameter. You can also
request accumulation by means of the DCBE macro MULTACC parameter.
DCBENSTR in the DCBE macro will tell the number of stripes for the current data
set. The DCBE and IHADCBE macros are described in DFSMS/MVS Macro
Instructions for Data Sets.
If you use QSAM, you can request more buffers via the BUFNO parameter. You
can choose for your program to calculate BUFNO according to the number of
stripes. Your program can test DCBENSTR in the DCBE during the DCB open exit
routine.
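Such an open exit might be sketched as shown below. This is a hedged example:
it assumes the DCB specifies a DCBE, that the DCBD and IHADCBE mapping
macros are included, and that four buffers per stripe (an arbitrary choice for
illustration) is a reasonable value.

```hlasm
* DCB open exit: entered with R1 -> DCB; return via R14.
* Sets BUFNO in proportion to the stripe count before OPEN
* builds the buffer pool.
OPENX    L     15,DCBDCBE-IHADCB(,1)  DCBE address from the DCB
         SR    0,0
         IC    0,DCBENSTR-DCBE(,15)   Stripe count (zero if the
         LTR   0,0                    data set is not striped)
         BNZ   HAVESTR
         LA    0,1                    Treat as a single stripe
HAVESTR  SLL   0,2                    BUFNO = 4 per stripe (assumed)
         STC   0,DCBBUFNO-IHADCB(,1)  OPEN builds the pool from this
         BR    14                     Return to OPEN
```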
Existing programs need to be changed and reassembled if you want any of the
following:
To use 31-bit mode SAM and BPAM.
To ask the system to determine an appropriate NCP value. Use the MULTSDN
parameter of the DCBE macro.
To get maximum benefit from BSAM performance chaining. You must change
the program by adding the DCBE parameter to the DCB macro and including
the DCBE macro with the MULTACC parameter. If the program uses WAIT or
EVENTS or a POST exit (instead of, or in addition to, the CHECK macro), then
your program must issue the TRUNC macro whenever the WAIT or EVENTS
macro is about to be issued or the POST exit is depended upon to get control.
| Extended format data sets can use more than 64K tracks on each volume. They
| use DS1TRBAL in conjunction with DS1LSTAR to represent a TTR which can
| exceed 64K tracks. Thus, for extended format data sets, DS1TRBAL does not
| reflect the amount of space remaining on the last track written. Programs that rely
| on DS1TRBAL to determine the amount of free space must first check whether the
| data set is an extended format data set.
HFS files are POSIX-conforming files that reside in HFS data sets. They are
byte-oriented, rather than record-oriented as MVS data sets are. They are identified
and accessed by specifying the “path” leading to them.
When using BSAM or QSAM, the system attempts to compatibly support an HFS
file such that the application program sees the file as if it were a single-volume
physical sequential data set residing on DASD. Transparent to the application is the
fact that the file is treated internally by the system as a subsystem data set.
Since HFS files are not actually stored as physical sequential data sets, the system
cannot simulate all the characteristics of a sequential data set. Because of this,
there are certain macros and services which have incompatibilities or restrictions
when dealing with HFS files.
Record Processing
Block boundaries are not maintained within the file. If you write a short block
other than at the end of the file, a later read at that point will return a full block
(except for RECFM=VB, which always returns a single record).
Record boundaries are not maintained within binary files except with
fixed-length records.
Text files are presumed to be EBCDIC.
Repositioning functions (such as POINT, BSP, CLOSE TYPE=T) are not
allowed for FIFO or character special files.
The default record format (DCBRECFM) is U for input and output.
The default blocksize (DCBBLKSI) on input is 80. There is no default for output.
The default LRECL (DCBLRECL) on input is 80. There is no default for output.
When RECFM=F(B(S))
– and the file is accessed as binary, if the last record in the file is smaller
than LRECL bytes, it will be padded with zeros when read.
– and the file is accessed as text, if any record in the file is smaller than
LRECL bytes, it will be padded with blanks when read. If any record is
longer than LRECL bytes, it will result in an I/O error due to incorrect length
when read.
When RECFM=V(B)
– and the file is accessed as binary, each record will be returned as length
LRECL, except, possibly, for the last one.
– and the file is accessed as text, if any record in the file consists of zero
bytes (that is, a text delimiter is followed by another text delimiter), the
record returned will consist of an RDW and no data bytes. If any record is
longer than LRECL bytes, it will result in an I/O error due to incorrect length
when read.
When RECFM=U
– and the file is accessed as binary, each record will be returned as length
blocksize, except, possibly, for the last one.
– and the file is accessed as text, if any record in the file consists of zero
bytes (that is, a text delimiter is followed by another text delimiter), the
record returned will consist of one blank. If any record is longer than block
size bytes, it will result in an I/O error due to incorrect length when read.
Restrictions
The following are restrictions associated with this new function:
OPEN for UPDAT is not supported.
EXCP is not supported.
DCB RECFM=V(B)S (spanned record format) is not supported.
DCB MACRF=P (NOTE/POINT) is not supported for FIFO or character special
files or if PATHOPTS=OAPPEND is specified.
NOTE/POINT does not support a file that contains more than 16 megarecords
minus two. A NOTE after 16 megarecords minus two (16,777,214) returns an
invalid value of x'FFFFFF'. A POINT to an invalid value causes the next
READ/WRITE to fail with an I/O error, unless preceded by another POINT.
In a binary file with RECFM=V(B) or RECFM=U, a POINT to other than the first
block in the file results in an abend.
BSP can only be issued following a successful CHECK (for READ or WRITE),
NOTE, or CLOSE TYPE=T LEAVE request.
The following services and utilities do not support HFS files. Unless stated
otherwise, they will return an error or unpredictable value when issued for an HFS
file.
OBTAIN
SCRATCH
RENAME
TRKCALC
Sector Convert Routine
PARTREL
PURGE by DSID is ignored.
The above require a DSCB or UCB. HFS files do not have DSCBs or valid UCBs.
Sequential Concatenation
As with physical sequential data sets, when an HFS file is found within a sequential
concatenation (where the unlike attributes bit is not set), the system will force the
use of the LRECL, RECFM, and BUFNO from the previous data set. As with
current BSAM sequential concatenation, the system also forces the same NCP
and BLKSIZE. For QSAM, the BLKSIZE of each data set will be used.
When you read an HFS file in a concatenation from a task different from the one
that issued the OPEN, the transition out of the file must be made in the same task
as the task that made the transition into the file.
See DFSMS/MVS Macro Instructions for Data Sets, for additional information
regarding BSAM and QSAM interfaces and HFS files.
[Figure 96 is a diagram of a partitioned data set: the directory, containing an
entry for each of members A, B, C, and K, is followed by the members
themselves. The space once occupied by deleted member C cannot be reused,
and an available area follows the last member.]
Figure 96. A Partitioned Data Set
Each member has a unique name, 1 to 8 characters long, stored in a directory that
is part of the data set. The records of a given member are written or retrieved
sequentially. See DFSMS/MVS Macro Instructions for Data Sets for the macros
used with partitioned data sets.
The main advantage of using a partitioned data set is that, without searching the
entire data set, you can retrieve any individual member after the data set is
opened. For example, in a program library that is always a partitioned data set,
each member is a separate program or subroutine. The individual members can be
added or deleted as required. When a member is deleted, the member name is
removed from the directory, but the space used by the member cannot be reused
until the data set is reorganized; that is, compressed using the IEBCOPY utility.
The directory, a series of 256-byte records at the beginning of the data set,
contains an entry for each member. Each directory entry contains the member
name and the starting location of the member within the data set, as shown in
Figure 96. Also, you can specify as many as 62 bytes of information in the entry.
The directory entries are arranged by name in alphanumeric collating sequence.
The starting location of each member is recorded by the system as a relative track
address (from the beginning of the data set) rather than as an absolute track
address. Thus, an entire data set that has been compressed can be moved without
changing the relative track addresses in the directory. The data set can be
considered as one continuous set of tracks regardless of where the space was
actually allocated.
If there is not sufficient space available in the directory for an additional entry, or
not enough space available within the data set for an additional member, or no
room on the volume for additional extents, no new members can be stored. A
directory cannot be extended and a partitioned data set cannot cross a volume
boundary.
Partitioned data set member entries vary in length and are blocked into 256-byte
blocks. Each block contains as many complete entries as will fit in a maximum of
254 bytes. Any remaining bytes are left unused and are ignored. Each directory
block contains a 2-byte count field that specifies the number of active bytes in a
block (including the count field). In Figure 97, each block is preceded by a
hardware-defined key field containing the name of the last member entry in the
block, that is, the member name with the highest binary value. Figure 97 shows the
format of the block returned when using BSAM to read the directory.
Each directory block, as read with BSAM, consists of:

Key           An 8-byte hardware-defined key containing the name of the last
              member entry in the block.
Count         A 2-byte field giving the number of active bytes in the block
              (including the count field).
Entries       As many complete member entries as fit in the remaining 254
              bytes.

Each member entry consists of:

Member Name   8 bytes.
TTR           3 bytes; pointer to the first record of the member.
C             1 byte; flag and user data length (see below).
User Data     Optional; 0-31 halfwords (maximum 62 bytes), which can begin
              with TTRN note list pointers.

The C field has this layout:

Bits     0               1-2             3-7
┌───────────────┬───────────────┬─────────────────────────┐
│ 1 if          │ Number of     │ Number of User          │
│ Name is an    │ User Data     │ Data Halfwords          │
│ Alias         │ TTRNs         │                         │
└───────────────┴───────────────┴─────────────────────────┘
Member Name
specifies the member name or alias. It contains as many as 8 alphanumeric
characters, left-justified, and padded with blanks if necessary.
TTR
is a pointer to the first block of the member. TT is the number of the track,
starting from the beginning of the data set, and R is the number of the block,
starting from the beginning of that track.
C specifies the number of halfwords contained in the user data field. It can also
contain additional information about the user data field, as shown below:
Bits     0               1-2             3-7
┌───────────────┬───────────────┬─────────────────────────┐
│ 1 if          │ Number of     │ Number of User          │
│ Name is an    │ User Data     │ Data Halfwords          │
│ Alias         │ TTRNs         │                         │
└───────────────┴───────────────┴─────────────────────────┘
You can use the user data field to provide variable data as input to the STOW
macro. If pointers to locations within the member are provided, they must be 4
bytes long and placed first in the user data field. The user data field format is as
follows:
User Data
┌──────┬──────┬──────┬──────────────┐
│ TTRN │ TTRN │ TTRN │ Optional │
└──────┴──────┴──────┴──────────────┘
TT is the relative track address of the note list or the area to which you are
pointing.
R is the relative block number on that track.
N is a binary value that shows the number of additional pointers contained in a
note list pointed to by the TTR. If the pointer is not to a note list, N=0.
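The entry layout described above can be modeled in a few lines. The following is an illustrative sketch, not an IBM interface: the function name, the dictionary field names, and the sample bytes are all invented for the example.

```python
# Illustrative sketch (assumed names, not an IBM interface): decoding one
# member entry from a PDS directory block, following the format above.
def parse_entry(block, offset):
    """Decode the directory entry at 'offset'; return (entry, next_offset)."""
    name = block[offset:offset + 8].decode("ascii").rstrip()  # 8-byte name
    ttr = block[offset + 8:offset + 11]                       # 3-byte TTR
    c = block[offset + 11]                                    # C byte
    entry = {
        "name": name,
        "ttr": ttr,
        "alias": bool(c & 0x80),          # bit 0: 1 if the name is an alias
        "ttrn_count": (c >> 5) & 0x03,    # bits 1-2: user data TTRN count
        "halfwords": c & 0x1F,            # bits 3-7: user data halfwords
    }
    entry["user_data"] = block[offset + 12:offset + 12 + 2 * entry["halfwords"]]
    return entry, offset + 12 + 2 * entry["halfwords"]

# Hypothetical entry: "MEMBERA", TTR X'000002', two halfwords of user data
raw = b"MEMBERA " + b"\x00\x00\x02" + bytes([0x02]) + b"\x12\x34\x56\x78"
entry, nxt = parse_entry(raw, 0)
```

The next entry in the block would begin at the returned offset, since entries are packed end to end within the 254-byte data area.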
If a note list exists, as shown above, the list can be updated automatically when the
data set is moved or copied by a utility program such as IEHMOVE. Each 4-byte
entry in the note list has the following format:
┌────────┐
│ TTRX │
└────────┘
TT is the relative track address of the area to which you are pointing.
R is the relative block number on that track.
X is available for any use.
To place the note list in the partitioned data set, you must use the WRITE macro.
After checking the write operation, use the NOTE macro to determine the address
of the list and place that address in the user data field of the directory entry.
The linkage editor builds a note list for the load modules in overlay format. The
addresses in the note list point to the overlay segments that are read into the
system separately.
Note: Note lists are not supported for PDSEs. If a partitioned data set is to be
converted to a PDSE, the partitioned data set should not use note lists.
If your data set will be large or updated extensively, it might be best to
allocate a large amount of space; a partitioned data set cannot extend beyond
one volume. If the data set will be small or seldom changed, let the system
calculate the space requirements to avoid wasted space or the time used to
re-create the data set.
| VSAM, extended format, HFS, and PDSE data sets can be greater than 64K tracks.
You can calculate the block size yourself and specify it in the BLKSIZE
parameter of the DCB. For example, if the average record length is close to or
less than the track length, or if the track length exceeds 32,760 bytes, you
can make the most efficient use of the direct access storage space with a
block size of one-third or one-half the track length.
For a 3380 DASD, you might then ask for either 75 tracks or 5 cylinders, thus
allowing for 3,480,000 bytes of data. Assuming an allocation of 3,480,000
bytes and an average length of 70,000 bytes for each member, you need space for
at least 50 directory entries. If each member also has an average of three
aliases, space for an additional 150 directory entries is required.
Each member in a data set and each alias need one directory entry apiece. If you
expect to have 10 members (10 directory entries) and an average of 3 aliases for
each member (30 directory entries), allocate space for at least 40 directory entries.
Space for the directory is expressed in 256-byte blocks. Each block contains from 3
to 21 entries, depending on the length of the user data field. If you expect 200
directory entries, request at least 10 blocks. Any unused space on the last track of
the directory is wasted unless there is enough space left to contain a block of the
first member.
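The sizing rule above (3 to 21 entries per 256-byte block, depending on the user data length) reduces to simple arithmetic. The helper below is a hypothetical sketch, not a system service.

```python
# Hypothetical sizing helper (not a system service): each 256-byte block has
# a 2-byte count field, leaving 254 bytes for complete entries of
# 12 + user-data bytes each, so 3 to 21 entries fit per block.
import math

def directory_blocks(entries, user_data_bytes=0):
    entry_len = 12 + user_data_bytes        # name(8) + TTR(3) + C(1) + user data
    per_block = 254 // entry_len            # complete entries per block
    return math.ceil(entries / per_block)

blocks = directory_blocks(200)              # 21 entries per block -> 10 blocks
```

With minimum-length entries, the 200 expected entries from the text fit in the 10 requested blocks; with the maximum 62 bytes of user data, only 3 entries fit per block and the request would need to grow accordingly.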
Any of the following space specifications would allocate approximately the same
amount of space for a 3380 DASD. Ten blocks have been allocated for the
directory. The first two examples would not allocate a separate track for the
directory. The third example would result in allocation of 75 tracks for data, plus 1
track for directory space.
SPACE=(CYL,(5,,10))
SPACE=(TRK,(75,,10))
SPACE=(23200,(150,,10))
SPACE=(80,(43500,,10)),AVGREC=U
For more information on using the SPACE and AVGREC parameters, see
Chapter 3, “Allocating Space on Direct Access Volume” on page 31 in this manual,
and also see OS/390 MVS JCL Reference and OS/390 MVS JCL User's Guide.
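As a rough cross-check of why the four specifications are comparable, the arithmetic below assumes the 3380 geometry of 15 tracks per cylinder and the text's figure of two 23,200-byte blocks (46,400 data bytes) per track.

```python
# Cross-check of the four SPACE examples on a 3380 DASD.
# Assumptions: 15 tracks per cylinder; two 23,200-byte blocks per track.
TRACKS_PER_CYL = 15
BLOCKS_PER_TRACK = 2

cyl_request = 5 * TRACKS_PER_CYL          # SPACE=(CYL,(5,,10))     -> tracks
trk_request = 75                          # SPACE=(TRK,(75,,10))    -> tracks
blk_request = 150 / BLOCKS_PER_TRACK      # SPACE=(23200,(150,,10)) -> tracks
avgrec_bytes = 80 * 43500                 # SPACE=(80,(43500,,10)),AVGREC=U
# avgrec_bytes equals 3,480,000, the data capacity of the 75 tracks above
```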
Process the member with an OPEN macro, a series of PUT or WRITE macros,
and then a CLOSE macro. A STOW macro is issued automatically when the
data set is closed.
These steps create the data set and its directory, write the records of the member,
and make a 12-byte entry in the directory.
//PDSDD DD ---,DSNAME=MASTFILE(MEMBERK),SPACE=(TRK,(100,5,7)),
// DISP=(NEW,CATLG),DCB=(RECFM=FB,LRECL=80,BLKSIZE=80)---
...
OPEN (OUTDCB,(OUTPUT))
...
PUT OUTDCB,OUTAREA Write record to member
...
CLOSE (OUTDCB) Automatic STOW
...
OUTAREA DS CL80 Area to write from
OUTDCB DCB ---,DSORG=PS,DDNAME=PDSDD,MACRF=PM
Adding Members
To add members to the partitioned data set, follow the procedure described in
Figure 99. However, a separate DD statement (with the space request omitted) is
required for each member. Specify the disposition as modify (DISP=MOD). The
data set must be closed and reopened each time a new member is specified on
the DD statement.
You can use the basic partitioned access method to process more than one
member without closing and reopening the data set. Use the STOW, BLDL, and
FIND macros to provide additional information with each directory entry, as follows:
Request space in the DD statement for the entire data set and the directory.
Define DSORG=PO or DSORG=POU in the DCB macro.
Use WRITE and CHECK to write and check the member records.
Use NOTE to note the location of any note list written within the member, if
there is a note list, or to note the location of any subgroups. A note list is used
to point to the beginning of each subgroup in a member.
When all the member records have been written, issue a STOW macro to enter
the member name, its location pointer, and any additional data in the directory.
The STOW macro writes an end-of-file mark after the member.
Continue to use the WRITE, CHECK, NOTE, and STOW macros until all the
members of the data set and the directory entries have been written.
//PDSDD DD ---,DSN=MASTFILE,DISP=MOD,SPACE=(TRK,(100,5,7))
...
OPEN (OUTDCB,(OUTPUT))
LA STOWREG,STOWLIST Load address of STOW list
...
Figure 100 (Part 1 of 2). Creating Members of a Partitioned Data Set Using STOW
Note: Figure 100 is an example that uses note list processing, which should not
be used for PDSEs. If your installation plans to convert partitioned data sets
to PDSEs, we recommend you follow the procedure described in
Figure 117 on page 409.
| BLDL will also search a concatenated series of directories when (1) a DCB is
| supplied that is opened for a concatenated partitioned data set or (2) a DCB is not
| supplied, in which case the search order begins with the TASKLIB, then proceeds
| to the JOBLIB or STEPLIB (themselves perhaps concatenated) followed by
| LINKLIB.
You can improve retrieval time by directing a subsequent FIND macro to the BLDL
list rather than to the directory to locate the member to be processed.
By specifying the BYPASSLLA option, you can direct BLDL to search PDS and
PDSE directories on DASD only. If BYPASSLLA is coded, the BLDL code will not
call LLA to search for member names.
The BLDL list, as shown in Figure 101 on page 382, must be preceded by a
4-byte list description that specifies the number of entries in the list and the length
of each entry (12 to 76 bytes).
Programmer Supplies:
FF Number of member entries in list.
LL Even number giving byte length of each entry (minimum of 12).
Member Name Eight bytes, left-justified.
BLDL Supplies:
TTR Member starting location.
K If single data set = 0. If concatenation = number.
Not required if no user data.
Z Source of directory entry. Private library = 0.
Link library = 1. Job or step library = 2.
Not required if no user data.
C Same C field from directory. Gives number of user data halfwords.
User Data As much as will fit in entry.
Figure 101. BLDL List Format
The first 8 bytes of each entry contain the member name or alias. The next 6 bytes
contain the TTR, K, Z, and C fields. If there is no user data entry, only the TTR and
C fields are required. If additional information is to be supplied from the directory,
as many as 62 bytes can be reserved.
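The list layout in Figure 101 can be sketched as a byte-packing exercise. This is a hypothetical model: the function name is invented, and on MVS it is BLDL itself that fills in the TTR, K, Z, and C fields, which are simply zeroed placeholders here.

```python
# Hypothetical sketch of the BLDL list layout in Figure 101.  BLDL fills in
# the TTR, K, Z, and C fields on MVS; they are zeroed placeholders here.
import struct

def build_bldl_list(names, entry_len=18):
    """Pack the 4-byte FF/LL description followed by fixed-length entries."""
    assert entry_len % 2 == 0 and 12 <= entry_len <= 76   # LL rules from text
    out = struct.pack(">HH", len(names), entry_len)       # FF count, LL length
    for name in names:
        entry = name.ljust(8).encode("ascii")             # 8-byte member name
        entry += b"\x00" * (entry_len - 8)                # TTR, K, Z, C, user data
        out += entry
    return out

blist = build_bldl_list(["MEMBERA", "MEMBERB"])
```

The 18-byte entry length matches the assembler example later in this chapter: 8 bytes of name plus TTR (3), K (1), Z (1), C (1), and a 4-byte TTRN of a note list.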
DESERV
FUNC=GET
DESERV GET returns SMDEs for specific members of an opened PDS, PDSE, or
concatenation of PDSs and PDSEs. The data set can be opened for input, output,
or update. A system managed directory entry (SMDE) contains the PDS or PDSE
directory entry information. The SMDE is mapped by the macro IGWSMDE and
contains a superset of the information that is mapped by IHAPDS. The SMDE
returned can be selected by name or by BLDL directory entry.
Input by name list: If SMDEs are selected by name, a list of names can be
supplied, but it must be sorted in ascending order, without duplicates. Each
name consists of a 2-byte length field followed by the characters of the name.
When searching for names of fewer than eight characters, the names are padded
on the right with blanks to eight characters. Names longer than eight
characters have trailing blanks and nulls stripped (to a minimum length of
eight) before the search.
All connections made through a single call to GET are associated with a single
unique connect identifier. The connect identifier can be used to release all
the connections in a single invocation of the RELEASE function.
INPUT: the caller supplies an AREA (unused storage) and a NAME_LIST (DESL)
containing NAME1, NAME2, and NAME3.

OUTPUT BLOCK STRUCTURE: the AREA contains a buffer header (DESB) followed by
the returned entries (SMDE_1, SMDE_3, SMDE_2); each NAME_LIST entry addresses
its corresponding SMDE, and any remaining space is unused.

Figure 102. DESERV GET by NAME_LIST control block structure.
The corresponding structures for input by BLDL directory entry: the caller
supplies a PDSDE directory entry (NAME, TTR, C, ...) identifying the member,
and an AREA of unused storage. On output, the AREA contains a buffer header
(DESB) followed by SMDE_1, with the remaining space unused.
FUNC=GET_ALL
The GET_ALL function returns SMDEs for all the member names in a PDS, a
PDSE, or a concatenation of PDSs and PDSEs. Member level connections can be
established for each member found in a PDSE. A caller uses the CONCAT
parameter to indicate which data set in the concatenation is to be processed, or
whether all of the data sets in the concatenation are to be processed.
If the caller requests that DESERV GET_ALL return all the SMDE directory entries
for an entire concatenation, the SMDEs are returned in sequence as sorted by the
SMDE_NAME field without returning duplicate names. As with the GET function, all
connections can be associated with a single connect identifier established at the
time of the call. This connect identifier can then be used to release all the
connections in a single invocation of the RELEASE function. See Figure 104 on
page 385 for an overview of control blocks related to the GET_ALL function.
Figure 104 shows AREAPTR addressing the AREA; the AREA begins with a buffer
header (DESB) whose BUFFER_NEXT field chains additional buffers, followed by
the returned SMDEs (SMDE_1, SMDE_2, SMDE_3).
There are two ways you can direct the system to the right member when you use
the FIND macro. Specify the address of an area containing the name of the
member, or specify the address of the TTR field of the entry in a BLDL list you
have created, by using the BLDL macro. In the first case, the system searches the
directory of the data set for the relative track address. In the second case, no
search is required, because the relative track address is in the BLDL list entry.
The system will also search a concatenated series of directories when a DCB is
supplied that is opened for a concatenated partitioned data set.
If you want to process only one member, you can process it as a sequential data
set (DSORG=PS) using either BSAM or QSAM. You specify the name of the
member you want to process and the name of the partitioned data set in the
DSNAME parameter of the DD statement. When you open the data set, the system
places the starting address in the data control block so that a subsequent GET or
READ macro begins processing at that point. You cannot use the FIND, BLDL, or
STOW macro when you are processing one member as a sequential data set.
Because the DCBRELAD address in the data control block is updated when the
FIND macro is used, you should not issue the FIND macro after WRITE and STOW
processing without first closing the data set and reopening it for INPUT processing.
You can also use the STOW macro to delete, replace, or change a member name
in the directory and store additional information with the directory entry. Because an
alias can also be stored in the directory the same way, you should be consistent in
altering all names associated with a given member. For example, if you replace a
member, you must delete related alias entries or change them so that they point to
the new member. An alias cannot be stored in the directory unless the member is
present.
Although you can use any type of DCB with STOW, it is intended to be used with a
BPAM DCB. If you use a BPAM DCB, you can issue several writes to create a
member followed by a STOW to write the directory entry for the member.
Following this STOW, your application can write and stow another member.
If you add only one member to a partitioned data set, and specify the member
name in the DSNAME parameter of the DD statement, it is not necessary for you to
use BPAM and a STOW macro in your program. If you want to do so, you can use
BPAM and STOW, or BSAM or QSAM. If you use a sequential access method, or if
you use BPAM and issue a CLOSE macro without issuing a STOW macro, the
system will issue a STOW macro using the member name you have specified on
the DD statement.
Note that no checks are made in STOW to ensure that a stow with a BSAM or
QSAM DCB came from CLOSE. When the system issues the STOW, the directory
entry that is added is the minimum length (12 bytes). This automatic STOW macro
will not be issued if the CLOSE macro is a TYPE=T or if the TCB indicates the task
is being abnormally ended when the DCB is being closed. The DISP parameter on
the DD statement determines what directory action parameter will be chosen by the
system for the STOW macro.
If DISP=NEW or MOD was specified, a STOW macro with the add option will be
issued. If the member name on the DD statement is not present in the data set
directory, it will be added. If the member name is already present in the directory,
the task will be abnormally ended.
If DISP=OLD was specified, a STOW macro with the replace option will be issued.
The member name will be inserted into the directory, either as an addition, if the
name is not already present, or as a replacement, if the name is present.
Thus, with an existing data set, you should use DISP=OLD to force a member into
the data set; and you should use DISP=MOD to add members with protection
against the accidental destruction of an existing member.
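The DISP-to-STOW rules above amount to a small decision table. The following is an assumed behavioral model, not system code: the directory is just a dictionary, and the abnormal end is modeled as an exception.

```python
# Assumed behavioral model (not system code) of the automatic STOW issued at
# CLOSE: the directory is a dictionary; the abend is modeled as an exception.
def auto_stow(directory, member, disp):
    if disp in ("NEW", "MOD"):            # STOW with the add option
        if member in directory:
            raise RuntimeError("abend: member already in directory")
        directory[member] = "TTR"
    elif disp == "OLD":                   # STOW with the replace option
        directory[member] = "TTR"         # added or replaced either way

d = {"MEMBERA": "TTR"}
auto_stow(d, "MEMBERB", "MOD")            # protected add succeeds
auto_stow(d, "MEMBERA", "OLD")            # replace forces the member in
```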
//PDSDD DD ---,DSN=MASTFILE(MEMBERK),DISP=OLD
...
OPEN (INDCB) Open for input, automatic FIND
...
GET INDCB,INAREA Read member record
...
CLOSE (INDCB)
...
When your program is executed, OPEN searches the directory automatically and
places the location of the member in the DCB.
The system will supply a value for NCP during OPEN. For performance reasons,
the program in Figure 107 will automatically take advantage of the NCP value
calculated in OPEN or set by the user on the DD statement. If the FIND macro is
omitted and DSORG on DCB changed to PS, the example in Figure 107 will work
to read a sequential data set with BSAM. The logic to do that is summarized in
“Using Overlapped I/O with BSAM” on page 325.
Note: If your installation plans to convert partitioned data sets to PDSEs, use the
procedure described in Figure 131 on page 425. Figure 106 on page 388
is an example that uses note lists, which should not be used with PDSEs.
Code DSORG=PO or POU in the DCB macro.
Specify in the DD statement the data set name of the partitioned data set by
coding DSNAME=name and DISP=OLD.
Issue the BLDL macro to get the list of member entries you need from the
directory.
Use the FIND or POINT macro to prepare for reading the member records.
The records can be read from the beginning of the member, or a note list can
be read first, to obtain additional locations that point to subcategories within the
member.
Read (and check) the records until all those required have been processed.
//PDSDD DD ---,DSN=MASTFILE,DISP=OLD
...
OPEN (INDCB) Open for input, no automatic FIND
...
LA BLDLREG,BLDLLIST Load address of BLDL list
BLDL INDCB,BLDLLIST Build a list of selected member
* names in virtual storage
LA BLDLREG,4(BLDLREG) Point to the first entry
...
INAREA DS CL80
INDCB DCB ---,DSORG=PO,DDNAME=PDSDD,MACRF=R
TTRN DS F TTRN of the NOTE list to point at
NOTEREG EQU 4 Register to address NOTE list entries
NOTELIST DS 0F NOTE list
 DS F NOTE list entry (4 byte TTRN)
 DS 19F one entry per subgroup
BLDLREG EQU 5 Register to address BLDL list entries
BLDLLIST DS 0F List of member names for BLDL
 DC H'10' Number of entries (10 for example)
 DC H'18' Number of bytes per entry
 DC CL8'MEMBERA' Name of member
 DS CL3 TTR of first record (created by BLDL)
 DS X K byte, concatenation number
 DS X Z byte, location code
 DS X C byte, flag and user data length
 DS CL4 TTRN of NOTE list
... one list entry per member (18 bytes each)
Figure 106. Retrieving Several Members and Subgroups of a Partitioned Data Set
Figure 107 (Part 1 of 2). Reading a Member of a Partitioned Data Set or PDSE Using
BPAM
Updating in Place
When you update a member in place, you read records, process them, and write
them back to their original positions without destroying the remaining records on the
track. The following rules apply:
You must specify the UPDAT option in the OPEN macro to update the data set.
To perform the update, you can use only the READ, WRITE, CHECK, NOTE,
POINT, FIND, and BLDL macros or you can use the GET and PUTX macros.
You cannot update concatenated partitioned data sets.
You cannot delete any record or change its length; you cannot add new
records.
With Overlapped Operations: To overlap I/O and processor activity, you can
start several read or write operations before checking the first for
completion. You cannot overlap read and write operations, however; operations
of one type must be checked for completion before operations of the other type
are started or resumed. Note that each outstanding read or write operation
requires a separate DECB. If a single DECB were used for successive read
operations, only the last record read could be updated.
In Figure 108 on page 391, overlap is achieved by having a read or write request
outstanding while each record is being processed.
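The sequencing in Figure 108 can be paraphrased outside assembler. The sketch below is a loose analogy only: it models the order of operations (the next READ is noted before the current record is checked, processed, and written back), not real channel-program concurrency.

```python
# Loose analogy (assumed model, not BSAM channel programs) of the two-DECB
# scheme: the index of the next READ is noted before the current record is
# checked, processed, and written back, so one request is always outstanding.
def update_in_place(records, process):
    pending = 0 if records else None          # READ for the first record
    while pending is not None:
        nxt = pending + 1 if pending + 1 < len(records) else None  # next READ
        records[pending] = process(records[pending])  # CHECK, process, WRITE
        pending = nxt                         # switch to the other "DECB"
    return records

data = update_in_place([1, 2, 3], lambda r: r * 10)
```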
//PDSDD DD DSNAME=MASTFILE(MEMBERK),DISP=OLD,---
...
UPDATDCB DCB DSORG=PS,DDNAME=PDSDD,MACRF=(R,W),NCP=2,EODAD=FINISH
READ DECBA,SF,UPDATDCB,AREAA,MF=L Define DECBA
READ DECBB,SF,UPDATDCB,AREAB,MF=L Define DECBB
AREAA DS --- Define buffers
AREAB DS ---
...
OPEN (UPDATDCB,UPDAT) Open for update
LA 2,DECBA Load DECB addresses
LA 3,DECBB
READRECD READ (2),SF,MF=E Read a record
NEXTRECD READ (3),SF,MF=E Read the next record
CHECK (2) Check previous read operation
* Must issue CHECK for the other outstanding READ before switching to WRITE.
* Unfortunately this CHECK can send us to EODAD.
Note the use of the execute and list forms of the READ and WRITE macros,
identified by the parameters MF=E and MF=L.
With QSAM
You can update a member of a partitioned data set using the locate mode of
QSAM (DCB specifies MACRF=GL,PL) and using the PUTX macro. The DD
statement must specify the data set and member name in the DSNAME parameter.
This method allows only the updating of the member specified in the DD statement.
Rewriting a Member
There is no actual update option that can be used to add or extend records in a
partitioned data set. If you want to extend or add a record within a member, you
must rewrite the complete member in another area of the data set. Because space
is allocated when the data set is created, there is no need to request additional
space. Note, however, that a partitioned data set must be contained on one
volume. If sufficient space has not been allocated, the data set must be
reorganized by the IEBCOPY utility program.
When you rewrite the member, you must provide two DCBs, one for input and one
for output. Both DCB macros can refer to the same data set, that is, only one DD
statement is required.
Sequential Concatenation
Sequentially concatenated data sets are processed with a DCB that has
DSORG=PS, and can include sequential data sets, and members of partitioned
data sets and PDSEs. See “Sequential Concatenation of Data Sets” on page 349
for the rules for concatenating “like” and “unlike” data sets.
Partitioned Concatenation
Concatenated partitioned data sets are processed with a DSORG=PO in the DCB.
When partitioned data sets are concatenated, the system treats the group as a
single data set. A partitioned concatenation can contain a mixture of partitioned
data sets and PDSEs. Each partitioned data set can hold up to 16 extents. Each
PDSE is treated as if it had one extent, although it might have multiple extents on
DASD.
The total of these extent counts cannot exceed 2040, or the system will be
unable to open the PDS concatenation.
Partitioned concatenation is supported only when the DCB is open for input.
Concatenated partitioned data sets are always treated as having like
attributes, except for block size. All attributes of the first data set are
used, even if they conflict with the block size parameter specified; BPAM
OPEN uses the largest block size among the concatenated data sets. For
concatenated format-F data sets (blocked or unblocked), the LRECL for each
data set must be equal.
You process a concatenation of partitioned data sets the same way you process a
single partitioned data set, with one exception: you must use the FIND macro to
begin processing a member. You cannot use the POINT (or NOTE) macro until
after the FIND macro has been issued for the appropriate member. If two members
of different data sets in the concatenation have the same name, the FIND macro
determines the address of the first one in the concatenation. You would not be able
to process the second one in the concatenation. The BLDL macro provides the
concatenation number of the data set to which the member belongs in the K field of
the BLDL list. (See “BLDL—Constructing a Directory Entry List” on page 381.)
This technique works when partitioned data sets and PDSEs are concatenated. The
system considers this to be a LIKE sequential concatenation. See “Reading a
PDSE Directory” on page 431 for more information.
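The first-hit behavior of FIND across a concatenation, and the concatenation number that BLDL returns in the K field, can be modeled as a simple ordered search. This is an assumed illustration, not the actual directory search.

```python
# Assumed illustration (not the real directory search): FIND locates the
# first data set in the concatenation containing the member, and that index
# is the concatenation number BLDL reports in the K field.
def find_member(concatenation, name):
    """concatenation: list of per-data-set directories (name -> TTR)."""
    for k, directory in enumerate(concatenation):
        if name in directory:
            return k, directory[name]     # first hit wins
    raise KeyError(name)

# COPYA exists in both libraries; only the first one is reachable
libs = [{"COPYA": 0x000002}, {"COPYA": 0x000005, "COPYB": 0x000007}]
k, ttr = find_member(libs, "COPYA")       # k == 0
```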
Advantages of PDSEs
A PDSE is a system-managed data set divided into multiple members, each
described by one or more directory entries. PDSEs must be system-managed and
are stored only on direct access storage devices. In appearance, a PDSE is similar
to a partitioned data set. For accessing a partitioned data set directory or member,
most PDSE interfaces are indistinguishable from partitioned data set interfaces.
However, PDSEs have a different internal format, which gives them increased
usability. Member names can be up to 8 bytes long; primary names for program
objects are likewise limited to 8 bytes, but alias names for program objects
can be up to 1024 bytes long. The records of a given member of a PDSE are
written or retrieved sequentially.
You can use a PDSE in place of a PDS to store data, or to store programs in
the form of program objects. A program object is similar to a load module in a
PDS. One PDSE cannot contain a mixture of program objects and data members.
For further information about load modules and program objects, see
DFSMS/MVS Program Management.
PDSEs and partitioned data sets are processed using the same access methods
(BSAM, QSAM, BPAM) and macros. See “Processing a Member of a PDSE” on
page 410 for the macros used with PDSEs, and DFSMS/MVS Macro Instructions
for Data Sets for more information on macros. You can use RACF to protect
PDSEs (see Chapter 5, “Protecting Data Sets” on page 47).
PDSEs have several features that can improve both your productivity and system
performance. The main advantage of using a PDSE over a PDS is that PDSEs
reuse space within the data set. The size of a PDS directory is fixed regardless of
the number of members in it, while the size of a PDSE directory is flexible and
expands to fit the members stored in it. Also, the system reclaims space
automatically whenever a member is deleted or replaced, and returns it to the pool
of space available for allocation to other members of the same PDSE. The space
can be reused without having to do an IEBCOPY compress. Figure 109 on
page 396 illustrates these advantages.
Figure 109. A Partitioned Data Set Extended (PDSE). When member B is deleted, the space it occupied becomes
available for reuse by new members D and E.
Note: Alias names for program objects in a PDSE can be up to 1024 bytes long.
To access these names, you must use the DESERV interfaces. Primary
names for program objects are restricted to 8 bytes. All names for PDS
members are restricted to 8 bytes.
Structure of a PDSE
When accessed sequentially, via BSAM or QSAM, the PDSE directory appears to
be constructed of 256-byte blocks containing sequentially ordered entries. The
PDSE directory looks like a PDS directory even though its internal structure
and block size are different. PDSE directory entries vary in length. Each
directory entry contains the member name or an alias, the starting location of
the member within the data set, and, optionally, user data. The directory
entries are arranged by name in alphanumeric collating sequence.
You can use BSAM or QSAM to read the directory sequentially. The directory is
searched and maintained by the BLDL, DESERV, FIND, and STOW macros. If you
use BSAM or QSAM to read the directory of a PDSE which contains program
objects with names longer than 8 bytes, directory entries for these names will not
be returned. If you need to be able to view these names, you must use the
DESERV FUNC=GET_ALL interface instead of BSAM or QSAM. Similarly, the
BLDL, FIND, and STOW macro interfaces allow specification of only 8-byte
member names. There are analogous DESERV functions for each of these
interfaces that allow processing of names longer than 8 bytes. See “Processing the
Partitioned Data Set Directory” on page 374 for a description of the fields in a
PDSE directory entry.
The PDSE directory is indexed, permitting more direct searches for members.
Hardware-defined keys are not used to search for members. Instead, the name and
the relative track address of a member are used as keys to search for members.
The TTRs in the directory can change if you move the PDSE, since for PDSE
members the TTRs are not relative track and record numbers but rather pseudo
randomly generated aliases for the PDSE member. These TTRs may sometimes be
referred to as Member Locator Tokens (MLTs).
The limit for the number of members in a PDSE directory is 522,236. The PDSE
directory is expandable—you can keep adding entries up to the directory's size limit
or until the data set runs out of space. The system uses the space it needs for the
directory entries from storage available to the data set.
Note: For a PDS, the size of the directory is determined when the data set is
initially allocated. There can be fewer members in the data set than the
directory can contain, but when the preallocated directory space is full, the
partitioned data set must be copied to a new data set before new members
can be added.
Reuse of Space
When a PDSE member is updated or replaced, it is written in the first available
space. This is either at the end of the data set or in a space in the middle of the
data set marked for reuse. This space need not be contiguous. The objective of the
space reuse algorithm is not to extend the data set unnecessarily.
With the exception of UPDATE, a member is never immediately written back to its
original space. The old data in this space is available to programs that had a
connection to the member before it was rewritten. The space is marked for reuse
only when all connections to the old data are dropped. However, once they are
dropped, there are no pointers to the old data, so no program can access it. A
connection may be established at the time a PDSE is opened, or by BLDL, FIND,
POINT, or DESERV. These connections remain in effect until the program closes
the PDSE or the connections are explicitly released by issuing DESERV
FUNC=RELEASE, STOW disconnect, or (under certain cases) another POINT or
FIND. Pointing to the directory can also release connections. Connections are
dropped when the data set is closed. For additional information about connections,
see DFSMS/MVS Macro Instructions for Data Sets.
Directory Structure
Logically, a PDSE directory looks the same as a PDS directory. It consists of a
series of directory records in a block. Physically, it is a set of pages at the front of
the data set, plus additional pages interleaved with member pages. Five directory
pages are initially created at the same time as the data set. New directory pages
are added, interleaved with the member pages, as new directory entries are
required. A PDSE always occupies at least five pages of storage.
Figure 111 and Figure 112 on page 400 show examples of TTRs for unblocked
and blocked records.
Figure 111. TTRs for a PDSE Member (Unblocked Records). Member A has a TTR of
X'000002'; block 1 has record number X'100001' and block 2 has record number
X'100002'.
In both examples, PDSE member “A” has a TTR of X'000002'. In Figure 111, the
records are unblocked; the record number for logical record 1 is X'100001' and
for logical record 2 is X'100002'.
In Figure 112, the records are fixed-length, blocked with LRECL=80 and
BLKSIZE=800. The first block is identified by the member TTR, the second block
by a TTR of X'10000B', and the third block by a TTR of X'100015'. Note that the
TTRs of the blocks differ by an amount of 10, which is the blocking factor.
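The TTR pattern in the two figures follows a simple rule, generalized here from the examples: simulated block record numbers start at X'100001' and step by the blocking factor. This formula is an inference from the examples, not a documented interface.

```python
# Inferred from the examples above (not a documented formula): simulated
# block record numbers start at X'100001' and step by the blocking factor.
def block_ttrs(nblocks, lrecl, blksize):
    factor = blksize // lrecl                 # blocking factor
    return [0x100001 + i * factor for i in range(nblocks)]

unblocked = block_ttrs(2, 80, 80)     # X'100001', X'100002' (Figure 111)
blocked = block_ttrs(3, 80, 800)      # X'100001', X'10000B', X'100015' (Figure 112)
```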
To position to a member, use the TTR obtained from the BLDL or NOTE macro, a
BSAM read of the directory, or DESERV FUNC=GET or FUNC=GET_ALL. To
locate the TTR of a record within a member, use the NOTE macro (see
“NOTE—Provide Relative Position” on page 421).
14 The first record in a member can be pointed to using the TTR for the member (in the above examples, X'000002').
Figure 113 shows an example of how the records are reblocked when the PDSE
member is read.
Writing the PDSE Member (LRECL=8ð, BLKSIZE=16ð, and five short blocks)
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
└─┴─┘ └─┴─┘ └─┴─┘ └─┴─┘ └─┴─┘ └─┘ └─┘ └─┘ └─┘ └─┘
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
└─┴─┘ └─┴─┘ └─┴─┘ └─┴─┘ └─┴─┘ └─┴─┘ └─┴─┘ └─┘
Figure 113. Example of How PDSE Records are Reblocked
Suppose you create a PDSE member that has a logical record length of 80 bytes,
and you write five blocks with a block size of 160 (blocking factor of 2) and
five short blocks with a block size of 80. When you read back the PDSE member,
the logical records are reblocked into seven 160-byte simulated blocks and one
short block. Note that the short block boundaries are not saved on output.
You also can change the block size of records when reading the data set.
Figure 114 shows how the records are reblocked when read.
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │
└─┴─┴─┴─┘ └─┴─┴─┴─┘ └─┴─┴─┴─┘
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │
└─┴─┴─┴─┴─┘ └─┴─┴─┴─┴─┘ └─┴─┘
Figure 114. Example of Reblocking When the Block Size Has Been Changed
Suppose you write three blocks with a block size of 320, and the logical record
length is 80 bytes. If you then read this member with a block size of 400
(blocking factor of 5), the logical records are reblocked into two 400-byte
simulated blocks and one 160-byte simulated block.
If the data set was a partitioned data set, you would not be able to change the
block size when reading the records.
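The reblocking arithmetic in both examples above can be sketched as follows. This is a Python illustration of the calculation the text describes, not PDSE internals; the function name is hypothetical:

```python
def simulated_blocks(total_records, lrecl, read_blksize):
    """Given the total number of logical records stored in a PDSE member,
    return the sizes of the simulated blocks seen when reading with the
    given block size (short-block boundaries are not preserved)."""
    factor = read_blksize // lrecl          # records per simulated block
    full_blocks, leftover = divmod(total_records, factor)
    blocks = [factor * lrecl] * full_blocks
    if leftover:
        blocks.append(leftover * lrecl)     # one trailing short block
    return blocks
```

Writing five 160-byte blocks plus five short 80-byte blocks stores 15 records; read back at BLKSIZE=160 this yields seven 160-byte blocks and one 80-byte block. Writing three 320-byte blocks (12 records) and reading at BLKSIZE=400 yields two 400-byte blocks and one 160-byte block.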
A checkpoint data set cannot be a PDSE. Checkpoint will fail if the checkpoint
data set is a PDSE. Also, a failure will occur if a checkpoint is requested when
a DCB is opened to a PDSE, or if a partitioned data set was opened at
checkpoint time but was changed to a PDSE by restart time.
Do not use the TRKCALC macro because results could be inaccurate.
(However, no error indication is returned.)
A library containing cataloged procedures must be either a PDS or PDSE. The
system procedure library, SYS1.PROCLIB, must be a PDS. The installation can
have other procedure libraries which can be PDSEs.
When a cataloged procedure library is used from one system, the library is
opened for input and stays open for extended periods. If any other user or
system attempts to open the procedure library for output in a
non-XCF (non-parallel-sysplex) environment, the second user receives an
ABEND. This could prevent procedure library updates for extended periods.
This section shows how to use the SPACE JCL keyword to allocate primary and
secondary storage space amounts for a PDSE. The PDSE directory can extend into
secondary space. A PDSE can have a maximum of 123 extents. A PDSE cannot
extend beyond one volume. Note that a fragmented volume might use up extents
more quickly because you get less space with each extent. With a
SPACE=(CYL,(1,1,1)) specification, the data set can extend to 123 cylinders (if
space is available).
See “Allocating Space for a Partitioned Data Set” on page 377 for examples of
the SPACE keyword.
See Chapter 3, “Allocating Space on Direct Access Volume” on page 31 and
OS/390 MVS JCL User's Guide and OS/390 MVS JCL Reference for more
information on how to allocate space.
expandable directory so that space planning is less critical, and it can be shared
more efficiently.
The following are some areas to consider when determining the space
requirements for a PDSE.
Integrated Directory
All PDSE space is available for either directory or member use. Within a data set,
there is no difference between pages used for the directory and pages used for
members. As the data set grows, the members and directory have the same space
available for use. The directory, or parts of it, can be in secondary extents.
The format of a PDSE allows the directory to contain more information. This
information can take more space than a PDS directory block.
The PDSE directory contains keys (member names) in a compressed format. The
insertion or deletion of keys can change the compression of other directory
keys. Therefore, the change in directory size can differ from the size of the
inserted or deleted record.
Studies show that a typical installation has 18% to 30% of its PDS space in the
form of gas. This space is unusable to the data set until it has been compressed. A
PDSE dynamically reuses all the allocated space according to a first-fit algorithm.
You do not need to make any allowance for gas when you allocate space for a
PDSE.
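The first-fit reuse mentioned above can be illustrated with a generic first-fit allocator over a free list of page runs. This is a simplified sketch of the general technique, not the actual PDSE space manager:

```python
def first_fit_allocate(free_list, pages_needed):
    """Allocate from the first free run large enough to satisfy the
    request.  free_list is a list of (start_page, length) tuples and is
    updated in place; returns the starting page, or None if no single
    run is large enough."""
    for i, (start, length) in enumerate(free_list):
        if length >= pages_needed:
            if length == pages_needed:
                del free_list[i]                        # run fully consumed
            else:
                free_list[i] = (start + pages_needed,
                                length - pages_needed)  # shrink the run
            return start
    return None
```

Because freed member space goes back on the free list and is reused by later writes, a PDSE does not accumulate the unusable “gas” that a PDS does.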
ABEND-D37 can occur on a PDSE indicating it is FULL, but another member can
still be saved in the data set. Recovery processing from an ABEND-D37 in ISPF
closes and reopens the data set. This new open of the data set allows PDSE code
to reclaim space so a member can now be saved.
Data set compression is not necessary with a PDSE. Since there is no gas
accumulation in a PDSE, there is no need for compression.
Extent Growth
A PDSE can have up to 123 extents. Because a PDSE can have more secondary
extents than a PDS, you can get the same total space allocation with a smaller
secondary allocation. A PDS requires a secondary extent about eight times larger
than a PDSE's to reach the same total allocation. Conversely, for a given
secondary extent value, a PDSE can grow about eight times larger before
exhausting its extents.
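As a rough sketch of this arithmetic, assuming the 123-extent PDSE limit stated earlier and the usual 16-extent limit for a PDS (the function name is hypothetical and fragmentation is ignored):

```python
def max_total_tracks(primary, secondary, max_extents):
    """Maximum space reachable by a data set: the primary amount occupies
    the first extent, and each additional extent gets the secondary
    amount (assuming each extent is satisfied in full)."""
    return primary + secondary * (max_extents - 1)
```

A PDSE has 122 secondary extents available versus 15 for a PDS, so a PDS needs a secondary quantity roughly eight times larger (122/15 ≈ 8) to reach the same total.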
Defragmenting is the process by which multiple small extents are consolidated into
fewer large extents. This operation can be performed directly from the ISMF data
set list.
Although the use of smaller extents can be more efficient from a space
management standpoint, to achieve the best performance you should avoid
fragmenting your data sets whenever possible.
Applications can use different logical block sizes to access the same PDSE. The
block size in the DCB is logical and has no effect on the physical block (page)
size used; all members of a PDSE are assumed to be reblockable. If you code
your DCB with BLKSIZE=6160, the data set is still physically stored in 4KB
pages, but your program sees 6160-byte logical blocks.
Free Space
The space for any library can be overallocated. This excess space can be released
manually with the FREE command in ISPF. Or you could code the release (RLSE)
parameter on your JCL.
Fragmentation
Most allocation units are approximately the same size. This is because of the way
members are buffered and written in groups of multiple pages. There is very little, if
any, fragmentation in a PDSE.
If there is fragmentation, copy the data set with IEBCOPY or DFSMSdss. The
fragmented members are recombined in the new copy.
Defining a PDSE
This section shows how to define a PDSE. The DSNTYPE keyword defines either a
PDSE or partitioned data set. The DSNTYPE values are:
LIBRARY (defines a PDSE)
PDS (defines a partitioned data set).
First, SMS checks for the DSNTYPE keyword in the JCL, then in the LIKE
keyword, then in the data class, and finally in the installation default in
SYS1.PARMLIB.
If DSNTYPE=LIBRARY is specified in the JCL and the installation default is
PDS, the data set would be allocated as a PDSE.
If DSNTYPE is not specified in either the JCL or data class, the installation
default coded in SYS1.PARMLIB would determine which type of data set is
allocated. If no installation default is coded in SYS1.PARMLIB, the data set
would be allocated as a PDS.
If a DSNTYPE of PDS is processed first, the data set would be allocated as a
PDS.
If DSNTYPE=LIBRARY is specified in either the JCL or data class definition
and a release before MVS/DFP 3.2 is installed, the data set would be allocated
as a PDS.
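The search order described above can be sketched as a simple precedence rule. This is a Python illustration of the decision logic, not an SMS interface; the function name and parameters are hypothetical:

```python
def resolve_dsntype(jcl=None, like=None, data_class=None, parmlib_default=None):
    """First source that specifies a DSNTYPE wins: JCL, then the LIKE
    keyword, then the data class, then the SYS1.PARMLIB installation
    default.  If none specifies a type, a PDS is allocated."""
    for value in (jcl, like, data_class, parmlib_default):
        if value is not None:
            return value
    return "PDS"
```

For example, DSNTYPE=LIBRARY in the JCL overrides an installation default of PDS, and with no source specified the result is a PDS.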
An error condition exists and the job is ended with appropriate messages if:
DSNTYPE=LIBRARY was specified but the data set is not system-managed.
The DSNTYPE keyword was specified in the JCL, but the job runs on a
processor with a release of MVS that does not support the JCL keyword
DSNTYPE. Message IEF630I is issued.
//PDSEDD DD DSNAME=MASTFILE(MEMBERK),SPACE=(TRK,(1ðð,5,7)),
DISP=(NEW,CATLG),DCB=(RECFM=FB,LRECL=8ð,BLKSIZE=8ð),
DSNTYPE=LIBRARY,STORCLAS=S1Pð1Sð1,---
...
OPEN (OUTDCB,(OUTPUT))
...
PUT OUTDCB,OUTAREA Write record to member
...
CLOSE (OUTDCB) Automatic STOW
...
OUTAREA DS CL8ð Area to write from
OUTDCB DCB ---,DSORG=PS,DDNAME=PDSEDD,MACRF=PM
This procedure allows you to use the same program to allocate either a sequential
data set or a member of a partitioned data set with only a change to the JCL.
The PDSE must be system-managed (specify a STORCLAS in the DD
statement for the PDSE, or allow the ACS routines to direct the data set to
system-managed storage.)
Code DSORG=PS in the DCB macro.
As a result of these steps, the data set and its directory are created, the records of
the member are written, and an entry is automatically made in the directory with no
user data.
A PDSE becomes a PDSE program library when its first member is stored by the
Binder.
The example in Figure 117 shows how to process more than one PDSE or
partitioned data set member without closing and reopening the data set.
//PDSEDD DD ---,DSN=MASTFILE,DISP=MOD,SPACE=(TRK,(1ðð,5,7))
...
OPEN (OUTDCB,(OUTPUT))
...
* WRITE MEMBER RECORDS
Repeat from label “MEMBER” for each additional member, changing the
member name in the “STOWLIST” for each member
...
CLOSE (OUTDCB) (NO automatic STOW)
...
OUTAREA DS CL8ð Area to write from
OUTDCB DCB ---,DSORG=PO,DDNAME=PDSEDD,MACRF=W
STOWLIST DS ðF List of member names for STOW
DC CL8'MEMBERA' Name of member
DS CL3 TTR of first record (created by STOW)
DC X'ðð' C byte, no user TTRNs, no user data
...
OPEN (DCB1,(OUTPUT),DCB2,(OUTPUT))
WRITE DECB1,SF,DCB1,BUFFER Write record to 1st member
CHECK DECB1
...
WRITE DECB2,SF,DCB2,BUFFER Write record to 2nd member
CHECK DECB2
...
STOW DECB1,PARML1,R Enter 1st member in the directory
STOW DECB2,PARML2,R Enter 2nd member in the directory
...
DCB1 DCB DSORG=PO,DDNAME=X, ... Both DCBs open to the
DCB2 DCB DSORG=PO,DDNAME=X, ... same PDSE
Open two DCBs to the same PDSE, write the member records, and STOW them in
the PDSE directory. Code different names for the parameter list in the STOW
macro for each member written in the PDSE directory.
PDSEs are designed to automatically reuse data set storage when a member is
replaced; partitioned data sets do not reuse space. If a PDSE member is deleted
or replaced, the old data remains available to applications that were accessing
that member's data before it was deleted or replaced.
All connections established to members while a data set was opened are released
when the data set is closed. If the connection was established by FIND by name,
the connection is released when another member is connected through FIND or
POINT. The system reclaims the space used when all connections for a specific
member have been released.
If you delete or replace a member, the old version of the member is still
accessible by those applications connected to it. Any application connecting to
the member by name (through BLDL, FIND, or OPEN) after the replace operation
accesses the new version. (The replaced version cannot be accessed using a FIND
by TTR or POINT unless a connection to it already exists.)
Connections established by OPEN, BLDL, FIND, and POINT are for use by BSAM,
QSAM, and BPAM for reading and writing member data. Connections established
by DESERV are primarily for use by program management. While all connections
are dropped when the data set is closed, selected connections can be released
earlier via STOW DISC or DESERV FUNC=RELEASE. STOW can disconnect only
connections established by OPEN, BLDL, FIND, and POINT; DESERV
FUNC=RELEASE can disconnect only connections established by the DESERV
functions.
| BLDL will also search a concatenated series of directories when (1) a DCB is
| supplied that is opened for a concatenated partitioned data set or (2) a DCB is not
| supplied, in which case the search order begins with the TASKLIB, then proceeds
| to the JOBLIB or STEPLIB (themselves perhaps concatenated) followed by
| LINKLIB.
You can improve retrieval time by directing a subsequent FIND macro to the BLDL
list rather than to the directory to locate the member to be processed.
Figure 101 on page 382 shows the BLDL list, which must be preceded by a 4-byte
list description that shows the number of entries in the list and the length of
each entry (12 to 76 bytes). If any options such as NOCONNECT or BYPASSLLA
are specified, the 4-byte list descriptor must be preceded by an 8-byte BLDL
list prefix. The first 8 bytes of each entry contain the member name or alias.
The next 6 bytes contain the TTR, K, Z, and C fields. The minimum entry length
is 12 bytes.
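The BLDL list layout just described can be illustrated by packing it in Python. This is only a sketch of the byte layout (the function name is hypothetical): real BLDL uses EBCDIC member names, and BLDL itself fills in the TTR, K, Z, and C bytes of each entry.

```python
import struct

def build_bldl_list(names, entry_len=14):
    """Pack a BLDL-style list: a 4-byte description (halfword entry
    count, halfword bytes-per-entry) followed by one entry per member.
    Each entry starts with the 8-byte blank-padded member name; the
    remaining bytes (TTR, K, Z, C) are zeroed here, to be set by BLDL."""
    header = struct.pack(">HH", len(names), entry_len)
    entries = b""
    for name in names:
        field = name.ljust(8).encode("ascii")   # ASCII stand-in for EBCDIC
        field += b"\x00" * (entry_len - 8)      # TTR, K, Z, C placeholders
        entries += field
    return header + entries
```

A two-member list with 14-byte entries occupies 4 + 2×14 = 32 bytes.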
Like a BSAM or QSAM read of the directory, the BLDL NOCONNECT option does
not connect the PDSE members. The BLDL NOCONNECT option causes the
system to use less virtual storage. The NOCONNECT option would be appropriate
if BLDLs are issued for many members that might not be processed.
Do not use the NOCONNECT option if two applications will process the same
member. For example, if an application deletes or replaces a version of a member
and NOCONNECT was specified, that version is inaccessible to any application
that is not connected.
For PDSE program libraries, you can direct BLDL to search the LINKLST,
JOBLIB, and STEPLIB. Directory entries for load modules located in the Link
Pack Area (LPA) cannot be accessed by the BLDL macro.
For variable blocked spanned (RECFM=VBS) records, the BSP macro backspaces
to the start of the first record in the buffer just read. The system does not
backspace within record segments. In other words, issuing the BSP macro followed
by a read always begins the block with the first record segment or complete
segment. (A block can contain more than one record segment.)
Attention: When you create a PDSE member, then issue the BSP macro, followed
by a WRITE macro, you will destroy all the data of the member beyond the record
just written.
All functions will return results to the caller in areas provided by the invoker or
areas returned by DE Services. All functions provide status information in the form
of return and reason codes. The macro IGWDES maps all the parameter areas.
FUNC=GET
DESERV GET returns SMDEs for members of opened PDSs or PDSEs. The data
set can be opened for input, output, or update. The SMDE contains the PDS
or PDSE directory information. The SMDE is mapped by macro IGWSMDE and
contains a superset of the information that is mapped by IHAPDS. The SMDE
returned can be selected by name or by BLDL directory entry.
Input by name list: If SMDEs are selected by name, a list of names is supplied;
the list must be sorted in ascending order and contain no duplicates. Each name
consists of a two-byte length field followed by the characters of the name.
When searching for names of fewer than eight characters, the names are padded
on the right with blanks to eight characters. For each name whose length field
contains a value greater than eight, DE Services ignores trailing blanks and
nulls (down to a minimum length of eight) when doing the search.
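The name normalization described above can be sketched as follows. This is an illustration of the stated rules, not the DESERV implementation; the function name is hypothetical:

```python
def normalize_search_name(name):
    """Normalize a name for the directory search: names shorter than
    eight characters are blank-padded on the right to eight; longer
    names have trailing blanks and nulls stripped, but never below a
    length of eight."""
    if len(name) < 8:
        return name.ljust(8)
    stripped = name.rstrip(" \x00")
    return stripped if len(stripped) >= 8 else name[:8]
```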
removed from the system until the connection is released. The connection intent
is specified by the CONN_INTENT parameter; CONN_INTENT=HOLD must be
specified.
All connections made through a single call to GET are associated with a single
unique connect identifier. The connect identifier may be used to release all the
connections in a single invocation of the RELEASE function. An example of
DESERV GET is shown in Figure 121.
AREA
┌─────────────────┐
│\\\\\\\\\\\\\\\\\│
NAME_LIST (DESL) │\\\\\\\\\\\\\\\\\│
┌───────┬───────┐ │\\\\\\\\\\\\\\\\\│
NAME1 %───┼──── │ │ │\\\\\\\\\\\\\\\\\│
├───────┼───────┤ │\\\\\\\\\\\\\\\\\│
NAME2 %───┼──── │ │ │\\\\\\\\\\\\\\\\\│
├───────┼───────┤ │\\\\\\\\\\\\\\\\\│
NAME3 %───┼──── │ │ │\\\\\\\\\\\\\\\\\│
└───────┴───────┘ │\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
└─────────────────┘
AREA
┌─────────────────┐
│ BUFFER HEADER │
NAME_LIST (DESL) │ (DESB) │
┌───────┬───────┐ ├─────────────────┤
NAME1 %───┼──── │ ────┼────────5 │ SMDE_1 │
├───────┼───────┤ ├─────────────────┤
NAME2 %───┼──── │ ────┼─┐ ┌───5 │ SMDE_3 │
├───────┼───────┤ └──│──┐ ├─────────────────┤
NAME3 %───┼──── │ ────┼────┘ └5 │ SMDE_2 │
└───────┴───────┘ ├─────────────────┤
│\\\\\\\\\\\\\\\\\│
│\\\\ UNUSED \\\\\│
│\\\\\\\\\\\\\\\\\│
└─────────────────┘
AREA
┌─────────────────┐
│\\\\\\\\\\\\\\\\\│
PDSDE │\\\\\\\\\\\\\\\\\│
┌──────┬─────┬───┬─────┐ │\\\\\\\\\\\\\\\\\│
│ NAME │ TTR │ C │ ... │ │\\\\\\\\\\\\\\\\\│
└──────┴─────┴───┴─────┘ │\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
└─────────────────┘
AREA
┌─────────────────┐
│ BUFFER HEADER │
PDSDE │ (DESB) │
┌──────┬─────┬───┬─────┐ ├─────────────────┤
│ NAME │ TTR │ C │ ... │ │ SMDE_1 │
└──────┴─────┴───┴─────┘ ├─────────────────┤
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\ UNUSED \\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
│\\\\\\\\\\\\\\\\\│
└─────────────────┘
FUNC=GET_ALL
The GET_ALL function returns SMDEs for all the member names in a PDS, a
PDSE, or a concatenation of PDSs and PDSEs. Member level connections can be
established for each member found in a PDSE. A caller uses the CONCAT
parameter to indicate which data set in the concatenation is to be processed, or
whether all of the data sets in the concatenation are to be processed.
If the caller requests that DESERV GET_ALL return all the SMDE directory entries
for an entire concatenation, the SMDEs are returned in sequence as sorted by the
SMDE_NAME field without returning duplicate names. As with the GET function, all
connections can be associated with a single connect identifier established at the
time of the call. This connect identifier can then be used to release all the
connections in a single invocation of the RELEASE function. See Figure 123 on
page 416 for an overview of control blocks related to the GET_ALL function.
AREAPTR
┌───────┐
│\\\\\\\│
└───────┘
AREAPTR
┌───────┐ AREA
│ ────┼───────────────5┌─────────────┐
└───────┘ │ (DESB) │
┌───────│ BUFFER_NEXT │
└──5┌───│-------------│
┌───────│ │ SMDE_1 │
└──5┌───│ │ SMDE_2 │
│ │ │ SMDE_3 │
│ │ └─────────────┘
│ │ │
│ │ │
│ └─────────────┘
│ │
│ │
└─────────────┘
FUNC=GET_NAMES
The GET_NAMES function obtains a list of all the names and associated
application data for a member of a PDSE. This function does not support PDSs.
The caller provides the member name or an alias name of the member as input to
the function. The function returns its data in a buffer obtained by
GET_NAMES; the buffer is mapped by the DESB structure and is formatted by
GET_NAMES.
The data structure returned in the DESB is the member descriptor structure
(DESD). The DESD_NAME_PTR field points to the member or alias name. The
DESD_DATA_PTR points to the application data. For a data member, the
application data is the user data from the directory entry. For a primary member
name of a program object, the application data is mapped by the PMAR and
PMARL structures of the IGWPMAR macro. For an alias name of a program object,
the application data is mapped by the PMARA structure of the IGWPMAR macro.
The DESB_COUNT field indicates the number of entries in the DESD, which is
located at the DESB_DATA field. The buffer is obtained in a subpool as specified
by the caller and must be released by the caller. If the caller is in key 0 and
subpool 0 is specified, the DESB will be obtained in subpool 250.
See Figure 124 on page 417 for an overview of control blocks related to the
GET_NAMES function.
AREAPTR
┌───────┐
│\\\\\\\│
└───────┘
AREAPTR
┌───────┐ AREA
│ ────┼───────────────5┌─────────────┐
└───────┘ │ (DESB) │
┌───────│ BUFFER_NEXT │
└──5┌───│-------------│
┌───────│ │ PMAR_1 │
└──5┌───│ │ PMAR_2 │
│ │ │ PMAR_3 │
│ │ └─────────────┘
│ │ │
│ │ │
│ └─────────────┘
│ │
│ │
└─────────────┘
FUNC=RELEASE
The RELEASE function removes connections established through the GET and
GET_ALL functions. The caller must specify the same DCB that was passed to
DESERV to establish the connections. Connections established by the BLDL, FIND,
or POINT macros are unaffected.
The caller can specify which connections are to be removed in one of two ways:
by supplying either a connect identifier or a list of SMDEs. If the caller
passes a connect identifier, the function removes all connections made by a
single request of the GET or GET_ALL function. Alternatively, if provided with
a list of SMDEs, the function removes the connections associated with the
versions of the member names in those SMDEs.
Note: The SMDEs as returned by GET and GET_ALL contain control information
used to identify the connection; this information should not be modified
before issuing the RELEASE.
If all connections of a connect identifier are released based on SMDEs, the
connect identifier is not freed or reclaimed. Only release by connect
identifier causes DE Services to reclaim the connect identifier for future use.
It is not an error to include SMDEs for PDS data sets, even though connections
cannot be established for them. It is an error to release an unused connect
identifier. It is also an error to release a PDSE SMDE for which there is no
connection.
The DE Services user does not need to issue the RELEASE function to release
connections, because all connections not explicitly released are released by
closing the DCB. See Figure 125 on page 418 for an overview of control blocks
related to the RELEASE function.
CONN_ID
┌───────┐
│\4BYTE\│
└───────┘
NAME_LIST (DESL)
┌───────┬───────┐
│ │ ────┼────────5SMDE_1
├───────┼───────┤
│ │ ────┼─┐ ┌───5SMDE_2
├───────┼───────┤ └──│──┐
│ │ ────┼────┘ └5SMDE_3
└───────┴───────┘
FUNC=UPDATE
You can update selected fields of the directory entry for a PDSE program object
using the DESERV UPDATE function. This allows the caller to update selected
fields of the PMAR. The caller must supply a DCB that is open for output or
update, and a DESL that points to the list of SMDEs to be updated. The DESL is
processed in sequence, and a code indicates successful or unsuccessful update
for each entry. The SMDE (as produced by the GET function) contains the
MLT and concatenation number of the member as well as an item number. These
fields will be used to find the correct directory record to be updated. The
DESL_NAME_PTR is ignored. The caller should issue a DESERV GET function
call to obtain the SMDEs; modify the SMDEs as required; and issue a DESERV
UPDATE function call to pass the updated DESL.
The UPDATE function will not affect the directory entry imbedded in the program
object. This has implications for a Binder include of a PDSE program object as a
sequential file, where the Binder can use the directory entry in the program object
rather than the one in the directory.
There are two ways you can direct the system to the right member when you use
the FIND macro. Specify the address of an area containing the name of the
member, or specify the address of the TTRk field of the entry in a BLDL list you
have created, by using the BLDL macro. k is the concatenation number of the data
set containing the member. In the first case, the system searches the directory of
the data set to connect to the member. In the second case, no search is required,
because the relative track address is in the BLDL list entry.
If the data set is open for output, close it and reopen it for input or update
processing before issuing the FIND macro.
Note: If you have insufficient access authority (RACF execute authority), or if the
share options are violated, the FIND macro will fail. See “Sharing PDSEs”
on page 426 for a description of the share options allowed for PDSEs.
If you are testing a single data set, use the CONCAT default, which is 0. The
CONCAT parameter is used only for partitioned concatenation, not sequential
concatenation. For sequential concatenation, the current data set is tested. The
return code in register 15 shows whether the function failed or is not supported on
the system.
ISITMGD can also be used to determine the type of library, data or program.
Specifying the DATATYPE option on the ISITMGD macro will set the library type in
the parameter list. Refer to constants ISMDTREC, ISMDTPGM, and ISMDTUNK in
macro IGWCISM for the possible data type settings.
The TTRz returned from a NOTE for the first directory record is the only valid TTRz
that can be used for positioning by POINT while processing within the PDSE
directory.
Here are some examples of results when using NOTE with PDSEs. A NOTE:
Immediately following an OPEN returns an invalid address (X'00000000').
Also, if a member is being pointed to using a FIND macro or by the member
name in the JCL, but no READ, WRITE, or POINT has been issued, NOTE
returns an invalid address of (X'00000000').
Immediately following a STOW ADD or STOW REPLACE returns the TTRz of
the logical end-of-file mark for the member stowed. If the member is empty (no
writes done), the value returned is the starting TTRz of the member stowed.
Following any READ after an OPEN returns the starting TTRz of the PDSE
directory if no member name is in the JCL, or the TTRz of the member if the
member name is in the JCL.
Following the first READ after a FIND or POINT (to the first record of a
member) returns the TTRz of the member.
Following the first WRITE of a member returns the TTRz of the member.
Following a later READ or WRITE returns the TTRz of the first logical record in
the block just read or written.
Issued while positioned in the middle of a spanned record returns the TTRz of
the beginning of that record.
Issued immediately following a POINT operation (where the input to the POINT
was in the form “TTR1”) will return a note value of “TTR0.”
Issued immediately following a POINT operation (where the input to the POINT
was in the form “TTR0”) will return an invalid note value (X'00000000').
POINT—Position to a Block
The POINT macro starts the next READ or WRITE operation at the beginning of a
PDSE member, or anywhere within a PDSE member. The POINT macro uses the
track record address (TTRz), which can be obtained by NOTE, BLDL or a BSAM
read of the directory, to position to the correct location. If positioning to the
beginning of a member, the z byte in the TTR must be zero. The POINT macro
establishes a connection to the PDSE member (unless the connection already
exists).
The POINT macro positions to the first segment of a spanned record even if the
NOTE was done on another segment. If the current record spans blocks, setting
the z byte of the TTRz field to one will allow you to access the next record (not the
next segment).
If you have insufficient access authority (RACF execute authority) or if the share
options are violated, the POINT macro will fail with an I/O error. See “Sharing
PDSEs” on page 426 for more information.
Figure 128. Using NOTE and FIND to Switch Between Members of a Concatenated PDSE
This example uses FIND by TTR. Note that when your program resumes reading a
member, that member might have been replaced by another program. See “Sharing
PDSEs” on page 426.
You can also use the STOW macro to add, delete, replace, or change a member
name in the directory. The add and replace options also store additional information
in the directory entry. When using STOW REPLACE to replace a primary member
name, any existing aliases are deleted. When using STOW DELETE to delete a
primary member name, any existing aliases are deleted. STOW ADD and
REPLACE are not permitted against PDSE program libraries.
The STOW INITIALIZE function allows you to clear, or reset to empty, a PDSE
directory, as shown in Figure 129:
//PDSEDD DD ---,DSN=MASTFILE(MEMBERK),DISP=OLD
...
OPEN (INDCB) Open for input, automatic FIND
...
GET INDCB,INAREA Read member record
...
CLOSE (INDCB)
...
INAREA DS CL8ð Area to read into
INDCB DCB ---,DSORG=PS,DDNAME=PDSEDD,MACRF=GM
When your program is executed, the directory is searched automatically and the
location of the member is placed in the DCB.
To retrieve several PDSE or partitioned data set members without closing and
reopening the data set, use this procedure:
1. Code DSORG=PO in the DCB macro.
2. Specify the name of the PDSE in the DD statement by coding DSNAME=name.
3. Issue the BLDL macro to get the list of member entries you need from the
directory.
//PDSEDD DD ---,DSN=MASTFILE,DISP=OLD
...
OPEN (INDCB) Open for input, no automatic FIND
...
BLDL INDCB,BLDLLIST Retrieve the relative disk locations
*                            of several user-supplied names in
*                            virtual storage.
LA BLDLREG,BLDLLIST+4 Point to the first entry in the list
...
INAREA DS CL8ð
INDCB DCB ---,DSORG=PO,DDNAME=PDSEDD,MACRF=R,EODAD=EODRTN
TTRN DS F TTRN of the start of the member
BLDLREG EQU 5 Register to address BLDL list entries
BLDLLIST DS ðF List of member names for BLDL
DC H'1ð' Number of entries (1ð for example)
DC H'14' Number of bytes per entry
DC CL8'MEMBERA' Name of member, supplied by user
DS CL3 TTR of first record (set by BLDL)
* The following 3 fields are set by BLDL
DS X K byte, concatenation number
DS X Z byte, location code
DS X C byte, flag and user data length
... one list entry per member (14 bytes each)
Sharing PDSEs
PDSE data sets and members can be shared. If allocated with a DISP=SHR, the
PDSE directory can be shared by multiple writers and readers, and each PDSE
member can be shared by a single writer or multiple readers. Any number of
systems can have the same PDSE open for input. If one system has a PDSE open
for output, that PDSE can be opened on other systems only if the systems are
using the PDSE extended sharing protocol. If one system has a PDSE open for
update, that PDSE cannot be opened by any other system.
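The cross-system rules above can be sketched as a compatibility check. This is a Python illustration of the stated rules (the function name is hypothetical), not an interface of the sharing protocol itself:

```python
def second_system_open_allowed(first_open, second_open, extended_sharing):
    """Whether a second system may open a PDSE given the first system's
    open mode: update on either system is exclusive; output on either
    system is tolerated only under the PDSE extended sharing protocol;
    input on both systems is always allowed."""
    if first_open == "UPDATE" or second_open == "UPDATE":
        return False
    if first_open == "OUTPUT" or second_open == "OUTPUT":
        return extended_sharing
    return True
```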
Any number of DCBs can be open to a PDSE for input or output at the same time.
INPUT
A version of a member can be accessed by any number of DCBs open for input.
UPDATE
You cannot have any DCBs reading a version of a member while another DCB is
updating the same version of the member.
OUTPUT
Any number of DCBs open for output can create members at the same time. The
members are created in separate areas of the PDSE. If the members being created
have the same name (specified in the STOW done after the data is written), the
last version stowed is the version that is seen by users, and the storage occupied
by the first version is added to the available space for the PDSE.
You can have multiple DCBs reading and creating new versions of the same
member at the same time. Readers continue to see the “old” version until they
do a new BLDL or FIND by name.
You can have a single DCB updating a version of a member while multiple
DCBs are creating new versions of the member. The user updating the data set
continues to access the “old” version until the application does a new BLDL or
FIND by name.
Sharing Violations
Violation of the sharing rules, either within a computer system or across several
computer systems, can result in OPEN failing with a system ABEND.
Under some conditions, using the FIND or POINT macro might violate sharing
rules:
The share options allow only one user to update at a time. Suppose you are
updating a PDSE and are using the FIND or POINT macros to access a
specific member. If someone else is reading that member at the same time, the
first WRITE or PUTX issued after the FIND or POINT will fail.17 However, your
FIND or POINT would succeed if the other user is reading a different member
of the same PDSE at the same time. A POINT error will simulate an I/O error.
If the calling program has insufficient RACF access authority, the FIND or
POINT will fail. For example, if the calling program opens a PDSE for input but
only has RACF execute authority, the FIND will fail. See OS/390 Security
Server (RACF) Security Administrator's Guide for more information.
A sharer of a PDSE can read existing members and create new members or new
copies of existing members concurrently with other sharers on the same system
and on other systems. Shared access to a PDSE, when update-in-place of a
member is being done, is restricted to a single system.
The update-in-place access restriction exists from the time the updater is positioned
to the member (for example, by means of FIND or POINT), and continues until the
updater repositions to the directory or closes the PDSE. To avoid unexpected failures
due to sharing of a PDSE, it is recommended that the PDSE update-in-place user
specify DISP=OLD (or MOD) at allocation to obtain exclusive access to the PDSE.
If exclusive access is not ensured by means of the allocation disposition processing
(or some user serialization process), the results of the PDSE update-in-place are
unpredictable, and depend on whether or not the PDSE is open on another system
at the time the updater positions to the member for UPDAT.
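For example, exclusive access for update-in-place might be requested at allocation
with a DD statement such as the following sketch (the ddname and data set name
are illustrative):

//PDSEDD   DD  DSN=MY.PDSE.LIB,DISP=OLD

DISP=OLD causes the system to serialize on the data set name, so no other job on
any system can allocate the PDSE while this job holds it.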
Figure 132 on page 428 and Figure 133 on page 428 show the results of OPEN.
17 The failure does not occur until the WRITE or PUTX because you could be open for update but only reading the member.
─────────────────────────────────────────────────────────────────────────────────────────────────
                                      │           System 1 - Current PDSE OPEN Status
                                      ├─────────────────┬──────────────────┬─────────────────────
                                      │ OPEN for UPDAT  │ OPEN for UPDAT   │
 System 2 - attempting to:            │ and positioned  │ and NOT posi-    │ OPEN for INPUT or
                                      │ to a member     │ tioned to a      │ OUTPUT
                                      │                 │ member           │
──────────────────────────────────────┼─────────────────┼──────────────────┼─────────────────────
 OPEN for Input or Output             │ ABEND 213-70    │ Successful       │ Successful
────────────────┬─────────────────────┼─────────────────┼──────────────────┼─────────────────────
                │ BSAM, or            │                 │                  │
                │ BSAM/QSAM -         │                 │                  │
                │ OPEN to directory,  │ ABEND 213-70    │ Successful       │ Successful
 OPEN for UPDAT │ no member name      │                 │                  │
                │ in JCL              │                 │                  │
                ├─────────────────────┼─────────────────┼──────────────────┼─────────────────────
                │ BSAM/QSAM -         │                 │                  │
                │ member name in      │ ABEND 213-70    │ ABEND 213-74     │ ABEND 213-74
                │ JCL                 │                 │                  │
────────────────┴─────────────────────┴─────────────────┴──────────────────┴─────────────────────
Note: If no other system is OPEN to the PDSE, then any type of OPEN will be successful.
─────────────────────────────────────────────────────────────────────────────────────────────────
────────────────────────────────────┬──────────────────────────────────
System 2 - OPEN for UPDAT, but NOT │
YET positioned to a member, and │ System 1 - Currently OPEN
attempting to: │ (INPUT, OUTPUT, or UPDAT)
────────────────────────────────────┼──────────────────────────────────
│ I/O error reported on CHECK:
POINT to member │ 1. ABEND 001
READ │ or
CHECK │ 2. entry to SYNAD (If defined
│ in DCB)
────────────────────────────────────┼──────────────────────────────────
FIND to member │ RC=4 RSN=8
────────────────────────────────────┴──────────────────────────────────
Note: If no other system is OPEN to the PDSE, then a member can be
positioned to without error.
───────────────────────────────────────────────────────────────────────
Figure 133. OPEN for UPDAT and Positioning to a Member Decision Table
Before using program packages that change the VTOC and the data on the
volume (for example, a DFSMSdss full-volume or tracks RESTORE), it is recommended
that the volume be varied OFFLINE to all other systems. Applications that
modify data residing on volumes without using the PDSE
API should specify in their usage procedure that the volume being modified be
OFFLINE to all other systems, to help ensure there are no active connections to
PDSEs residing on the volume while the operation is performed.
For further information on the DFP share attributes callable service, see
DFSMS/MVS DFSMSdfp Advanced Services.
Updating in Place
A member of a PDSE can be updated in place. Only one user can update at a
time. When you update in place, you read records, process them, and write them
back to their original positions without destroying the remaining records. The
following rules apply:
You must specify the UPDAT option in the OPEN macro to update the data set.
To perform the update, you can use only the READ, WRITE, GET, PUTX,
CHECK, NOTE, POINT, FIND, BLDL, and STOW macros.
You cannot update concatenated PDSEs.
You cannot delete any record or change its length; you cannot add new
records.
With Overlapped Operations: See Figure 108 on page 391 for an example of
overlap achieved by having a read or write request outstanding while each record is
being processed.
With QSAM
You can update a member of a PDSE using the locate mode of QSAM (DCB
specifies MACRF=(GL,PL)) and using the GET and PUTX macro. The DD
statement must specify the data set and member name in the DSNAME parameter.
This method allows updating of only the member specified in the DD statement.
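The QSAM locate-mode update described above might be sketched in assembler as
follows. This is a minimal, hypothetical sketch: the ddname, member name, and
label names are illustrative, and error handling is omitted.

*        Sketch of QSAM update-in-place (locate mode).
*        The DD statement names the member:
*        //UPDD DD DSN=MY.PDSE(MEM1),DISP=OLD
UPDDCB   DCB   DDNAME=UPDD,DSORG=PS,MACRF=(GL,PL),EODAD=FINISH
         OPEN  (UPDDCB,(UPDAT))     Open for update in place
LOOP     GET   UPDDCB               Locate mode: register 1 -> record
*        ...examine or change the record in the buffer...
         PUTX  UPDDCB               Rewrite the record in its place
         B     LOOP
FINISH   CLOSE (UPDDCB)

Records that are read but not rewritten with PUTX are left unchanged, which is
why no record can be deleted, lengthened, or added with this technique.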
Extending a Member
You cannot extend a PDSE member by opening the PDSE for output and
positioning to that member. If you used POINT for positioning, the next write would
result in an I/O error. If you used FIND for positioning, the FIND will fail with an
error return code. To extend the member, rewrite it while open for output and issue
a STOW REPLACE.
When you rewrite the member, you must provide two DCBs, one for input and one
for output. Both DCB macros can refer to the same data set; that is, only one DD
statement is required.
Because space is allocated when the data set is created, you do not need to
request additional space. You do not need to compress the PDSE after rewriting a
member because the system automatically reuses the member's space whenever a
member is replaced or deleted.
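One possible shape for such a rewrite is sketched below, assuming a member named
MEM1 and a single DD statement referenced by both DCBs. All names, the buffer
size, and the STOW list layout are illustrative, and error handling is omitted.

*        Sketch: extend a member by rewriting it and issuing STOW R.
INDCB    DCB   DDNAME=PDSEDD,DSORG=PO,MACRF=R,EODAD=ADDNEW
OUTDCB   DCB   DDNAME=PDSEDD,DSORG=PO,MACRF=W
         OPEN  (INDCB,(INPUT),OUTDCB,(OUTPUT))
         FIND  INDCB,MEMNAME,D      Position input DCB to the member
COPY     READ  DECB1,SF,INDCB,BUF,'S'   Copy existing blocks
         CHECK DECB1                EODAD gets control at end of member
         WRITE DECB2,SF,OUTDCB,BUF,'S'
         CHECK DECB2
         B     COPY
ADDNEW   EQU   *                    Write the new blocks here, then:
         STOW  OUTDCB,STOWLST,R     Replace the directory entry
         CLOSE (INDCB,,OUTDCB)
MEMNAME  DC    CL8'MEM1'            Member to be extended
STOWLST  DC    CL8'MEM1'            Directory entry: member name
         DC    XL8'00'              TTR and user-data bytes (sketch)
BUF      DS    XL800                Block buffer (size is illustrative)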
Deleting a Member
This section describes the two interfaces used to delete members: STOW and
DESERV DELETE. DESERV only supports PDSEs but it does support deleting
names longer than 8 bytes.
When the primary name is deleted, all aliases are also deleted. If an alias is
deleted, only the alias name and its directory entry are deleted.
A PDSE member is not actually deleted while in use. Any program connected to
the member when the delete occurs can continue to access the member until the
data set is closed. This is called a deferred delete. Any program not connected to
the member at the time it is deleted cannot access the member. It appears as
though the member does not exist in the PDSE.
Renaming a Member
This section describes the two ways to rename a member: STOW and DESERV
RENAME. DESERV RENAME only supports PDSEs but it supports names longer
than 8 characters.
You can use the above technique to sequentially read the directories of a
concatenation of partitioned data sets and PDSEs. This is considered to be a “like”
sequential concatenation. To proceed to each successive data set you can rely on
the system's EOV function or you can issue the FEOV macro.
Concatenating PDSEs
Two or more PDSEs can be automatically retrieved by the system and processed
successively as a single data set. This technique is known as concatenation. There
are two types of concatenation: sequential and partitioned. You can concatenate
PDSEs with sequential and partitioned data sets.
Sequential Concatenation
Sequentially concatenated data sets are processed with a DCB that has
DSORG=PS, and can include sequential data sets, and members of partitioned
data sets and PDSEs. See “Sequential Concatenation of Data Sets” on page 349
for the rules for concatenating “like” and “unlike” data sets.
Partitioned Concatenation
Concatenated partitioned data sets are processed with a DCB that has
DSORG=PO, and can include complete partitioned data sets and PDSEs. The
system treats the group as a single partitioned-organization data set. For
concatenation, the system regards each PDSE as taking only one extent regardless
of the actual number of DASD extents taken. The limit on the number of extents for
a concatenation is 120. However, there is also a limit of 114 on the total number of
data sets in a partitioned concatenation.
If all the data sets in the concatenation are PDSEs, you can concatenate up to 114
PDSEs. You can concatenate PDSEs and partitioned data sets when the number of
DASD extents in the partitioned data sets plus the number of PDSEs does not
exceed the concatenation limit of 120. For example, you can concatenate 7
partitioned data sets of 16 extents each with 8 PDSEs (7 x 16 + 8 = 120 extents).
See “Concatenating Partitioned Data Sets” on page 392 for more information on
partitioned concatenation.
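For example, a partitioned concatenation of a PDSE and a partitioned data set
might be coded as follows (the ddname and data set names are illustrative); only
the first DD statement carries the ddname:

//SYSLIB   DD  DSN=USER.PDSE.LIB,DISP=SHR
//         DD  DSN=USER.PDS.LIB,DISP=SHR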
To copy one or more specific members using IEBCOPY, use the SELECT control
statement. In this example, IEBCOPY copies members A, B, and C from
USER.PDS.LIBRARY to USER.PDSE.LIBRARY.
//INPDS DD DSN=USER.PDS.LIBRARY,DISP=SHR
//OUTPDSE DD DSN=USER.PDSE.LIBRARY,DISP=OLD
//SYSIN DD *
COPY OUTDD=OUTPDSE
INDD=INPDS
SELECT MEMBER=(A,B,C)
This DFSMSdss COPY example converts all partitioned data sets with the
high-level qualifier of “MYTEST” on volume SMS001 to PDSEs with the high-level
qualifier of “MYTEST2” on volume SMS002. The original partitioned data sets are
then deleted. If you use dynamic allocation, specify INDY and OUTDY for the input
and output volumes. However, if you define the ddnames for the volumes, use the
INDD and OUTDD parameters.
COPY DATASET(INCLUDE(MYTEST.**) -
BY(DSORG = PDS)) -
INDY(SMS001) -
OUTDY(SMS002) -
CONVERT(PDSE(**)) -
RENAMEU(MYTEST2) -
DELETE
If you want the PDSEs to retain the original partitioned data set names, use the
TSO RENAME command to rename each PDSE individually. (You cannot use
pattern-matching characters, such as asterisks, with TSO RENAME.)
When copying members from a PDSE program library into a partitioned data set,
certain restrictions must be considered. Program objects which exceed the
limitations of load modules, such as total module size or number of external names,
cannot be correctly converted to load module format.
Improving Performance
You can either calculate the block size yourself and specify it in the BLKSIZE
parameter of the DCB, or let the system determine the block size for you. Although
the block size makes no difference in space usage for PDSEs, it affects the buffer
size of a job using the PDSE.
After many adds and deletes, the PDSE members might become fragmented. This
can affect performance. To reorganize the PDSE, use IEBCOPY or DFSMSdss
COPY to back up all the members. You can either delete all members and then
restore them, or delete and reallocate the PDSE. It is preferable to delete and
reallocate the PDSE because doing so usually uses less processor time and does
less I/O than deleting every member.
Generation data sets can be sequential, direct, or indexed sequential. They cannot
be partitioned, HFS, or VSAM.
There are advantages to grouping related data sets. For example, the catalog
management routines can refer to the information in a special index called a
generation index in the catalog. Thus:
All of the data sets in the group can be referred to by a common name.
The operating system is able to keep the generations in chronological order.
Outdated or obsolete generations can be automatically deleted by the operating
system.
Generation data sets have sequentially ordered absolute and relative names that
represent their age. The catalog management routines use the absolute generation
name. Older data sets have smaller absolute numbers. The relative name is a
signed integer used to refer to the latest (0), the next to the latest (−1), and so
forth, generation. For example, a data set name LAB.PAYROLL(0) refers to the
most recent data set of the group; LAB.PAYROLL(−1) refers to the second most
recent data set; and so forth.
The relative number can also be used to catalog a new generation (+1).
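As an illustrative sketch, a job step might read the current generation and
catalog the next one using relative numbers (the program name, space, and unit
parameters are assumptions):

//STEP1    EXEC PGM=PAYROLL
//CURRENT  DD  DSN=LAB.PAYROLL(0),DISP=OLD
//NEXT     DD  DSN=LAB.PAYROLL(+1),DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(5,1))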
See DFSMS/MVS Access Method Services for ICF for information on how to define
and catalog generation data sets in an integrated catalog facility or VSAM catalog.
See DFSMS/MVS Utilities for information on how to define and catalog generation
data sets in a CVOL.
Notes:
1. CVOLs and VSAM catalogs are not recommended.
2. A generation data group base that is to be system-managed must be created in
an integrated catalog facility catalog. Generation data sets that are to be
system-managed must also be cataloged in an integrated catalog facility
catalog.
3. Both system- and non-system-managed generation data sets can be contained
in the same generation data group. However, if the catalog of a generation data
group is on a volume that is system-managed, the model DSCB cannot be
defined.
4. You can add new non-system-managed generation data sets to the generation
data group by using cataloged data sets as models without needing a model
DSCB on the catalog volume.
The number of generations and versions is limited by the number of digits in the
absolute generation name; that is, there can be 9,999 generations. Each generation
can have 100 versions.
The version number allows you to perform normal data set operations without
disrupting the management of the generation data group. For example, if you want
to update the second generation in a 3-generation group, replace generation 2,
version 0, with generation 2, version 1. Only one version is kept for each
generation.
You can catalog a generation using either absolute or relative numbers. When a
generation is cataloged, a generation and version number is placed as a low-level
entry in the generation data group. To catalog a version number other than V00,
you must use an absolute generation and version number.
The value of the specified integer tells the operating system what generation
number to assign to a new generation, or it tells the system the location of an entry
representing a previously cataloged generation.
When you use a relative generation number to catalog a generation, the operating
system assigns an absolute generation number and a version number of V00 to
represent that generation. The absolute generation number assigned depends on
the number last assigned and the value of the relative generation number that you
are now specifying. For example, if in a previous job generation A.B.C.G0005V00
was the last generation cataloged, and you specify A.B.C(+1), the generation now
cataloged is assigned the number G0006V00.
Though any positive relative generation number can be used, a number greater
than 1 can cause absolute generation numbers to be skipped. For example, if you
have a single step job, and the generation being cataloged is a +2, one generation
number is skipped. However, in a multiple step job, one step might have a +1 and
a second step a +2, in which case no numbers are skipped.
Note: For new non-system-managed data sets, if you do not specify a volume and
the data set is not opened, the system does not catalog the data set. New
system-managed data sets are always cataloged when allocated, with the
volume assigned from a storage group.
Also, in a multiple step job, you should catalog or uncatalog data sets using JCL
rather than IEHPROGM or a user program. Because data set allocation and
unallocation monitor data sets during job execution and are not aware of the
functions performed by IEHPROGM or user programs, data set orientation might be
lost or conflicting functions might be performed in subsequent job steps.
When you use a relative generation number to refer to a generation that was
previously cataloged, the relative number has the following meaning:
A.B.C(0) refers to the latest existing cataloged entry.
A.B.C(−1) refers to the next-to-the-latest entry, and so forth.
When cataloging is requested using JCL, all actual cataloging occurs at step
termination, but the relative generation number remains the same throughout the
job. Because this is so:
A relative number used in the JCL refers to the same generation throughout a
job.
A job step that ends abnormally can be deferred for a later step restart. If the
step cataloged a generation data set successfully in its generation data group
via JCL, you must change all relative generation numbers in the next steps
using JCL before resubmitting the job.
For example, if the next steps contained the following relative generation numbers:
A.B.C(+1) refers to the entry cataloged in the terminated job step, or
A.B.C(0) refers to the next to the latest entry, or
A.B.C(−1) refers to the latest entry, before A.B.C(0).
| For Version 3 or Version 4 labels, you must observe the following specifications
| created by the generation data group naming convention.
| Data set names whose last 9 characters are of the form .GnnnnVnn (n is 0
| through 9) can only be used to specify GDG data sets. When a name ending in
| .GnnnnVnn is found, it is automatically processed as a GDG. The generation
| number Gnnnn and the version number Vnn are separated from the rest of the
| data set name and placed in the generation number and version number fields.
| Tape data set names for GDG files are expanded from a maximum of 8
| user-specified characters to 17 user-specified characters. (The tape label file
| identifier field has space for 9 additional user-specified characters because the
| generation number and version number are no longer contained in this field.)
| A generation number of all zeros is not valid, and will be treated as an error
| during label validation. The error appears as a “RANG” error in message
| IEC512I (IECIEUNK) during the label validation installation exit.
| In an MVS system-created GDG name, the version number will always be 0.
| (MVS will not increase the version number by 1 for subsequent versions.) To
| obtain a version number other than 0, you must explicitly specify the version
| number (for example, A.B.C.G0004V03) when the data set is allocated. You
| must also explicitly specify the version number to retrieve a GDG with a version
| number other than 0.
| Because the generation number and version number are not contained on the
| identifier of HDR1, generations of the same GDG will have the same name.
| Therefore, an attempt to place more than one generation of a GDG on the
| same volume will result in an ISO/ANSI standards violation in a system
| supporting Version 3 and MVS will enter the validation installation exit.
If you are using absolute generation and version numbers, DCB attributes for a
generation can be supplied directly in the DD statement defining the generation to
be created and cataloged.
If you are using relative generation numbers to catalog generations, DCB attributes
can be supplied:
1. By referring to a cataloged data set for the use of its attributes.
2. By creating a model DSCB on the volume on which the index resides (the
volume containing the catalog). Attributes can be supplied before you catalog a
generation, when you catalog it, or at both times.
Note: You cannot use a model DSCB for system-managed generation data
sets.
3. By using the DATACLAS and LIKE keywords in the DD statement for both
system-managed and non-system-managed generation data sets. The
generation data sets can be on either tape or DASD.
4. Through the assignment of a data class to the generation data set by the data
class ACS routine.
To refer to a cataloged data set for the use of its attributes, you can specify one of
the following on the DD statement that creates and catalogs your generation:
DCB=(dsname), where dsname is the name of the cataloged data set
LIKE=dsname, where dsname is the name of the cataloged data set
REFDD=ddname, where ddname is the name of a DD statement that allocated
the cataloged data set.
You can also refer to an existing model DSCB for which you can supply overriding
attributes.
Note: You cannot use a model DSCB with system-managed generation data
sets.
The DSNAME is the common name by which each generation is identified; xxxxxx
is the serial number of the volume containing the catalog. If no DCB subparameters
are wanted initially, you need not code the DCB parameter.
18 Only one model DSCB is necessary for any number of generations. If you plan to use only one model, do not supply DCB
attributes when you create the model. When you subsequently create and catalog a generation, include necessary DCB attributes
in the DD statement referring to the generation. In this manner, any number of generation data groups can refer to the same
model. Note that the catalog and model data set label are always located on a direct access volume, even for a magnetic tape
generation data group.
Notes:
1. The model DSCB must reside on the catalog volume. If you move a catalog to
a new volume, you also will need to move or create a new model DSCB on this
new volume.
2. If you split or merge an integrated catalog facility catalog and the catalog
remains on the same volume as the existing model DSCB, you will not have to
move or create a new model DSCB.
The LIKE keyword specifies the allocation attributes of a new data set by copying
the attributes of a cataloged model data set. The cataloged data set referred to in
LIKE=dsname must be on DASD. Note that model DSCBs are still used if present
on the volume, even if LIKE and DATACLAS are also used for a
non-system-managed generation data set. This method is recommended, because
you would not need to change the JCL (to scratch the model DSCB) when
migrating the data to system-managed storage or migrating from system-managed
storage. If you do not specify DATACLAS and LIKE in the JCL for a
non-system-managed generation data set, and there is no model DSCB, the allocation will fail.
//DDNAME DSN=HLQ.----.LLQ(+1),DISP=(NEW,CATLG),LIKE=dsn
For more information on the JCL keywords used to allocate a generation data set,
see OS/390 MVS JCL Reference.
The new generation data set is cataloged at allocation time, and rolled into the
generation data group at end-of-job step. If your job ends after allocation but before
end-of-job step, the generation data set is cataloged in a deferred roll-in state. You
can re-submit your job to roll the new generation data set into the generation data
group. For more information on rolling in generation data sets, see “Rolling In a
Generation Data Set” on page 442.
the passed generation data set. If that data set was later cataloged, a bad
generation would be cataloged and a good one lost.
The attributes specified for the generation data group determine what happens to
the older generations when a new generation is rolled in. The access method services
command DEFINE GENERATIONDATAGROUP creates a generation data group. It
also specifies the limit (the maximum number of active generation data sets) for a
generation data group, and specifies whether all or only the oldest generation data
sets should be rolled off when the limit is reached.
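For instance, a generation data group limited to three active generations, with
rolled-off generations uncataloged and scratched, might be defined as in the
following sketch (the group name is illustrative):

DEFINE GENERATIONDATAGROUP -
       (NAME(LAB.PAYROLL) -
        LIMIT(3) -
        NOEMPTY -
        SCRATCH)

With NOEMPTY, only the oldest generation is rolled off when the limit is reached;
EMPTY would roll off all active generations instead.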
When a generation data group contains its maximum number of active generation
data sets, and a new generation data set is rolled in at end-of-job step, the oldest
generation data set is rolled off and is no longer active. If a generation data group
is defined using DEFINE GENERATIONDATAGROUP EMPTY, and is at its limit,
then, when a new generation data set is rolled in, all the currently active generation
data sets are rolled off.
The access method services command ALTER LIMIT can increase or reduce the
limit for an existing generation data group. If a limit is reduced, the oldest active
generation data sets are automatically rolled off as needed to meet the decreased
limit. If a change in the limit causes generations to be rolled off, then the rolled off
data sets are listed with their disposition (uncataloged, recataloged, or deleted). If a
limit is increased, and there are generation data sets in a deferred roll-in state,
these generation data sets are not rolled into the generation data group. The
access method services command ALTER ROLLIN can be used to roll the
generation data sets into the generation data group in active status.
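These commands might look as follows; the group name and the absolute generation
number are illustrative:

/* Raise the limit of the group                                     */
ALTER LAB.PAYROLL LIMIT(5)
/* Roll a deferred generation into the group in active status       */
ALTER LAB.PAYROLL.G0007V00 ROLLIN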
For more information on how to use the access method services commands
DEFINE GENERATIONDATAGROUP and ALTER, see DFSMS/MVS Access
Method Services for ICF.
In integrated catalog facility catalogs and in VSAM catalogs, use access method
services commands to catalog the data set. In CVOL, use the RENAME function to
rename the data set. Then use the CATLG function to catalog the data set. For
example, if MASTER is the name of the generation data group, and GggggVvv is
the absolute generation name, you would code the following:
RENAME DSNAME=ISAM,VOL=3380=SCRTCH,NEWNAME=MASTER.GggggVvv
CATLG DSNAME=MASTER.GggggVvv,VOL=3380=SCRTCH
You can retrieve a generation data set by using either relative generation numbers
or absolute generation and version numbers.
Note: Refer to generation data sets that are in a deferred roll-in state by their
relative number, such as (+1), within the job that allocates it. Refer to
generation data sets that are in a deferred roll-in state by their absolute
generation number (GxxxxVyy) in subsequent jobs.
Because two or more jobs can compete for the same resource, be careful when
updating generation data groups, as follows:
No two jobs running concurrently should refer to the same generation data
group. As a partial safeguard against this situation, use absolute generation
and version numbers when cataloging or retrieving a generation in a
multiprogramming environment. If you use relative numbers, a job running
concurrently could update the generation data group index, perhaps cataloging
a new generation, which you will then retrieve in place of the one you wanted.
Even when using absolute generation and version numbers, a job running
concurrently might catalog a new version of a generation, or perhaps delete the
generation you wanted to retrieve. So, some degree of control should be
maintained over the execution of job steps referring to generation data groups.
Examples of how to build a generation data group index are found in DFSMS/MVS
Access Method Services for ICF under DEFINE GENERATIONDATAGROUP, and
in DFSMS/MVS Utilities under IEHPROGM.
Note: CVOLs and VSAM catalogs are not recommended.
When you use the queued access method, only unit record equipment can be
controlled directly. When using the basic access method, limited device
independence can be achieved between magnetic tape and direct access storage
devices. You must check all read or write operations before issuing a device control
macro.
Backspacing moves the tape toward the load point; forward spacing moves the
tape away from the load point.
Note that the CNTRL macro cannot be used with an input data set containing
variable-length records on the card reader.
If you specify OPTCD=H in the DCB parameter field of the DD statement, you can
use the CNTRL macro to position VSE19 tapes even if they contain embedded
checkpoint records. The CNTRL macro cannot be used to backspace VSE 7-track
tapes that are written in data convert mode and contain embedded checkpoint
records.
If the data set specified in the DCB is not for a printer, no action is taken.
For printers that are allocated to your program, the SETPRT macro is used to
initially set or dynamically change the printer control information. For additional
information on how to use the SETPRT macro with the 3800 printer, see IBM 3800
Printing Subsystem Programmer’s Guide.
For printers that have a universal character set (UCS) buffer and optionally, a forms
control buffer (FCB), the SETPRT macro is used to specify the UCS or FCB
images to be used. Note that universal character sets for the various printers are
not compatible. The three formats of FCB images (the FCB image for the 3800
Printing Subsystem, the 4248 format FCB, and the 3211 format FCB) are
incompatible. The 3211 format FCB is used by the 3203, 3211, 4248, 3262 Model
5, and 4245 printers.
IBM-supplied UCS images, UCS image tables, FCB images, and character
arrangement table modules are included in the SYS1.IMAGELIB at system
initialization time. For 1403, 3203, 3211, 3262 Model 5, 4245, and 4248 printers,
user-defined character sets can be added to SYS1.IMAGELIB.
For a description of how images are added to SYS1.IMAGELIB and how band
names/aliases are added to image tables, see DFSMS/MVS DFSMSdfp
Advanced Services.
For the 3800, user-defined character arrangement table modules, FCB
modules, graphic character modification modules, copy modification modules,
and library character sets can be added to SYS1.IMAGELIB as described in
DFSMS/MVS Utilities.
For information on building a 4248 format FCB (which can also be used for the
IBM 3262 Model 5 printer), see DFSMS/MVS Utilities.
The FCB contents can be selected from the system library (or an alternate library if
you are using a 3800), or defined in your program through the exit list of the DCB
macro. For information on the DCB exit list, see “DCB Exit List” on page 468.
For a non-3800 printer, the specified UCS or FCB image should be found in one of
the following:
SYS1.IMAGELIB
Image table (UCS Image only)
DCB exit list for an FCB
If the image is not found, the operator is asked to specify an alternate image name
or cancel the request.
For a printer that has no carriage control tape, you can use the SETPRT macro to
select the FCB, to request operator verification of the contents of the buffer, or to
allow the operator to align the paper in the printer.
For a SYSOUT data set, the specified images must be available at the destination
of the data set, which can be JES2, JES3, VM, or other type of system.
The direction of movement is toward the load point or the beginning of the extent.
You cannot use the BSP macro if the track overflow option was specified or if the
CNTRL, NOTE, or POINT macro is used. The BSP macro should be used only
when other device control macros could not be used for backspacing.
Any attempt to backspace across a file mark will result in a return code of X'04' in
register 15, and your tape or direct access volume will be positioned after the file
mark. This means you cannot issue a successful backspace command after your
EODAD routine is entered unless you first reposition the tape or direct access
volume into your data set. (CLOSE TYPE=T can position you at the end of your
data set.)
You can use the BSP macro to backspace VSE tapes containing embedded
checkpoint records. If you use this means of backspacing, you must test for and
bypass the embedded checkpoint records. You cannot use the BSP macro for VSE
7-track tapes written in translate mode.
If a NOTE macro is issued after an automatic volume switch occurs, and before a
READ or WRITE macro is issued to the next volume, NOTE returns a relative block
address of zero except for extended format data sets.
For magnetic tape, the address is in the form of a 4-byte relative block address. If
TYPE=REL is specified or defaults, the address provided by the operating system
is returned in register 1. If TYPE=ABS is specified, the physical block identifier of a
data block on tape is returned in register 0. The relative block address or the block
identifier can later be used as a search argument for the POINT macro.
For non-extended format data sets on direct access storage devices, the address is
in the form of a 4-byte relative track record address (TTR). For extended format
data sets, the address is in the form of a block locator token (BLT). The BLT is
essentially the relative block number (RBN) within the current logical volume of the
data set where the first block has an RBN of 1. A multi-striped data set is seen by
the user as a single logical volume. Therefore, for a multi-striped data set, the
RBN is relative to the beginning of the data set and incorporates all stripes. For
PDSEs, the address is in the form of a record locator token. The address provided
by the operating system is returned in register 1. For non-extended format data
sets and partitioned data sets, NOTE returns the track balance in register 0 if the
last I/O operation was a WRITE, or the track capacity if the NOTE follows a READ
or POINT. For PDSEs and extended format data sets, 0 is returned in register 0.
See “NOTE—Provide Relative Position” on page 421 for information on using the
NOTE macro to process PDSEs.
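For example, a BSAM program writing a non-extended format data set on DASD might save the position returned by NOTE for later repositioning (a sketch; the DCB, DECB, and label names are illustrative):

         WRITE DECB1,SF,OUTDCB,BUF       Write a block
         CHECK DECB1                     Ensure the WRITE completed
         NOTE  OUTDCB                    Relative address (TTR) in register 1
         ST    1,SAVEADDR                Save it for a later POINT
*        Register 0 now holds the track balance after the WRITE
SAVEADDR DS    F                         Saved block address
BUF      DS    CL80                      Output block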
POINT—Positioning to a Block
The POINT macro repositions a magnetic tape or direct access volume to a
specified block. The next read or write operation begins at this block. See
“POINT—Position to a Block” on page 421 for information on using the POINT
macro to process PDSEs.
In a multivolume sequential data set you must ensure that the volume referred to is
the volume currently being processed. A multi-striped extended format data set is
seen by the user as a single logical volume. Therefore, no special positioning is
needed. However, a single-striped multi-volume extended format data set does
require you to be positioned at the correct volume. For disk, if a write operation
follows the POINT macro, all of the track following the write operation is erased,
unless the data set is opened for UPDAT. POINT is not meant to be used before a
WRITE macro when a data set is opened for UPDAT. If you specify OPTCD=H in
the DCB parameter field of the DD statement, you can use the POINT macro to
position VSE tapes even if they contain embedded checkpoint records. The POINT
macro cannot be used to backspace VSE 7-track tapes that are written in data
convert mode and that contain embedded checkpoint records.
If you specify TYPE=ABS, you can use the physical block identifier as a search
argument to locate a data block on tape. The identifier can be provided from the
output of a prior execution of the NOTE macro.
When using the POINT macro for a direct access storage device that is opened for
OUTPUT, OUTIN, OUTINX, or INOUT, and the record format is not standard, the
number of blocks per track might vary slightly.
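For example, assuming SAVEADDR holds a block address saved from an earlier NOTE for the same data set, a BSAM program might reposition and read that block (a sketch; names are illustrative):

         POINT INDCB,SAVEADDR            Position to the saved block
         READ  DECB2,SF,INDCB,BUF        The READ begins at that block
         CHECK DECB2                     Wait for the READ; errors surface here
SAVEADDR DS    F                         Block address from a prior NOTE
BUF      DS    CL80                      Input buffer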
SYNCDEV—Synchronizing Data
Data still in the buffer might not yet reside on the final recording medium. This is
called data that is not synchronized. Data synchronization is the process by which
the system ensures that data previously given to the system via WRITE, PUT, and
PUTX macros is written to the storage medium.
Data synchronization can be performed for the following:
PDSEs to DASD
Compressed format data sets to DASD.
You can do the following for a magnetic tape cartridge device:
Request information regarding synchronization, or
Demand that synchronization occur based on a specified number of data blocks
that are allowed to be buffered. If zero is specified, synchronization will always
occur.
When SYNCDEV completes successfully (return code 0), a value is returned that
shows the number of data blocks remaining in the control unit buffer. For PDSEs
and compressed format data sets, the value returned is always zero. For PDSEs
and compressed format data sets, requests for synchronization information or for
partial synchronization cause complete synchronization. Specify Guaranteed
Synchronous Write through storage class to ensure that data is synchronized to
DASD at the completion of each CHECK macro. However, this degrades
performance. This produces the same result as issuing the SYNCDEV macro after
the CHECK macro. See DFSMS/MVS DFSMSdfp Storage Administration Reference
for how to specify Guaranteed Synchronous Write.
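For example, after a series of WRITE and CHECK macros, a program might demand complete synchronization (a minimal sketch, assuming the SYNCDEV form that names the DCB with the DCB= keyword; the operands that request information only, or that set the number of blocks allowed to remain buffered on tape, are described in DFSMS/MVS Macro Instructions for Data Sets):

         SYNCDEV DCB=OUTDCB              Synchronize buffered data
         LTR   15,15                     Return code 0 means success
         BNZ   SYNCFAIL                  Otherwise handle the failure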
General Guidance
Non-VSAM macros can be used to identify user-written exit routines. These
user-written exit routines can perform a variety of functions for non-VSAM data
sets, including error analysis, requesting user totaling, performing I/O operations for
data sets, and creating your own data set labels. These functions are not for use
with VSAM data sets.
The exit addresses can be specified in the DCB or DCBE macro or you can
complete the DCB or DCBE fields before they are needed. Figure 134 summarizes
the exits that you can specify either explicitly in the DCB or DCBE, or implicitly by
specifying the address of an exit list in the DCB.
Programming Considerations
During any of the exit routines described in this chapter, other than SYNAD and
EODAD, the system might hold an enqueue on the SYSZTIOT resource. The
resource represents the TIOT and DSAB chain, and holding it is the only way to
ensure that dynamic unallocation in another task does not eliminate those control
blocks while they are being examined. Having a data set open ensures only that
the data set's TIOT entry and DSAB continue to exist. If the system holds the
SYSZTIOT resource, your exit routine cannot use certain system functions that
might need the resource. Those functions include LOCATE, OBTAIN, SCRATCH,
CATALOG, OPEN, CLOSE, FEOV and dynamic allocation. Whether the system
holds that resource is part of system logic and IBM might change it in a future
release. IBM recommends that your exit routine not depend on the system holding
or not holding SYSZTIOT.
Most exit routines described in this chapter must return to their caller. The only two
exceptions are the end-of-data and error analysis routines.
Exception codes are provided in the data control block (QISAM), or in the data
event control block (BISAM and BDAM). The data event control block is described
below, and the exception code lies within the block as shown in the illustration for
the data event control block. If a DCBD macro instruction is coded, the exception
code in a data control block can be addressed as two 1-byte fields, DCBEXCD1
and DCBEXCD2. The exception codes can be interpreted by referring to
Figure 136 on page 453, Figure 142 on page 461, and Figure 138 on page 457.
Status indicators are available only to the error analysis routine designated by the
SYNAD entry in the data control block or the data control block extension. Or, they
are available after I/O completion from BSAM or BPAM until the next WAIT or
CHECK for the DCB. A pointer to the status indicators is provided either in the
data event control block (BSAM, BPAM, and BDAM), or in register 0 (QISAM and
QSAM). The contents of registers on entry to the SYNAD exit routine are shown in
Figure 144 on page 464, Figure 143 on page 464, and Figure 145 on page 465.
The status indicators are shown in Figure 140 on page 459.
For BISAM, exception codes are returned by the control program after the
corresponding WAIT or CHECK macro instruction is issued, as indicated in
Figure 136.
Figure 140. Status Indicators for the SYNAD Routine—BDAM, BPAM, BSAM, and QSAM
(The figure lists, for each status indicator: its offset in the status indicator area,
byte, bit, meaning, and name.)
Note: If the sense bytes are X'10FE', the control program has set them to this
invalid combination because sense bytes could not be obtained from the
device as a result of recurring unit checks.
Register Contents
Figure 141 shows the contents of the registers when control is passed to the
EODAD routine.
Programming Considerations
You can treat your EODAD routine as a subroutine (and end by branching on
register 14) or as a continuation of the routine that issued the CHECK, GET, or
FEOV macro.
The EODAD routine generally is not regarded as being a subroutine. After control
passes to your EODAD routine, you can continue normal processing, such as
repositioning and resuming processing of the data set, closing the data set, or
processing another data set.
For BSAM, you must first reposition the data set that reached end-of-data if you
want to issue a BSP, READ, or WRITE macro. You can reposition your data set by
issuing a CLOSE TYPE=T macro instruction. If a READ macro is issued before the
data set is repositioned, unpredictable results occur.
For BPAM, you may reposition the data set by issuing a FIND or POINT macro.
(CLOSE TYPE=T with BPAM results in no operation performed.)
For QISAM, you can continue processing the input data set that reached
end-of-data by first issuing an ESETL macro to end the sequential retrieval, then
issuing a SETL macro to set the lower limit of sequential retrieval. You can then
issue GET macros to the data set.
Your task will abnormally end under either of the following conditions:
No exit routine is provided.
A GET macro is issued in the EODAD routine to the DCB that caused this
routine to be entered (unless the access method is QISAM).
For BSAM, BPAM, and QSAM your EODAD routine is entered with the
addressability (24- or 31-bit) of when you issued the macro that caused entry to
EODAD. This typically is a CHECK or GET macro. DCB EODAD identifies a routine
that resides below the line (RMODE is 24). DCBE EODAD identifies a routine that
may reside above the line. If it resides above the line, then all macros that might
detect an end-of-data must be issued in 31-bit mode. If both the DCB and DCBE
specify EODAD, the DCBE routine will be used.
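For example, a QSAM reader might identify a 31-bit capable EODAD routine through the DCBE (a sketch; the DD name and labels are illustrative):

INDCB    DCB   DDNAME=INPUT,DSORG=PS,MACRF=GM,DCBE=INDCBE
INDCBE   DCBE  EODAD=EOFRTN              Routine may reside above the line
*        All GET macros must then be issued in 31-bit mode
EOFRTN   DS    0H                        Entered at end-of-data
         CLOSE (INDCB)                   For example, close the data set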
For BDAM, BSAM, BPAM, and QSAM, the control program provides a pointer to
the status indicators shown in Figure 140 on page 459. The block being read or
written can be accepted or skipped, or processing can be terminated.
and register 0 contains the address of a work area containing the first 16 bytes
of the IOB; for other errors, the contents of register 1 are meaningless. After
appropriate analysis, the SYNAD exit routine should close the data set or end
the job step. If records are to be subsequently added to the data set using the
queued indexed sequential access method (QISAM), the job step should be
terminated by issuing an abend macro instruction. (Abend closes all open data
sets. However, an ISAM data set is only partially closed, and it can be
reopened in a later job to add additional records by using QISAM.) Subsequent
execution of a PUT macro instruction would cause reentry to the SYNAD exit
routine, because an attempt to continue loading the data set would produce
unpredictable results.
If a data set is being processed (scan mode), the address of the output buffer
in error is placed in register 1, the address of a work area containing the first
16 bytes of the IOB is placed in register 0, and the SYNAD exit routine is given
control when the next GET macro instruction is issued. Buffer scheduling is
suspended until the next GET macro instruction is reissued.
7. Block Could Not Be Reached (Input) condition is reported if the control
program's error recovery procedures encounter an uncorrectable error in
searching an index or overflow chain. The SYNAD exit routine is given control
when a GET macro instruction is issued for the first logical record of the
unreachable block.
8. Block Could Not Be Reached (Update) condition is reported if the control
program's error recovery procedures encounter an uncorrectable error in
searching an index or overflow chain.
If the error is encountered during closing of the data control block, bit 2 of
DCBEXCD2 is set to 1 and the SYNAD exit routine is given control
immediately. Otherwise, the SYNAD exit routine is given control when the next
GET macro instruction is issued.
9. Sequence Check condition is reported if a PUT macro instruction refers to a
record whose key has a smaller numeric value than the key of the record
previously referred to by a PUT macro instruction. The SYNAD exit routine is
given control immediately; the record is not transferred to secondary storage.
10. Duplicate Record condition is reported if a PUT macro instruction refers to a
record whose key duplicates the record previously referred to by a PUT macro
instruction. The SYNAD exit routine is given control immediately; the record is
not transferred to secondary storage.
11. Data Control Block Closed When Error Routine Entered condition is reported if
the control program's error recovery procedures encounter an uncorrectable
output error during closing of the data control block. Bit 5 or 7 of DCBEXCD1 is
set to 1, and the SYNAD exit routine is immediately given control. After
appropriate analysis, the SYNAD routine must branch to the address in return
register 14 so that the control program can finish closing the data control block.
12. Overflow Record condition is reported if the input record is an overflow record.
13. Incorrect Record Length condition is reported if the length of the record as
specified in the record-descriptor word (RDW) is larger than the value in the
DCBLRECL field of the data control block.
Register Contents
For a description of the register contents on entry to your SYNAD routine, see
“Status Information Following an Input/Output Operation” on page 452.
Figure 143 shows the register contents on entry to the SYNAD routine for BISAM.
Figure 144 shows the register contents on entry to the SYNAD routine for BDAM,
BPAM, BSAM, and QSAM.
Figure 144. Register Contents on Entry to SYNAD Routine—BDAM, BPAM, BSAM, and QSAM
Register Bits Meaning
0 0-7 Value to be added to the status indicator's address to provide the address of the first
CCW (QSAM only).
8-31 Address of the associated data event control block for BDAM, BPAM, and BSAM unless
bit 3 of register 1 is on; address of the status indicators shown in Figure 140 on
page 459 for QSAM. If bit 3 of register 1 is on, the failure occurred in CNTRL, NOTE,
POINT, or BSP and this field contains the address of an internal BSAM ECB.
1 0 Bit is on for error caused by input operation.
1 Bit is on for error caused by output operation.
2 Bit is on for error caused by BSP, CNTRL, or POINT macro instruction (BPAM and
BSAM only).
3 Bit is on if error occurred during update of existing record or if error did not prevent
reading of the record. Bit is off if error occurred during creation of a new record or if error
prevented reading of the record.
4 Bit is on if the request was invalid. The status indicators pointed to in the data event
control block are not present (BDAM, BPAM, and BSAM only).
5 Bit is on if an invalid character was found in paper tape conversion (BSAM and QSAM
only).
6 Bit is on for a hardware error (BDAM only).
7 Bit is on if no space was found for the record (BDAM only).
8-31 Address of the associated data control block.
2-13 0-31 Contents that existed before the macro instruction was issued.
14 0-7 Reserved.
8-31 Return address.
15 0-31 Address of the error analysis routine.
Figure 145 shows the register contents on entry to the SYNAD routine for QISAM.
Programming Considerations
For BSAM, BPAM, and QSAM your SYNAD routine is entered with the
addressability (24- or 31-bit) of when you issued the macro that caused entry to
SYNAD. This typically is a CHECK, GET, or PUT macro. DCB SYNAD identifies a
routine that resides below the line (RMODE is 24). DCBE SYNAD identifies a
routine that may reside above the line. If it resides above the line, then all macros
that might detect an I/O error must be issued in 31-bit mode. If both the DCB and
DCBE specify SYNAD, the DCBE routine will be used.
You can write a SYNAD routine to determine the cause and type of error that
occurred by examining:
The contents of the general registers
The data event control block (see “Status Information Following an Input/Output
Operation” on page 452)
The exceptional condition code
You can use the SYNADAF macro to perform this analysis automatically. This
macro produces an error message that can be printed by a later PUT, WRITE, or
WTO macro.
After completing the analysis, you can return control to the operating system or
close the data set. If you close the data set, note that you cannot use the
temporary close (CLOSE TYPE=T) option in the SYNAD routine. To continue
processing the same data set, you must first return control to the control program
by a RETURN macro. The control program then transfers control to your
processing program, subject to the conditions described below. Never attempt to
reread or rewrite the record, because the system has already attempted to recover
from the error.
Within the SYNAD routine, do not issue the FEOV macro against the data set for
which the SYNAD routine was entered.
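For example, a QSAM SYNAD routine might use SYNADAF to build the error message, print it, and return so that the EROPT option is applied (a sketch; MSGDCB is an assumed, already open message data set whose record length accommodates the SYNADAF message):

SYNADRTN SYNADAF ACSMETH=QSAM           Analyze error; message address in reg 1
         LR    2,1                      Save message address before the PUT
         PUT   MSGDCB,(2)               Write the error message
         SYNADRLS                       Release the SYNADAF work areas
         RETURN                         Control program then applies EROPT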
EROPT
When you are using GET and PUT to process a sequential data set, the operating
system provides three automatic error options (EROPT) to be used if there is no
SYNAD routine or if you want to return control to your program from the SYNAD
routine:
ACC—accept the erroneous block
SKP—skip the erroneous block
ABE—abnormally terminate the task.
These options are applicable only to data errors, because control errors result in
abnormal termination of the task. Data errors affect only the validity of a block of
data. Control errors affect information or operations necessary for continued
processing of the data set. These options are not applicable to output errors,
except output errors on the printer. If the EROPT and SYNAD fields are not
completed, ABE is assumed.
Because EROPT applies to a physical block of data, and not to a logical record,
use of SKP or ACC may result in incorrect assembly of spanned records.
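For example, a QSAM input DCB that skips blocks with uncorrectable data errors, without providing a SYNAD routine, might be coded as follows (a sketch; the DD name is illustrative):

INDCB    DCB   DDNAME=INPUT,DSORG=PS,MACRF=GM,EROPT=SKP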
ISAM
If the error analysis routine receives control from the close routine when indexed
sequential data sets are being created (the DCB is opened for QISAM load mode),
bit 3 of the IOBFLAGS field in the load mode buffer control table (IOBBCT) is set to
1. The DCBWKPT6 field in the DCB contains the address of a list of work area
pointers (ISLVPTRS). The pointer to the IOBBCT is at offset 8 in this list.
If the error analysis routine receives control from the CLOSE routine when indexed
sequential data sets are being processed using QISAM scan mode, bit 2 of the
DCB field DCBEXCD2 is set to 1.
Figure 146 gives the contents of registers 0 and 1 when a SYNAD routine specified
in a DCB gets control while indexed sequential data sets are being processed.
Figure 146. Register Contents for DCB-Specified ISAM SYNAD Routine
Register  BISAM                QISAM
0         Address of the DECB  0, or, for a sequence check, the address of a field
                               containing the higher key involved in the check
1         Address of the DECB  0
For information on QISAM error conditions and the meaning they have when the
ISAM interface to VSAM is being used, see Appendix E, “Using ISAM Programs
with VSAM Data Sets” on page 541.
The DCB exit list must begin on a fullword boundary and each entry in the list
requires one fullword. Each exit list entry is identified by a code in the high-order
byte, and the address of the routine, image, or area is specified in the 3 low-order
bytes. Codes and addresses for the exit list entries are shown in Figure 147.
Figure 147. DCB Exit List Format and Contents
Entry Type Hex Code 3-Byte Address—Purpose
Inactive entry 00 Ignore the entry; it is not active.
Input header label exit 01 Process a user input header label.
Output header label exit 02 Create a user output header label.
Input trailer label exit 03 Process a user input trailer label.
Output trailer label exit 04 Create a user output trailer label.
Data control block open exit 05 Take an exit during open processing.
End-of-volume exit 06 Take an end-of-volume exit.
JFCB exit 07 JFCB address for RDJFCB and OPEN TYPE=J macros.
08 Reserved.
09 Reserved.
User totaling area 0A Address of beginning of user's totaling area.
Block count unequal exit 0B Process tape block count discrepancy.
Defer input trailer label 0C Defer processing of a user input trailer label from
end-of-data until closing.
Defer nonstandard input trailer label 0D Defer processing a nonstandard input trailer label on
magnetic tape unit from end-of-data until closing (no exit
routine address).
0E-0F Reserved.
FCB image 10 Define an FCB image.
DCB abend exit 11 Examine the abend condition and select one of several
options.
QSAM parallel input 12 Address of the PDAB for which this DCB is a member.
Allocation retrieval list 13 Retrieve allocation information for one or more data sets
with the RDJFCB macro.
14 Reserved.
JFCBE exit 15 Take an exit during OPEN to allow user to examine
JCL-specified setup requirements for a 3800 printer.
16 Reserved.
OPEN/EOV nonspecific tape volume mount 17 Option to specify a tape volume serial number.
OPEN/EOV volume security/verification 18 Verify a tape volume and some security checks.
1A-7F Reserved.
Last entry 80 Treat this entry as the last entry in the list. This code can
be specified with any of the above but must always be
specified with the last entry.
You can activate or deactivate any entry in the list by placing the required code in
the high-order byte. Care must be taken, however, not to destroy the last entry
indication. The operating system routines scan the list from top to bottom, and the
first active entry found with the proper code is selected.
You can shorten the list during execution by setting the high-order bit to 1, and
extend it by setting the high-order bit to 0.
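For example, a program might deactivate an exit list entry by overwriting its code byte, leaving the separate last-entry word intact (a sketch; the labels and the DCB OPEN exit entry are illustrative):

EXLST    DS    0F                        Exit list on a fullword boundary
OPENENT  DC    X'05',AL3(OPENEXIT)       Active DCB OPEN exit entry
         DC    X'80000000'               Last entry (null)
*        Later, before another OPEN:
         MVI   OPENENT,X'00'             Deactivate the OPEN exit entry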
Exit routines identified in a DCB exit list are entered in 24-bit mode even if the rest
of your program is executing in 31-bit mode.
Register Contents
0 Variable; see exit routine description.
1 The 3 low-order bytes contain the address of the DCB currently being processed, except when
the user-label exits (X'01' - X'04' and X'0C'), user totaling exit (X'0A'), DCB abend exit
(X'11'), nonspecific tape volume mount exit (X'17'), or the tape volume security/verification exit
(X'18') is taken, when register 1 contains the address of a parameter list. The contents of the
parameter list are described in the explanation of each exit routine.
2-13 Contents before execution of the macro.
14 Return address (must not be altered by the exit routine).
15 Address of exit routine entry point.
The conventions for saving and restoring register contents are as follows:
The exit routine must preserve the contents of register 14. It need not preserve
the contents of other registers. The control program restores the contents of
registers 2 to 13 before returning control to your program.
The exit routine must not use the save area whose address is in register 13,
because this area is used by the control program. If the exit routine calls
another routine or issues supervisor or data management macros, it must
provide the address of a new save area in register 13.
The exit routine must not issue an access method macro that refers to the DCB
for which the exit routine was called, unless otherwise specified in the individual
exit routine descriptions that follow.
Programming Conventions
The allocation retrieval list must be below the 16 MB line, but the allocation return
area can be above the 16 MB line.
When you are finished obtaining information from the retrieval areas, free the
storage with a FREEMAIN or STORAGE macro.
You can use the IHAARL macro to generate and map the allocation retrieval list.
For more information on the IHAARL macro see DFSMS/MVS DFSMSdfp
Advanced Services.
Restrictions
When OPEN TYPE=J is issued, the X'13' exit has no effect. The JFCB exit at
X'07' can be used instead (see “JFCB Exit” on page 479).
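For example, the X'07' entry can give RDJFCB an area in which to return the JFCB (a sketch; the code X'87' combines the X'07' entry type with the X'80' last-entry flag, and the JFCB is 176 bytes):

JDCB     DCB   DDNAME=MYDD,EXLST=XLST07
XLST07   DC    X'87',AL3(JFCBAREA)       JFCB entry; also the last entry
JFCBAREA DS    CL176                     JFCB returned here by RDJFCB
         RDJFCB (JDCB)                   Read the JFCB before OPEN TYPE=J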
2. To delay the abend until all the DCBs in the same OPEN or CLOSE macro are
opened or closed
3. To ignore the abend condition and continue processing without making
reference to the DCB on which the abend condition was encountered, or
4. To try to recover from the error.
Not all of these options are available for each abend condition. Your DCB abend
exit routine must determine which option is available by examining the contents of
the option mask byte (byte 3) of the parameter list. The address of the parameter
list is passed in register 1. Figure 148 shows the contents of the parameter list and
the possible settings of the option mask when your routine receives control.
When your DCB abend exit routine returns control to the system control program
(this can be done using the RETURN macro), the option mask byte must contain
the setting that specifies the action you want to take. These actions and the
corresponding settings of the option mask byte are in Figure 149.
Your exit routine must inspect bits 4, 5, and 6 of the option mask byte (byte 3 of
the parameter list) to determine which options are available. If a bit is set to 1, the
corresponding option is available. Indicate your choice by inserting the appropriate
value in byte 3 of the parameter list, overlaying the bits you inspected. If you use a
value that specifies an option that is not available, the abend is issued immediately.
If the contents of bits 4, 5, and 6 of the option mask are 0, you must not change
the option mask. This unchanged option mask will result in a request for an
immediate abend.
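For example, a DCB abend exit routine might test whether the ignore option is available and request it (a sketch; offsets follow the text, where the option mask is byte 3 of the parameter list addressed by register 1, and bit 5 of that byte is mask X'04'):

ABNDEXIT TM    3(1),X'04'                Bit 5 on: ignore option available
         BZ    LEAVE                     Not available; leave mask unchanged
         MVI   3(1),4                    Request that the abend be ignored
LEAVE    RETURN                          Return to the system control program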
If bit 5 of the option mask is set to 1, you can ignore the abend by placing a value
of 4 in byte 3 of the parameter list. Processing on the current DCB stops. If you
subsequently attempt to use this DCB other than to issue CLOSE or FREEPOOL,
the results are unpredictable. If you ignore an error in end-of-volume, the DCB is
closed and control is returned to your program at the point that caused the
end-of-volume condition (unless the end-of-volume routines were called by the
close routines). If the end-of-volume routines were called by the close routines, an
ABEND macro will be issued even though the ignore option was selected.
If bit 6 of the option mask is set to 1, you can delay the abend by placing a value of
8 in byte 3 of the parameter list. All other DCBs being processed by the same
OPEN or CLOSE invocation will be processed before the abend is issued. For
end-of-volume, however, you can't delay the abend because the end-of-volume
routine never has more than one DCB to process.
If bit 4 of the option mask is set to 1, you can attempt to recover. Place a value of
12 in byte 3 of the parameter list and provide information for the recovery attempt.
Figure 150 lists the abend conditions for which recovery can be attempted.
Recovery Requirements
For most types of recoverable errors, you should supply a recovery work area (see
Figure 151 on page 474) with a new volume serial number for each volume
associated with an error.
If no new volumes are supplied for such errors, recovery will be attempted with the
existing volumes, but the likelihood of successful recovery is greatly reduced.
If you request recovery for system completion code 117, return code 3C; system
completion code 214, return code 0C; or system completion code 237, return code
0C, you do not need to supply new volumes or a work area. The condition that
caused the abend is disagreement between the DCB block count and the
calculated count from the hardware. To permit recovery, this disagreement is
ignored and the value in the DCB will be used.
If you request recovery for system completion code 237, return code 04, you don't
need to supply new volumes or a work area. The condition that caused the abend
is the disagreement between the block count in the DCB and that in the trailer
label. To permit recovery, this disagreement is ignored.
If you request recovery for system completion code 717, return code 10, you don't
need to supply new volumes or a work area. The abend is caused by an I/O error
during updating of the DCB block count. To permit recovery, the block count is not
updated. So, an abnormal termination with system completion code 237, return
code 04, may result when you try to read from the tape after recovery. You may
attempt recovery from the abend with system completion code 237, return code 04,
as explained in the preceding paragraph.
System completion codes and their associated return codes are described in
OS/390 MVS System Codes.
The work area that you supply for the recovery attempt must begin on a halfword
boundary and can contain the information described in Figure 151. Place a pointer
to the work area in the last 3 bytes of the parameter list pointed to by register 1
and described in Figure 148 on page 471.
If you acquire the storage for the work area by using the GETMAIN macro, you can
request that it be freed by a FREEMAIN macro after all information has been
extracted from it. Set the high-order bit of the option byte in the work area to 1 and
place the number of the subpool from which the work area was requested in byte 3
of the recovery work area.
Only one recovery attempt per data set is allowed during OPEN, CLOSE, or
end-of-volume processing. If a recovery attempt is unsuccessful, you cannot
request another recovery. The second time through the exit routine you may
request only one of the other options (if allowed): Issue the abend immediately,
ignore the abend, or delay the abend. If at any time you select an option that is not
allowed, the abend is issued immediately.
Note that, if recovery is successful, you still receive an abend message on your
listing. This message refers to the abend that would have been issued if the
recovery had not been successful.
When opening a data set for output, you can force the system to determine the
block size by setting the block size in the DCB to zero before returning from this
exit. If the zero value you supply is not changed by the DCB OPEN installation exit,
OPEN will determine a block size when OPEN takes control after return from the
DCB OPEN installation exit. See “System-Determined Block Size” on page 298.
This exit is mutually exclusive with the JFCBE exit. If you need both the JFCBE
and data control block OPEN exits, you must use the JFCBE exit to pass control to
your routines.
The DCB OPEN exit is intended for modifying or updating the DCB. System
functions should not be attempted in this exit before returning to OPEN processing;
in particular, dynamic allocation, OPEN, CLOSE, EOV, and DADSM functions
should not be invoked because of an existing OPEN enqueue on the SYSZTIOT
resources.
For an end-of-volume (EOV) condition, the EOV routine passes control to your
installation's nonstandard input trailer label routine, whether or not this exit code is
specified. For an end-of-data condition when this exit code is specified, the EOV
routine does not pass control to your installation's nonstandard input trailer label
routine. Instead, the close routine passes control to your installation's nonstandard
input trailer label routine.
The routine is entered during EOV processing. The trailer label block count is
passed in register 0. You may gain access to the count field in the DCB by using
the address passed in register 1 plus the proper displacement, which is given in
DFSMS/MVS Macro Instructions for Data Sets. If the block count in the DCB differs
from that in the trailer label when no exit routine is provided, the task is abnormally
terminated. The routine must terminate with a RETURN macro and a return code
that indicates what action is to be taken by the operating system, as shown in
Figure 152.
As with other exit routines, the contents of register 14 must be saved and restored
if any macros are used.
When you concatenate data sets with unlike attributes, no EOV exits are taken
when beginning each data set.
The system treats the volumes of a striped extended format data set as if they
were one volume. For such a data set your EOV exit is called only when the end of
the data set is reached and it is part of a like sequential concatenation.
When the EOV routine is entered, register 0 contains 0 unless user totaling was
specified. If you specified user totaling in the DCB macro (by coding OPTCD=T) or
in the DD statement for an output data set, register 0 contains the address of the
user totaling image area. The routine is entered after a new volume has been
mounted and all necessary label processing has been completed. If the volume is a
reel of magnetic tape, the tape is positioned after the tape mark that precedes the
beginning of the data.
You can use the EOV exit routine to take a checkpoint by issuing the CHKPT
macro, which is discussed in DFSMS/MVS Checkpoint/Restart. If a checkpointed
job step terminates abnormally, it can be restarted from the EOV checkpoint. When
the job step is restarted, the volume is mounted and positioned as on entry to the
routine. Restart becomes impossible if changes are made to the link pack area
(LPA) library between the time the checkpoint is taken and the time the job step is
restarted. When the step is restarted, pointers to EOV modules must be the same
as when the checkpoint was taken.
The EOV exit routine returns control in the same manner as the data control block
exit routine. The contents of register 14 must be preserved and restored if any
macros are used in the routine. Control is returned to the operating system by a
RETURN macro; no return code is required.
Multiple entries in the exit list can define FCBs. The OPEN and SETPRT
routines search the exit list for requested FCBs before searching SYS1.IMAGELIB.
The first 4 bytes of the FCB image contain the image identifier. To load the FCB,
this image identifier is specified in the FCB parameter of the DD statement, by the
SETPRT macro, or by the system operator in response to message IEC127D or
IEC129D.
For an IBM 3203, 3211, 3262, 4245, or 4248 Printer, the image identifier is followed
by the FCB image described in DFSMS/MVS DFSMSdfp Advanced Services. For a
3800 FCB image, see IBM 3800 Printing Subsystem Programmer’s Guide. For a
3800 Model 3 FCB image, see IBM 3800 Printing Subsystem Model 3
Programmer’s Guide: Compatibility.
The system searches the DCB exit list for an FCB image only when writing to a
printer that is allocated to the job step. The system does not search the DCB exit
list with a SYSOUT data set. Figure 153 on page 479 illustrates one way the exit
list can be used to define an FCB image.
         ...
         DCB   ...,EXLST=EXLIST
         ...
EXLIST   DS    0F
         DC    X'10'              Flag code for FCB image
         DC    AL3(FCBIMG)        Address of FCB image
         DC    X'80000000'        End of EXLST and a null entry
FCBIMG   DC    CL4'IMG1'          FCB identifier
         DC    X'00'              FCB is not a default
         DC    AL1(67)            Length of FCB
         DC    X'90'              Offset print line
*                                 16 character positions to the right
         DC    X'00'              Spacing is 6 lines per inch
         DC    5X'00'             Lines 2-6, no channel codes
         DC    X'01'              Line 7, channel 1
         DC    6X'00'             Lines 8-13, no channel codes
         DC    X'02'              Line 14, channel 2
         DC    5X'00'             Lines 15-19, no channel codes
         DC    X'03'              Line 20, channel 3
         DC    9X'00'             Lines 21-29, no channel codes
         DC    X'04'              Line 30, channel 4
         DC    19X'00'            Lines 31-49, no channel codes
         DC    X'05'              Line 50, channel 5
         DC    X'06'              Line 51, channel 6
         DC    X'07'              Line 52, channel 7
         DC    X'08'              Line 53, channel 8
         DC    X'09'              Line 54, channel 9
         DC    X'0A'              Line 55, channel 10
         DC    X'0B'              Line 56, channel 11
         DC    X'0C'              Line 57, channel 12
         DC    8X'00'             Lines 58-65, no channel codes
         DC    X'10'              End of FCB image
         ...
         END
//ddname  DD  UNIT=3211,FCB=(IMG1,VERIFY)
/*
JFCB Exit
This exit list entry does not define an exit routine. It is used with the RDJFCB
macro and OPEN TYPE=J. For each DCB specified by the RDJFCB macro, the macro
places a copy of the JFCB at the address specified in the X'07' DCB exit list
entry.
The area is 176 bytes and must begin on a fullword boundary. It must be located in
the user's region. Users running in 31-bit addressing mode must ensure that this
area is located below 16 megabytes virtual. The DCB can be either open or closed
when the RDJFCB macro is executed.
If RDJFCB fails while processing a DCB associated with your RDJFCB request,
your task is abnormally terminated. You cannot use the DCB abend exit to recover
from a failure of the RDJFCB macro. For more information about the RDJFCB
macro see DFSMS/MVS DFSMSdfp Advanced Services.
JFCBE Exit
JCL-specified setup requirements for the IBM 3800 Printing Subsystem cause a
JFCB extension (JFCBE) to be created to reflect those specifications. A JFCBE
exists if BURST, MODIFY, CHARS, FLASH, or any copy group is coded on the DD
statement. The JFCBE exit can be used to examine or modify those specifications
in the JFCBE. (Note: Although use of the JFCBE exit is still supported, its use is
not recommended.) The address of the routine should be placed in an exit list. (The
device allocated does not have to be a 3800.) This exit is taken during OPEN
processing and is mutually exclusive with the data control block OPEN exit. If you
need both the JFCBE and data control block OPEN exits, you must use the JFCBE
exit to pass control to your routines. Everything that you can do in a DCB OPEN
exit routine can also be done in a JFCBE exit. When you issue the SETPRT macro
to a SYSOUT data set, the JFCBE is further updated from the information in the
SETPRT parameter list.
When control is passed to your exit routine, the contents of register 1 will be the
address of the DCB being processed.
The area pointed to by register 0 will contain a 176-byte JFCBE followed by the
4-byte FCB identification that is obtained from the JFCB. If the FCB operand was
not coded on the DD statement, this FCB field will be binary zeros.
If your copy of the JFCBE is modified during an exit routine, you should indicate
this fact by turning on bit JFCBEOPN (X'80' in JFCBFLAG) in the JFCBE copy.
On return to OPEN, this bit indicates whether the system copy is to be updated.
The 4-byte FCB identification in your area will be used to update the JFCB
regardless of the bit setting. Checkpoint/restart also interrogates this bit to
determine which version of the JFCBE will be used at restart time. If this bit is not
on, the JFCBE generated by the restart JCL will be used.
The physical location of the labels on the data set depends on the data set
organization. For direct (BDAM) data sets, user labels are placed on a separate
user label track in the first volume. User label exits are taken only during execution
of the OPEN and CLOSE routines. Thus you can create or examine as many as
eight user header labels only during execution of OPEN and as many as eight
trailer labels only during execution of CLOSE. Because the trailer labels are on the
same track as the header labels, the first volume of the data set must be mounted
when the data set is closed.
For physical sequential (BSAM or QSAM) data sets, you can create or examine as
many as eight header labels and eight trailer labels on each volume of the data set.
For ASCII tape data sets, you can create an unlimited number of user header and
trailer labels. The user label exits are taken during OPEN, CLOSE, and EOV
processing.
To create or verify labels, you must specify the addresses of your label exit routines
in an exit list as shown in Figure 147 on page 468. Thus you can have separate
routines for creating or verifying header and trailer label groups. Care must be
taken if a magnetic tape is read backward, because the trailer label group is
processed as header labels and the header label group is processed as trailer
labels.
When your routine receives control, the contents of register 0 are unpredictable.
Register 1 contains the address of a parameter list. The contents of registers 2 to
13 are the same as when the macro instruction was issued. However, if your
program does not issue the CLOSE macro, or abnormally ends before issuing
CLOSE, the CLOSE macro will be issued by the control program, with
control-program information in these registers.
Offset   High-order byte   Bytes 1-3
   0                       Address of 80-byte label buffer area
   4     EOF flag          Address of DCB being processed
   8     Error flags       Address of status information
  12                       Address of user totaling image area

Figure 154. Parameter List Passed to User Label Exit Routine
The first address in the parameter list points to an 80-byte label buffer area. For
input, the control program reads a user label into this area before passing control to
the label routine. For output, the user label exit routine builds labels in this area and
returns to the control program, which writes the label. When an input trailer label
routine receives control, the EOF flag (high-order byte of the second entry in the
parameter list) is set as follows:
Bit 0 = 0: Entered at EOV
Bit 0 = 1: Entered at end-of-file
Bits 1-7: Reserved
When a user label exit routine receives control after an uncorrectable I/O error has
occurred, the third entry of the parameter list contains the address of the standard
status information. The error flag (high-order byte of the third entry in the parameter
list) is set as follows:
Bit 0 = 1: Uncorrectable I/O error
Bit 1 = 1: Error occurred during writing of updated label
Bits 2-7: Reserved
The fourth entry in the parameter list is the address of the user totaling image area.
This image area is the entry in the user totaling save area that corresponds to the
last record physically written on the volume. (The image area is discussed further
under “User Totaling for BSAM and QSAM” on page 488.)
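As a sketch, the two flag bytes in the parameter list can be decoded as shown below. The function and key names are illustrative, not taken from any macro library; only the bit positions come from the descriptions above:

```python
# Decode the EOF flag and error flag bytes of the user label exit
# parameter list. Bit 0 is the high-order bit (mask X'80').

def decode_eof_flag(flag: int) -> str:
    # Bit 0 = 0: entered at EOV; bit 0 = 1: entered at end-of-file
    return "end-of-file" if flag & 0x80 else "EOV"

def decode_error_flags(flag: int) -> dict:
    return {
        "uncorrectable_io_error": bool(flag & 0x80),        # bit 0
        "error_writing_updated_label": bool(flag & 0x40),   # bit 1
    }
```
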
Each routine must create or verify one label of a header or trailer label group, place
a return code in register 15, and return control to the operating system. The
operating system responds to the return code as shown in Figure 155 on
page 482.
You can create user labels only for data sets on magnetic tape volumes with IBM
standard labels or ISO/ANSI labels and for data sets on direct access volumes.
When you specify both user labels and IBM standard labels in a DD statement by
specifying LABEL=(,SUL) and there is an active entry in the exit list, a label exit is
always taken. Thus, a label exit is taken even when an input data set does not
contain user labels, or when no user label track has been allocated for writing
labels on a direct access volume. In either case, the appropriate exit routine is
entered with the buffer area address parameter set to 0. On return from the exit
routine, normal processing is resumed; no return code is necessary.
Figure 155. System Response to a User Label Exit Routine Return Code

Routine Type             Return Code   System Response
Input header or          0 (X'00')     Normal processing is resumed. If there are any
trailer label                          remaining labels in the label group, they are
                                       ignored.
                         4 (X'04')     The next user label is read into the label buffer
                                       area and control is returned to the exit routine.
                                       If there are no more labels in the label group,
                                       normal processing is resumed.
                         8¹ (X'08')    The label is written from the label buffer area
                                       and normal processing is resumed.
                         12¹ (X'0C')   The label is written from the label buffer area,
                                       the next label is read into the label buffer area,
                                       and control is returned to the label processing
                                       routine. If there are no more labels, processing
                                       is resumed.
Output header or         0 (X'00')     Normal processing is resumed; no label is written
trailer label                          from the label buffer area.
                         4 (X'04')     The user label is written from the label buffer
                                       area. Normal processing is resumed.
                         8 (X'08')     The user label is written from the label buffer
                                       area. If fewer than eight labels have been
                                       created, control is returned to the exit routine,
                                       which then creates the next label. If eight labels
                                       have been created, normal processing is resumed.
Label exits are not taken for system output (SYSOUT) data sets, or for data sets on
volumes that do not have standard labels. For other data sets, exits are taken as
follows:
- When an input data set is opened, the input header label exit 01 is taken. If
  the data set is on tape being opened for RDBACK, user trailer labels are
  processed.
- When an output data set is opened, the output header label exit 02 is taken.
  However, if the data set already exists and DISP=MOD is coded in the DD
  statement, the input trailer label exit 03 is taken to process any existing
  trailer labels. If the input trailer label exit 03 does not exist, the
  deferred input trailer label exit 0C is taken if it exists; otherwise, no
  label exit is taken. For tape, these trailer labels are overwritten by the
  new output data or by EOV or close processing when new standard trailer
  labels are written. For direct access devices, these trailer labels still
  exist unless rewritten by EOV or close processing in an output trailer label
  exit.
- When an input data set reaches EOV, the input trailer label exit 03 is taken.
  If the data set is on tape opened for RDBACK, header labels are processed.
  The input trailer label exit 03 is not taken if you issue an FEOV macro. If a
  deferred input trailer label exit 0C is present, and an input trailer label
  exit 03 is not present, the 0C exit is taken. After switching volumes, the
  input header label exit 01 is taken. If the data set is on tape opened for
  RDBACK, trailer labels are processed.
- When an output data set reaches EOV, the output trailer label exit 04 is
  taken. After switching volumes, the output header label exit 02 is taken.
- When an input data set reaches end-of-data, the input trailer label exit 03
  is taken before the EODAD exit, unless the DCB exit list contains a deferred
  input trailer label exit 0C.
- When an input data set is closed, no exit is taken unless the data set was
  previously read to end-of-data and the deferred input trailer label exit 0C
  is present. If so, the 0C exit is taken to process trailer labels, or, if the
  tape is opened for RDBACK, header labels.
- When an output data set is closed, the output trailer label exit 04 is taken.
To process records in reverse order, a data set on magnetic tape can be read
backward. When you read backward, header label exits are taken to process trailer
labels, and trailer label exits are taken to process header labels. The system
presents labels from a label group in ascending order by label number, which is the
order in which the labels were created. If necessary, an exit routine can determine
label type (UHL or UTL) and number by examining the first four characters of each
label. Tapes with IBM standard labels and direct access devices can have as many
as eight user labels. Tapes with ISO/ANSI labels can have an unlimited number of
user labels.
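For example, an exit routine can classify a label from those first four characters. A minimal sketch (the helper name is illustrative):

```python
# Determine user label type and number from the first four characters:
# UHL1-UHL8 for header labels, UTL1-UTL8 for trailer labels. A tape
# read backward presents trailer labels first, so an exit routine can
# use this to decide how to treat the 80-byte label buffer.

def classify_label(label: str):
    kind = {"UHL": "header", "UTL": "trailer"}[label[:3]]
    return kind, int(label[3])
```
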
If an uncorrectable error occurs during reading or writing of a user label, the system
passes control to the appropriate exit routine, with the third word of the parameter
list flagged and pointing to status information.
After an input error, the exit routine must return control with an appropriate return
code (0 or 4). No return code is required after an output error. If an output error
occurs while the system is opening a data set, the data set is not opened (DCB is
flagged) and control is returned to your program. If an output error occurs at any
other time, the system attempts to resume normal processing.
Open or EOV calls this exit when either must issue mount message IEC501A or
IEC501E to request a scratch tape volume. Open issues the mount message if you
specify the DEFER parameter with the UNIT option, and you either didn't specify a
volume serial number in the DD statement or you specified 'VOL=SER=SCRTCH'.
EOV always calls this exit for a scratch tape volume request.
This user exit gets control in the key and state of the program that issued the
OPEN or EOV, and no locks are held. After you are in control, you must provide a
return code in register 15.
If OPEN or EOV finds that the volume pointed to by register 0 is being used either
by this or by another job (an active ENQ on this volume), it takes this exit again
and continues to do so until you either specify another volume serial number or
request a scratch volume. If the volume you specify is available but is rejected by
OPEN or EOV for some reason (I/O errors, expiration date, password check, and
so forth), this exit is not taken again.
When this exit gets control, register 1 points to the parameter list described by the
IECOENTE macro. Figure 156 shows this parameter list.
OENTOEOV
set to 0 if OPEN called this exit; set to 1 if EOV called this exit.
OENTNTRY
set to 1 if this is not the first time this exit was called because the requested
tape volume is being used by this or any other job.
OENTOPTN
contains the OPEN options from the DCB parameter list (OUTPUT, INPUT,
OUTIN, INOUT, and so forth). For EOV processing, the options byte in the
DCB parameter list indicates how EOV is processing this volume. For example,
if you open a tape volume for INOUT and EOV is called during an input
operation on this tape volume, the DCB parameter list and OENTOPTN are set
to indicate INPUT.
OENTVSRA
points to the last volume serial number you requested in this exit but was in
use either by this or another job. OENTVSRA is set to 0 the first time this exit
is called.
OENTJFCB
points to the OPEN or EOV copy of the JFCB. The high order bit is always on,
indicating that this is the end of the parameter list.
OENTREGS
starts the register save area used by OPEN or EOV. Do not use this save area
in this user exit.
You do not have to preserve the contents of any register other than register 14.
The operating system restores the contents of registers 2 through 13 before it
returns to OPEN or EOV and before it returns control to the original calling
program.
Do not use the save area pointed to by register 13; the operating system uses it. If
you call another routine, or issue a supervisor or data management macro in this
user exit, you must provide the address of a new save area in register 13.
This user exit gets control in the key and state of the program that issued the
OPEN or EOV request, and no locks are held. After you are in control, you must
provide a return code in register 15.
When this exit gets control, register 1 points to the parameter list described by the
IECOEVSE macro. The parameter list is shown in Figure 158 on page 487.
.....
+OEVSID    DS    CL4            ID FIELD = OEVS
+OEVSFLG   DS    X              FLAGS BYTE
+OEVSEOV   EQU   X'80'          0=OPEN, 1=EOV
+OEVSFILE  EQU   X'01'          0=1ST FILE, 1=SUBSEQ FILE
*                               BITS 1 THROUGH 6 RESERVED
+OEVSOPTN  DS    X              OPEN OPTION (OUTPUT/INPUT/...)
+OEVSMASK  EQU   X'0F'          MASK
+OEVSRSVD  DS    XL2            RESERVED
+OEVSDCBA  DS    A              ADDRESS OF USER DCB
+OEVSVSRA  DS    A              ADDRESS OF 6-BYTE VOLSER
+OEVSHDR1  DS    A              ADDRESS OF HDR1/EOF1
+OEVSJFCB  DS    A              ADDRESS OF O/C/E COPY OF JFCB
+OEVSLENG  EQU   *-OEVSID       PLIST LENGTH
+OEVSREGS  DS    6F             REGISTER SAVE AREA
+OEVSAREA  EQU   *-OEVSID       MACRO LENGTH
.....
OEVSFLG
a flag field containing two flags.
OEVSEOV is set to 0 if OPEN called this exit; set to 1 if EOV called this exit.
OEVSFILE is set to 0 if the first data set on the volume is to be written; set to 1
if this is not the first data set on the volume to be written. This bit is always 0
for INPUT processing.
OEVSOPTN
a 1-byte field containing the OPEN options from the DCB parameter list
(OUTPUT, INPUT, INOUT, and so forth). For EOV processing, this byte
indicates how EOV is processing this volume. For example, if you opened a
tape volume for OUTIN and EOV is called during an output operation on the
tape volume, the DCB parameter list and OEVSOPTN are set to indicate
OUTPUT.
OEVSVSRA
a pointer to the current volume serial number that OPEN or EOV is processing.
OEVSHDR1
a pointer to a HDR1 label, if one exists; or an EOF1 label, if you are creating
other than the first data set on this volume.
OEVSJFCB
a pointer to the OPEN, CLOSE, or EOV copy of the JFCB. The high-order bit is
always on, indicating that this is the end of the parameter list.
OEVSREGS
a register save area used by OPEN or EOV. Do not use this save area in this
user exit.
Register   Contents
0          Variable
1          Address of the parameter list for this exit
2-13       Contents of the registers before the OPEN or EOV was issued
14         Return address (you must preserve the contents of this register in this user exit)
15         Entry point address of this user exit
You do not have to preserve the contents of any register other than register 14.
The operating system restores the contents of registers 2 through 13 before it
returns to OPEN or EOV and before it returns control to the original calling
program.
Do not use the save area pointed to by register 13; the operating system uses it. If
you call another routine or issue a supervisor or data management macro in this
user exit, you must provide the address of a new save area in register 13.
To request user totaling, you must specify OPTCD=T in the DCB macro instruction
or in the DCB parameter of the DD statement. The area in which you collect the
control data (the user totaling area) must be identified to the control program by an
entry of X'0A' in the DCB exit list. OPTCD=T cannot be specified for SYSIN or
SYSOUT data sets.
The user totaling area, an area in storage that you provide, must begin on a
halfword boundary and be large enough to contain your accumulated data plus a
2-byte length field. The length field must be the first 2 bytes of the area and specify
the length of the complete area. A data set for which you have specified user
totaling (OPTCD=T) will not be opened if either the totaling area length or the
address in the exit list is 0, or if there is no X'0A' entry in the exit list.
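As a rough sketch (Python stands in for the assembler interface, and the helper name is illustrative), the totaling area's layout, a 2-byte length field that covers the whole area, followed by your accumulated data, can be modeled as:

```python
import struct

# Build a user totaling area: a 2-byte length field, which includes
# itself, followed by room for the accumulated control data.

def make_totaling_area(data_len: int) -> bytearray:
    total_len = 2 + data_len          # length field counts itself
    area = bytearray(total_len)
    struct.pack_into(">H", area, 0, total_len)
    return area
```

If the length in this field is 0, or the exit list lacks the X'0A' entry, the data set is not opened, as described above.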
The control program establishes a user totaling save area, where the control
program preserves an image of your totaling area, when an I/O operation is
scheduled. When the output user label exits are taken, the address of the save
area entry (user totaling image area) corresponding to the last record physically
written on a volume is passed to you in the fourth entry of the user label parameter
list. (This parameter list is described in “Open/Close/EOV Standard User Label Exit”
on page 480.) When an EOV exit is taken for an output data set and user totaling
has been specified, the address of the user totaling image area is in register 0.
When using user totaling for an output data set, that is, when creating the data set,
you must update your control data in your totaling area before issuing a PUT or a
WRITE macro. The control program places an image of your totaling area in the
user totaling save area when an I/O operation is scheduled. A pointer to the save
area entry (user totaling image area) corresponding to the last record physically
written on the volume is passed to you in your label processing routine. Thus you
can include the control total in your user labels.
When subsequently using this data set for input, you can collect the same
information as you read each record and compare this total with the one previously
stored in the user trailer label. If you have stored the total from the preceding
volume in the user header label of the current volume, you can process each
volume of a multivolume data set independently and still maintain this system of
control.
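The scheme above, accumulate while writing, store the total in a user trailer label, recompute and compare on input, can be sketched like this (all names and the record shape are illustrative):

```python
# Sketch of the control-total scheme: accumulate a running total as
# records are written, keep the total that corresponds to the last
# record on the volume (as if stored in a user trailer label), then
# verify it when the data set is read back.

def running_total(records):
    total = 0
    for rec in records:
        total += rec["amount"]   # whatever control data you total
        yield rec, total

records = [{"amount": 5}, {"amount": 7}, {"amount": 3}]
*_, (last_record, trailer_total) = running_total(records)  # "UTL" value

# On input, recompute over the records read and compare.
check = sum(r["amount"] for r in records)
assert check == trailer_total
```
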
When variable-length records are specified with the totaling function for user labels,
special considerations are necessary. Because the control program determines
whether a variable-length record will fit in a buffer after a PUT or a WRITE has
been issued, the total you have accumulated can include one more record than is
really written on the volume. For variable-length spanned records, the accumulated
total will include the control data from the volume-spanning record although only a
segment of the record is on that volume. However, when you process such a data
set, the volume-spanning record or the first record on the next volume will not be
available to you until after the volume switch and user label processing are
completed. Thus the totaling information in the user label cannot agree with that
developed during processing of the volume.
One way you can resolve this situation is to maintain, when you are creating a data
set, control data about each of the last two records and include both totals in your
user labels. Then the total related to the last complete record on the volume and
the volume-spanning record or the first record on the next volume would be
available to your user label routines. During subsequent processing of the data set,
your user label routines can determine if there is agreement between the generated
information and one of the two totals previously saved.
When the totaling function for user labels is selected with DASD devices and
secondary space is specified, the accumulated total can be one less than the
number actually written.
Only standard label formats are used on direct access volumes. Volume, data set,
and optional user labels are used (see Figure 159). In the case of direct access
volumes, the data set label is the data set control block (DSCB).
[Figure 159 shows the layout of a direct access volume: the IPL records and the
volume label (with optional additional labels) on track 0 of cylinder 0; the
VTOC with its DSCBs, including the VTOC DSCB and the free space DSCB; and the
remaining tracks of the volume, which are unused storage area for data sets.]
Volume-Label Group
The volume-label group immediately follows the first two initial program loading
(IPL) records on track 0 of cylinder 0 of the volume. It consists of the initial volume
label at record 3 plus a maximum of seven additional volume labels. The initial
volume label identifies a volume and its owner, and is used to verify that the correct
volume is mounted. It can also be used to prevent use of the volume by
unauthorized programs. The additional labels can be processed by an installation
routine that is incorporated into the system.
The format of the data portion of the direct access volume label group is shown in
Figure 160 on page 494.
80-Byte Physical Record (as many as seven additional volume labels can follow)

Field   Length (bytes)   Contents
1       3                Volume Label Identifier (VOL)
2       1                Volume Label Number (1)
3       6                Volume Serial Number
4       1                Volume Security
5       5                VTOC Pointer
6       21               Reserved (Blank)
7       14               Owner Identification
8       29               Blank

Figure 160. Initial Volume Label Format
The operating system identifies an initial volume label when, in reading the initial
record, it finds that the first 4 characters of the record are VOL1. That is, they
contain the volume label identifier and the volume label number. The initial volume
label is 80 bytes. The format of an initial volume label is described in the
following text.
Volume Label Number:
Field 2 identifies the relative position of the volume label in a volume label
group. It must be written as X'F1'.
Volume Serial Number:
Field 3 contains a unique identification code assigned when the volume enters
the system. You can place the code on the external surface of the volume for
visual identification. The code is normally numeric (000000 through 999999),
but can be any 1 to 6 alphanumeric or national (#, $, @) characters, or a
hyphen (X'60'). If this field is fewer than 6 characters, it is padded on the
right with blanks.
Volume Security:
Field 4 is reserved for use by installations that want to provide security for volumes.
Make this field an X'C0' unless you have your own security processing routines.
VTOC Pointer:
Field 5 of direct access volume label 1 contains the address of the VTOC in the
form of CCHHR.
Reserved:
Field 6 is reserved for possible future use, and should be left blank.
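A sketch of slicing an 80-byte initial volume label into the fields of Figure 160 (the function name and the sample label content are illustrative; the offsets are the cumulative field lengths shown in the figure):

```python
# Slice an 80-byte VOL1 record into its fields. The field lengths
# from Figure 160 sum to 80: 3+1+6+1+5+21+14+29.

def parse_vol1(record: bytes) -> dict:
    assert len(record) == 80
    layout = [("label_id", 3), ("label_number", 1), ("volume_serial", 6),
              ("security", 1), ("vtoc_pointer", 5), ("reserved", 21),
              ("owner", 14), ("blank", 29)]
    fields, offset = {}, 0
    for name, length in layout:
        fields[name] = record[offset:offset + length]
        offset += length
    return fields

# Hypothetical label: identifier VOL, number 1, serial ABCDEF.
parsed = parse_vol1(b"VOL1ABCDEF" + b" " * 70)
```
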
Field 8 is left blank.

User Label Groups:
Each group can include as many as eight labels, but the space required for both
groups must not be more than one track on a direct access storage device. The
current minimum track size allows a maximum of eight labels, including both header
and trailer labels. So, a program becomes device-dependent (among direct access
storage devices) when it creates more than eight labels.
User Header Label Group: The operating system writes these labels as directed
by the processing program recording the data set. The first 4 characters of the user
header label must be UHL1,..., UHL8; you can specify the remaining 76 characters.
When the data set is read, the operating system makes the user header labels
available to the problem program for processing.
User Trailer Label Group: These labels are recorded (and processed) as
explained in the preceding text for user header labels, except that the first 4
characters must be UTL1,...,UTL8.
Label Identifier:
Field 1 shows the kind of user header label. UHL means a user header label; UTL
means a user trailer label.
Label Number:
Field 2 identifies the relative position (1 to 8) of the label within the user label
group.
User-Specified:
DBCS support is not used to create characters; it is used to print and copy DBCS
characters already in the data set. To print and copy DBCS characters, use the
access method services commands PRINT and REPRO. See DFSMS/MVS Access
Method Services for ICF for information on using PRINT and REPRO with DBCS
data.
When the data has a mixture of DBCS and SBCS strings, you must use two special
delimiters, SO (shift out) and SI (shift in), which designate where a DBCS string
begins and where it ends. SO tells you when you are leaving an SBCS string, and
SI tells you when you are returning to an SBCS string. Use the PRINT and REPRO
commands to insert the SO and SI characters around the DBCS data.
Fixed-Length Records
Because inserting of SO and SI characters increases the output record length, you
must define the output data set with enough space in the output record. The record
length of the output data set must be equal to the input data set's record length
plus the additional number of bytes necessary to insert the SO and SI pairs. Each
SO and SI pair consists of 2 bytes. In the following example for a fixed-length
record, the input record length is 80 bytes and consists of one DBCS string
surrounded by an SO and SI pair. The output record length would be 82 bytes,
which is correct.
Input record length = 8ð; number of SO and SI pairs = 1
Output record length = 82 (correct length)
An output record length of 84 bytes, for example, would be too large and would
result in an error. An output record length of 80 bytes, for example, would be too
small because there would not be room for the SO and SI pair. If the output record
length is too small or too large, an error message will be issued, a return code of
12 will be returned from IEBGENER, and the command will be ended.
Variable-Length Records
Because insertion of SO and SI characters increases the output record length, you
must define the output data set with enough space in the output record. The input
data set's record length plus the additional number of bytes necessary to insert the
SO and SI pairs must not exceed the maximum record length of the output data
set. Each SO and SI pair consists of 2 bytes. If the output record length is too
small, an error message will be issued, a return code of 12 will be returned from
IEBGENER, and the command will be ended.
In the following example for a variable-length record, the input record length
is 50 bytes and consists of four DBCS strings, each surrounded by an SO and SI
pair. The output record length of 50 bytes is too small because the SO and SI
pairs add eight extra bytes to the record length. The output record length
should be at least 58 bytes.
Input record length = 5ð; number of SO and SI pairs = 4
Output record length = 5ð (too small; should be at least 58 bytes)
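The record-length arithmetic for both cases can be sketched as follows (the function name is illustrative; the rule, 2 extra bytes per SO/SI pair, is from the text above):

```python
# Output record length needed when SO/SI delimiters are inserted
# around DBCS strings: each SO and SI pair adds 2 bytes. For
# fixed-length records the output length must equal this value;
# for variable-length records it is the minimum required.

def required_output_length(input_length: int, so_si_pairs: int) -> int:
    return input_length + 2 * so_si_pairs

# Fixed-length example from the text: 80-byte input, one pair -> 82.
# Variable-length example: 50-byte input, four pairs -> at least 58.
```
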
Use of BDAM is not recommended; use VSAM instead. If you use BDAM, be aware
of the following:
- Keyed blocks use hardware keys, which are less efficient than VSAM keys.
- BDAM does not support extended format data sets.
- The use of relative track addressing or actual device addresses can easily
  lead to program logic that depends on characteristics unique to certain
  device types.
- Updating R0 can be inefficient.
- Load mode does not support 31-bit addressing mode.
- The problem program must synchronize all I/O operations with a CHECK or a
  WAIT macro.
- The problem program must block and unblock its own input and output records.
  (BDAM only reads and writes data blocks.)
You can find data blocks within a data set with one of the following addressing
techniques:
- Actual device addresses.
- Relative track address technique, which locates a track on a direct access
storage device starting at the beginning of the data set.
- Relative block address technique, which locates a fixed-length data block
starting from the beginning of the data set.
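As a rough model of the difference between the last two techniques, the sketch below (illustrative only; a real BDAM block reference depends on the DCB and the device geometry) converts a relative block number into a relative track and a record number on that track, assuming a fixed number of blocks per track:

```python
def block_to_track_record(relative_block, blocks_per_track):
    """Map a 0-based relative block number to (relative track, record).
    Record numbers on a track are 1-based; R0 is the capacity record."""
    track = relative_block // blocks_per_track
    record = relative_block % blocks_per_track + 1
    return track, record

# With 3 blocks per track, relative block 7 is the 2nd record on track 2.
assert block_to_track_record(7, 3) == (2, 2)
```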
Processing
Although you can process a direct data set sequentially using either the queued
access method or the basic access method, you cannot read record keys using the
queued access method. When you use the basic access method, each unit of data
transmitted between virtual storage and an I/O device is regarded by the system as
a record. If, in fact, it is a block, you must perform any blocking or deblocking
required. For that reason, the LRECL field is not used when processing a direct
data set. Only BLKSIZE must be specified when you read, add, or update records
on a direct data set.
If dynamic buffering is specified for your direct data set, the system will provide a
buffer for your records. If dynamic buffering is not specified, you must provide a
buffer for the system to use.
The discussion of direct access storage devices shows that record keys are
optional. If they are specified, they must be used for every record and must be of a
fixed length.
Organizing
In a direct data set, there is a relationship between a control number (or
identification of each record) and its location on the direct access volume.
Therefore, you can access a record without an index search. You determine the
actual organization of the data set. If the data set has been carefully organized,
location of a particular record takes less time than with an indexed sequential data
set.
You can use direct addressing to develop the organization of your data set. When
you use direct addresses, the location of each record in the data set is known.
By Range of Keys
If format-F records with keys are being written, the key of each record can be used
to identify the record. For example, a data set with keys ranging from 0 to 4999
should be allocated space for 5000 records. Each key relates directly to a location
that you can refer to as a relative record number. Therefore, each record should
be assigned a unique key.
If identical keys are used, it is possible, during periods of high processor and
channel activity, to skip the desired record and retrieve the next record on the track.
The main disadvantage of this type of organization is that records might not exist
for many of the keys, even though space has been reserved for them.
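Under this organization the key itself is the address. A minimal sketch of the key-to-slot mapping (the helper names are illustrative, not part of any access method interface):

```python
def relative_record_number(key, low_key):
    """Each key maps directly to a relative record number."""
    return key - low_key

def records_to_allocate(low_key, high_key):
    """Space must be reserved for every key in the range,
    whether or not a record exists for it."""
    return high_key - low_key + 1

# Keys 0 through 4999 need space for 5000 records.
assert records_to_allocate(0, 4999) == 5000
assert relative_record_number(4999, 0) == 4999
```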
By Number of Records
Space could be allocated based on the number of records in the data set rather
than on the range of keys. Allocating space based on the number of records
requires the use of a cross-reference table. When a record is written in the data
set, you must note the physical location as a relative block number, an actual
address, or as a relative track and record number. The addresses must then be
stored in a table that is searched when a record is to be retrieved. Disadvantages
are that cross-referencing can be used efficiently only with a small data set; storage
is required for the table, and processing time is required for searching and updating
the table.
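Conceptually, the cross-reference table is just a key-to-location map maintained by your program rather than by the access method. A sketch (the stored location could equally be an actual address or a relative track and record number):

```python
# Cross-reference table: record key -> relative block number where
# the record was written. The program must maintain this itself.
xref = {}

def note_location(key, relative_block):
    xref[key] = relative_block

def find_location(key):
    return xref.get(key)  # None if no record was ever written for key

note_location("SMITH", 42)
assert find_location("SMITH") == 42
assert find_location("JONES") is None
```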
Creating
After the organization of a direct data set has been determined, the process of
creating it is almost identical to creating a sequential data set. The BSAM DCB
macro should be used with the WRITE macro (the form used to allocate a direct
data set). Issue the WRITE and CHECK macros in 24-bit mode. The following
parameters must be specified in the DCB macro:
DSORG=PS or PSU
DEVD=DA or omitted
MACRF=WL
The DD statement must specify direct access (DSORG=DA or DAU). If keys are
used, a key length (KEYLEN) must also be specified. Record length (LRECL) need
not be specified, but can provide compatibility with sequential access method
processing of a direct data set.
Note: DSORG and KEYLEN can be specified through data class. For more
information on data class, see Chapter 22, “Specifying and Initializing a
Data Control Block” on page 293.
If a direct data set is created and updated or read within the same job step, and the
OPTCD parameter is used in the creation, updating, or reading of the data set,
different DCBs and DD statements should be used.
Format-F records are written sequentially as they are presented. When a track is
filled, the system automatically writes the capacity record and advances to the next
track.
Note: Because of the form in which relative track addresses are recorded, direct
data sets whose records are to be identified by means other than actual
address must be limited in size to no more than 65536 tracks for the entire
data set.
Example
In the example problem in Figure 162, a tape containing 204-byte records arranged
in key sequence is used to allocate a direct data set. A 4-byte binary key for each
record ranges from 1000 to 8999, so space for 8000 records is requested.
//DAOUTPUT DD DSNAME=SLATE.INDEX.WORDS,DCB=(DSORG=DA, C
// BLKSIZE=2OO,KEYLEN=4,RECFM=F),SPACE=(2O4,8OOO),---
//TAPINPUT DD ---
...
DIRECT START
...
L 9,=F'1000'
OPEN (DALOAD,(OUTPUT),TAPEDCB)
LA 10,COMPARE
NEXTREC GET TAPEDCB
LR 2,1
COMPARE C 9,0(2) Compare key of input against
*                          control number
BNE DUMMY
WRITE DECB1,SF,DALOAD,(2) Write data record
CHECK DECB1
AH 9,=H'1'
B NEXTREC
DUMMY C 9,=F'8999' Have 8000 records been written?
BH ENDJOB
WRITE DECB2,SD,DALOAD,DUMAREA Write dummy
CHECK DECB2
AH 9,=H'1'
BR 10
INPUTEND LA 10,DUMMY
BR 10
ENDJOB CLOSE (TAPEDCB,,DALOAD)
...
DUMAREA DS 8F
DALOAD DCB DSORG=PS,MACRF=(WL),DDNAME=DAOUTPUT, C
DEVD=DA,SYNAD=CHECKER,---
TAPEDCB DCB EODAD=INPUTEND,MACRF=(GL), ---
...
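The logic of the load loop in the figure, restated: write a real record when the input key matches the control number, and a dummy record for every key that has no input record. A conceptual sketch (dummy records are represented by None here; real BDAM dummy records are flagged differently):

```python
def load_direct(input_records, low_key=1000, high_key=8999):
    """Model of the load loop: input (key, data) pairs arrive in
    ascending key order; every key in the range gets a slot, and keys
    with no input record are filled with a dummy record (None here)."""
    data_set = []
    pending = iter(input_records)
    nxt = next(pending, None)
    for key in range(low_key, high_key + 1):
        if nxt is not None and nxt[0] == key:
            data_set.append(nxt[1])      # real data record
            nxt = next(pending, None)
        else:
            data_set.append(None)        # dummy record
    return data_set

ds = load_direct([(1000, "A"), (1002, "B")])
assert len(ds) == 8000                   # space for keys 1000..8999
assert ds[0] == "A" and ds[1] is None and ds[2] == "B"
```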
Referring to a Record
After you have determined how your data set is to be organized, you must consider
how the individual records will be referred to when the data set is updated or new
records are added. The record identification can be represented in any of the
following forms.
Record Key
In addition to the relative track or block address, you specify the address of a
virtual storage location containing the record key. The system computes the actual
track address and searches for the record with the correct key.
Note: Because of the form in which relative track addresses are recorded, direct
data sets whose records are to be identified by means other than actual
address must be limited in size to no more than 65536 tracks for the entire
data set.
Actual Address
You supply the actual address in the standard 8-byte form—MBBCCHHR.
Remember that the use of an actual address might force you to specify that the
data set is unmovable. In that case the data set will be ineligible to be
system-managed.
Extended Search
You request that the system begin its search with a specified starting location and
continue for a certain number of records or tracks. You can use the extended
search option to request a search for unused space where a record can be added.
To use the extended search option, you must specify in the DCB (DCBLIMCT) the
number of tracks (including the starting track) or records (including the starting
record) that are to be searched. If you specify a number of records, the system
might actually examine more than this number. In searching a track, the system
searches the entire track (starting with the first record); it therefore might examine
records that precede the starting record or follow the ending record.
If the DCB specifies a number equal to or greater than the number of tracks
allocated to the data set or the number of records within the data set, the entire
data set is searched in the attempt to satisfy your request.
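The search limit can be modeled as follows (a sketch; it assumes the search wraps from the end of the data set back to the beginning, and counts whole tracks as the text describes):

```python
def tracks_searched(start_track, dcblimct, total_tracks):
    """Tracks examined when DCBLIMCT counts tracks. If the limit covers
    the whole data set, every track is searched; each track named here
    is searched in full, starting from its first record."""
    if dcblimct >= total_tracks:
        return list(range(total_tracks))
    return [(start_track + i) % total_tracks for i in range(dcblimct)]

assert tracks_searched(8, 3, 10) == [8, 9, 0]
assert tracks_searched(2, 99, 10) == list(range(10))
```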
Feedback Option
The feedback option specifies that the system is to provide the address of the
record requested by a READ or WRITE macro. This address can be in the same
form that was presented to the system in the READ or WRITE macro, or as an
8-byte actual address. You can specify the feedback option in the OPTCD
parameter of the DCB and in the READ or WRITE macro. If the feedback option is
omitted from the DCB, but is requested in a READ or WRITE macro, an 8-byte
actual address is returned to you.
If you want to add a record by passing a relative block address, the system converts
the address to an actual track address. That track is searched for a dummy record.
If a dummy record is found, the new record is written in place of it. If there is no
dummy record on the track, you are informed that the write operation did not take
place. If you request the extended search option, the new record will be written in
place of the first dummy record found within the search limits you specify. If none is
found, you are notified that the write operation could not take place.
In the same way, a reference by relative track address causes the record to be
written in place of a dummy record on the referenced track or the first within the
search limits, if requested. If extended search is used, the search begins with the
first record on the track. Without extended search, the search can start at any
record on the track. Therefore, records that were added to a track are not
necessarily located on the track in the same sequence they were written in.
You will have to retrieve the record first (using a READ macro), test for a dummy
record, update, and write.
To add a new record, use a relative track address. The system examines the
capacity record to see if there is room on the track. If there is, the new record is
written. Under the extended search option, the record is written in the first available
area within the search limit.
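The dummy-record replacement described above can be sketched like this (a toy model of one track's records; a real BDAM dummy record is identified by a flag, represented here by a sentinel value):

```python
DUMMY = None  # stands in for a BDAM dummy record

def add_record(track, record):
    """Write the new record over the first dummy record on the track.
    Returns True if written; False means no dummy record was found
    (the caller is told the write did not take place)."""
    for i, slot in enumerate(track):
        if slot is DUMMY:
            track[i] = record
            return True
    return False

track = ["A", DUMMY, "B", DUMMY]
assert add_record(track, "NEW") is True
assert track == ["A", "NEW", "B", DUMMY]
assert add_record(["A", "B"], "NEW") is False
```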
//DIRADD DD DSNAME=SLATE.INDEX.WORDS,---
//TAPEDD DD ---
...
DIRECTAD START
...
OPEN (DIRECT,(OUTPUT),TAPEIN)
NEXTREC GET TAPEIN,KEY
L 4,KEY Set up relative record number
SH 4,=H'1000'
ST 4,REF
WRITE DECB,DA,DIRECT,DATA,'S',KEY,REF+1
WAIT ECB=DECB
CLC DECB+1(2),=X'0000' Check for any errors
BE NEXTREC
Notice that the write operation adds the key and the data record to the data set. If
the existing record is not a dummy record, an indication is returned in the exception
code of the DECB. For that reason, it is better to use the WAIT macro instead of
the CHECK macro to test for errors or exceptional conditions.
//DIRECTDD DD DSNAME=SLATE.INDEX.WORDS,---
//TAPINPUT DD ---
...
DIRUPDAT START
...
OPEN (DIRECT,(UPDAT),TAPEDCB)
NEXTREC GET TAPEDCB,KEY
PACK KEY,KEY
CVB 3,KEYFIELD
SH 3,=H'1'
ST 3,REF
READ DECBRD,DIX,DIRECT,'S','S',0,REF+1
CHECK DECBRD
L 3,DECBRD+12
MVC 0(30,3),DATA
ST 3,DECBWR+12
WRITE DECBWR,DIX,DIRECT,'S','S',0,REF+1
CHECK DECBWR
B NEXTREC
...
KEYFIELD DS 0D
DC XL3'0'
KEY DS CL5
DATA DS CL30
REF DS F
DIRECT DCB DSORG=DA,DDNAME=DIRECTDD,MACRF=(RISXC,WIC), C
OPTCD=RF,BUFNO=1,BUFL=100
TAPEDCB DCB ---
...
There is no check for dummy records. The existing direct data set contains 25,000
records whose 5-byte keys range from 00001 to 25000. Each data record is 100
bytes long. The first 30 characters are to be updated. Each input tape record
consists of a 5-byte key and a 30-byte data area. Notice that only the data is
brought into virtual storage for updating.
When you are updating variable-length records, you should use the same length to
read and write a record.
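The read-update-write cycle of the figure can be modeled as follows (a conceptual sketch; keys 00001 through 25000 map to relative record numbers 0 through 24999, and only the first 30 bytes of each record are replaced):

```python
def update_in_place(data_set, key, new_data):
    """READ the record for 'key', overlay its first 30 bytes,
    then WRITE it back with the same length."""
    rrn = int(key) - 1                     # key 00001 -> record 0
    record = bytearray(data_set[rrn])      # READ
    record[:30] = new_data[:30].ljust(30)  # update first 30 characters
    data_set[rrn] = bytes(record)          # WRITE back in place

ds = [b"x" * 100 for _ in range(3)]        # three 100-byte records
update_in_place(ds, "00002", b"HELLO")
assert ds[1].startswith(b"HELLO")
assert len(ds[1]) == 100                   # same length read and written
```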
Sharing DCBs
BDAM permits several tasks to share the same DCB and several jobs to share the
same data set. It synchronizes I/O requests at both levels by maintaining a
read-exclusive list.
When several tasks share the same DCB and each asks for exclusive control of the
same block, BDAM issues a system ENQ for the block (or in some cases the entire
track). It reads in the block and passes it to the first caller while putting all
subsequent requests for that block on a wait queue. When the first task releases
the block, BDAM moves it into the next caller's buffer and posts that task complete.
The block is passed to subsequent callers in the order in which their requests were received.
BDAM not only synchronizes the I/O requests, but also issues only one ENQ and
one I/O request for several read requests for the same block.
Note: Because BDAM processing is not sequential and I/O requests are not
related, a caller can continue processing other blocks while waiting for
exclusive control of the shared block.
Because BDAM issues a system ENQ for each record held exclusively, it allows a
data set to be shared between jobs, so long as all callers use BDAM. The system
enqueues on BDAM's commonly understood argument.
BDAM supports multiple task users of a single DCB when working with existing
data sets. When operating in load mode, however, only one task can use the DCB
at a time. The following restrictions and comments apply when more than one task
shares the same DCB, or when multiple DCBs are used for the same data set.
- Subpool 0 must be shared.
- You should ensure that a WAIT or CHECK macro has been issued for all
outstanding BDAM requests before the task issuing the READ or WRITE macro
ends. In case of abnormal termination, this can be done through a STAE/STAI
or ESTAE exit.
- FREEDBUF or RELEX macros should be issued to free any resources that
could still be held by the terminating task. You can free the resources during or
after task termination.
Note: Open, close, and all I/O must be performed in the same key and state
(problem state or supervisor state).
To ease the task of converting programs from ISAM to VSAM, you might want to
consider using the ISAM interface for VSAM. See Appendix E, “Using ISAM
Programs with VSAM Data Sets” on page 541.
BISAM directly retrieves logical records by key, updates blocks of records in-place,
and inserts new records in their correct key sequence.
The problem program must synchronize all I/O operations with a CHECK or a WAIT
macro.
Other DCB parameters are available to reduce I/O operations by defining work
areas that contain the highest level master index and the records being processed.
A data set processed with QISAM can have unblocked fixed-length records (F),
blocked fixed-length records (FB), unblocked variable-length records (V), or blocked
variable-length records (VB).
QISAM can create an indexed sequential data set (QISAM, load mode), add
additional data records at the end of the existing data set (QISAM, resume load
mode), update a record in place, or retrieve records sequentially (QISAM, scan
mode).
You cannot use track overflow to allocate or extend an indexed sequential data set.
When you allocate an indexed sequential data set, you can allocate space for the
data set's prime area, overflow area, and its cylinder/master index(es) on the same
or separate volumes. For more information about space allocation, see OS/390
MVS JCL User's Guide.
QISAM automatically generates a track index for each cylinder in the data set and
one cylinder index for the entire data set. Specify the DCB parameters NTM and
OPTCD to show that the data set requires a master index. QISAM creates and
maintains as many as three levels of master indexes.
You can purge records by specifying the OPTCD=L DCB option when you allocate
an indexed sequential data set. The OPTCD=L option lets you flag the records you
want to purge with X'FF' in the first data byte of a fixed-length record or the fifth
byte of a variable-length record. QISAM ignores these flagged records during
sequential retrieval.
You can get reorganization statistics by specifying the OPTCD=R DCB option when
an indexed sequential data set is allocated. The problem program uses these
statistics to determine the status of the data set's overflow areas.
When you allocate an indexed sequential data set, you must write the records in
ascending key order.
Processing
The queued access method must be used to allocate an indexed sequential data
set. It can also be used to sequentially process or update the data set and to add
records to the end of the data set. The basic access method can be used to insert
new records between records already in the data set and to update the data set
directly.
The records in an indexed sequential data set are arranged according to collating
sequence by a key field in each record. Each block of records is preceded by a key
field that corresponds to the key of the last record in the block.
An indexed sequential data set resides on direct access storage devices and can
occupy as many as three different areas:
- The prime area, also called the prime data area, contains data records and
related track indexes. It exists for all indexed sequential data sets.
- The index area contains master and cylinder indexes associated with the data
set. It exists for a data set that has a prime area occupying more than one
cylinder.
- The overflow area contains records that overflow from the prime area when
new data records are added. It is optional.
The track indexes of an indexed sequential data set are similar to the card catalog
in a library. For example, if you know the name of the book or the author, you can
look in the card catalog and obtain a catalog number that will enable you to locate
the book in the book files. You then go to the shelves and go through rows until
you find the shelf containing the book. Then you look at the individual book
numbers on that shelf until you find the particular book.
ISAM uses the track indexes in much the same way to locate records in an indexed
sequential data set.
As the records are written in the prime area of the data set, the system accounts
for the records contained on each track in a track index area. Each entry in the
track index identifies the key of the last record on each track. There is a track
index for each cylinder in the data set. If more than one cylinder is used, the
system develops a higher-level index called a cylinder index. Each entry in the
cylinder index identifies the key of the last record in the cylinder. To increase the
speed of searching the cylinder index, you can request that a master index be
developed for a specified number of cylinders, as shown in Figure 165.
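The lookup through the index levels can be sketched as follows (a toy model: each index entry carries the highest key of the area it covers, so a search takes the first entry whose key is not lower than the search key):

```python
def find(index, key):
    """Return the lower-level object for the first entry whose
    highest key is >= the search key, or None if the key is beyond
    the last entry."""
    for last_key, lower in index:
        if key <= last_key:
            return lower
    return None

# Two cylinders whose highest keys are 100 and 200, and a one-entry
# master index covering both (keys and names are illustrative).
cylinder_index = [(100, "track index cyl 1"), (200, "track index cyl 2")]
master_index   = [(200, cylinder_index)]

# Searching for key 150 walks master index -> cylinder index -> track index.
track_index = find(find(master_index, 150), 150)
assert track_index == "track index cyl 2"
```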
Rather than reorganize the entire data set when records are added, you can
request that space be allocated for additional records in an overflow area.
Master Index
Cylinder Index
Prime Area
Records are written in the prime area when the data set is allocated or updated.
The last track of prime data is reserved for an end-of-file mark. The portion of
Figure 165 on page 511 labeled cylinder 1 illustrates the initial structure of the
prime area. Although the prime area can extend across several noncontiguous
areas of the volume, all the records are written in key sequence. Each record must
contain a key; the system automatically writes the key of the highest record before
each block.
When the ABSTR option of the SPACE parameter of the DD statement is used to
generate a multivolume prime area, the VTOC of the second volume, and of all
succeeding volumes, must be contained within cylinder 0 of the volume.
Index Areas
The operating system generates track and cylinder indexes automatically. As many
as three levels of master index are created if requested.
Track Index
The track index is the lowest level of index and is always present. There is one
track index for each cylinder in the prime area; it is written on the first tracks of the
cylinder that it indexes.
The index consists of a series of paired entries, that is, a normal entry and an
overflow entry for each prime track. Figure 166 shows the format of a track index.
For fixed-length records, each normal entry (and also DCBFIRSH) points either to
record 0 or to the first prime record on a track shared by index and data. For
variable-length records, the normal entry contains the key of the highest record on
the track and the address of the last record on the track.
The overflow entry is originally the same as the normal entry. (This is why 100
appears twice on the track index for cylinder 1 in Figure 165.) The overflow entry is
changed when records are added to the data set. Then the overflow entry contains
the key of the highest overflow record and the address of the lowest overflow
record logically associated with the track.
If all the tracks allocated for the prime data area are not used, the index entries for
the unused tracks are flagged as inactive. The last entry of each track index is a
dummy entry indicating the end of the index. When fixed-length record format has
been specified, the remainder of the last track of each cylinder used for a track
index contains prime data records, if there is room for them.
Each index entry has the same format as the others. It is an unblocked,
fixed-length record consisting of a count, a key, and a data area. The length of the
key corresponds to the length of the key area in the record to which it points. The
data area is always 10 bytes long. It contains the full address of the track or record
to which the index points, the level of the index, and the entry type.
Cylinder Index
For every track index created, the system generates a cylinder index entry. There is
one cylinder index for the data set, and each of its entries points to a track index.
Because there is one track index per cylinder, there is one cylinder index entry for
each cylinder in the prime data area, except for a 1-cylinder prime area. As with
track indexes, inactive entries are created for any unused cylinders in the prime
data area.
Master Index
As an optional feature, the operating system creates a master index at your
request. The presence of this index makes long, serial searches through a large
cylinder index unnecessary.
You can specify the conditions under which you want a master index created. For
example, if you have specified NTM=3 and OPTCD=M in your DCB macro, a
master index is created when the cylinder index exceeds 3 tracks. The master
index consists of one entry for each track of cylinder index. If your data set is
extremely large, a higher-level master index is created when the first-level master
index exceeds three tracks. This higher-level master index consists of one entry for
each track of the first-level master index. This procedure can be repeated for as
many as three levels of master index.
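The rule above can be sketched as a count of index levels (a conceptual model; `entries_per_track` is an assumed capacity, since the real value depends on key length and device geometry):

```python
def master_index_levels(index_tracks, ntm, entries_per_track, max_levels=3):
    """Levels of master index built: a higher level is created whenever
    the index below it exceeds NTM tracks (OPTCD=M), with one entry per
    track of the lower index."""
    levels = 0
    tracks = index_tracks  # start with the cylinder index
    while tracks > ntm and levels < max_levels:
        entries = tracks                           # one entry per lower-index track
        tracks = -(-entries // entries_per_track)  # ceiling division
        levels += 1
    return levels

# NTM=3, as in the text's example: a 10-track cylinder index gets one
# master index level (its 10 entries fit on a single assumed track).
assert master_index_levels(10, 3, entries_per_track=50) == 1
# A 3-track cylinder index does not exceed NTM, so no master index.
assert master_index_levels(3, 3, entries_per_track=50) == 0
```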
Overflow Areas
As records are added to an indexed sequential data set, space is required to
contain those records that will not fit on the prime data track on which they belong.
You can request that a number of tracks be set aside as a cylinder overflow area to
contain overflows from prime tracks in each cylinder. An advantage of using
cylinder overflow areas is a reduction of search time required to locate overflow
records. A disadvantage is that there will be unused space if the additions are
unevenly distributed throughout the data set.
Instead of, or in addition to, cylinder overflow areas, you can request an
independent overflow area. Overflow from anywhere in the prime data area is
placed in a specified number of cylinders reserved solely for overflow records. An
advantage of having an independent overflow area is a reduction in unused space;
a disadvantage is the longer search time required to locate overflow records in the
independent area.
If you request both cylinder overflow and independent overflow, the cylinder
overflow area is used first. It is a good practice to request cylinder overflow areas
large enough to contain a reasonable number of additional records, and an
independent overflow area to be used as the cylinder overflow areas are filled.
One-Step Method
To allocate an indexed sequential data set by the one-step method, you should:
- Code DSORG=IS or DSORG=ISU and MACRF=PM or MACRF=PL in the DCB
macro.
- Specify the following in the DD statement:
– DCB attributes DSORG=IS or DSORG=ISU
– Record length (LRECL)
– Block size (BLKSIZE)
– Record format (RECFM)
– Key length (KEYLEN)
– Relative key position (RKP)
– Options required (OPTCD)
– Cylinder overflow (CYLOFL)
– The number of tracks for a master index (NTM)
– Space requirements (SPACE). To reuse previously allocated space, omit
the SPACE parameter and code DISP=(OLD,KEEP).
- Open the data set for output.
- Use the PUT macro to place all the records or blocks on the direct access
volume.
- Close the data set.
The records that comprise a newly allocated data set must be presented for writing
in ascending order by key. You can merge two or more input data sets. If you want
a data set with no records (a null data set), you must write at least one record
when you allocate the data set. You can subsequently delete this record to achieve
the null data set.
Notes:
1. If an unload is done that deletes all existing records in an ISAM data set, at
least one record must be written on the subsequent load. If no record is written,
the data set will be unusable.
2. If the records are blocked, you should not write a record with a hexadecimal
value of FF and a key of hexadecimal value FF. This value of FF is used for
padding.
If you do not specify full-track-index write, the operating system writes each normal
overflow pair of entries for the track index after the associated prime data track has
been written. If you do specify full-track-index write, the operating system
accumulates track index entries in virtual storage until either (a) there are enough
entries to fill a track or (b) end-of-data or end-of-cylinder is reached. Then the
operating system writes these entries as a group, writing one group for each track
of track index. The OPTCD=U option requires allocation of more storage space (the
space in which the track index entries are gathered), but the number of I/O
operations required to write the index can be significantly decreased.
When you specify the full-track-index write option, the track index entries are written
as fixed-length unblocked records. If the area of virtual storage available is not
large enough the entries are written as they are created, that is, in normal overflow
pairs.
Example
The example in Figure 167 on page 516 shows the creation of an indexed
sequential data set from an input tape containing 60-character records.
//INDEXDD DD DSNAME=SLATE.DICT(PRIME),DCB=(BLKSIZE=240,CYLOFL=1, C
// DSORG=IS,OPTCD=MYLR,RECFM=FB,LRECL=60,NTM=6,RKP=19, C
// KEYLEN=10),UNIT=3380,SPACE=(CYL,25,,CONTIG),---
//INPUTDD DD ---
...
ISLOAD START 0
...
DCBD DSORG=IS
ISLOAD CSECT
OPEN (IPDATA,,ISDATA,(OUTPUT))
NEXTREC GET IPDATA Locate mode
LR 0,1 Address of record in register 1
PUT ISDATA,(0) Move mode
B NEXTREC
...
CHECKERR L 3,=A(ISDATA) Initialize base for errors
USING IHADCB,3
TM DCBEXCD1,X'04'
BO OPERR Uncorrectable error
TM DCBEXCD1,X'20'
BO NOSPACE Space not found
TM DCBEXCD2,X'80'
BO SEQCHK Record out of sequence
The key by which the data set is organized is in positions 20 through 29. The
output records will be an exact image of the input, except that the records will be
blocked. One track per cylinder is to be reserved for cylinder overflow. Master
indexes are to be built when the cylinder index exceeds 6 tracks. Reorganization
information about the status of the cylinder overflow areas is to be maintained by
the system. The delete option will be used during any future updating.
Multiple-Step Method
To allocate an indexed sequential data set in more than one step, create the first
group of records using the one-step method already described. This first section
must contain at least one data record. The remaining records can then be added to
the end of the data set in subsequent steps, using resume load. Each group to be
added must contain records with successively higher keys. This method allows you
to allocate the indexed sequential data set in several short time periods rather than
in a single long one.
This method also allows you to provide limited recovery from uncorrectable output
errors. When an uncorrectable output error is detected, do not attempt to continue
processing or to close the data set. If you have provided a SYNAD routine, it
should issue the ABEND macro to end processing. If no SYNAD routine is
provided, the control program will end your processing. If the error shows that
space in which to add the record was not found, you must close the data set;
issuing subsequent PUT macros can cause unpredictable results. You should begin
recovery at the record following the end of the data as of the last successful close.
The rerun time is limited to that necessary to add the new records, rather than to
that necessary to re-create the entire data set.
Resume Load
When you extend an indexed sequential data set with resume load, the disposition
parameter of the DD statement must specify MOD. To ensure that the necessary
control information is in the DSCB before attempting to add records, you should at
least open and close the data set successfully on a system that includes resume
load. This is necessary only if the data set was allocated on a previous version of
the system. Records can be added to the data set by resume load until the space
allocated for prime data in the first step has been filled.
During resume load on a data set with a partially filled track or a partially filled
cylinder, the track index entry or the cylinder index entry is overlaid when the track
or cylinder is filled. Resume load for variable-length records begins at the next
sequential track of the prime data set. If resume load abnormally ends after these
index entries have been overlaid, a subsequent resume load will result in a
sequence check when it adds a key that is higher than the highest key at the last
successful CLOSE but lower than the key in the overlaid index entry. When the
SYNAD exit is taken for a sequence check, register 0 contains the address of the
high key of the data set. However, if the SYNAD exit is taken during CLOSE,
register 0 will contain the IOB address.
Allocating Space
An indexed sequential data set has three areas: prime, index, and overflow. Space
for these areas can be subdivided and allocated as follows:
Prime area—If you request only a prime area, the system automatically uses a
portion of that space for indexes, taking one cylinder at a time as needed. Any
unused space in the last cylinder used for index will be allocated as an
independent overflow area. More than one volume can be used in most cases,
but all volumes must be for devices of the same device type.
Index area—You can request that a separate area be allocated to contain your
cylinder and master indexes. The index area must be contained within one
volume, but this volume can be on a device of a different type than the one that
contains the prime area volume. If a separate index area is requested, you
cannot catalog the data set with a DD statement.
If the total space occupied by the prime area and index area does not exceed
one volume, you can request that the separate index area be imbedded in the
prime area (to reduce access arm movement) by indicating an index size in the
SPACE parameter of the DD statement defining the prime area.
If you request space for prime and index areas only, the system automatically
uses any space remaining on the last cylinder used for master and cylinder
indexes for overflow, provided the index area is on a device of the same type
as the prime area.
Overflow area—Although you can request an independent overflow area, it
must be contained within one volume and must be of the same device type as
the prime area. If no specific request for index area is made, then it will be
allocated from the specified independent overflow area.
The number of tracks that you can use on each cylinder equals the total
number of tracks on the cylinder minus the number of tracks needed for track
index and for prime data. That is:
Overflow tracks = total tracks
– (track index tracks + prime data tracks)
Note that, when you allocate a 1-cylinder data set, ISAM reserves 1 track on the
cylinder for the end-of-file mark. You cannot request an independent index for an
indexed sequential data set that has only 1 cylinder of prime data.
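The arithmetic above, restated as a short sketch:

```python
def overflow_tracks(total_tracks, track_index_tracks, prime_data_tracks):
    """Cylinder overflow tracks available on one cylinder: what remains
    after the track index and the prime data tracks are taken out."""
    return total_tracks - (track_index_tracks + prime_data_tracks)

# A hypothetical 15-track cylinder with a 1-track track index and
# 10 tracks of prime data leaves 4 cylinder overflow tracks.
assert overflow_tracks(15, 1, 10) == 4
```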
When you request space for an indexed sequential data set, the DD statement
must follow several conventions, as shown below and summarized in the table
which follows.
- Space can be requested only in cylinders, SPACE=(CYL,(...)), or absolute
tracks, SPACE=(ABSTR,(...)). If the absolute track technique is used, the
designated tracks must make up a whole number of cylinders.
- Data set organization (DSORG) must be specified as indexed sequential (IS or
ISU) in both the DCB macro and the DCB parameter of the DD statement.
- All required volumes must be mounted when the data set is opened; that is,
volume mounting cannot be deferred.
- If your prime area extends beyond one volume, you must specify the number of
units and volumes to be spanned; for example, UNIT=(3380,3),VOLUME=(,,,3).
- You can catalog the data set using the DD statement parameter
DISP=(,CATLG) only if the entire data set is defined by one DD statement; that
is, if you did not request a separate index or independent overflow area.
As your data set is allocated, the operating system builds the track indexes in the
prime data area. Unless you request a separate index area or an imbedded index
area, the cylinder and master indexes are built in the independent overflow area. If
you did not request an independent overflow area, the cylinder and master indexes
are built in the prime area.
──────────────────────────────────────────────────────────────────────────────────
Criteria                                   Restrictions on
1. Number of   2. Types of    3. Index     Unit Types and
DD             DD             Size         Number of Units   Resulting Arrangement
Statements     Statements     Coded?       Requested         of Areas
──────────────────────────────────────────────────────────────────────────────────
3              INDEX          --           None              Separate index, prime, and
               PRIME                                         overflow areas.
               OVFLOW
──────────────────────────────────────────────────────────────────────────────────
2              INDEX          --           None              Separate index and prime
               PRIME                                         areas. Any partially used
                                                             index cylinder is used for
                                                             independent overflow if the
                                                             index and prime areas are
                                                             on the same type of device.
──────────────────────────────────────────────────────────────────────────────────
2              PRIME          No           None              Prime area and overflow
               OVFLOW                                        area with an index at its
                                                             end.
──────────────────────────────────────────────────────────────────────────────────
2              PRIME          Yes          The statement     Prime area with imbedded
               OVFLOW                      defining the      index, and overflow area.
                                           prime area
                                           cannot request
                                           more than one
                                           unit.
──────────────────────────────────────────────────────────────────────────────────
1              PRIME          No           None              Prime area with index at
                                                             its end. Any partially used
                                                             index cylinder is used for
                                                             independent overflow.
──────────────────────────────────────────────────────────────────────────────────
1              PRIME          Yes          Statement         Prime area with imbedded
                                           cannot request    index area; independent
                                           more than one     overflow in remainder of
                                           unit.             partially used index
                                                             cylinder.
──────────────────────────────────────────────────────────────────────────────────
Figure 168. Requests for Indexed Sequential Data Sets
You can accomplish the same type of allocation by qualifying your dsname with the
element indication (PRIME). The PRIME element is assumed if omitted. It is
required only if you request an independent index or an overflow area. To request
an imbedded index area when an independent overflow area is specified, you must
specify DSNAME=dsname (PRIME). To indicate the size of the imbedded index,
you specify SPACE=(CYL,(quantity,,index size)).
//ddname DD DSNAME=dsname(INDEX),---
// DD DSNAME=dsname(PRIME),---
The ISAM load mode reserves the last prime data track for the file mark.
Prime data tracks required = (200000 records / 1300 records per track) + 1 = 155
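The arithmetic of this step can be sketched in Python (an illustrative aid only; the 200,000-record and 1300-records-per-track figures are the ones used in the example):

```python
import math

records = 200_000          # records to be loaded
records_per_track = 1_300  # from the track-capacity calculation for this device
# One extra track is reserved by ISAM load mode for the file mark.
prime_tracks = math.ceil(records / records_per_track) + 1
print(prime_tracks)  # 155
```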
Example: Approximately 5000 overflow records are expected for the data set
described in step 1. Because 55 overflow records will fit on a track, 91 overflow
tracks are required. There are 91 overflow tracks for 155 prime data tracks, or
approximately 1 overflow track for every 2 prime data tracks. Because the 3380
disk pack for a 3380 Model AD4 has 15 tracks per cylinder, it would probably be
best to allocate 5 tracks per cylinder for overflow.
Overflow records per track = 47968/(256+((12+267)/32)(32)+((30+267)/32)(32))
                           = 47968/864
                           = 55
Example: Again assuming a 3380 Model AD4 disk pack and records with 12-byte
keys, we find that 57 index entries fit on a track.
Index entries per track = 47968/(256+((12+267)/32)(32)+((10+267)/32)(32))
                        = 47968/832
                        = 57
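In both formulas above, the key and data lengths are rounded up to a multiple of 32 before being added to the per-record overhead. A Python sketch of that calculation (illustrative only; the constants 47968, 256, and 267 are the 3380 Model AD4 values used in these examples):

```python
def records_per_track_3380(keylen: int, datalen: int) -> int:
    """3380 track-capacity estimate as used in the examples above."""
    round32 = lambda n: ((n + 31) // 32) * 32  # round up to a multiple of 32
    per_record = 256 + round32(keylen + 267) + round32(datalen + 267)
    return 47968 // per_record

print(records_per_track_3380(12, 30))  # overflow records per track -> 55
print(records_per_track_3380(12, 10))  # index entries per track    -> 57
```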
Note: For variable-length records, or when a prime data record will not fit on the
last track of the track index, the last track of the track index is not shared
with prime data records. In this case, if the remainder of the division is less
than or equal to 2, drop the remainder. In all other cases, round the quotient
up to the next integer.
Example: The 3380 disk pack from the 3380 Model AD4 has 15 tracks per cylinder.
You can fit 57 track index entries into one track. Therefore, you need less than 1
track for each cylinder.
Track index tracks per cylinder = (2 (15 − 5) + 1) / (57 + 2) = 21/59
The space remaining on the track is 47968 − (21 (832)) = 30496 bytes.
This is enough space for 16 blocks of prime data records. Because the normal
number of blocks per track is 26, the blocks use 16/26ths of the track, and the
effective number of track index tracks per cylinder is therefore 1 − 16/26 or 0.385.
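The track index share of each cylinder, as computed above, can be sketched in Python (illustrative only; the 16 shared prime data blocks are taken from the example, not derived here):

```python
tracks_per_cyl, overflow_per_cyl = 15, 5
entries_per_track, bytes_per_entry = 57, 832
blocks_per_track = 26     # prime data blocks on a full track
track_capacity = 47968

# Track index entries needed per cylinder: 2 per prime track, plus 1 dummy.
entries_needed = 2 * (tracks_per_cyl - overflow_per_cyl) + 1   # 21
remaining = track_capacity - entries_needed * bytes_per_entry  # 30496 bytes
# Per the example, 16 prime data blocks fit in the remaining space,
# so the track index effectively uses 1 - 16/26 of one track.
shared_blocks = 16
effective_index_tracks = 1 - shared_blocks / blocks_per_track
print(round(effective_index_tracks, 3))  # 0.385
```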
Note that space is required on the last track of the track index for a dummy entry to
show the end of the track index. The dummy entry consists of an 8-byte count field,
a key field the same size as the key field in the preceding entries, and a 10-byte
data field.
Example: If you set aside 5 cylinder overflow tracks, and you need 0.385 of a track
for the track index, 9.615 tracks are available on each cylinder for prime data
records.
Prime data tracks per cylinder = 15 − 5 − (0.385) = 9.615
Example: You need 155 tracks for prime data records. You can use 9.615 tracks
per cylinder. Therefore, you need 17 cylinders for your prime area and cylinder
overflow areas.
Number of cylinders required = (155) / (9.615) = 16.121 (round up to 17)
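The cylinder count rounds up, since a partial cylinder must still be allocated whole. As a sketch (illustrative only, using the figures from the preceding steps):

```python
import math

prime_tracks_needed = 155
# Tracks per cylinder, minus overflow tracks, minus the track index share.
prime_tracks_per_cyl = 15 - 5 - 0.385  # 9.615
cylinders = math.ceil(prime_tracks_needed / prime_tracks_per_cyl)
print(cylinders)  # 17
```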
Example: You have 17 track indexes (from Step 6). Because 57 index entries fit on
a track (from Step 3), you need 1 track for your cylinder index. The remaining
space on the track is unused.
Number of tracks required for cyl. index = (17 + 1) / 57 = 18 / 57 = 0.316 < 1
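The cylinder index needs one entry per cylinder's track index plus one dummy entry, rounded up to whole tracks. As a sketch (illustrative only):

```python
import math

cylinders = 17          # from the previous step
entries_per_track = 57  # index entries per track (step 3)
# One entry per cylinder, plus one dummy entry, rounded up to whole tracks.
cyl_index_tracks = math.ceil((cylinders + 1) / entries_per_track)
print(cyl_index_tracks)  # 1
```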
Note that, every time a cylinder index crosses a cylinder boundary, ISAM writes a
dummy index entry that lets ISAM chain the index levels together. The addition of
dummy entries can increase the number of tracks required for a given index level.
To determine how many dummy entries will be required, divide the total number of
tracks required by the number of tracks on a cylinder. If the remainder is 0, subtract
1 from the quotient. If the corrected quotient is not 0, calculate the number of tracks
these dummy entries require. Also consider any additional cylinder boundaries
crossed by the addition of these tracks and by any track indexes starting and
stopping within a cylinder.
If the cylinder index exceeds the NTM specification, an entry is made in the master
index for each track of the cylinder index. If the master index itself exceeds the
NTM specification, a second-level master index is started. As many as three levels
of master indexes are created if required.
The space requirements for the master index are computed in the same way as
those for the cylinder index.
If the number of cylinder index tracks is greater than NTM, calculate the number of
tracks for a first level master index as follows:
# Tracks for first level master index =
 (Cylinder index tracks + 1) / Index entries per track
If the number of first level master index tracks is greater than NTM, calculate the
number of tracks for a second level master index as follows:
# Tracks for second level master index =
 (First level master index tracks + 1) / Index entries per track
If the number of second level master index tracks is greater than NTM, calculate the
number of tracks for a third level master index as follows:
# Tracks for third level master index =
 (Second level master index tracks + 1) / Index entries per track
Example: Assume that your cylinder index will require 22 tracks. Because large
keys are used, only 10 entries will fit on a track. If NTM was specified as 2, 3
tracks will be required for a master index, and two levels of master index will be
created.
Number of tracks required for master indexes = (22 + 1) / 10 = 2.3
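The master-index rule above can be sketched as a loop: while the current index level needs more tracks than NTM, a higher level is built from (tracks + 1) / entries-per-track, up to three master-index levels. A hedged Python sketch (illustrative only):

```python
import math

def master_index_levels(cyl_index_tracks: int, ntm: int, entries_per_track: int):
    """Tracks needed at each master-index level (up to three), per the NTM rule."""
    levels = []
    tracks = cyl_index_tracks
    while tracks > ntm and len(levels) < 3:
        tracks = math.ceil((tracks + 1) / entries_per_track)
        levels.append(tracks)
    return levels

# Example above: 22-track cylinder index, 10 entries per track, NTM=2.
print(master_index_levels(22, 2, 10))  # [3, 1] -> two master-index levels
```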
4. How much space is left on the last track of the track index?
Unused space = ((Number of index entries per track)
               − (2 (Number of tracks per cylinder
                     − Number of overflow tracks per cylinder) + 1))
               × (Number of bytes per index entry)
5. How many tracks on each cylinder can you use for prime data records?
Prime data tracks per cylinder = Tracks per cylinder
                                 − Overflow tracks per cylinder
                                 − Index tracks per cylinder
6. How many cylinders do you need for the prime data area?
Number of cylinders needed = Prime data tracks needed / Prime data tracks per cylinder
Sequential Updates
Assume that, using the data set allocated in Figure 167 on page 516, you are to
retrieve all records whose keys begin with 915. Those records with a date
(positions 13 through 16) before the current date are to be deleted. The date is in
the standard form as returned by the system in response to the TIME macro, that
is, packed decimal 00yyddds. Overflow records can be logically deleted even
though they cannot be physically deleted from the data set.
Figure 169 shows how to update an indexed sequential data set sequentially.
//INDEXDD DD DSNAME=SLATE.DICT,---
...
ISRETR START ð
DCBD DSORG=IS
ISRETR CSECT
...
USING IHADCB,3
LA 3,ISDATA
OPEN (ISDATA)
SETL ISDATA,KC,KEYADDR Set scan limit
TIME Today's date in register 1
ST 1,TODAY
NEXTREC GET ISDATA Locate mode
CLC 19(10,1),LIMIT
BNL ENDJOB
CP 12(4,1),TODAY Compare for old date
BNL NEXTREC
MVI 0(1),X'FF' Flag old record for
deletion
PUTX ISDATA Return delete record
B NEXTREC
TODAY DS F
KEYADDR DC C'915' Key prefix
DC XL7'0' Key padding
LIMIT DC C'916'
DC XL7'0'
...
CHECKERR ... SYNAD routine
Because the operations are direct, there is no anticipatory buffering. However, if ‘S’
is specified on the READ macro, the system provides dynamic buffering each time
a read request is made. (See Figure 170 on page 528.)
If an error analysis routine has not been specified and a CHECK is issued when an
error situation exists, the program abnormally ends with a system completion
code of X'001'. For both WAIT and CHECK, if you want to determine whether the
record is an overflow record, you should test the exception code field of the DECB.
After you test the exception code field, you need not set it to 0. If you have used a
READ KU (read an updated record) macro, and if you plan to use the same DECB
again to rewrite the updated record using a WRITE K macro, you should not set the
field to 0. If you do, your record might not be rewritten properly.
When you are using scan mode with QISAM and you want to issue PUTX, issue an
ENQ on the data set before processing it and a DEQ after processing is complete.
ENQ must be issued before the SETL macro, and DEQ must be issued after the
ESETL macro. When you are using BISAM to update the data set, do not modify
any DCB fields or issue a DEQ until you have issued CHECK or WAIT.
If you specify DISP=SHR, you must also issue an ENQ for the data set before each
I/O request and a DEQ on completion of the request. All users of the data set must
use the same qname and rname operands for ENQ. For example, you might use
the data set name as the qname operand. For more information about using ENQ
and DEQ, see OS/390 MVS Assembler Services Reference and OS/390 MVS
Assembler Services Guide.
Subtasking
For subtasking, I/O requests should be issued by the task that owns the DCB or a
task that will remain active while the DCB is open. If the task that issued the I/O
request ends, the storage used by its data areas (such as IOBs) can be freed, or
queuing switches in the DCB work area can be left on, causing another task issuing
an I/O request to the DCB to program check or to enter the wait state.
For example, if a subtask issues and completes a READ KU I/O request, the IOB
created by the subtask is attached to the DCB update queue. (READ KU means
the record retrieved is to be updated.) If that subtask ends, and subpool zero is not
shared with the subtask owning the DCB, the IOB storage area is freed and the
integrity of the ISAM update queue is destroyed. A request from another subtask,
attempting to use that queue, could cause unpredictable abends. As another
example, if a WRITE KEY NEW is in process when the subtask ends, a
'WRITE-KEY-NEW-IN-PROCESS' bit is left on. If another I/O request is issued to
the DCB, the request is queued but cannot proceed.
Exclusive control of the data set is requested, because more than one task might
be referring to the data set at the same time. Notice that, to avoid tying up the data
set until the update is completed, exclusive control is released after each block is
written.
Using FREEDBUF: Note the use of the FREEDBUF macro in Figure 170 on
page 528. Usually, the FREEDBUF macro has two functions:
To indicate to the ISAM routines that a record that has been read for update
will not be written back
To free a dynamically obtained buffer.
In Figure 170, because the read operation was unsuccessful, the FREEDBUF
macro frees only the dynamically obtained buffer.
The first function of FREEDBUF allows you to read a record for update and then
decide not to update it without performing a WRITE for update. You can use this
function even when your READ macro does not specify dynamic buffering, if you
have included S (for dynamic buffering) in the MACRF field of your READ DCB.
You can cause an automatic FREEDBUF merely by reusing the DECB; that is, by
issuing another READ or a WRITE KN to the same DECB. You should use this
feature whenever possible, because it is more efficient than FREEDBUF. For
example, in Figure 170, the FREEDBUF macro could be eliminated, because the
WRITE KN addressed the same DECB as the READ KU.
//INDEXDD DD DSNAME=SLATE.DICT,DCB=(DSORG=IS,BUFNO=1,...),---
//TAPEDD DD ---
...
ISUPDATE START 0
...
NEXTREC GET TPDATA,TPRECORD
ENQ (RESOURCE,ELEMENT,E,,SYSTEM)
READ DECBRW,KU,,'S',MF=E Read into dynamically
\ obtained buffer
WAIT ECB=DECBRW
TM DECBRW+24,X'FD' Test for any condition
BM RDCHECK but overflow
L 3,DECBRW+16 Pick up pointer to
\ record
MVC ISUPDATE-ISRECORD(L'UPDATE,3),UPDATE Update record
WRITE DECBRW,K,MF=E
WAIT ECB=DECBRW
TM DECBRW+24,X'FD' Any errors?
BM WRCHECK
DEQ (RESOURCE,ELEMENT,,SYSTEM)
B NEXTREC
RDCHECK TM DECBRW+24,X'80' No record found
BZ ERROR If not, go to error
\ routine
FREEDBUF DECBRW,K,ISDATA Otherwise, free buffer
MVC ISKEY,KEY Key placed in ISRECORD
MVC ISUPDATE,UPDATE Updated information
\ placed in ISRECORD
WRITE DECBRW,KN,,WKNAREA,'S',MF=E Add record to data set
WAIT ECB=DECBRW
TM DECBRW+24,X'FD' Test for errors
BM ERROR
DEQ (RESOURCE,ELEMENT,,SYSTEM) Release exclusive
\ control
B NEXTREC
WKNAREA DS 4F BISAM WRITE KN work field
ISRECORD DS 0CL50 50-byte record from ISDATA DCB
 DS CL19 First part of ISRECORD
ISKEY DS CL10 Key field of ISRECORD
 DS CL1 Part of ISRECORD
ISUPDATE DS CL20 Update area of ISRECORD
Other Updating Methods: For an indexed sequential data set with variable-length
records, you can make three types of updates by using the basic access method.
You can read a record and write it back with no change in its length, simply
updating some part of the record. You do this with a READ KU, followed by a
WRITE K, the same way you update fixed-length records.
Two other methods for updating variable-length records use the WRITE KN macro
and allow you to change the record length. In one method, a record read for update
(by a READ KU) can be updated in a manner that will change the record length
and then be written back with its new length by a WRITE KN (key new). In the
second method, you can replace a record with another record having the same key
and possibly a different length using the WRITE KN macro. To replace a record, it
is not necessary to have first read the record.
In either method, when changing the record length, you must place the new length
in the DECBLGTH field of the DECB before issuing the WRITE KN macro. If you
use a WRITE KN macro to update a variable-length record that has been marked
for deletion, the first bit (no record found) of the exceptional condition code field
(DECBEXC1) of the DECB is set on. If this condition is found, the record must be
written using a WRITE KN with nothing specified in the DECBLGTH field.
Note: Do not try to use the DECBLGTH field to determine the length of a record
read, because DECBLGTH is for use with writing records, not reading them.
If you are reading fixed-length records, the length of the record read is in
DCBLRECL, and if you are reading variable-length records, the length is in
the record descriptor word (RDW).
For this example, the maximum record length of both data sets is 256 bytes. The
key is in positions 6 through 15 of the records in both data sets. The transaction
code is in position 5 of records on the transaction tape. The work area
(REPLAREA) size is equal to the maximum record length plus 16 bytes.
Adding Records
Either the queued access method or the basic access method can be used to add
records to an indexed sequential data set. A record to be inserted between records
already in the data set must be inserted by the basic access method using the
WRITE KN (key new) macro. Records added to the end of a data set (that is,
records with successively higher keys), can be added to the prime data area or the
overflow area by the basic access method using WRITE KN, or they can be added
to the prime data area by the queued access method using the PUT macro.
//INDEXDD DD DSNAME=SLATE.DICT,DCB=(DSORG=IS,BUFNO=1,...),---
//TAPEDD DD ---
...
ISUPDVLR START 0
...
NEXTREC GET TPDATA,TRANAREA
CLI TRANCODE,2 Determine if replacement or
\ other transaction
BL REPLACE Branch if replacement
READ DECBRW,KU,,'S','S',MF=E Read record for update
CHECK DECBRW,DSORG=IS Check exceptional conditions
CLI TRANCODE,2 Determine if change or append
BH CHANGE Branch if change
...
...
\ CODE TO MOVE RECORD INTO REPLAREA+16 AND APPEND DATA FROM TRANSACTION
\ RECORD
...
MVC DECBRW+6(2),REPLAREA+16 Move new length from RDW
\ into DECBLGTH (DECB+6)
WRITE DECBRW,KN,,REPLAREA,MF=E Rewrite record with
\ changed length
CHECK DECBRW,DSORG=IS
B NEXTREC
CHANGE ...
...
\ CODE TO CHANGE FIELDS OR UPDATE FIELDS OF THE RECORD
...
WRITE DECBRW,K,MF=E Rewrite record with no
\ change of length
CHECK DECBRW,DSORG=IS
B NEXTREC
REPLACE MVC DECBRW+6(2),TRANAREA Move new length from RDW
\ into DECBLGTH (DECB+6)
WRITE DECBRW,KN,,TRANAREA-16,MF=E Write transaction record
\ as replacement for record
\ with the same key
CHECK DECBRW,DSORG=IS
B NEXTREC
CHECKERR ... SYNAD routine
...
REPLAREA DS CL272
TRANAREA DS CL4
TRANCODE DS CL1
KEY DS CL10
TRANDATA DS CL241
READ DECBRW,KU,ISDATA,'S','S',KEY,MF=L
ISDATA DCB DDNAME=INDEXDD,DSORG=IS,MACRF=(RUSC,WUAC),SYNAD=CHECKERR
TPDATA DCB ---
...
Figure 171. Directly Updating an Indexed Sequential Data Set with Variable-Length Records
Subsequent additions are written either on the prime track or as part of the overflow
chain from that track. If the addition belongs after the last prime record on a track
but before a previous overflow record from that track, it is written in the first
available location in the overflow area. Its link field contains the address of the next
record in the chain.
For BISAM, if you add a record that has the same key as a record in the data set,
a “duplicate record” condition is shown in the exception code. However, if you
specified the delete option and the record in the data set is marked for deletion, the
condition is not reported and the new record replaces the existing record. For more
information about exception codes, see DFSMS/MVS Macro Instructions for Data
Sets.
When you use the WRITE KN macro, the record being added is placed in the prime
data area only if there is room for it on the prime data track containing the record
with the highest key currently in the data set. If there is not sufficient room on that
track, the record is placed in the overflow area and linked to that prime track, even
though additional prime data tracks originally allocated have not been filled.
When you use the PUT macro, records are added to the prime data area until the
space originally allocated is filled. After this allocated prime area is filled, you can
add records to the data set using WRITE KN, in which case they will be placed in
the overflow area. Resume load is discussed in more detail under “Creating an
ISAM Data Set” on page 514.
To add records with successively higher keys using the PUT macro:
The key of any record to be added must be higher than the highest key
currently in the data set.
The DD statement must specify DISP=MOD or specify the EXTEND option in
the OPEN macro.
The data set must have been successfully closed when it was allocated or
when records were previously added using the PUT macro.
You can continue to add fixed-length records in this manner until the original space
allocated for prime data is exhausted.
When you add records to an indexed sequential data set using the PUT macro,
new entries are also made in the indexes. During resume load on a data set with a
partially filled track or a partially filled cylinder, the track index entry or the cylinder
index entry is overlaid when the track or cylinder is filled. If resume load abnormally
ends after these index entries have been overlaid, a subsequent resume load will
get a sequence check when adding a key that is higher than the highest key at the
last successful CLOSE but lower than the key in the overlaid index entry. When the
SYNAD exit is taken for a sequence check, register 0 contains the address of the
highest key of the data set.
Figure 172 (a diagram, not reproduced here) illustrates this: a data set whose two
prime tracks initially hold records 10, 20, 40, 100 and 150, 175, 190, 200 is shown
after records 25 and 101 are added (forcing records 100 and 200 into the overflow
area), and again after records 26 and 199 are added (forcing record 40 into the
overflow chain from track 1 and record 199 into the chain from track 2).
In one pass by writing it directly into another area of direct access storage. In
this case, the area occupied by the original data set cannot be used by the
reorganized data set.
The operating system maintains statistics that are pertinent to reorganization. The
statistics, written on the direct access volume and available in the DCB for
checking, include the number of cylinder overflow areas, the number of unused
tracks in the independent overflow area, and the number of references to overflow
records other than the first. They appear in the RORG1, RORG2, and RORG3
fields of the DCB.
When creating or updating the data set, if you want to be able to flag records for
deletion during updating, set the delete code (the first byte of a fixed-length record
or the fifth byte of a variable-length record) to X'FF'. If a flagged record is forced
off its prime track during a subsequent update, it will not be rewritten in the
overflow area, as shown in Figure 173, unless it has the highest key on that
cylinder.
(The accompanying figure, not reproduced here, shows the position of the delete
code: the first data byte of a fixed-length record, or the first byte following the
BDW and RDW of a variable-length record, set to X'FF'. Figure 173 shows a data
set whose prime tracks initially hold records 10, 20, 40, 100 and 200.)
Similarly, when you process sequentially, flagged records are not retrieved for
processing. During direct processing, flagged records are retrieved the same as
any other records, and you should check them for the delete code.
Buffer Requirements
The only case in which you will ever have to compute the buffer length (BUFL)
requirements for your program occurs when you use the BUILD or GETPOOL
macro to construct the buffer area. If you are creating an indexed sequential data
set (using the PUT macro), each buffer must be 8 bytes longer than the block size
to allow for the hardware count field. That is:
BUFL = BLKSIZE + 8
One exception to this formula arises when you are dealing with an unblocked
format-F record whose key field precedes the data field; its relative key position is 0
(RKP=0). In that case, the key length must also be added:
BUFL = BLKSIZE + KEYLEN + 8
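These two buffer-length cases can be sketched in Python (illustrative only; the parameter names mirror the DCB's BLKSIZE and KEYLEN):

```python
def bufl(blksize: int, keylen: int = 0, unblocked_rkp0: bool = False) -> int:
    """Buffer length for creating an indexed sequential data set with PUT.
    8 extra bytes cover the hardware count field; for an unblocked format-F
    record whose key precedes the data (RKP=0), the key length is added too."""
    extra_key = keylen if unblocked_rkp0 else 0
    return blksize + 8 + extra_key

print(bufl(1600))                                 # 1608
print(bufl(256, keylen=10, unblocked_rkp0=True))  # 274
```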
The buffer requirements for using the queued access method to read or update
(using the GET or PUTX macro) an indexed sequential data set are discussed
below.
For fixed-length unblocked records when both the key and data are to be read,
and for variable-length unblocked records, padding is added so that the data will
be on a doubleword boundary, that is:
Note: When you use the basic access method to update records in an indexed
sequential data set, the key length field need not be considered in
determining your buffer requirements.
If you are reading only the data portion of fixed-length unblocked records or
variable-length records, the work area is the same size as the record.
The size of the work area needed varies according to the record format and the
device type. You can calculate it during execution using device-dependent
information obtained with the DEVTYPE macro and data set information from the
DSCB obtained with the OBTAIN macro. (The DEVTYPE and OBTAIN macros are
discussed in DFSMS/MVS DFSMSdfp Advanced Services.)
Note that you can use the DEVTYPE macro only if the index and prime areas are
on devices of the same type or if the index area is on a device with a larger track
capacity than the device containing the prime area. If you are not trying to maintain
device independence, you can precalculate the size of the work area needed and
specify it in the SMSW field of the DCB macro. The maximum value for SMSW is
65535.
For fixed-length blocked records, the size of the main storage work area (SMSW) is
calculated as follows:
SMSW = (DS2HIRPR) (BLKSIZE + 8) + LRECL + KEYLEN
The value for DS2HIRPR is in the index (format-2) DSCB. If you do not use the
MSWA and SMSW parameters, the control program supplies a work area using the
formula BLKSIZE + LRECL + KEYLEN.
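The fixed-length blocked SMSW formula can be sketched in Python; the input values below are hypothetical, chosen only to show the shape of the calculation:

```python
def smsw_fixed_blocked(ds2hirpr: int, blksize: int, lrecl: int, keylen: int) -> int:
    """SMSW = (DS2HIRPR)(BLKSIZE + 8) + LRECL + KEYLEN, per the formula above."""
    return ds2hirpr * (blksize + 8) + lrecl + keylen

# Hypothetical values: DS2HIRPR of 26, 1600-byte blocks, 200-byte records,
# 10-byte keys.
print(smsw_fixed_blocked(26, 1600, 200, 10))  # 42018
```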
For variable-length records, SMSW can be calculated by one of two methods. The
first method can lead to faster processing, although it might require more storage
than the second method.
The second method yields a minimum value for SMSW. Therefore, the first method
is valid only if its application results in a value higher than the value that would be
derived from the second method. If neither MSWA nor SMSW is specified, the
control program supplies the work area for variable-length records, using the
second method to calculate the size.
In all the above formulas, the terms BLKSIZE, LRECL, KEYLEN, and SMSW are
the same as the parameters in the DCB macro (Trk Cap=track capacity). REM is
the remainder of the division operation in the formula and N is the first constant in
the block length formulas. (REM-N-KEYLEN) is added only if its value is positive.
The maximum value for SMSI is 65535. If you do not use this technique, the index on the volume must
be searched. If the high-level index is greater than 65535 bytes in length, your
request for the high-level index in storage is ignored.
The size of the storage area (SMSI parameter) varies. To allocate that space
during execution, you can find the size of the high-level index in the DCBNCRHI
field of the DCB during your DCB user exit routine or after the data set is open.
Use the DCBD macro to gain access to the DCBNCRHI field (see Chapter 22,
“Specifying and Initializing a Data Control Block” on page 293). You can also find
the size of the high-level index in the DS2NOBYT field of the index (format 2)
DSCB, but you must use the utility program IEHLIST to print the information in the
DSCB. You can calculate the size of the storage area required for the high-level
index by using the formula
SMSI = (Number of Tracks in High-Level Index)
       × (Number of Entries per Track)
       × (Key Length + 10)
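A Python sketch of the SMSI formula (illustrative only; the input values below are hypothetical):

```python
def smsi(hi_index_tracks: int, entries_per_track: int, keylen: int) -> int:
    """SMSI = tracks in high-level index x entries per track x (key length + 10)."""
    return hi_index_tracks * entries_per_track * (keylen + 10)

# Hypothetical: a 1-track high-level index, 57 entries per track, 12-byte keys.
print(smsi(1, 57, 12))  # 1254
```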
The formula for calculating the number of tracks in the high-level index is in
“Calculating Space Requirements” on page 520. When a data set is shared and
has the DCB integrity feature (DISP=SHR), the high-level index in storage is not
updated when DCB fields are changed.
Device Control
An indexed sequential data set is processed sequentially or directly. Direct
processing is accomplished by the basic access method. Because you provide the
key for the record you want read or written, all device control is handled
automatically by the system. If you are processing the data set sequentially, using
the queued access method, the device is automatically positioned at the beginning
of the data set.
In some cases, you might want to process only a section or several separate
sections of the data set. You do that by using the SETL macro, which directs the
system to begin sequential retrieval at the record having a specific key. The
processing of succeeding records is the same as for normal sequential processing,
except that you must recognize when the last desired record has been processed.
At this point, issue the ESETL macro to end sequential processing. You can then
begin processing at another point in the data set. If you do not specify a SETL
macro before retrieving the data, the system assumes default SETL values. (See
the GET and SETL macros in DFSMS/MVS Macro Instructions for Data Sets.)
The key class is useful because you do not have to know the entire key of the first
record to be processed. A key class consists of all the keys that begin with identical
characters. The key class is defined by specifying the desired characters of the key
class at the address specified in the lower-limit address of the SETL macro and
setting the remaining characters to the right of the key class to binary zeros.
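Building a key-class lower limit as described above (key-class characters followed by binary zeros, as in the KEYADDR constant of Figure 169) can be sketched in Python; the 10-byte key length is assumed for illustration:

```python
KEYLEN = 10  # key length assumed for illustration

def key_class_limit(prefix: bytes, keylen: int = KEYLEN) -> bytes:
    """Build a SETL KC lower limit: key-class characters padded with binary zeros."""
    return prefix + b"\x00" * (keylen - len(prefix))

print(key_class_limit(b"915"))  # b'915\x00\x00\x00\x00\x00\x00\x00'
```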
To use actual addresses, you must keep a record of where the records were
written when the data set was allocated. The device address of the block containing
the record just processed by a PUT-move macro is available in the 8-byte data
control block field DCBLPDA. For blocked records, the address is the same for
each record in the block.
Because the DCBOPTCD field in the DCB can be changed after the data set is
allocated (by respecifying the OPTCD in the DCB or DD statement), it is possible to
retrieve deleted records. Then, SETL functions as noted above.
When the delete option is specified in the DCB, the SETL macro options function
as follows:
SETL B Start retrieval at the first undeleted record in the data set.
SETL K Start retrieval at the record matching the specified key, if that record is
not deleted. If the record is deleted, an NRF (no record found)
indication is set in the DCBEXCD field of the DCB, and SYNAD is
given control.
SETL KH Start with the first undeleted record whose key is equal to or higher
than the specified key.
SETL KC Start with the first undeleted record having a key that falls into the
specified key class or follows the specified key class.
SETL I Start with the first undeleted record following the specified direct
access address.
Without the delete option specified, QISAM retrieves and handles records marked
for deletion as nondeleted records.
Note: Regardless of the SETL or delete option specified, the NRF condition will be
posted in the DCBEXCD field of the DCB, and SYNAD is given control if the
key or key class:
Is higher than any key or key class in the data set
Does not have a matching key or key class in the data set.
Although the ISAM interface is an efficient way of processing your existing ISAM
programs, all new programs that you write should be VSAM programs. Indexed
sequential data sets should be migrated to VSAM key-sequenced data sets.
Existing programs can use the ISAM/VSAM interface to access those data sets and
need not be deleted. You can use the REPRO command with the ENVIRONMENT
keyword to handle the ISAM “dummy” records.
VSAM, through its ISAM interface program, allows a debugged program that
processes an indexed sequential data set to process a key-sequenced data set.
The key-sequenced data set can have been converted from an indexed-sequential
or a sequential data set (or another VSAM data set) or can be loaded by one of
your own programs. The loading program can be coded with VSAM macros, ISAM
macros, PL/I statements, or COBOL statements. That is, you can load records into
a newly defined key-sequenced data set with a program that was coded to load
records into an indexed sequential data set.
Figure 174 on page 542 shows the relationship between ISAM programs
processing VSAM data with the ISAM interface and VSAM programs processing the
data.
[Figure 174: An existing indexed sequential data set is accessed by ISAM programs. After it is converted to (or loaded into) a VSAM key-sequenced data set, the key-sequenced data set is accessed by existing ISAM programs through the ISAM interface, by ISAM programs converted to VSAM programs, and by new VSAM programs.]
There are some minor restrictions on the types of processing an ISAM program can
do if it is to be able to process a key-sequenced data set. These restrictions are
described in “Restrictions on the Use of the ISAM Interface” on page 550.
The ISAM interface receives return codes and exception codes for logical and
physical errors from VSAM, translates them to ISAM codes, and routes them to the
processing program or error-analysis (SYNAD) routine through the ISAM DCB or
DECB. Figure 175 shows QISAM error conditions and the meaning they have
when the ISAM interface is being used.
Figure 176 shows BISAM error conditions and the meaning they have when the
ISAM interface is being used.
If invalid requests occur in BISAM that did not occur previously and the request
parameter list indicates that VSAM is unable to handle concurrent data-set
positioning, the value specified for the STRNO AMP parameter should be
increased. If the request parameter list indicates an exclusive-use conflict,
reevaluate the share options associated with the data.
Figure 177 gives the contents of registers 0 and 1 when a SYNAD routine specified
in a DCB gets control.
You can also specify a SYNAD routine through the DD AMP parameter (see “JCL
for Processing with the ISAM Interface” on page 548). Figure 178 gives the
contents of registers 0 and 1 when a SYNAD routine specified through AMP gets
control.
If your SYNAD routine issues the SYNADAF macro, registers 0 and 1 are used to
communicate. When you issue SYNADAF, register 0 must have the same contents
it had when the SYNAD routine got control and register 1 must contain the address
of the DCB.
When you get control back from SYNADAF, the registers have the same contents
they would have if your program were processing an indexed sequential data set:
Register 0 contains a completion code, and register 1 contains the address of the
SYNADAF message.
The completion codes and the format of a SYNADAF message are given in
DFSMS/MVS Macro Instructions for Data Sets.
Figure 179 shows abend codes issued by the ISAM interface when there is no
other method of communicating the error to the user.
If a SYNAD routine specified through AMP issues the SYNADAF macro, the
parameter ACSMETH can specify either QISAM or BISAM, regardless of which of
the two is used by your processing program.
Figure 180 shows the DEB fields that are supported by the ISAM interface. Except
as noted, field meanings are the same as in ISAM.
Each volume of a multivolume component must be on the same type of device; the
data component and the index component, however, can be on volumes of devices
of different types.
When you define the key-sequenced data set into which the indexed sequential
data set is to be copied, you must specify the attributes of the VSAM data set for
variable and fixed-length records.
To learn the attributes of the ISAM data set, you can use a program such as
ISPF/PDF, ISMF, or the IEHLIST utility. IEHLIST can be used to get the dump
format of the format-1 DSCB. The layout is in DFSMS/MVS DFSMSdfp Advanced
Services. The RKP field is at offset X'5B'.
The level of sharing allowed when the key-sequenced data set is defined should be
considered. If the ISAM program opens multiple DCBs pointing to different DD
statements for the same data set, a share-options value of 1, which is the default,
allows only the first DD statement to be opened. See “Cross-Region Share Options”
on page 178 for a description of the cross-region share-options values.
With ISAM, deleted records are flagged as deleted, but are not actually removed
from the data set. To avoid reading VSAM records that are flagged as deleted
(X'FF'), code DCB=OPTCD=L. If your program depends on a record's only being
flagged and not actually removed, you might want to keep these flagged records
when you convert and continue to have your programs process these records. The
access method services REPRO command has a parameter (ENVIRONMENT) that
causes VSAM to keep the flagged records when you convert.
The DCB parameter in the DD statement that identifies a VSAM data set is invalid
and must be removed. If the DCB parameter is not removed, unpredictable results
can occur. Certain DCB-type information can be specified in the AMP parameter,
which is described later in this chapter.
Figure 181 shows the DCB fields supported by the ISAM interface.
For a complete description of the AMP parameter and its syntax, see OS/390 MVS
JCL Reference.
The record length in a BISAM DECB is ignored, except when you are replacing a
variable-length record with the WRITE macro.
If your processing program issues the SETL I or SETL ID instruction, you must
modify the instruction to some other form of the SETL or remove it. The ISAM
interface cannot translate a request that depends on a specific block or device
address.
Although asynchronous processing can be specified in an ISAM processing
program, all ISAM requests are handled synchronously by the ISAM interface;
WAIT and CHECK requests are always satisfied immediately. The ISAM
CHECK macro does not result in a VSAM CHECK macro's being issued but
merely causes exception codes in the DECB (data event control block) to be
tested.
If your ISAM SYNAD routine examines information that cannot be supported by
the ISAM interface (for example, the IOB), specify a replacement ISAM SYNAD
routine in the AMP parameter of the VSAM DD statement.
The ISAM interface uses the same RPL repeatedly; thus, for BISAM, a
READ for update ties up an RPL until a WRITE or FREEDBUF is issued
(at which point the interface issues an ENDREQ for the RPL). (With ISAM, you
can simply issue another READ if you do not want to update a record after
issuing a BISAM READ for update.)
The ISAM interface does not support RELOAD processing. RELOAD
processing is implied when an attempt is made to open a VSAM data set for
output with DISP=OLD and the number of logical records in the data set is
greater than zero.
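As an illustrative sketch only, the RELOAD condition stated above can be written as a simple predicate. The function and parameter names are assumptions for the example, not actual interface fields.

```python
def implies_reload(open_for_output, disp, logical_record_count):
    """RELOAD processing is implied when a VSAM data set is opened for
    output with DISP=OLD and the data set already contains records."""
    return open_for_output and disp == "OLD" and logical_record_count > 0

print(implies_reload(True, "OLD", 100))  # True: the ISAM interface rejects this open
print(implies_reload(True, "OLD", 0))    # False: loading an empty data set is allowed
```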
ISAMDATA contains records flagged for deletion; these records are to be kept in
the VSAM data set. The ENVIRONMENT(DUMMY) parameter in the REPRO
command tells the system to copy the records flagged for deletion.
//CONVERT JOB ...
//JOBCAT DD DISP=SHR,DSNAME=USERCTLG
//STEP EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//ISAM DD DISP=OLD,DSNAME=ISAMDATA,DCB=DSORG=IS
//VSAM DD DISP=OLD,DSNAME=VSAMDATA
//SYSIN DD *
REPRO -
INFILE(ISAM ENVIRONMENT(DUMMY)) -
OUTFILE(VSAM)
/*
To drop records flagged for deletion in the indexed-sequential data set, omit
ENVIRONMENT(DUMMY).
The use of the JOBCAT DD statement prevents this job from accessing any
system-managed data sets.
When the processing program closes the data set, the interface issues VSAM PUT
macros for ISAM PUT locate requests (in load mode), deletes the interface routines
from virtual storage, frees virtual-storage space that was obtained for the interface,
and gives control to VSAM.
| In the table below, the CCSID Conversion Group column identifies the group, if
| any, containing the CCSIDs which can be supplied to BSAM or QSAM for
| ISO/ANSI V4 tapes to convert from or to the CCSID shown in the CCSID column.
| For a description of CCSID conversion and how you can request it, see “Converting
| Character Data” on page 279.
| See “CCSID Conversion Groups” on page 566 for the conversion groups. A blank
| in the CCSID Conversion Group column indicates that the CCSID is not supported
| by BSAM or QSAM for CCSID conversion with ISO/ANSI V4 tapes.
CCSID   CCSID Conversion Group   Default LOCALNAME
37 1 COM EUROPE EBCDIC
256 NETHERLAND EBCDIC
259 SYMBOLS SET 7
273 1 AUS/GERM EBCDIC
277 1 DEN/NORWAY EBCDIC
278 1 FIN/SWEDEN EBCDIC
280 1 ITALIAN EBCDIC
282 1 PORTUGAL EBCDIC
284 1 SPANISH EBCDIC
285 1 UK EBCDIC
286 AUS/GER 3270 EBCD
290 2 JAPANESE EBCDIC
297 1 FRENCH EBCDIC
300 JAPAN LATIN HOST
301 JAPAN DB PC-DATA
367 4 US ANSI X3.4 ASCII
420 ARABIC EBCDIC
421 MAGHR/FREN EBCDIC
423 GREEK EBCDIC
424 HEBREW EBCDIC
437 USA PC-DATA
500 4 INTL EBCDIC
803 HEBREW EBCDIC
813 GREEK/LATIN ASCII
819 ISO 8859-1 ASCII
833 KOREAN EBCDIC
834 KOREAN DB EBCDIC
835 T-CHINESE DB EBCD
836 S-CHINESE EBCDIC
837 S-CHINESE EBCDIC
838 THAILAND EBCDIC
839 THAI DB EBCDIC
850 LATIN-1 PC-DATA
851 GREEK PC-DATA
852 ROECE PC-DATA
853 TURKISH PC-DATA
855 CYRILLIC PC-DATA
856 HEBREW PC-DATA
857 TURKISH PC-DATA
860 PORTUGESE PC-DATA
861 ICELAND PC-DATA
862 HEBREW PC-DATA
863 CANADA PC-DATA
864 ARABIC PC-DATA
865 DEN/NORWAY PC-DAT
866 CYRILLIC PC-DATA
868 URDU PC-DATA
869 GREEK PC-DATA
870 ROECE EBCDIC
871 1 ICELAND EBCDIC
874 THAI PC-DISPLAY
875 3 GREEK EBCDIC
880 CYRILLIC EBCDIC
891 KOREA SB PC-DATA
895 JAPAN 7-BIT LATIN
896 JAPAN 7-BIT KATAK
897 JAPAN SB PC-DATA
899 SYMBOLS - PC
903 S-CHINESE PC-DATA
904 T-CHINESE PC-DATA
905 TURKEY EBCDIC
912 ISO 8859-2 ASCII
915 ISO 8859-5 ASCII
916 ISO 8859-8 ASCII
918 URDU EBCDIC
920 ISO 8859-9 ASCII
926 KOREA DB PC-DATA
927 T-CHINESE PC-DATA
928 S-CHINESE PC-DATA
929 THAI DB PC-DATA
930 JAPAN MIX EBCDIC
931 JAPAN MIX EBCDIC
932 JAPAN MIX PC-DATA
933 KOREA MIX EBCDIC
934 KOREA MIX PC-DATA
935 S-CHINESE MIX EBC
936 S-CHINESE PC-DATA
937 1 T-CHINESE MIX EBC
938 T-CHINESE MIX PC
939 JAPAN MIX EBCDIC
942 JAPAN MIX PC-DATA
944 KOREA MIX PC-DATA
946 S-CHINESE PC-DATA
948 T-CHINESE PC-DATA
949 KOREA KS PC-DATA
951 IBM KS PC-DATA
1008 ARABIC ISO/ASCII
1010 FRENCH ISO-7 ASCI
1011 GERM ISO-7 ASCII
1012 ITALY ISO-7 ASCII
1013 UK ISO-7 ASCII
1014 SPAIN ISO-7 ASCII
1015 PORTUGAL ISO7 ASC
1016 NOR ISO-7 ASCII
1017 DENMK ISO-7 ASCII
1018 FIN/SWE ISO-7 ASC
1019 BELG/NETH ASCII
1020 CANADA ISO-7
1021 SWISS ISO-7
1023 SPAIN ISO-7
1025 CYRILLIC EBCDIC
1026 3 TURKEY LATIN-5 EB
1027 2 JAPAN LATIN EBCD
1040 KOREA PC-DATA
1041 JAPAN PC-DATA
1042 S-CHINESE PC-DATA
1043 T-CHINESE PC-DATA
1046 ARABIC - PC
1047 5 LATIN OPEN SYS EB
1051 HP EMULATION
1088 KOREA KS PC-DATA
1089 ARABIC ISO 8859-6
1097 FARSI EBCDIC
1098 FARSI - PC
1100 MULTI EMULATION
1101 BRITISH ISO-7 NRC
1102 DUTCH ISO-7 NRC
1103 FINNISH ISO-7 NRC
1104 FRENCH ISO-7 NRC
1105 NOR/DAN ISO-7 NRC
1106 SWEDISH ISO-7 NRC
1107 NOR/DAN ISO-7 NRC
4133 1 USA EBCDIC
4369 1 AUS/GERMAN EBCDIC
4370 1 BELGIUM EBCDIC
4371 1 BRAZIL EBCDIC
4372 CANADA EBCDIC
4373 1 DEN/NORWAY EBCDIC
4374 1 FIN/SWEDEN EBCDIC
4376 1 ITALY EBCDIC
4378 1 PORTUGAL EBCDIC
4380 1 LATIN EBCDIC
4381 1 UK EBCDIC
4386 2 JAPAN EBCDIC SB
4393 1 FRANCE EBCDIC
4396 JAPAN EBCDIC DB
4516 ARABIC EBCDIC
4519 GREEK EBCDIC 3174
4520 HEBREW EBCDIC
4533 SWISS PC-DATA
4596 4 LATIN AMER EBCDIC
4929 KOREA SB EBCDIC
4932 S-CHIN SB EBCDIC
4934 THAI SB EBCDIC
4946 LATIN-1 PC-DATA
4947 GREEK PC-DATA
4948 LATIN-2 PC-DATA
4949 TURKEY PC-DATA
4951 CYRILLIC PC-DATA
4952 HEBREW PC-DATA
4953 TURKEY PC-DATA
4960 ARABIC PC-DATA
4964 URDU PC-DATA
4965 GREEK PC-DATA
4966 ROECE LATIN-2 EBC
4967 1 ICELAND EBCDIC
4970 THAI SB PC-DATA
4976 CYRILLIC EBCDIC
4993 JAPAN SB PC-DATA
5014 URDU EBCDIC
5026 JAPAN MIX EBCDIC
5028 JAPAN MIX PC-DATA
5029 KOREA MIX EBCDIC
5031 S-CH MIXED EBCDIC
5033 1 T-CHINESE EBCDIC
5035 JAPAN MIX EBCDIC
5045 KOREA KS PC-DATA
5047 KOREA KS PC DATA
5143 5 LATIN OPEN SYS
8229 1 INTL EBCDIC
8448 INTL EBCDIC
8476 SPAIN EBCDIC
8489 FRANCE EBCDIC
8612 ARABIC EBCDIC
8629 AUS/GERM PC-DATA
8692 4 AUS/GERMAN EBCDIC
9025 KOREA SB EBCDIC
9026 KOREA DB EBCDIC
9047 CYRILLIC PC-DATA
9056 ARABIC PC-DATA
9060 URDU PC-DATA
9089 JAPAN PC-DATA SB
9122 JAPAN MIX EBCDIC
9124 JAPAN MIX PC-DATA
9125 KOREA MIX EBCDIC
12325 CANADA EBCDIC
12544 FRANCE EBCDIC
12725 FRANCE PC-DATA
12788 4 ITALY EBCDIC
13152 ARABIC PC-DATA
13218 JAPAN MIX EBCDIC
13219 JAPAN MIX EBCDIC
13221 KOREA MIX EBCDIC
16421 1 CANADA EBCDIC
16821 ITALY PC-DATA
16884 4 FIN/SWEDEN EBCDIC
20517 1 PORTUGAL EBCDIC
20917 UK PC-DATA
20980 4 DEN/NORWAY EBCDIC
24613 1 INTL EBCDIC
24877 JAPAN DB PC-DISPL
25013 USA PC-DISPLAY
25076 4 DEN/NORWAY EBCDIC
25426 LATIN-1 PC-DISP
25427 GREECE PC-DISPLAY
25428 LATIN-2 PC-DISP
25429 TURKEY PC-DISPLAY
25431 CYRILLIC PC-DISP
25432 HEBREW PC-DISPLAY
25433 TURKEY PC-DISPLAY
25436 PORTUGAL PC-DISP
25437 ICELAND PC-DISP
25438 HEBREW PC-DISPLAY
25439 CANADA PC-DISPLAY
25440 ARABIC PC-DISPLAY
25441 DEN/NOR PC-DISP
25442 CYRILLIC PC-DISP
25444 URDU PC-DISPLAY
25445 GREECE PC-DISPLAY
25450 THAILAND PC-DISP
25467 KOREA SB PC-DISP
25473 JAPAN SB PC-DISP
25479 S-CHIN SB PC-DISP
25480 T-CHINESE PC-DISP
25502 KOREA DB PC-DISP
25503 T-CHINESE PC-DISP
25504 S-CHINESE PC-DISP
25505 THAILAND PC-DISP
25508 JAPAN PC-DISPLAY
25510 KOREA PC-DISPLAY
25512 S-CHINESE PC-DISP
25514 T-CHINESE PC-DISP
25518 JAPAN PC-DISPLAY
25520 KOREA PC-DISPLAY
25522 S-CHINESE PC-DISP
25524 T-CHINESE PC-DISP
25525 KOREA KS PC-DISP
25527 KOREA KS PC-DISP
25616 KOREA SB PC-DISP
25617 JAPAN PC-DISPLAY
25618 S-CHINESE PC-DISP
25619 T-CHINESE PC-DISP
25664 KOREA KS PC-DISP
28709 1 T-CHINESE EBCDIC
29109 USA PC-DISPLAY
29172 4 BRAZIL EBCDIC
29522 LATIN-1 PC-DISP
29523 GREECE PC-DISPLAY
29524 ROECE PC-DISPLAY
29525 TURKEY PC-DISPLAY
29527 CYRILLIC PC-DISP
29528 HEBREW PC-DISPLAY
29529 TURKEY PC-DISPLAY
29532 PORTUGAL PC-DISP
29533 ICELAND PC-DISP
29534 HEBREW PC-DISPLAY
29535 CANADA PC-DISPLAY
29536 ARABIC PC-DISPLAY
29537 DEN/NOR PC-DISP
29540 URDU PC-DISPLAY
29541 GREECE PC-DISPLAY
29546 THAILAND PC-DISP
29614 JAPAN PC-DISPLAY
29616 KOREA PC-DISPLAY
29618 S-CHINESE PC-DISP
29620 T-CHINESE PC-DISP
29621 KOREA KS MIX PC
29623 KOREA KS PC-DISP
29712 KOREA PC-DISPLAY
29713 JAPAN PC-DISPLAY
29714 S-CHINESE PC-DISP
29715 T-CHINESE PC-DISP
29760 KOREA KS PC-DISP
32805 1 JAPAN LATIN EBCDIC
33058 2 JAPAN EBCDIC
33205 SWISS PC-DISPLAY
33268 4 UK/PORTUGAL EBCDIC
33618 LATIN-1 PC-DISP
33619 GREECE PC-DISPLAY
33620 ROECE PC-DISPLAY
33621 TURKEY PC-DISPLAY
33623 CYRILLIC PC-DISP
33624 HEBREW PC-DISPLAY
33632 ARABIC PC-DISPLAY
33636 URDU PC-DISPLAY
33637 GREECE PC-DISPLAY
33665 JAPAN PC-DISPLAY
33698 JAPAN KAT/KAN EBC
33699 JAPAN LAT/KAN EBC
33700 JAPAN PC-DISPLAY
33717 KOREA KS PC-DISP
37301 AUS/GERM PC-DISP
37364 BELGIUM EBCDIC
37719 CYRILLIC PC-DISP
37728 ARABIC PC-DISPLAY
37732 URDU PC-DISPLAY
37761 JAPAN SB PC-DISP
37796 JAPAN PC-DISPLAY
37813 KOREA KS PC-DISP
41397 FRANCE PC-DISPLAY
41460 4 SWISS EBCDIC
41824 ARABIC PC-DISPLAY
41828 URDU PC-DISPLAY
45493 ITALY PC-DISPLAY
45556 4 SWISS EBCDIC
45920 ARABIC PC-DISPLAY
49589 UK PC-DISPLAY
49652 4 BELGIUM EBCDIC
53748 4 INTL EBCDIC
61696 4 GLOBAL SB EBCDIC
61697 GLOBAL SB PC-DATA
61698 GLOBAL PC-DISPLAY
61699 GLBL ISO-8 ASCII
61700 GLBL ISO-7 ASCII
61710 GLOBAL USE ASCII
61711 4 GLOBAL USE EBCDIC
61712 4 GLOBAL USE EBCDIC
| [The CCSID conversion group tables (Groups 1 through 5) appeared here.
| Their multi-column lists of CCSID numbers were garbled during extraction
| and cannot be reliably reconstructed. Each group lists the CCSIDs that
| BSAM or QSAM can convert among for ISO/ANSI Version 4 tapes.]
| USER
| refers to the CCSID which describes the data used by the problem program. It
| is derived from (in order of precedence):
| If the table entry contains a 0, it means that the CCSID parameter was not
| supplied in the JCL. In this case, if data management performs CCSID
| conversion, a system default CCSID of 500 is used.
| TAPE (DD)
| refers to the CCSID which is specified by the CCSID parameter on the DD
| statement, dynamic allocation, or TSO ALLOCATE. If the table entry contains a
| 0, it means that the CCSID parameter was not supplied. In this case, if data
| management performs CCSID conversion, a system default CCSID of 367 is
| used when open for output and not DISP=MOD.
| Label
| refers to the CCSID which will be stored in the tape label during output
| processing (not DISP=MOD), or which is found in an existing label during
| DISP=MOD or input processing. Unless otherwise indicated in the tables, a
| CCSID found in the tape label overrides a CCSID specified on the DD
| statement on input.
| Conversion
| refers to the type of data conversion data management will perform based on
| the combination of CCSIDs supplied from the previous three columns.
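As an illustrative sketch of the defaults described above (system default CCSID 500 when no USER CCSID was supplied; 367 for the tape side when no CCSID appears on the DD statement and the data set is opened for output without DISP=MOD), the resolution can be modeled as follows. The function and parameter names are assumptions for the example; a value of 0 models "not supplied in the JCL".

```python
def effective_ccsids(user_ccsid, tape_ccsid, open_for_output, disp_mod):
    """Resolve the USER and TAPE (DD) CCSIDs, applying the system
    defaults described in the text when a CCSID was not supplied (0)."""
    user = user_ccsid if user_ccsid != 0 else 500
    # The 367 default applies only on output without DISP=MOD; the text
    # does not define an input-side default, so 0 is passed through.
    if tape_ccsid == 0 and open_for_output and not disp_mod:
        tape = 367
    else:
        tape = tape_ccsid
    return user, tape

print(effective_ccsids(0, 0, open_for_output=True, disp_mod=False))  # (500, 367)
```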
| Figure 188 describes processing used when the data set is opened for output with
| DISP=MOD or when the OPEN option is EXTEND.
| Note: This is only allowed for IBM-created Version 4 tapes. Attempting to open a
| non-IBM-created Version 4 tape with DISP=MOD results in an
| ABEND 513-10.
| Figure 189 describes processing used when the data set is opened for INPUT or
| RDBACK.
| When converting EBCDIC code to ASCII code, all EBCDIC code not having an
| ASCII equivalent is converted to X'1A'. When converting ASCII code to EBCDIC
| code, all ASCII code not having an EBCDIC equivalent is converted to X'3F'.
| Because Version 3 ASCII uses only 7 bits in each byte, bit 0 is always set to 0
| during EBCDIC to ASCII conversion and is expected to be 0 during ASCII to
| EBCDIC conversion.
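The two substitution rules can be sketched with a partial translate table. This is an illustrative sketch, not the full conversion: only a handful of byte pairs taken from the tables that follow are included (EBCDIC X'40' is the space, X'C1' through X'C3' are A through C, X'F0' and X'F1' are 0 and 1).

```python
# Sketch of the substitution rule: an EBCDIC byte with no ASCII
# equivalent becomes X'1A'; an ASCII byte with no EBCDIC equivalent
# becomes X'3F'.  Only a few mappings are included for illustration.

EBCDIC_TO_ASCII = {0x40: 0x20, 0xC1: 0x41, 0xC2: 0x42, 0xC3: 0x43,
                   0xF0: 0x30, 0xF1: 0x31}
ASCII_TO_EBCDIC = {v: k for k, v in EBCDIC_TO_ASCII.items()}

def ebcdic_to_ascii(data):
    return bytes(EBCDIC_TO_ASCII.get(b, 0x1A) for b in data)

def ascii_to_ebcdic(data):
    # Version 3 ASCII uses only 7 bits; this sketch masks off bit 0
    # before the lookup, since it is expected to be 0.
    return bytes(ASCII_TO_EBCDIC.get(b & 0x7F, 0x3F) for b in data)

print(ebcdic_to_ascii(bytes([0xC1, 0xC2, 0x40, 0xF1])))  # b'AB 1'
print(ebcdic_to_ascii(bytes([0xFF])))                    # b'\x1a' (no equivalent)
```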
| EBCDIC-to-ASCII translate table (hexadecimal):
|       0 1 2 3 4 5 6 7  8 9 A B C D E F
| 00-0F 000102031A091A7F 1A1A1A0B0C0D0E0F
| 10-1F 101112131A1A081A 18191A1A1C1D1E1F
| 20-2F 1A1A1A1A1A0A171B 1A1A1A1A1A050607
| 30-3F 1A1A161A1A1A1A04 1A1A1A1A14151A1A
| 40-4F 201A1A1A1A1A1A1A 1A1A5B2E3C282B21
| 50-5F 261A1A1A1A1A1A1A 1A1A5D242A293B5E
| 60-6F 2D2F1A1A1A1A1A1A 1A1A7C2C255F3E3F
| 70-7F 1A1A1A1A1A1A1A1A 1A603A2340273D22
| 80-8F 1A61626364656667 68691A1A1A1A1A1A
| 90-9F 1A6A6B6C6D6E6F70 71721A1A1A1A1A1A
| A0-AF 1A7E737475767778 797A1A1A1A1A1A1A
| B0-BF 1A1A1A1A1A1A1A1A 1A1A1A1A1A1A1A1A
| C0-CF 7B41424344454647 48491A1A1A1A1A1A
| D0-DF 7D4A4B4C4D4E4F50 51521A1A1A1A1A1A
| E0-EF 5C1A535455565758 595A1A1A1A1A1A1A
| F0-FF 3031323334353637 38391A1A1A1A1A1A
| ASCII-to-EBCDIC translate table (hexadecimal):
|       0 1 2 3 4 5 6 7  8 9 A B C D E F
| 00-0F 00010203372D2E2F 1605250B0C0D0E0F
| 10-1F 101112133C3D3226 18193F271C1D1E1F
| 20-2F 404F7F7B5B6C507D 4D5D5C4E6B604B61
| 30-3F F0F1F2F3F4F5F6F7 F8F97A5E4C7E6E6F
| 40-4F 7CC1C2C3C4C5C6C7 C8C9D1D2D3D4D5D6
| 50-5F D7D8D9E2E3E4E5E6 E7E8E94AE05A5F6D
| 60-6F 7981828384858687 8889919293949596
| 70-7F 979899A2A3A4A5A6 A7A8A9C06AD0A107
| 80-8F 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
| 90-9F 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
| A0-AF 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
| B0-BF 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
| C0-CF 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
| D0-DF 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
| E0-EF 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
| F0-FF 3F3F3F3F3F3F3F3F 3F3F3F3F3F3F3F3F
Abbreviations

This list may include acronyms and abbreviations from:

The American National Standard Dictionary for Information Systems, ANSI X3.172-1990, copyright 1990 by the American National Standards Institute (ANSI). Copies may be purchased from the American National Standards Institute, 11 West 42nd Street, New York, New York 10036.

The Information Technology Vocabulary developed by Subcommittee 1, Joint Technical Committee 1, of the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC JTC1/SC1).

AL. American National Standard Labels.
AMODE. Addressing mode (24, 31, ANY).
ANSI. American National Standards Institute.
AOR. Application owning region.
APF. Authorized program facility.
ASCII. American National Standard Code for Information Interchange.
AUL. American National Standard user labels (value of LABEL).
BCS. Basic catalog structure.
BDAM. Basic direct access method.
BDW. Block descriptor word.
BFALN. Buffer alignment (parameter of DCB and DD).
BFTEK. Buffer technique (parameter of DCB and DD).
BISAM. Basic index sequential access method.
BLKSIZE. Block size (parameter of DCB and DD).
BLP. Bypass label processing.
BSR. Backspace over a specified number of blocks or records (parameter of CNTRL).
BUFC. Buffer control block.
BUFCB. Buffer pool control block (parameter of DCB and DD).
BUFL. Buffer length (parameter of DCB and DD).
BUFNO. Buffer number (parameter of DCB and DD).
BUFOFF. Buffer offset (length of ASCII block prefix by which the buffer is offset; parameter of DCB and DD).
DA. Direct access (value of DEVD or DSORG).
DADSM. Direct access device space management.
DAU. Direct access unmovable data set (value of DSORG).
DCB. Data control block (control block name, macro, or parameter on DD statement).
DCBD. Data control block dummy section.
DCBE. Data control block extension.
DD. Data definition.
DDM. Distributed Data Management.
ECB. Event control block.
ECSA. Extended common service area.
EODAD. End-of-data set exit routine address (parameter of DCB, DCBE, and EXLST).
ESDS. Entry-sequenced data set.
ESETL. End-of-sequential retrieval (QISAM macro).
EXCP. Execute channel program.
EXLST. Exit list (parameter of DCB and VSAM macros).
EXPDT. Expiration date for a data set (JCL keyword).
ICFC. Integrated catalog facility catalog.
INOUT. Input then output (parameter of OPEN).
MGMTCLAS. Management class (JCL keyword).
MOD. Modify data set (value of DISP).
NRZI. Nonreturn-to-zero-inverted.
NSL. Nonstandard label (value of LABEL).
NSR. Nonshared resources.
NTM. Number of tracks in cylinder index for each entry in lowest level of master index (parameter of DCB).
OUTINX. Output at end of data set (to extend) then input (parameter of OPEN).
POSIX. Portable operating system interface for computer environments.
PSW. Program status word.
QISAM. Queued index sequential access method.
QSAM. Queued sequential access method.
R0. Record zero.
RACF. Resource Access Control Facility.
REFDD. Refer to previous DD statement (JCL keyword).
RRN. Relative record number.
SER. Volume serial number (value of VOLUME).
SF. Sequential forward (parameter of READ or WRITE).
SI. Shift in.
SMSW. Size of main-storage work area (parameter of DCB).
SO. Shift out.
SP. Space lines on a printer (parameter of CNTRL).
SRB. Service request block.
SS. Select stacker on card reader (parameter of CNTRL).
TCB. Task control block, and trusted computing base.
UCS. Universal character set.
UHL. User header label.
UPD. Update.
USAR. User security authorization record.
USVR. User security verification routine.
UTL. User trailer label.
VVDS. VSAM volume data set.
Glossary

A

| access method services. A multifunction service program that manages VSAM and non-VSAM data sets, as well as integrated catalog facility (ICF) and VSAM catalogs. Access method services provides the following functions:
| defines and allocates space for VSAM data sets, VSAM catalogs, and ICF catalogs
| converts indexed-sequential data sets to key-sequenced data sets
| modifies data set attributes in the catalog
| reorganizes data sets
| facilitates data portability among operating systems
| creates backup copies of data sets
| assists in making inaccessible data sets accessible
| lists the records of data sets and catalogs
| defines and builds alternate indexes
| converts CVOLS and VSAM catalogs to ICF catalogs

alias. An alternate name for a member of a partitioned data set.

allocation. Generically, the entire process of obtaining a volume and unit of external storage, and setting aside space on that storage for a data set.

alternate index. In systems with VSAM, a collection of index entries related to a given base cluster and organized by an alternate key, that is, a key other than the prime key of the associated base cluster data records; it gives an alternate directory for finding records in the data component of a base cluster.

application. The use to which an access method is put or the end result that it serves; contrasted to the internal operation of the access method.

automatic class selection (ACS) routine. A procedural set of ACS language statements. Based on a set of input variables, the ACS language statements generate the name of a predefined SMS class, or a list of names of predefined storage groups, for a data set.

automatic data set protection (ADSP). In MVS, a user attribute that causes all permanent data sets created by the user to be automatically defined to RACF with a discrete RACF profile.

B

backup. The process of creating a copy of a data set or object to be used in case of accidental loss.

basic catalog structure (BCS). The name of the catalog structure in the integrated catalog facility environment. See also integrated catalog facility catalog.

blocking. The process of combining two or more records in one block.

block size. A measure of the size of a block, usually specified in units such as records, words, computer words, or characters. (A)

C

catalog. A data set that contains extensive information required to locate other data sets, to allocate and deallocate storage space, to verify the access authority of a program or operator, and to accumulate data set usage statistics. See master catalog and user catalog.

class. See SMS class.

cluster. A named structure consisting of a group of related components. For example, when the data is key-sequenced, the cluster contains both the data and index components; for data that is entry-sequenced, the cluster contains only a data component.

collating sequence. An ordering assigned to a set of items, such that any two sets in that assigned order can be collated.

component. In systems with VSAM, a named, cataloged collection of stored records, such as the data component or index component of a key-sequenced file or alternate index.

compress. (1) To reduce the amount of storage required for a given data set by having the system replace identical words or phrases with a shorter token associated with the word or phrase. (2) To reclaim the unused and unavailable space in a partitioned data set that results from deleting or modifying members by moving all unused space to the end of the data set.

compressed format. A particular type of extended-format data set specified with the (COMPACTION) parameter of data class. VSAM can compress individual records in a compressed-format data set.
concurrent copy. A function to increase the accessibility of data by enabling you to make a consistent backup or copy of data concurrent with the usual application program processing.

configuration. (1) The arrangement of a computer system as defined by the characteristics of its functional units. (2) See SMS configuration.

control blocks in common (CBIC). A facility that allows a user to open a VSAM data set so the VSAM control blocks are placed in the common service area (CSA) of the MVS operating system. This provides the capability for multiple memory accesses to a single VSAM control structure for the same VSAM data set.

control interval definition field (CIDF). In VSAM, the four bytes at the end of a control interval that contain the displacement from the beginning of the control interval to the start of the free space and the length of the free space. If the length is 0, the displacement is to the beginning of the control information.

control unit. A hardware device that controls the reading, writing, or displaying of data at one or more input/output devices. Acts as the interface between channels and devices.

control volume (CVOL). A volume that contains one or more indexes of the catalog.

cross memory. A synchronous method of communication between address spaces.

D

data class. A collection of allocation and space attributes, defined by the storage administrator, that are used to create a data set.

Data Control Block (DCB). A control block used by access method routines in storing and retrieving data.

data definition (DD) statement. A job control statement that describes a data set associated with a particular job step.

data integrity. Preservation of data or programs for their intended purpose. As used in this publication, the safety of data from inadvertent destruction or alteration.

data management. The task of systematically identifying, organizing, storing, and cataloging data in an operating system.

data record. A collection of items of information from the standpoint of its use in an application, as a user supplies it to the system storing it.

data security. Prevention of access to or use of data or programs without authorization. As used in this publication, the safety of data from unauthorized use, theft, or purposeful destruction.

data synchronization. The process by which the system ensures that data previously given to the system via WRITE, CHECK, PUT, and PUTX macros is written to some form of non-volatile storage.

DFSMSdfp. A DFSMS/MVS functional component or base element of OS/390 that provides functions for storage management, data management, program management, device management, and distributed data access.

DFSMSdss. A DFSMS/MVS functional component or base element of OS/390, used to copy, move, dump, and restore data sets and volumes.

DFSMShsm. A DFSMS/MVS functional component or base element of OS/390, used for backing up and recovering data, and managing space on volumes in the storage hierarchy.

DFSMS/MVS. An IBM System/390 licensed program that provides storage, data, and device management functions. When combined with MVS/ESA SP Version 5, it composes the base MVS/ESA operating environment. DFSMS/MVS consists of DFSMSdfp, DFSMSdss, DFSMShsm, and DFSMSrmm.

dictionary. A table that associates words, phrases, or data patterns to shorter tokens. The tokens replace the associated words, phrases, or data patterns when a data set is compressed.

direct access. The retrieval or storage of data by a reference to its location in a data set rather than relative to the previously retrieved or stored data. See also addressed-direct access and keyed-direct access.

direct access device space management (DADSM). An operating system component used to control space allocation and deallocation on DASD.

direct data set. A data set whose records are in random order on a direct access volume. Each record is stored or retrieved according to its actual address or its address relative to the beginning of the data set. Contrast with sequential data set.

directory entry services (DE Services). Directory Entry (DE) Services provides directory entry services for PDS and PDSE data sets. Not all of the functions will operate on a PDS, however. DE Services is usable by authorized as well as unauthorized programs through the executable macro, DESERV.
entry-sequenced data set. A data set whose records format-F. Fixed-length records.
are loaded without respect to their contents, and whose
format-FB. Fixed-length, blocked records.
RBAs cannot change. Records are retrieved and stored
by addressed access, and new records are added at
format-FBS. Fixed-length, blocked, standard records.
the end of the data set.
format-FBT. Fixed-length, blocked records with track
ESA/370. A hardware architecture unique to the IBM
overflow option.
3090 Enhanced model processors and the 4381 Model
Groups 91E and 92E. It reduces the effort required for format-FS. Fixed-length, standard records.
managing data sets, removes certain MVS/XA
constraints that limit applications, extends addressability format-U. Undefined-length records.
for system, subsystem, and application functions, and
helps exploit the full capabilities of SMS. format-V. Variable-length records.
exception. An abnormal condition such as an I/O error format-VB. Variable-length, blocked records.
encountered in processing a data set or a file.
format-VBS. Variable-length, blocked, spanned
EXCEPTIONEXIT. An exit routine invoked by an records.
exception.
format-VS. Variable-length, spanned records.
export. To create a backup or portable copy of a
VSAM cluster, alternate index, or integrated catalog free control interval pointer list. In a sequence-set
facility user catalog. index record, a vertical pointer that gives the location of
a free control interval in the control area governed by
extended format. The format of a data set that has a the record.
data set name type (DSNTYPE) of EXTENDED. The
data set is structured logically the same as a data set free space. Space reserved within the control intervals
that is not in extended format but the physical format is of a key-sequenced data set for inserting new records
different. See also striped data set and compressed into the data set in key sequence or for lengthening
format. records already there; also, whole control intervals
reserved in a control area for the same purpose.
extent. A continuous space on a DASD volume
occupied by a data set or portion of a data set.
Glossary 583
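The free space and control interval split behavior described in the entries above can be sketched with a small model. This is an illustrative sketch only; the interval capacity is a hypothetical value, and real VSAM control intervals hold variable-length records with RDF/CIDF control fields:

```python
# Illustrative model of VSAM-style free space: records are kept in key
# sequence inside fixed-capacity "control intervals", and an insert into a
# full interval forces a split (which is why RBAs in a KSDS can change).
# The capacity here is hypothetical, not a real VSAM value.
import bisect

CI_CAPACITY = 4          # records per control interval (hypothetical)

def insert_keyed(cis, key):
    """Insert a key into the first control interval whose key range covers it."""
    for i, ci in enumerate(cis):
        if not ci or key <= ci[-1] or i == len(cis) - 1:
            bisect.insort(ci, key)
            if len(ci) > CI_CAPACITY:                  # no free space left:
                half = len(ci) // 2                    # split the interval,
                cis[i:i + 1] = [ci[:half], ci[half:]]  # moving half the records
            return

cis = [[10, 20, 30, 40]]          # one full control interval
insert_keyed(cis, 25)             # forces a control interval split
print(cis)                        # -> [[10, 20], [25, 30, 40]]
```

After the split, each interval has free space again, so later inserts near those keys (for example, key 15) land without further splitting.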
G

generation data group (GDG). A collection of historically related non-VSAM data sets that are arranged in chronological order; each data set is called a generation data set.

generation data group base entry. An entry that permits a non-VSAM data set to be associated with other non-VSAM data sets as generation data sets.

generation data set (GDS). One of the data sets in a generation data group; it is historically related to the others in the group.

generic profile. A RACF profile that contains security information about multiple data sets, users, or resources that may have similar characteristics and require a similar level of protection. Contrast with discrete profile.

gigabyte. 2^30 bytes, 1 073 741 824 bytes. This is approximately a billion bytes in American English.

H

hardware configuration definition (HCD). An interactive interface in MVS that enables an installation to define hardware configurations from a single point of control.

header entry. In a parameter list of GENCB, MODCB, SHOWCB, or TESTCB, the entry that identifies the type of request and control block and gives other general information about the request.

header, index record. In an index record, the 24-byte field at the beginning of the record that contains control information about the record.

header label. (1) An internal label, immediately preceding the first record of a file, that identifies the file and contains data used in file control. (2) The label or data set label that precedes the data records on a unit of recording media.

hierarchical file system (HFS) data set. See OS/390 UNIX data set.

Hiperbatch. An extension to both QSAM and VSAM designed to improve performance. Hiperbatch uses the data lookaside facility to provide an alternate fast path method of making data available to many batch jobs.

Hiperspace. A high performance virtual storage space of up to two gigabytes. Unlike an address space, a Hiperspace contains only user data and does not contain system control blocks or common areas; code does not execute in a Hiperspace. Unlike a data space, data in a Hiperspace cannot be referenced directly; data must be moved to an address space in blocks of 4KB before it can be processed. Hiperspace pages can be backed by expanded storage or auxiliary storage, but never by main storage. The Hiperspace used by VSAM is only backed by expanded storage. See also Hiperspace buffer.

Hiperspace buffer. A 4K-byte-multiple buffer that facilitates the moving of data between a Hiperspace and an address space. VSAM Hiperspace buffers are only backed by expanded storage.

I

import. To restore a VSAM cluster, alternate index, or integrated catalog facility catalog from a portable data set created by the EXPORT command.

index record. A collection of index entries that are retrieved and stored as a group. Contrast with data record.

integrated catalog facility catalog. A catalog that is composed of a basic catalog structure (BCS) and its related volume tables of contents (VTOCs) and VSAM volume data sets (VVDSs). See also basic catalog structure and VSAM volume data set.

I/O device. An addressable input/output unit, such as a direct access storage device, magnetic tape device, or printer.

ISAM interface. A set of routines that allow a processing program coded to use ISAM (indexed sequential access method) to gain access to a VSAM key-sequenced data set.

K

key-sequenced data set (KSDS). A VSAM data set whose records are loaded in ascending key sequence and controlled by an index. Records are retrieved and stored by keyed access or by addressed access, and new records can be inserted in key sequence because of free space allocated in the data set. Relative byte addresses can change, because of control interval or control area splits.

kilobyte. 2^10 bytes, 1 024 bytes.
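The byte-unit entries in this glossary (kilobyte and gigabyte above, terabyte and petabyte later) are binary powers of two, not decimal powers of ten. A quick check of the quoted values; the megabyte exponent is included for completeness and is an assumption, since that entry is not in this excerpt:

```python
# Binary byte units used in this glossary: each unit is 2**exp bytes.
units = {"kilobyte": 10, "megabyte": 20, "gigabyte": 30,
         "terabyte": 40, "petabyte": 50}

for name, exp in units.items():
    print(f"{name}: 2**{exp} = {2**exp:,}")

# Values quoted in the glossary entries:
assert 2**10 == 1_024                  # kilobyte
assert 2**30 == 1_073_741_824          # gigabyte
assert 2**40 == 1_099_511_627_776      # terabyte
assert 2**50 == 1_125_899_906_842_624  # petabyte
```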
MEDIA2. Enhanced Capacity Cartridge System Tape

MEDIA3. High Performance Cartridge Tape

MEDIA4. Extended High Performance Cartridge Tape

optimum block size. For non-VSAM data sets, optimum block size represents the block size that would result in the greatest space utilization on a device, taking into consideration record length and device characteristics.
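The optimum block size idea above can be sketched as a small search: try each candidate block size and keep the one that stores the most user data per track. The track capacity, per-block overhead, and block size limit used below are hypothetical illustration values, not real device characteristics:

```python
# Sketch of the "optimum block size" idea: choose the block size (a multiple
# of the record length) that wastes the least track space, given that each
# block on the track also costs a fixed amount of overhead (gaps, count
# fields). All device numbers here are hypothetical.
def optimum_block_size(record_len, track_capacity, block_overhead, max_block):
    best_size, best_used = record_len, 0
    for n_records in range(1, max_block // record_len + 1):
        blksize = n_records * record_len
        blocks_per_track = track_capacity // (blksize + block_overhead)
        used = blocks_per_track * blksize        # user data stored per track
        if used > best_used:
            best_used, best_size = used, blksize
    return best_size, best_used

size, used = optimum_block_size(record_len=80, track_capacity=56664,
                                block_overhead=512, max_block=32760)
print(size, used)                                # -> 27760 55520
```

With these illustration values, two large blocks per track beat many small ones: the larger the block, the smaller the share of the track lost to per-block overhead.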
P

page. (1) A fixed-length block of instructions, data, or both, that can be transferred between real storage and external page storage. (2) To transfer instructions, data, or both between real storage and external page storage.

partitioned data set (PDS). A data set on direct access storage that is divided into partitions, called members, each of which can contain a program, part of a program, or data.

partitioned data set extended (PDSE). A system-managed data set that contains an indexed directory and members that are similar to the directory and members of partitioned data sets. A PDSE can be used instead of a partitioned data set.

password. A unique string of characters that a program, a computer operator, or a terminal user must supply to meet security requirements before a program gains access to a data set.

PDS directory. A set of records in a partitioned data set (PDS) used to relate member names to their locations on a DASD volume.

petabyte. 2^50 bytes, 1 125 899 906 842 624 bytes. This is approximately a quadrillion bytes in American English.

pointer. An address or other indication of location. For example, an RBA is a pointer that gives the relative location of a data record or a control interval in the data set to which it belongs.

portability. The ability to use VSAM data sets with different operating systems. Volumes whose data sets are cataloged in a user catalog can be demounted from storage devices of one system, moved to another system, and mounted on storage devices of that system. Individual data sets can be transported between operating systems using access method services.

primary space allocation. Amount of space requested by a user for a data set when it is created. Contrast with secondary space allocation.

prime key. One or more characters within a data record used to identify the data record or control its use. A prime key must be unique.

record-level sharing. See VSAM record-level sharing (VSAM RLS).

register. An internal computer component capable of storing a specified amount of data and accepting or transferring this data rapidly.

relative record data set (RRDS). A VSAM data set whose records have fixed or variable lengths, and are accessed by relative record number.

Resource Access Control Facility (RACF). An IBM-licensed program, or a base element of OS/390, that provides for access control by identifying and verifying users to the system, authorizing access to protected resources, logging detected unauthorized attempts to enter the system, and logging detected accesses to protected resources.

reusable data set. A VSAM data set that can be reused as a work file, regardless of its old contents. It must not be a base cluster of an alternate index.

RLS. See VSAM record-level sharing (VSAM RLS).

S

scheduling. The ability to request that a task set should be started at a particular interval or on occurrence of a specified program interrupt.

secondary space allocation. Amount of additional space requested by the user for a data set when the primary space is full. Contrast with primary space allocation.

security. See data security.

shared resources. A set of functions that permit the sharing of a pool of I/O-related control blocks, channel programs, and buffers among several VSAM data sets open at the same time. See also local shared resources and global shared resources.

slot. For a relative record data set, the data area addressed by a relative record number, which may contain a record or be empty.

SMS class. A list of attributes that SMS applies to data sets having similar allocation (data class), performance (storage class), or backup and retention (management class) needs.

SMS configuration. A configuration base; Storage Management Subsystem class, group, library, and drive definitions; and ACS routines that the Storage Management Subsystem uses to manage storage. See also base configuration and source control data set.

SMS-managed data set. A data set that has been assigned a storage class.

SMSVSAM. The name of the VSAM server that provides VSAM record-level sharing (RLS). See also VSAM RLS.

spanned record. A logical record whose length exceeds control interval length and that, as a result, crosses, or spans, one or more control interval boundaries within a single control area.

storage administrator. A person in the data processing center who is responsible for defining, implementing, and maintaining storage management policies.

storage class. A collection of storage attributes that identify performance goals and availability requirements, defined by the storage administrator, used to select a device that can meet those goals and requirements.

stripe. In DFSMS/MVS, the portion of a striped data set that resides on one volume. The records in that portion are not always logically consecutive. The system distributes records among the stripes such that the volumes can be read from or written to simultaneously to gain better performance. Whether a data set is striped is not apparent to the application program.

striped data set. An extended format data set that occupies multiple volumes. A software implementation of sequential data striping.

striping. A software implementation of a disk array that distributes a data set across multiple volumes to improve performance.

SYNAD. The physical error user exit routine.

synchronize. See data synchronization.

parallel sysplex. A collection of systems in a multisystem environment supported by the Cross System Coupling Facility (XCF).

system. A functional unit, consisting of one or more computers and associated software, that uses common storage for all or part of a program and also for all or part of the data necessary for the execution of the program.

Note: A computer system can be a stand-alone unit, or it can consist of several connected units.

system-managed data set. A data set that has been assigned a storage class.

system-managed buffering for VSAM. A facility available for system-managed extended-format VSAM data sets in which DFSMSdfp determines the type of buffer management technique along with the number of
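The record distribution described in the stripe and striping entries can be sketched as a simple round-robin split across volumes. This is an illustration of the idea only; actual DFSMS striping distributes data by control interval and track, not by individual logical record:

```python
# Sketch of sequential data striping: consecutive records are spread
# round-robin across the volumes (stripes) so that several volumes can be
# read from or written to at the same time. Volume count and record values
# are illustrative only.
def stripe_records(records, n_volumes):
    """Return per-volume record lists; volume i gets records i, i+n, i+2n, ..."""
    return [records[i::n_volumes] for i in range(n_volumes)]

records = list(range(10))            # logical record numbers 0..9
volumes = stripe_records(records, n_volumes=3)
print(volumes)                       # -> [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

Reading the stripes back in rotation reassembles the logical record sequence, which is why striping is invisible to the application program.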
buffers to use, based on data set and application specifications.

system-managed directory entry (SMDE). A directory that contains all the information contained in the PDS directory entry (as produced by the BLDL macro), as well as information specific to program objects, in an extensible format.

system-managed storage. Storage managed by the Storage Management Subsystem. SMS attempts to deliver required services for availability, performance, and space to applications. See also DFSMS environment.

T

tape library dataserver. A hardware device that maintains the tape inventory associated with a set of tape drives. An automated tape library dataserver also manages the mounting, removal, and storage of tapes.

task control block (TCB). Holds control information related to a task.

terabyte. 2^40 bytes, 1 099 511 627 776 bytes. This is approximately a trillion bytes in American English.

trailer label. A file or data set label that follows the data records on a unit of recording media.

transaction ID (TRANSID). A number associated with each of several request parameter lists that define requests belonging to the same data transaction.

U

universal character set (UCS). A printer feature that permits the use of a variety of character arrays. Character sets used for these printers are called UCS images.

update number. For a spanned record, a binary number in the second RDF of a record segment that indicates how many times the record has been updated. The update numbers of all segments of a spanned record should be equal; an inequality indicates a possible error.

user buffering. The use of a work area in the processing program's address space for an I/O buffer; VSAM transmits the contents of a control interval between the work area and direct access storage without intermediary buffering.

user catalog. An optional catalog used in the same way as the master catalog and pointed to by the master catalog. It also lessens the contention for the master catalog and facilitates volume portability.

V

volume positioning. Rotating the reel or cartridge so that the read-write head is at a particular point on the tape.

VSAM record-level sharing (VSAM RLS). An extension to VSAM that provides direct record-level sharing of VSAM data sets from multiple address spaces across multiple systems. Record-level sharing uses the System/390 Coupling Facility to provide cross-system locking, local buffer invalidation, and cross-system data caching.

VSAM shared information (VSI). Blocks that are used for cross-system sharing.

VSAM sphere. The base cluster of a VSAM data set and its associated alternate indexes.
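The update number rule above — all segments of a spanned record should carry the same update count — can be expressed as a small consistency check. The segment layout below is invented for illustration; it is not the real RDF format:

```python
# Minimal check of the spanned-record update number rule: every segment of
# one spanned record should carry the same update number; an inequality
# indicates a possible error (e.g., a partially completed update).
# Segments are modeled as (data, update_number) tuples for illustration.
def spanned_record_consistent(segments):
    """segments: list of (segment_data, update_number) tuples."""
    update_numbers = {upd for _, upd in segments}
    return len(update_numbers) == 1   # False -> possible error

ok = spanned_record_consistent([("seg1", 3), ("seg2", 3), ("seg3", 3)])
bad = spanned_record_consistent([("seg1", 3), ("seg2", 2)])
print(ok, bad)                        # -> True False
```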
Index 591
BISAM (basic indexed sequential access method) (continued)
  sharing a DCB 339
  SYNAD routine 464
BLDL macro
  build list format 381, 411
  coding example 388
  description
    partitioned data set 381, 382
    PDSE 411
BLDVRP macro
  access method 94, 119, 190
BLKSIZE parameter
  card reader and punch 289
  concatenation 350
  data check 298
  description 298
  direct data set requirement 499
  reading PDSE directory 400
  specifying 31
  VSAM 100
block
  chaining, with ciphertext feedback 58
  count
    EOV exit 476
    exit routine 476, 477
  data 269
  descriptor word 273
    See also BDW
  event control 453, 455
  incomplete, reading 278
  length
    changing 363
    determining 361
    reading with BSAM 361
    variable 273
  null segment 276
  prefix
    access method requirements 280, 281
    ASCII magnetic tape 314
    block length 280, 281
    blocked records 281
    buffer alignment 314
    creating 281
    data types 280, 282
    format-D records 281
    reading 280, 281
    short 363, 402
  size
    concatenation 349
    DASD data set 298
    ISO/ANSI spanned records 283
    ISO/ANSI Version 3 or Version 4 tapes 283, 298
    minimum 287, 298
    non-VSAM data sets 298
    reading with BSAM 361
    tape data set 299
    VSAM data set 141, 142
blocking
  automatic 331
  basic access method 325
  defined 269
  factor 343
  fixed-length records 270, 281
  records
    BSAM 12
    QISAM 509
    QSAM 12
  spanned records 273
  undefined-length records 278
  variable-length records 272, 273
BLT 266
boundary alignment
  buffer 314
  data control block 304
BPAM (basic partitioned access method) 1
  See also partitioned data set
  See also PDSE
  creating
    partitioned data set 379
    PDSE 406, 408
  data set
    DCB abend exit routine 471
    directory 4
    EODAD routine 460
    sharing 337, 340
  defined 4
  description 11, 12
  I/O status information 452
  processing
    partitioned data sets 8, 373, 393
    PDSE 8, 395, 434
  retrieving members
    partitioned data set 387, 390
    PDSE 424
BSAM (basic sequential access method)
  CHECK macro 328
  concatenating sequential data sets 351
  creating
    direct data set 501, 502
    partitioned data set 378, 381
    PDSE 408, 410
  data set
    EODAD routine 460
    user labels 480
    user totaling 488, 489
  defined 4
  description 12
  extending a sequential data set 356
  I/O status information 452
  overlap of I/O 325, 359
catalog (continued)
  management 11
  problems, checking 110
  protection 51
  selection order 91
  verification 45
CATALOG macro 19
CATALOG parameter 93
cataloging
  data sets
    automatic 19
    defined 19
    GDG 435, 437
    naming 19
CBIC (control blocks in common) 171, 178
CBUF (control block update facility) 182
CCSID 555
CCSID decision tables 568
CDRA 555
chain request parameter lists 124
chained scheduling
  channel programs 359, 360
  description 359
  ignored request conditions 360
  non-DASD data sets 360
  SAM 360
channel programs
  chained segments 360
  EXCP
    execution 16
  number of (NCP) 326
channel status word 459
character codes
  ASCII 266
  DBCS 497
  EBCDIC 266
CHECK macro
  basic access method 325
  before CLOSE 305
  description 328, 329
  EODAD routine 460
  exception codes 457, 459
  sharing a data set 338
  SYNAD routine 466
  updating
    partitioned data set 390
    PDSE 429
  VSAM 135
check routine, examining DECB 328
checkpoint
  data set
    RACF-protected 341
    security 341
  shared data sets 185
  shared resource restrictions 199
checkpoint/restart
  check of JFCBFLAG 480
  LPALIB, restriction 477
CHKPT macro
  EOV exit routine 477
choosing to compress data 87
CICS
  with VSAM RLS 202
CIDF (control interval definition field) 66, 165
CIMODE parameter 42
ciphertext 58
classes
  examples 303
  JCL keyword for 302
  management 25
  specifying through JCL 25
  storage 25
  using 21, 302
CLOSE macro
  considerations 309, 340
  description
    non-VSAM 305, 311
    VSAM 136, 137
  EODAD routine 460
  MODE 308
  multiple data sets 305
  parallel input processing 333, 335
  partitioned data set 386, 387
  PDSE 424
  STOW macro 386
  SYNAD 305, 466
  temporary close option 137, 305, 311
  TYPE=T 137, 305, 311
  volume positioning 293, 305, 312
closing a data set
  non-VSAM 305, 311
  VSAM 136, 137
cluster
  access method services parameters 93
  base 107
  deleting 112
  guaranteed space 95
  index component 89
  information 89, 92
  KSDS 89
  naming 90
  processing 89
  replacing VSAM 42
  system-generated name format 90
  using SMS 92
  verification 45
  VSAM 89
CNTRL macro
  description 445
CNVTAD macro 160
control interval (continued)
  size
    adjustments 144
    default 143
    KSDS 144
    limitations 141
    selection 141
    variable-length RRDS 142
  split
    description 76, 258
    JRNAD routine 223
    storage 158
control password 52
control section, dummy 304
CONTROLINTERVALSIZE parameter 94
conversion
  ASCII to/from EBCDIC 279, 327, 331
  ISAM to VSAM 547
  partitioned data set to PDSE 379, 432
  PDSE to partitioned data set 379
conversion codes, tables 573
count area
  count data format 265
  count key data format 264
  ISAM index entry format 513
create mode 101
cross
  memory mode 137
  reference table with direct data sets 500
  region sharing 181
  system sharing 181, 187
cross-region sharing 178, 185
cryptographic
  key data set 61
  option, protection of offline data 57
CSA (common service area) 171
CVOL 435, 443
cylinder
  allocation 32
  capacity 263
  data set space 97
  index
    calculating space requirements 517
    definition 511, 513
  overflow
    calculating space 517, 521
    defined 513
    specifying size 518
    tracks 518
CYLINDERS parameter 93
CYLOFL parameter
  allocating ISAM data set 518
  creating ISAM data set 514
D
DASD (direct access storage device)
  ACS routines 21
  characteristics 302
  control interval size 142
  DASD volumes 31
  data class 21
  data set
    indexed sequential 510
    input 342
  device selection 170
  Hiperspace buffer 189
  record format 291
  shared 341
  system-determined block size 298
  track capacity 142
data
  buffers
    allocating 154
    RPL options 132
    sequential access 157
  check
    BLKSIZE 298
  class
    alternate index 105
    assigning 21, 347
    DATACLAS keyword 302, 347
    defining cluster 21
    definition 21
    examples 303
    JCL keywords 302
    space allocation 95
    using 21, 302
  component
    processing 131
    space calculation 98
  control block 475
    See also DCB
  control interval 143, 250
  definition statement 1
    See also DD statement
  encryption key 60
  encryption standard 57
    See also DES
  event control block 1, 453
    See also DECB
  integrity
    passwords 51, 55
    protection 47
    RACF 47, 49
  lookaside facility 363
    See also DLF
  mode processing
    buffers 318
    controlling simple buffers 319
    GET-locate and PUTX macros 318
data set (continued)
  type
    non-VSAM 8
    VSAM 8, 69, 87
  unlike characteristics 352, 392
  unmanageable by SMS 23
  unmovable 265
  unopened 192
data striping 364
DATATEST parameter 212, 214
DB2 69
DBCS (double-byte character set)
  character codes 497
  printing and copying 497
  record length
    fixed-length records 497
    variable-length records 498
  SBCS strings 497
  SI (shift in) 497
  SO (shift out) 497
DCB (data control block)
  abend exit
    availability 451
    description 471, 473
    options 472
    specifying 451
  abend installation exit 475
  address 304
  allocation retrieval list 470
  attempted recovery conditions 473
  attributes of, determining 293, 304
  changing 304
  creation 293
  description 293, 295
  dummy control section 304
  exit
    availability 451
    parameter list passed to abend 472
    routines 303, 451
    specifying 451
  exit list 468
  fields 297
  Installation Open Exit 476
  modifying 293
  open exit 475
  parameters 298
  primary sources of information 293
  sequence of completion 294
  shared 508
  sharing a data set 337
DCBBLKSI field 291
DCBD macro 304
  example 390
DCBE macro
  alignment 314
  BSAM 360, 368
  buffer pool 316
  data control block 293
  example 390
  IHADCBE macro 305
  PASTEOD=YES 340
  processing partitioned data sets or PDSEs 12
  QSAM 368
  SAM 314
  sharing non-VSAM data sets 337
DCBLPDA field 539
DCBLRECL field 273
DCBNCRHI field 538
DCBOFPPC bit 350, 354
DCBPRECL field 273
DCBSYNAD field 304
DD statement
  DCB 293
  fields 298
  JFCB 293
  name sharing 174
  use 293
DE Services (directory entry services) 412
  DESERV GET_ALL 384
  FUNC=GET 382
DEB (data extent block)
  ISAM interface 546
DECB (data event control block)
  contents 453
  description 328
  exception code 452, 459
  use 356
DECIPHER parameter 62
deciphering data 57, 58
deferred
  requests by transaction ID 195
  write requests 132, 194
  writing buffers 195
DEFINE command
  ALTERNATEINDEX 105, 106
  CLUSTER 89
  control interval size 141
  creating a generation data group 444
  free space 146
  GENERATIONDATAGROUP 442
  PAGESPACE 109
  RECATALOG 19
DELETE command 110
delimiter parameters 59
  See also REPRO command
DEQ macro
  sharing BISAM DCB 526
  sharing non-VSAM data sets 338, 339
DES (data encryption standard) 57
descriptor word 1
  See also BDW, RDW, SDW
DSECT statement 304
dsname 90
DSNAME parameter
  DD statement 385, 387, 424
DSNTYPE keyword 301, 406
DSORG parameter
  CLOSE TYPE=T 306
  direct data set 501
  indexed sequential data set 514
  non-VSAM 301
  partitioned data set 378, 385, 387
  PDSE 408, 424
  VSAM 100
dummy
  control section 304
  data sets 357
  records
    direct data set 504
duplicate data set names 91
DYNALLOC macro
  allocation example 29
  cataloging data sets 19
  defining
    non-VSAM data set 9, 347
    PDSE 406
    VSAM data set 9, 89
  requesting data class 22
  SVC 99 parameter list 293
dynamic
  allocation 1
    See also DYNALLOC macro
  buffering
    basic access method 316
    buffer control 500
    direct data set 500
    ISAM data set 515, 526
E
EBCDIC (extended binary coded decimal interchange code)
  conversion to/from ASCII 10, 266, 327, 331
  magnetic tape volumes 266
  record format dependencies 269, 290
ECB (event control block)
  description 453, 455
  exception code bits 455
empty sequential data set 309
empty VSAM data set 103
ENCIPHER parameter 62
enciphering data 57, 58
encryption 60
end-of
  data 1
    See also EOD
  file, software 165
  key-range 44
  sequential retrieval 461
    See also ESETL
ENDREQ macro 132, 136, 176
ENQ macro 187, 338
entry-sequenced data set 1
  See also ESDS
EOD (end-of-data)
  indicator 311
  restoring values with VERIFY 45
  SRB processing 138
EODAD (end-of-data) routine 221
  basic access method 325
  BSP macro 447
  changing address 304
  concatenated data sets 349, 393
  cross-memory mode 137
  description 460
  EODAD routine entered 460
  exit routine
    EXCEPTIONEXIT 220
    JRNAD, journalizing transactions 221
  indicated by CIDF 164
  processing 120
  programming considerations 220, 460
  register contents 220, 460
  specifications 460
EOV (end-of-volume) 460
  block count exit 476
  defer nonstandard input trailer label exit 476
  EODAD routine entered 460
  exit routine 477
  forcing 313
  physical sequential data sets 477
  processing 184, 311, 313
  routine
    DCB abend exit 471, 472
ERASE
  macro 131
  parameter 95, 112
erase-on-scratch 48
EROPT (automatic error options)
  DCB macro 466
error
  analysis
    exception codes 457
    logical 120, 229
    options, automatic 466
    physical 120, 231
    register contents 464, 465
    status indicators 452, 459
    uncorrectable 461
  conditions 543, 544
  determinate 309
extended binary coded decimal interchange code 1
  See also EBCDIC
extended format data sets 7, 34, 79, 361, 364
extended logical record interface 285
extended search option (BDAM) 503
extents
  concatenation limit 392, 432
  defined 33
  limit for sequential data sets 365
  maximum 33
F
facility class
  checkpoint data sets 341
  IHJ.CHKPT 341
FCB (forms control buffer)
  defining an image 478
  image
    exit 478
    formats 446
    identification in JFCBE 480
    SETPRT 446
  overflow condition 446
feedback
  BDAM READ macro 327
  BDAM WRITE macro 328
  option 504
FEOV macro
  description 313
  EODAD routine 460
  spanned records 275, 313
  trailer label exit 483
field contents, displaying 126
file access exit 310
file key, secondary 60
file system
  accessing data 7, 9
filtering 92
FIND macro
  description
    partitioned data set 385
    PDSE 419
  EODAD routine 460, 461
  sharing a data set 339
  switching between PDSE members 422
  updating
    partitioned data set 390
    PDSE 429
fixed-length records
  description 270, 281
  members of PDSE or partitioned data set 271
  parallel input processing 332
fixed-length RRDS 1
  See also relative record data set
  description 6
force end-of-volume 1
  See also FEOV macro
forced writing, buffers 132
format
  alternate index 84
  control
    area 68
    interval 65
    interval definition field 165
  ESDS 70
  fixed-length RRDS 76
  free space 74
  index
    entry 256
    record 253
    record entry 255
    record header 253
  KSDS 73, 81
  LDS 76
  record definition field 166
  system-generated name 90
  variable-length RRDS 77
format-D records
  block prefix 281
  tape data sets 281
  variable-length spanned records 283
format-F records
  card reader and punch 289
  description 270, 281
  ISO/ANSI tapes 280, 281
  parallel input processing 332
  standard format 270
format-S records
  extended logical record interface 285
  segment descriptor word 284
format-U records
  card reader and punch 289
  description 278, 286
  ISO/ANSI tapes 286
  parallel input processing 332
format-V records
  block descriptor word 273
  card punch 289
  description 272, 277
  parallel input processing 332
  record descriptor word 273
  segment control codes 275
  segment descriptor word 275
  spanned 273, 277
forms control buffer 478
  See also FCB
free control interval 255
FREE parameter 311
free space
  altering 148
  control interval 75
Hiperspace buffer
  DASD 189
  overview 10
horizontal pointer index entry 254
I
I/O
  buffers
    managing with shared resources 194
    sharing 189
    space management 152
  control block sharing 189
  DASD 291
  data set
    spooling 343
  device
    card reader and punch 289
    magnetic tape 287, 288
    printer 290
  error
    recovery 336
  journaling of errors 196
  overlapped 325
  sequential data set
    devices 287
    overlap operations 356
  status
    indicators 459
    information 452
ICI (improved control interval access) 170, 178, 183
ICKDSF (Device Support Facilities) 336, 493
IDRC 296
IEBCOPY program
  compressing partitioned data sets 373, 374, 392
IECOENTE macro 484
IECOEVSE macro
  open/EOV volume security/verification exit 486
  parameter list 487
IEHLIST program 518, 538
IEHMOVE program 375, 376
IEHPROGM program 48, 54, 443, 444
IGWCISM macro 420
IHAARL macro 470
IHADCB dummy section 304
IMBED parameter 94, 146, 160
imbedded index area 517, 518
imbedding sequence set 160
IMPORT command 42
improved control interval access 169
  See also ICI
Improved Data Recording Capability 288
  See also magnetic tape volumes
independent overflow area
  description 513
  specifying 520
indeterminate errors 309
index
  access, with GETIX and PUTIX 249
  area
    calculating space 517, 518
    creation 510
  buffers 154
  component
    control interval size 141
    opening 249
    space calculation 98
  control interval
    data control area 250
    size 144, 159
    split 258
  cylinder
    calculating space 517
    overflow area 513
  data on separate volumes 159
  entry
    data control interval relation 250
    record format 255
    spanned records 259
  imbedding 160
  levels, VSAM 251
  master
    calculating space 517
    creating 513
    using 511
  options
    alternate 107
    cylinder 97
    imbedding 158
    replicating 158
  pointers 252
  prime 75, 250
  processing 249
  record
    format 253
  replication 159
  rotational delay 94
  sequence set 94, 252, 253
  space allocation 509
  structure 251
  track, space 518
  update 258
  upgrade 108
  virtual storage 159
indexed sequential access method 541
  See also ISAM
indexed sequential data set 5
  See also ISAM
  adding records at the end 530, 532
  allocating space 517, 524, 535, 538
  areas
    index 512, 513
    overflow 513
JRNAD exit routine (continued)
  journaling 121
  journalizing transactions 223
  recording RBA changes 223
  shared resources 197
  values 197
K
key
  alternate 86
  class 538
  compression 76, 256, 257
  count-key-data format 264
  data 73
  data encryption 60
  field
    indexed sequential data set 510
    KSDS 73
  indexed sequential data set
    adding records 530, 534
    retrieving records 525, 531
    RKP (relative key position) 514, 535
    track index 510, 513
  nonunique 84
  parameters for VSAM 100
  PDSE directory 400
  pointer pair 107
  printing a data set 111
  ranges
    naming 149, 151
    qualifier 151
  record
    collating position 73
  secondary file 60
  unique 84
key-sequenced data set 1
  See also KSDS
keyed
  access 81
  direct retrieval 130
  sequential retrieval 129
KEYLEN parameter
  DASD 291
  description 301
  direct data set 501
  PDSE 400
KEYRANGES parameter 94
KEYS parameter 93
KILOBYTES parameter 93
KN (key, new) 527
KSDS (key-sequenced data set)
  allocating buffers using direct processing 154
  cluster 89
  cluster analysis 211
  control interval
    access 163
    size 143
  copying 104
  description 6
  direct access 82
  format 73
  free space 94, 143
  index
    accessing 249
    alternate 84
    options 158
    processing 249
  inserting records
    description 73, 76
    direct 127
    sequential 127
  key ranges 149
  printing 111
  processing the data component 131
  record
    access 81
    retrieval 159
    storage 159
  reorganizing 109, 147
  structural errors 211
KU (key, update)
  coding example 527
  defined 327
  read updated ISAM record 526
L
LABEL parameter
  password protection 55
  specifying standard labels 482
labels
  direct access
    DSCB 493, 496
    format 493
    user label groups 495
    volume label group 493, 495
  exits 480, 483
  ISO/ANSI Version 3 or Version 4 tapes 297
  symmetry conflict 297
  validation exit 310
LDS 1
  See also linear data set
LEAVE option
  close processing 305, 306
  end-of-volume processing 312
length of block 361, 363, 401
LERAD exit routine
  error analysis 229
  register contents 229
mixed processing 143
MLA (multilevel alias) facility 91
MMBBCCHHR
  actual address 265, 503
  data set size 33
  ISM 551
  maximum 33
  track address 32
MNTACQ macro 160
MODCB macro
  modifying control block contents 125
model DSCB 440
modifying
  partitioned data set 390
  PDSE 429
  sequential data set 355
move mode
  buffers 318
  parallel input processing 332
  PDSE restriction with UPDAT option 297
MRKBFR macro
  marking buffer for output 198
  releasing exclusive control 176
MSS 160
MSWA parameter 537
MULTACC parameter in DCBE 361
multiple
  cylinder data sets 97
  data sets
    closing 308
    opening 308
  string processing 133
multitasking mode, sharing data sets 310, 339, 340
multivolume data set
  NOTE macro 447
MULTSDN parameter on DCBE 361
MXIG command
  SPACE parameter 18

N

name
  alternate index 106
  cluster 90
  component 90
  data set 18, 90
  duplicate 91
  generation data group 435, 436, 437
  key ranges 151
  system-generated 90
NCP parameter
  BSAM 326
NFS
  HFS files 13
non-MSS support 160
non-SMS-managed data set 1
  See also non-system-managed data set
non-system-managed data set
  allocating GDSs 441
  allocation examples 26
  generation data group 435
non-VSAM
  access methods 54
  data set
    initializing 293
    specifying a DCB 293
nonindexed data set 93
nonsequential processing of sequential data 12
nonshared resources 157
nonspanned records 68, 167
nonspecific tape volume mount exit
  general register rules 485
  IECOENTE macro parameter list 484
  return codes
    defined 484
    specifying 484
  saving and restoring conventions 485
nonstandard label 263, 266
nonunique
  alternate key 84, 107
  alternate key path processing 134
  pointer 84
NONUNIQUEKEY parameter 135
NOREUSE parameter 93
NOSCRATCH parameter on DELETE 112
note list 376
NOTE macro
  ABS parameter 447
  description 447
  switching between PDSE members 422
  updating
    partitioned data set 390
    PDSE 421, 429
NSR subparameter 134, 193
NTM parameter 513
NUIW parameter 192
null data set 308
null segment
  description 276
  PDSE restriction 276, 402
numbered data set 93

O

O/C/EOV (open/close/end-of-volume)
  exit routine
    parameter list passed to user label 481
    return code, user label 482
  standard user label exit 480
O/EOV (open/end-of-volume)
  nonspecific tape volume mount exit 484
password (continued)
  protection
    authorize access 51
    catalog entry 51
    system-managed data set 51
PASSWORD data set 55
path
  alternate index 87
  processing 134, 157
  verification 45
PC (card punch) record format 289
PDAB (parallel data access block)
  description 488
  work area for DCB 333
PDSE (partitioned data set extended)
  allocating 25, 406, 408
  allocation 404
  block size 405
  blocksize 398
  compression 405
  concatenation 431
  connection 411
  converting 379, 432
  creating 406, 410
  defined 4, 395
  deleting 430
  directory 404, 412, 422
    BLDL macro 411
    BLKSIZE parameter 400
    description 397, 398
    FIND macro 419
    indexed search 396, 397
    KEYLEN parameter 400
    NOTE macro 421
    POINT macro 421
    reading 431
    size limit 397
    STOW macro 423
    updating 423
  directory structure 398
  extended sharing protocol 428
  extents 392, 405
  fragmentation 406
  free space 405
  macros 411, 424
  member
    adding 409
    adding records 430
    creating multiple 410
    retrieving 424
  null segments 276
  performance
    advantages 395
    recommendations 433
  positioning 400
  reblocking records 396, 401
  record
    numbers 399
    processing 400, 402
  relative track addresses 398, 400
  renaming 432
  restrictions 402
  rewriting 430
  sharing 426
  space
    allocation 403
    considerations 403
  space reuse 398
  space, noncontiguous 404
  switching between members 422
  SYNCDEV macro 449
  synchronizing to DASD 449
  TRUNC macro restriction 323
  TTR examples 399
  updating 419, 429
performance
  buffering 302
  control interval access 169
  cylinder boundaries 307
  data lookaside facility 363
  Hiperbatch 161, 363
  improvement 141, 161
  sequential data sets 325, 359, 387
physical block size 142
physical errors
  analysis routine 120, 196
  analyzing 231
  SYNAD exit 120
physical sequential data sets 477
plaintext data encrypting key 60
POINT macro
  EODAD routine 460
  OUTINX 448
  retrieving records
    non-VSAM 448
    VSAM 130, 189
  switching between PDSE members 422
  updating
    partitioned data set 390
    PDSE 421, 429
  WRITE macro 448
pointers 84, 252
positioning
  direct access volume
    POINT macro 448
    to a block 448
  member address
    FIND macro 385
    partitioned data set 385
  RPL options 132
  sequential access 189
qualified name of a data set 90
qualifier 1
  See also name
  data set name 90
queued access method 1
  See also QSAM
  buffer
    control 313, 318
    pool 313

R

R0 record
  direct data sets 501
  non-VSAM data sets 264
RACF (Resource Access Control Facility)
  authorization
    checking 48
    opening a data set 122
  checkpoint data sets
    facility class 341
  installation exit 50
  protected entry 48
  protection 47, 49, 50
  residual data, erasing 48
RBA (relative byte address)
  ESDS 81
  high used, VSAM 101
  JRNAD
    parameter list 197
    recording changes 223
  locate a buffer pool 197
  logical record 71
  printing a data set 111
  retrieving records directly 81
RD (card reader) 289
RDBACK parameter
  label exit routine 482
  opening magnetic tape volume 295
RDF (record definition field)
  examples 66
  format 166
  loading data set 100
  structure 167, 168
RDJFCB macro
  JFCB exit 479
  return codes 470
RDW (record descriptor word)
  data mode exception for spanned records 273
  description 273
  extended logical record interface 286
  segment descriptor word 276
  updating indexed sequential data set 529
  variable-length records format-D 282, 283, 284
read
  access, password 52, 122
  integrity, cross-region sharing 180
READ macro
  basic access method 326
  description 326
  EODAD routine 460
  format-U records 278
  KU (key, update) coding example 528
  sharing a data set 338
  SYNAD routine 466
  updating
    existing records (ISAM data sets) 526
    partitioned data set 390
    PDSE 429
rear key compression 256
reblocking
  records
    examples 401
    PDSE 396, 401
RECATALOG parameter 93
RECFM (record format)
  DASD 291
  fixed-length 281
  ISO/ANSI 281
  magnetic tape 287, 288
  parameter
    card punch 289
    card reader 289
    control character 287
    selecting 269, 270
    sequential data sets 269
  printer 290
  sequential data sets 287
  spanned variable-length 273, 277
  undefined-length 278, 286
  variable-length 272, 283
RECFM parameter
  non-VSAM 301
  VSAM 100
record
  access
    KSDS 81, 84
    path 134
    update password 122
  adding to a data set 104, 147
  blocking 1
    See also blocking
  control characters 286
  control interval size 142
  definition field 66
    See also RDF
  deleting 131
  descriptor word 1, 273
    See also RDW
  direct data set 504
  ESDS 70
reorganization (continued)
  ISAM statistics 510
  key-sequenced data set 109, 147
  REPRO command 40, 104
REPL parameter 159
REPLACE parameter 41
REPLICATE parameter 94
REPRO command
  backup and recovery 40
  copying 104
  delimiter parameters 59
  loading data set 89
  options 40
  reorganizing a data set 40, 104
  termination 101
  using 100
request macros
  functions 11
REREAD option 312
RESERVE macro 182
residence restriction 125
residual data, erasing 48
Resource Access Control Facility 47
  See also RACF
resource pool
  building 189
  connecting 193
  deferred writes 194
  deleting 193
  determining size 191
  statistics 192
  types 190
resume load mode
  abend 532
  extending
    indexed sequential data set 517
    VSAM data set 127
  partially filled track or cylinder 517
  QISAM 509
resume loading 127, 514
retention period 303
RETPD keyword 303
retrieving
  generation data set 443, 444
  keyed direct 130
  keyed sequential 129
  partitioned data set members 387, 390
  PDSE members 424, 425
  records
    direct access 82
    directly 130, 525
    sequential access 81
    sequentially 129, 524
    skip-sequential access 82
return codes
  block count exit 476, 477
  RDJFCB macro 470
  user labels 482
REUSE parameter 93, 103
REWIND option 305
rewriting
  partitioned data set 392
  PDSE 430
RKP (relative key position) parameter 514, 535
RLS (record-level sharing)
  access method
    RLS 201
RLSE parameter 307
RLSWAIT exit routine 230
rotational delay, reducing 159
RPL (request parameter list)
  chaining
    building 124
  coding guidance 218
  concurrent requests 134
  creating 123
  data buffers and positioning 132
  exit routine correction 219
  parameter 164
  transaction IDs 195
RRDS (relative record data set)
  description 6
  fixed-length 76, 83
  free space 94
  hexadecimal values 169
  inserting records
    keyed-direct 128
    keyed-sequential 128
    skip-sequential 128
  printing 111
  record access 83
  record definition field 76
  specifying slot 128
  variable-length 77
RRN (relative record number)
  description 99
  replacing record in backup data set 41
  RRDS 76

S

SAM (sequential access method)
  buffer space 313
  null segment processing 402
save area
  user totaling 489
SBCS (single-byte character set) 497
scan mode 509, 526
SCHBFR macro
  description 197
SI/SO (shift in/shift out) 1, 497
  See also DBCS
simple buffering
  description 322
  parallel input processing 332, 334
single-byte character set 1
  See also SBCS
size
  control
    area 145
    interval 141
  control interval 141
  data control interval 143
  index control interval 144
  VSAM data set 65
skeleton VSAM program 140
skip-sequential access
  control interval size 141
  fixed-length RRDS 83, 128
  key-sequenced data set 82
  variable-length RRDS 83, 128
SKP option 466
small VSAM data sets 97
SMDE 413
SMS (Storage Management Subsystem)
  ACS routines 21
  ALTER command 99
  alternate index
    data class 105, 107
    space allocation 106
  classes 21, 92
  clusters 92
  configuration 21
  data set
    allocating 25
    defining temporary VSAM 91
    requirements 21
    type (PDSE or PDS) 407
  description 3, 21
  requirements 22
  security 94
  space allocation 95
  tape 50
SMSI parameter 537
SMSW parameter 537
software end-of-file 166
space allocation
  alternate index 106
  average record length 31
  calculation examples 303
    data component 98
    index component 98
  cylinders 97
  DASD volumes 33
  direct data set 500
  indexed sequential data set 509, 517, 524
  key range values 150
  linear data set 97
  partitioned data set 11, 377, 378
  PDSE 11, 403
  spanned records 145
  specifying 31, 37
  VIO (virtual I/O) data set 18
  VSAM
    data set 95
    performance 143
SPACE parameter
  ALX command 18
  JCL keyword 403
  MXIG command 18
  partitioned data set 377
SPANNED parameter 69, 94, 142
spanned records
  basic direct access method 277
  description 68, 69
  index entries 259
  logical record
    access 315
    interface 275
  QSAM processing 275
  RDF structure 168
  sequential access method 273
  space allocation 145
  variable-length 273
SPEED parameter 94, 102
sphere, data set name sharing 176
splitting control intervals 76, 258
spooling SYSIN and SYSOUT data sets 343
SRB (service request block) 137, 169
stacker selection
  CNTRL macro 445
  control characters 270, 287
STACK parameter 289
standard label
  DASD volumes 263, 493
  magnetic tape volumes 263, 266
standard user label exit 480
status
  following an I/O operation 452
  indicators 459
STEPCAT DD statement 241
storage administrator 21
storage class
  definition 21
  guaranteed space 95
  JCL keywords 302
  space allocation 95
  STORCLAS keyword 302
  using 302
TESTCB macro
  request parameter list 195
  test contents of fields 125
time sharing option 56
  See also TSO
trace, diagnostic 160
track
  addressing
    relative 265, 499
  capacity 142
  defined 263
  format
    count data format 264
    count key data format 264
  index
    entries 512
    indexed sequential data set 510
    resume load 517
  number on cylinder 518
  overflow option
    description 265
    variable-length records 272
  space allocation 97
track overflow 265
TRACKS parameter 93
trailer label
  user 480
transactions, journalizing 221
TRANSID parameter 195
translation
  ASCII to/from EBCDIC 327, 331
TRC (table reference character, 3800) 270, 273, 278, 290, 345
TRKCALC macro with a PDSE 403
TRTCH parameter 288
TRUNC macro
  BSAM and BPAM 328
  buffer processing 313
  description 323
  PDSE restriction 323
TSO (time sharing option)
  allocating data sets 28
  entering names for APF authorization 56
  renaming a PDSE 432
  terminal data set 263
TTR (track record address)
  defined 265
  partitioned data set 375
  PDSE 266, 399, 400
TYPE=T parameter 137, 305, 311

U

UBF (user buffering) 169, 178
UCS (universal character set)
  image 446
UHL (user header label) 480, 483
UIW field (user-initiated writes) 193
unblocking records
  BPAM 11
  BSAM 12
  QISAM 509
  QSAM 12
undefined length records 1
  See also format-U records
unique alternate key 84
UNIQUE parameter 95, 151
UNIQUEKEY attribute 135
unlabeled magnetic tape 263, 267
unqualified name 90
UPAD exit routine
  cross-memory mode 137, 236
  parameter list 235
  programming considerations 121, 235
  register contents at entry 234
  user processing 234
UPDAT option 1
  See also update
update
  access password protection 52, 122
  after data set recovery 44
  alternate index records 135
  control interval 165
  DCB fields when opening a data set 295
  direct data set 504
  ESDS records 71, 131
  format-U records 278
  ISAM records 524, 529
  KSDS records 74, 131
  partitioned data set 390, 391
  PDSE 419, 429
  RRDS records 77
  spanned record segments number 168
  spanned records 275
UPDATE attribute 108
update mode
  EODAD routine entered for BSAM 460
  PUTX 319
  simple buffering 321
UPGRADE attribute 108
USAR (user-security-authorization record) 236
user
  accessing VSAM shared information 187
  buffering 163, 169, 178
  catalog 241
  CBUF processing considerations 184
  generated exit routines 120
  header label (UHL) 480, 483
  label
    header 496
    trailer 496
  label exit routine 480, 483
VSAM (virtual storage access method) (continued)
  I/O buffers 152
  ISAM programs for processing 542
  JCL DD statement 241
  key-sequenced data set
    accessing records 81
    description 73, 76
    examining for errors 211
    index processing 249
    loading 101
  linear data set
    accessing records 82
    description 76
    loading records 99
  macro
    overall chart 138
  performance improvement 141
  programming considerations 217
  relative byte address 71
  relative record data set
    accessing records 83
    fixed-length records 76, 77
    variable length records 77, 78
  sample program 138, 140
  shared information blocks 183
    See also VSI blocks
  sphere 176
  string processing 153
  structural analysis 211
  temporary data set 243
  use 11
  user-written exit routines
    coding guidelines 218
    functions 217
  verification 45
  volume data set 93
    See also VVDS
  VSAM data set 33
VSE (Virtual Storage Extended)
  embedded checkpoint records 445
    BSP 447
    chained scheduling 360
    CNTRL 445
    POINT 448
VSI blocks
  cross system sharing 183
  data component 187
VTAM ACB 308
VTOC (volume table of contents)
  data set, deleting 112
  description 19
  DSCB 495
  ISAM data set 512
  pointer 495
VVDS (VSAM volume data set) 93, 111
VVR (VSAM volume record)
  deleting 112

W

WAIT macro
  basic access method
    BDAM 329, 506
    BISAM 329
  description 329
  example 528
  QSAM parallel input processing 332
work file 103
write integrity, cross-region sharing 181
WRITE macro
  add form 501
  basic access method 325
  BDAM 501
  description 327
  EODAD routine 460
  format-U records 278
  K (key) 526, 529
  KN (key, new) 529, 532, 535
  note list 376
  sharing a data set 338
  SYNAD routine 466
  updating
    concatenated partitioned data sets 390
    concatenated PDSEs 429
    direct data set 505
    partitioned data set 390
    PDSE 429
write validity check 302
WRITECHECK parameter 94
writing
  buffer 132, 195
  new record (ISAM) 328
  short block 363
  updated ISAM record 328
WRTBFR macro 195

X

XDAP macro 16
XLRI (extended logical record interface)
  using 285
Readers' Comments — We'd Like to Hear from You

Overall, how satisfied are you with the information in this book?

                            Very                                    Very
                            Satisfied  Satisfied  Neutral  Dissatisfied  Dissatisfied
Overall satisfaction            Ø          Ø         Ø          Ø             Ø

How satisfied are you that the information in this book is:

                            Very                                    Very
                            Satisfied  Satisfied  Neutral  Dissatisfied  Dissatisfied
Accurate                        Ø          Ø         Ø          Ø             Ø
Complete                        Ø          Ø         Ø          Ø             Ø
Easy to find                    Ø          Ø         Ø          Ø             Ø
Easy to understand              Ø          Ø         Ø          Ø             Ø
Well organized                  Ø          Ø         Ø          Ø             Ø
Applicable to your tasks        Ø          Ø         Ø          Ø             Ø

When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments in any way it believes appropriate without incurring any obligation to you.

Name                                    Address

Company or Organization

Phone No.
SC26-4922-04