File reorganization and table compaction


Overview

In the life of a database, certain conditions can arise that make reorganizing a Model 204 file either necessary or desirable. This page summarizes the criteria that normally make reorganization worthwhile, as well as the techniques to accomplish the reorganization.

But not every situation requires a reorganization:

  • Changes in data structures often can be handled by simply adding, deleting, or changing fields in a file, and you can thus avoid a complete file reorganization. Refer to the discussion of the DEFINE, REDEFINE, RENAME, and DELETE commands in Defining fields manually for information about manipulating fields.
  • You can often resize Tables B, D, E, and X, as discussed in Managing file and table sizes.

The above cases aside, the main reorganization and compaction tools available are listed in the following table:

File reorganization tools

  • Fast/Unload and Fast/Reload: products that can speedily reorganize even the largest Model 204 files, and that seamlessly handle some of the limitations of other reorganization methods (for example, invisible fields, procedures, and Large Objects). See Fast/Unload and Fast/Reload.
  • Fixed or variable unload and reload: requires the data to be unloaded into a sequential structure and then reloaded. See Techniques for reorganization.
  • REORGANIZE OI (reorganizing just the Ordered Index): improves the file's performance by cleansing the Ordered Index. See Reorganizing the Ordered Index.
  • Compacting Tables B and X: combines extension records to improve performance, and optionally cleans up logically deleted records. See Dynamic data compactor for Table B.
  • Compacting Table E: for files without the FILEORG X'100' bit set, defragments Table E to create larger contiguous areas of free pages. See Dynamic data compactor for Table E.

Exceptions

Reorganizing Dictionary/204 files

For information about reorganizing Model 204 Dictionary/204 files, see the Rocket Model 204 installation documentation for your operating system.

Proprietary system files must not be reorganized

With the exception of CCASYS (which internally is a standard Model 204 file), never reorganize or otherwise modify Rocket Model 204 proprietary files or procedures (identified by their "CCA" prefix), unless specifically instructed to do so by Technical Support.

Criteria for reorganization

A number of different conditions call for file reorganization, including:

  • Expanding the size of Table A and Table C for a file before the tables become full
  • Improving Model 204 performance by:
    • Rebuilding the indices
    • Arranging records so that records read together are likely on the same page
    • Resetting file parameters
    • Reducing extension chains
  • Recovering space that has been rendered unusable

Some of these conditions are described in the following sections.

To a very large extent, deciding which files to reorganize (and when) depends on:

  • How the files are updated: how often records are added and deleted, and how often data is added to or deleted from records.
  • Whether required changes involve parameters that cannot be adjusted dynamically.

Avoiding table full conditions

Normally, file growth most directly affects Tables B, D, E, and X, all of which can be expanded dynamically with the INCREASE command without necessitating a reorganization (see Managing file and table sizes). However, Table A and Table C cannot be adjusted dynamically. New distinct KEY field values that have not previously occurred in the file, as well as data growth that starts new segments in Table B, both cause additional entries in Table C. New field definitions and new FRV or CODED field values require Table A space.

Additional requirements can often be anticipated and included in the Table A and Table C allocations (or you can convert any KEY fields to ORDERED to avoid Table C issues entirely). Table A is usually very small relative to the entire file, and Table C is often considerably smaller than Tables B, D, E, or X. In other situations, however, these tables might need to be expanded during a file reorganization.

Monitoring Table A and Table C

To avoid filling Table A or Table C, regularly monitor the table page retry parameters (ARETRIES for Table A and CRETRIES for Table C), more often in highly volatile databases. If the number of page retries shows a noticeable increase from one run to another, the file might be approaching a table full condition and might require reorganization. The TABLEC command monitors the use of Table C.
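For example, the retry counts can be checked with the VIEW command after opening the file; a minimal sketch (the file name PEOPLE is illustrative):

OPEN PEOPLE
VIEW ARETRIES
VIEW CRETRIES
TABLEC

Comparing the displayed values from one run to the next shows whether retries are trending upward.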

Monitoring hash key files

The special Model 204 file organization that uses hashed access to Table B (see Hash key files) does not allow dynamic expansion of Table B. Monitoring the SPILLADD parameter (the number of spills) and the BHIGHPG parameter (the number of full pages) helps to anticipate the need to reorganize hashed files.
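A quick check, again using the VIEW command (the file name HASHFILE is illustrative):

OPEN HASHFILE
VIEW SPILLADD
VIEW BHIGHPG

A steady rise in SPILLADD, or BHIGHPG approaching the number of Table B pages, suggests that a reorganization is due.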

Improving performance: Data

When a file is created, certain file parameters are set, which affect the utilization of the tables. Some of these parameters cannot be reset and the file might need to be reorganized to reflect changes in file characteristics. If the file is not reorganized when file characteristics change, system performance might be affected.

Avoiding extension records

For example, if new fields are added to records in a file, Model 204 might create extension records. If the number of extension records increases (check the EXTNADD parameter), consider reorganizing the file to turn Table X on or adjust parameters such as BRECPPG and BRESERVE. Use the TABLEB command (or its TABLEBX variation) to determine which parameters need adjusting.

Note: Increasing the amount of reserve space per page (BRESERVE) without reorganizing the file affects only new Table B pages.

Combining extension records

Another way to reduce extension records and improve performance without a file reorganization is to run the data compactor, described in Dynamic data compactor for Table B.

With aggressive use of the COMPACTB command, you should rarely need to reorganize a file because of extension records.
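For example, a file manager might run the compactor against an open file like this. This is a minimal sketch: the file name is illustrative, and the COMPACTB options for limiting the scope of a run are described in Dynamic data compactor for Table B.

OPEN PEOPLE
COMPACTB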

Reducing the number of extension records

Reorganizing a file results in the lowest number of extension records possible given your parameter settings. For files that do not have Table X enabled, you would often adjust the BRECPPG parameter to decrease the number of extensions. Take care not to decrease it too far, as this can waste space on pages that are mostly extensions and increase Table B usage. Enabling Table X makes extension record management far more straightforward.

Reorganizing sorted files

Model 204 sorted files also might need to be reorganized. File parameters, such as BPGPMSTR, BPGPOVFL, and BEXTOVFL, might not reflect current requirements. The result is added overflow records or spill between overflow areas. A discernible increase in OVLFADD or SPILLADD might indicate that a sorted file needs to be reorganized. In the extreme case, if you do not reorganize the file, new records cannot be stored.

Improving performance: Indexing

The Model 204 file indexes all tie to the internal record number (IRN) of the record. If you have a file which has a high volume of additions and deletions, over time both the hash key index (Table C) and the B-Tree (in Table D) will grow. In both cases, a full reorganization of the file will rebuild the indexes and improve performance. See also Reorganizing the Ordered Index for a way to rebuild the B-Tree.

Recovering unusable space

In an entry order file, deleted space in Table B is reused only for the expansion of existing records on that page.

Avoiding unusable space problems

Highly volatile files might require reorganization to recover the record numbers from deleted records. Such files use space more efficiently if they are unordered with the reuse record number option.

Techniques for reorganization

To reorganize a Model 204 file, copy the records from Table B and the procedures from Table D onto sequential data sets and reload from those data sets. This section discusses techniques for copying records and procedures onto sequential data sets, and how to rebuild the access control table (ACT).

To reorganize a file, follow these steps:

  1. Copy the records from Table B into one or more sequential data sets. (See Copying records.)
  2. Copy the procedures into a sequential data set. (See Copying procedures and Reloading a small number of procedures.)
  3. Re-create and initialize the file, making the desired changes to space allocation, file parameters, and field definitions. If no space parameter changes are needed, initialize the file and define the fields.
  4. Reload the file from the sequential data sets in step 1.
  5. Re-enter the SECURE commands needed to rebuild the Access Control Table (ACT).
  6. Re-create the procedures from the sequential data set in step 2. (See Reloading a small number of procedures.)

Note: You cannot use the file backup utility (DUMP and RESTORE) described in File dumping and restoring to reorganize files. However, you can use RESTORE with the 128 option to defragment Table B and D space that was added with the INCREASE command.

Copying records

If the file was updated after its creation, you must retrieve all the records in Table B and copy their fields into a sequential data set.

If the file was originally created from a sequential data set and not subsequently updated, you can rerun the file load with the same data set. This is, however, an unlikely circumstance, because static files rarely need reorganizing.

INVISIBLE fields

INVISIBLE fields are handled differently; see Reorganizing INVISIBLE fields for how to re-create them.

Copying and reloading examples

The following examples demonstrate User Language techniques for copying and reloading records in a Model 204 file. These techniques use SOUL, the directed output feature (USE command), and the File Load utility. The Host Language Interface can replace the SOUL portions, if desired, although SOUL is usually more efficient.

Copying into a fixed-format file

The most efficient method for reorganizing most files is to create a fixed-format sequential file and a corresponding file load program based on that format.

For example, assume that you have a PEOPLE file that contains records consisting of three fields:

Field name    Maximum length    Field type
NAME          20                KEY
AGE           3                 KEY,NR
SEX           6                 KEY,COD,FV

The following z/OS request produces a sequential data set named COPY in which each fixed-length record contains the NAME, AGE, and SEX fields from one record in the PEOPLE file.

Set the UDDLPP parameter to zero before running the request to suppress page headers. Set UDDCCC to zero or to at least one greater than the output record length. You might also need to increase the LOBUFF parameter in the PARM field of the EXEC statement.

Copying files

For step 1 of Techniques for reorganization, run these commands and this SOUL request:

OPEN PEOPLE
RESET UDDLPP = 0,UDDCCC = 0
USE OUTTAPE
BEGIN
ALL: FIND ALL RECORDS
END FIND
PRINT.LOOP: FOR EACH RECORD IN ALL
   PRINT NAME WITH AGE AT COLUMN 21 -
      WITH SEX AT COLUMN 24
END FOR
END

DCB and FILEDEF information

OUTTAPE is the name of the following DD statement in the JCL of the batch or Online job in z/OS:

//OUTTAPE DD DSN=COPY,DISP=(,KEEP),UNIT=TAPE,
//    DCB=(RECFM=FB,LRECL=29,BLKSIZE=1450)

The same process in z/VM is as follows:

FILEDEF OUTTAPE TAP1 SL (BLOCK 1450 LRECL 29 RECFM FB
LABELDEF OUTTAPE VOLID 123456

Reloading files

For step 4 of Techniques for reorganization, reloading the file, COPY becomes input to the File Load run:

CREATE FILE PEOPLE
parameters                 File parameters that may reflect
.                          required changes
.
.
END
OPEN PEOPLE
INITIALIZE
DEFINE FIELD NAME WITH KEY
DEFINE FIELD AGE WITH KEY RANGE
DEFINE FIELD SEX WITH KEY CODED FEW-VALUED
FLOD -1,-1,0
NAME=1,20,X'8000'
AGE=21,3
SEX=24,6
END

Using PAI FLOD for files with varying record formats

If your files contain different record types and varying record formats, you might want to have a general utility program to reorganize a Model 204 file.

Copying files

The PERSONEL file has many field names and several types of records. For step 1 of Techniques for reorganization, use the following commands and SOUL request:

OPEN PERSONEL
RESET UDDLPP = 0,UDDCCC = 0
USE OUTTAPE
BEGIN
ALL: FIND ALL RECORDS
END FIND
PRINT.LOOP: FOR EACH RECORD IN ALL
   PRINT '*'
   PRINT ALL INFORMATION
END FOR
END

DCB information

OUTTAPE is the name of the DD statement in the JCL of the job:

//OUTTAPE DD DSN=COPY,DISP=(,KEEP),UNIT=TAPE,
//    DCB=(RECFM=VB,LRECL=260,
//    BLKSIZE=1000)

COPY sequential data set

The previous request produces a sequential data set named COPY in which each variable-length record is either an asterisk or one field of one record in the PERSONEL file. This is the format of these sequential records:

Step 1 Sequential record layout

Column     Contents
1-2        Length
3-4        Reserved
5 ...      Field name = value

Parameter settings for the sequential data set

As shown in the layout above, the field name begins in column 5, and spaces are required before and after the equal sign. If the field name=value portion of any line exceeds the value of the LOBUFF parameter, reset LOBUFF in the EXEC statement, and set LRECL in the OUTTAPE DD statement to LOBUFF plus 4.

The default values of 1,000,000 for the MUDD, MOUT, and MDKRD parameters might need to be adjusted upward for PAI dumps on large files.
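These parameters can be raised with the RESET command before running the PAI request; a minimal sketch with illustrative values (the appropriate settings depend on the file size, and assume the parameters are resettable at your user level):

RESET MUDD=5000000,MOUT=5000000,MDKRD=5000000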

Reloading files using the File Load utility

For step 4 in Techniques for reorganization, the COPY data set is input to a file load program that uses the D statement (see the description of the File Load utility). The following sequence reloads the records from COPY.

CREATE FILE PERSONEL
parameters
.
.
.
END
OPEN PERSONEL
INITIALIZE
DEFINE fields              Field definitions that may reflect
.                          required changes
.
.
FLOD -1,-1,0
#21                        Top of loop
G                          Get record from TAPEI
=24,5,*                    Asterisk; go to start new record
I 13,,,0                   Initialize to zero
#22                        Top of equal sign search loop
=23,6|13,=                 Equal sign; go to compute value length
I 13,,,1|13                Increment field name length
=22                        Continue search for equal sign
#23                        Equal sign found
I 14,1,2,-7,-1|13          Compute proper value length
=25,4|15*Y                 Flag set to Y; go to store first field
D 5,0|13=8|13,0|14         Store field (other than first) in record
=21                        Go to top of loop
#24                        New record starting
I 15,,,232                 Set flag to decimal value of EBCDIC Y
=21                        Go to top of loop
#25                        Storing first field
D 5,0|13=8|13,0|14,X'8000' Set mode bit to begin a new record
I 15                       Reset flag register
END
EOJ

The line after label 24 starts a new record. With minor modifications to this generalized file load program, you can reorganize different Model 204 files.

Reorganizing or loading a file that contains Large Objects

A Model 204 file that contains Large Object data can be reorganized as described above in Using PAI FLOD for files with varying record formats. The SOUL PAI (PRINT ALL INFORMATION) statement unloads the data from a file, and a standard FLOD program reloads the data.

PAI statement for...   Output is formatted as...
Non-LOB field          fieldname=fieldvalue
LOB field              fieldname=descriptor
                       LOB data line 1
                       LOB data line 2
                       ...
                       LOB data line n

For a LOB field (BLOB or CLOB), the data value is replaced by a LOB descriptor, which contains information such as the LOB length and RESERVE value. The LOB descriptor contains unprintable characters. The actual LOB value then follows on subsequent records (as many as are required to hold the LOB value, depending on the OUTMRL of the output data set to which the PAI output is directed).

When a FLOD statement loads a field value, it checks whether the field is a LOB. If so, processing assumes that the data is formatted as illustrated in the previous table (that is, the fieldvalue is actually a descriptor) and loads it accordingly.

The descriptor format is normally of no concern to the file manager or FLOD programmer, as the descriptor is built by the PAI statement that unloaded the data.

If you need to build the LOB descriptor yourself, for example, for an initial load of a file that contains Large Object data, see Building a Large Object descriptor.

Reorganizing sorted and hash key files

For sorted or hash key files, you need to modify the generalized file load program shown in Using PAI FLOD for files with varying record formats. Load the sort or hash key by the file load statement that starts a new record. Consider the following factors:

  • Key may or may not be the first field printed by the PAI request.
  • Key may or may not appear in every record.
  • Length of the key value may or may not be the same in every record.

These factors are discussed in the following sections, followed by examples of the PAI FLOD reorganization method illustrating different combinations of sort or hash key characteristics.

Note: Reorganizing hash key files with the PAI FLOD method requires twice as much Table B I/O as for other types of files, and can take significantly more time if you do not use the M204HASH utility to presort the TAPEI input data to FLOD or FILELOAD. For more information on the M204HASH utility, see Hash key files.

Modifying the PAI request for preallocated fields

When the sort or hash key is a preallocated field, or when a file contains no preallocated fields, the key appears first in the PAI output.

When preallocated fields are used but the sort or hash key is not one of the preallocated fields, you need to modify the PAI request to print the sort or hash key value first for each record.

In addition, you need to modify the file load program to load the sort or hash key as the first field and skip the second occurrence that results from the PAI statement.

Example

In the following example, the sort or hash key field position is unknown, the key is required in every record, its value length is constant, and it is not necessarily the first field printed by the PAI request. The following assumptions are made:

  • Sort or hash key is a field called FILEKEY.
  • FILEKEY appears in every record (FILEORG X'02' bit is on).
  • FILEKEY might not be the first field printed by the PAI request. (There are preallocated fields in the file, but FILEKEY is not one of them.)
  • Value of FILEKEY always has a length of 8.
  • No other fields in the file have names beginning with the character string "FILEKEY " (FILEKEY followed by a blank), which would be skipped by the CASE statement.

The data in the file is dumped to a sequential data set called OUTTAPE with the same DCB attributes as described in Using PAI FLOD for files with varying record formats. The procedure used to dump the data is:

OPEN filename
RESET UDDLPP=0,UDDCCC=0
USE OUTTAPE
BEGIN
ALL: FIND ALL RECORDS
END FIND
PRINT.LOOP: FOR EACH RECORD IN ALL
   PRINT '*'
   PRINT FILEKEY
   PRINT ALL INFORMATION
END FOR
END

The following FILELOAD program loads the data from OUTTAPE:

FLOD -1,-1,0
#21
G
=24,5,*                    Asterisk; go to start new record.
CASE 5,8                   Skip loading second occurrence of
 FILEKEY=21                FILEKEY produced by the
ENDCASE                    PRINT ALL INFORMATION statement.
I 13,,,0                   Initialize counter to 0.
#22
=23,6|13,=                 Look for = starting in col. 6.
I 13,,,1|13                Increment field name length.
=22                        Continue search for equal sign.
#23                        Come here when equal sign found.
I 14,1,2,-7,-1|13          Compute length of value.
D 5,0|13=8|13,0|14         Store field in record.
=21
#24                        Come here when asterisk found.
G                          Get next record - contains FILEKEY.
FILEKEY=5,8,X'8000'        Begin new record.
END

Modifying the PAI request for files in which the sort or hash key is not required

For a sorted or hash key file in which the sort or hash key is not required in every record, modify the file load program to determine whether the sort or hash key exists. If it does not exist, load the first field with mode bits of X'A000'. In the following example, the sort or hash key might not be in every record.

For a sort or hash key file in which the key value is variable in length, compute the value length. The same statements that are used to compute the length of the values of other fields can be used to compute the length of the value of the sort or hash key. In the following example, the key value length is variable.

The following sort or hash key file reorganization examples use a slight modification of the PAI FLOD reorganization method described in Using PAI FLOD for files with varying record formats. They apply to either sorted or hash key files.

Sort or hash key not required, PAI sample request

In the following example, the sort or hash key field position is known, the sort or hash key field is not required in every record, and the sort or hash key field value length is variable. Specifically, the following assumptions are made:

  • Sort or hash key is a field called FILEKEY.
  • FILEKEY might not appear in every record (FILEORG X'02' bit is off).
  • FILEKEY is always the first field printed by the PAI request.
  • Length of the value of FILEKEY is variable.
  • No other fields in the file have names beginning with the character string "FILEKEY " (FILEKEY followed by a blank).

The data in the file is dumped to a sequential data set (called OUTTAPE) with the same DCB attributes described in Using PAI FLOD for files with varying record formats. The procedure used to dump the data is:

OPEN filename
RESET UDDLPP=0,UDDCCC=0
USE OUTTAPE
BEGIN
ALL: FIND ALL RECORDS
END FIND
PRINT.LOOP: FOR EACH RECORD IN ALL
   PRINT '*'
   PRINT ALL INFORMATION
END FOR
END

The following FILELOAD program loads the data from OUTTAPE:

FLOD -1,-1,0
I 2,,,1                    Constant value 1 for comparison.
#21
G
=24,5,*                    Asterisk; turn on start-new-rec flag.
I 13,,,0                   Initialize counter to 0.
#22
=23,6|13,=                 Look for = starting in col. 6.
I 13,,,1|13                Increment field name length.
=22                        Continue search for equal sign.
#23                        Come here when equal sign found.
I 14,1,2,-7,-1|13          Compute length of value.
T 25,1|1*4,1|2*8           Go to 25 if start-new-rec flag is on.
D 5,0|13=8|13,0|14         Else, load field into rec.
=21                        Go get next field.
#25                        Here when start-new-rec flag is on.
CASE 5,8                   Use different mode bits when
 FILEKEY=26                FILEKEY exists in
ENDCASE                    current record.
D 5,0|13=8|13,0|14,X'A000' No FILEKEY; use X'A000'.
I 1,,,0                    Turn off start-new-rec flag.
=21                        Go get next field.
#26                        Come here when FILEKEY exists in rec.
FILEKEY=15,0|14,X'8000'    Load FILEKEY.
I 1,,,0                    Turn off start-new-rec flag.
=21                        Go get next field.
#24                        Come here when asterisk found.
I 1,,,1                    Turn on start-new-rec flag.
END

Reorganizing INVISIBLE fields

If your site has any records with INVISIBLE fields, you need to build provisions into step 1 of either method (see Techniques for reorganization) when reorganizing those files. INVISIBLE fields do not occupy space in Table B and must be re-created by other means.

Reorganizing INVISIBLE FRV fields

For example, if the PERSONEL file has an INVISIBLE FRV field called TYPE, which implies the record type, modify the PAI FLOD request as follows:

BEGIN
TYPE: FOR EACH VALUE OF TYPE
TYPE.VAL: FIND ALL RECORDS
   FOR WHICH TYPE = VALUE IN TYPE
END FIND
FOR EACH RECORD IN TYPE.VAL
   PRINT '*'
   PRINT 'TYPE = ' WITH VALUE IN TYPE
   PRINT ALL INFORMATION
END FOR
END FOR
END

This request assumes that there is only one occurrence of TYPE for each record, and that each record has a TYPE value.

Reorganizing INVISIBLE NON-FRV fields

INVISIBLE fields that are NON-FRV must be derived by other means. Many INVISIBLE fields can be created from another VISIBLE field in the record. For example, an INVISIBLE KEY field LAST NAME can be carried in the same record as the VISIBLE field NAME. Use the $Substr and $Index functions to extract the value for LAST NAME from NAME. This value can then be included in the fixed format record or the PAI reorganization.

Fixed-format dump example

For example, for a fixed-format dump:

BEGIN
ALL: FIND ALL RECORDS
END FIND
PROCESS.LP: FOR EACH RECORD IN ALL
   %LN = $SUBSTR(NAME,$INDEX(NAME,' ')+1)
   PRINT NAME WITH AGE AT COLUMN 21 -
      WITH SEX AT COLUMN 24 -
      WITH %LN AT COLUMN 30
END FOR
END

PAI FLOD example

For a PAI FLOD:

BEGIN
ALL: FIND ALL RECORDS
END FIND
PROCESS.LP: FOR EACH RECORD IN ALL
   %LN = $SUBSTR(NAME,$INDEX(NAME,' ')+1)
   PRINT '*'
   PRINT ALL INFORMATION
   PRINT 'LAST NAME = ' WITH %LN
END FOR
END

Step 4 can load the field LAST NAME in the same manner in which the other fields are loaded. Although the value is part of the records in the sequential data set, it is not stored in Table B if the field description established in a DEFINE command indicates it is INVISIBLE.

During the file design process, take care to evaluate reorganization requirements and to anticipate which technique, if any is needed, is most suitable.

Copying procedures

Before reorganizing any file that contains procedures, dump the procedures from the file to a sequential data set as follows:

RESET UDDLPP=0,UDDCCC=80
USE OUTPROCS
OPEN FILEX
DISPLAY (ALIAS,LABEL) ALL

DCB information

OUTPROCS is the name of the following DD statement in the JCL of the Model 204 job:

//OUTPROCS DD DSN=PROCS,DISP=(,KEEP),
//    DCB=(LRECL=80,BLKSIZE=80,RECFM=F),...

or, an alternate:

//OUTPROCS DD DSN=PROCS,DISP=(,KEEP),
//    DCB=(LRECL=80,BLKSIZE=n*80,RECFM=FB),...

A data set concatenated to a DD * data set (for example, //CCAIN DD *) must be compatible with the DCB of the DD * data set, which is DCB=(LRECL=80,BLKSIZE=n*80,RECFM=F or FB). A tape or disk data set can be used.

Copying secured procedures

If procedure security is in effect, make sure that the user issuing the DISPLAY command has the authority to display all procedures (DISPLAY ALL command) to avoid losing any secured procedures.

Reloading procedures

After the file has been reloaded, reload the procedures by running a batch job that has the PROCS data set as part of the concatenated input stream:

  • Be sure to include the *LOWER command after the OPEN if the procedures contain uppercase and lowercase characters.
  • If any procedure line exceeds 72 characters, include (in the EXEC statement) settings of INMRL and INCCC large enough to accommodate the longest line.

For example:

//CCAIN DD *
PAGESZ=6184
OPEN FILEX
password
/*
//       DD DSN=PROCS,DISP=OLD,...
//       DD *
EOJ
/*

Reloading a small number of procedures

If you are reloading a small number of procedures that do not have aliases, it might be easier to copy the procedures to another Model 204 file and then copy them back. With this method of copying procedures, any aliases are removed when the file is initialized (step 3 of Techniques for reorganization).

For example, the PEOPLE file, which contains five procedures, needs to be reorganized. The JUNK file is a file with no procedures in it, created for this purpose. To perform step 2 of Techniques for reorganization, open the PEOPLE and JUNK files, and issue the command:

IN PEOPLE COPY PROCEDURE ALL TO JUNK

Step 6 is then the reverse command:

IN JUNK COPY PROCEDURE ALL TO PEOPLE

For more information, see COPY PROCEDURE command.

z/VSE example

z/VSE does not allow data set concatenation. However, with z/VSE/POWER using the * $$ PUN statement and the USE data set facility, you can create a job to reload procedures:

* $$ PUN DISP=I,CLASS=I
.
.
DEFINE DATASET OUTPROCS WITH SCOPE=SYSTEM -
      FILENAME=SYSPCH LRECL=80 RECFM=F
RESET UDDLPP=0,UDDCCC=80
USE OUTPROCS
BEGIN
PRINT '// JOB LOAD PROCEDURES'
PRINT '// OPTION SYSPARM=INCCC=80'
.                             Print out JCL and user zero input
.                             to precede displayed procedures.
END
USE OUTPROCS
DISPLAY (ALIAS,LABEL) ALL
USE OUTPROCS
BEGIN
PRINT 'EOJ'
PRINT '/*'
PRINT '/&'
END
.
.

z/VM example

To dump procedures to a z/VM disk file, execute the following command:

ONLINE BYPASS UNLDPROC

Where:

  • UNLDPROC is an EXEC that defines the files to be used by the ONLINE command.
  • The procedures are dumped to the CMS disk file CARS PROCFILE A.

UNLDPROC EXEC

The UNLDPROC EXEC follows:

&CONTROL OFF
FILEDEF * CLEAR
FILEDEF CCAPRINT DISK UNLDPROC CCAPRINT A
FILEDEF CCAAUDIT DISK UNLDPROC CCAAUDIT A
FILEDEF CCATEMP N DSN WORK CCATEMP
FILEDEF CCASNAP PRINTER
FILEDEF CCASTAT N DSN WORK CCASTAT
FILEDEF CARS M DSN M204DB CARS
FILEDEF OUTPROC DISK CARS PROCFILE A (RECFM FB LRECL 80 -
      BLKSIZE 800
FILEDEF CCAIN DISK UNLDPROC CCAIN *
&STACK SYSOPT 128 LIBUFF 600 LOBUFF 600 INCCC 80

UNLDPROC CCAIN file

The CCAIN file, UNLDPROC CCAIN, is:

PAGESZ=6184
R UDDCCC=80,UDDLPP=0
OPEN CARS
USE OUTPROC
DISPLAY PROC(ALIAS,LABEL) ALL
EOJ

To load the procedures back into a file, execute the following command:

ONLINE BYPASS LOADPROC

LOADPROC EXEC

The following LOADPROC EXEC defines the files to be used by the ONLINE command:

&CONTROL OFF
FILEDEF * CLEAR
FILEDEF CCAPRINT DISK LOADPROC CCAPRINT A
FILEDEF CCAAUDIT DISK LOADPROC CCAAUDIT A
FILEDEF CCATEMP N DSN WORK CCATEMP
FILEDEF CCASNAP PRINTER
FILEDEF CARS M DSN M204DB CARS
FILEDEF CCAIN DISK LOADPROC CCAIN A
M204APND CCAIN DISK CARS PROCFILE A
M204APND CCAIN DISK EOJ CCAIN A
&STACK SYSOPT 128 LIBUFF 600 LOBUFF 600 INCCC 80

CCAIN files

The CCAIN input consists of the following files, which are concatenated using M204APND:

  • LOADPROC CCAIN contains commands to open the database file.
  • CARS PROCFILE is the z/VM disk file that the procedures were dumped to using UNLDPROC.
  • EOJ CCAIN contains commands to close the file and end the job.

LOADPROC CCAIN is:

PAGESZ=6184
OPEN CARS

EOJ CCAIN is:

CLOSE CARS
EOJ

Access control table

For a file that uses procedure security, there is no automatic way to rebuild the access control table. Display the mapping of user classes to procedure classes with the command:

DISPLAY (PRIVILEGES) ALL

Then reenter the privilege mappings with a new series of SECURE commands. Refer to the discussion of other DISPLAY command capabilities in Overview, and to the discussion of the SECURE command.

Reorganizing the Ordered Index

The Ordered Index can be adjusted or reorganized in whole or in part to improve its performance or its spacing requirements or both.

You can reorganize the Ordered Index to reclaim space where deletions of B-tree entries have left many tree nodes with excessive unused space. You can also maximize the efficiency of your site's range retrievals by taking advantage of the clustering of the segment lists stored on Table D pages that results from a reorganization.

The REORGANIZE OI command is used along with the deferred update Z command to reorganize the Ordered Index.

Changing spacing parameters with REDEFINE

You might want to modify the spacing parameters of one or more of the ORDERED fields in the Ordered Index in response to or in anticipation of a change in the nature of the data being stored. (For example, you might anticipate a different pattern or frequency of the updates to the Ordered Index.)

If you need to change the spacing parameters of one or more of the ORDERED fields for future updating, use the REDEFINE command with the field(s) involved. See the discussion of ORDERED fields in ORDERED and NON-ORDERED attributes, and the REDEFINE command discussion in Defining field retrieval attributes, for more information about the Ordered Index B-tree structure and alteration.
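As a sketch, changing the spacing attributes of a hypothetical ORDERED field MODEL might look like the following. The attribute values are illustrative only; see the REDEFINE command discussion for the exact attribute syntax your version supports.

OPEN CARS
REDEFINE FIELD MODEL WITH LRESERVE 30 NRESERVE 30 IMMED 2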

REORGANIZE OI command

You can issue the REORGANIZE OI command for either the entire Ordered Index or for one specified field that has the ORDERED attribute. When you issue REORGANIZE OI:

  • REORGANIZE OI writes out Ordered Index deferred update records to a variable-length deferred update data set.
  • These deferred updates are applied to the file using the Z command.
  • One deferred update record is written for each record indexed in the part of the Ordered Index being reorganized. The length of the update record varies according to the length of the value that is indexed. The deferred update data set created can be very large.

For more information about variable-length deferred update data sets and applying the deferred updates using the Z command, see Deferred update feature.

REORGANIZE OI format

The format for the REORGANIZE OI command is:

Syntax

REORGANIZE OI [fieldname]

Where:

fieldname is an ORDERED field.

Privileges required

This command requires file manager privileges and exclusive control of the file. It can be used only in single user runs. Before issuing REORGANIZE, open the file in deferred update mode with a variable-length deferred update data set. See Deferred update feature for a description of the special form of the OPEN command used for opening a file in deferred update mode.

The REORGANIZE command deletes all the entries from the part of the Ordered Index that is being reorganized, and writes the updates to rebuild the Ordered Index into the variable-length deferred update data set.

Rebuilding the Ordered Index with the Z command

To rebuild the Ordered Index, issue the Z command. If the settings of the spacing parameters LRESERVE, NRESERVE, and IMMED are changed for any field before the Z command is issued, the Ordered Index is rebuilt using the new parameter values for that field. The new parameter values result in a different space utilization.

The data sets used by the Z command with REORGANIZE are the same as in ordinary File Load or deferred update runs:

  • The data set containing the Ordered Index deferred update records must be named SORT5.
  • The data set produced by the Z command is TAPE5.

The Z command clusters the Ordered Index segment lists stored on Table D list pages so that, if possible, all the lists for one or more given "field name = value" pairs are on the same Table D list page. This clustering dramatically improves the performance of a FIND that includes a range condition.
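Putting these steps together, a single-user reorganization run might look like the following sketch. The file name is hypothetical, and the exact form of the OPEN command for deferred update mode is described in Deferred update feature:

```
* Open the file (PAYROLL is illustrative) in deferred update
* mode with a variable-length deferred update data set (SORT5)
OPEN PAYROLL
* Delete the Ordered Index entries and write the rebuild
* records to the deferred update data set
REORGANIZE OI
* Apply the deferred updates to rebuild the Ordered Index
Z
```

To reorganize only one ORDERED field rather than the entire Ordered Index, name the field on the command, for example REORGANIZE OI LASTNAME (field name illustrative).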

Dynamic data compactor for Table B

The data compactor for Table B dynamically reduces the chains of extension records for unordered files. You can use the data compactor Online or in Batch.

Frequently updated files may have records with long chains of extension records spanning several pages. When a record is updated and there is not enough space on the page to store a new or changed field, Model 204 allocates an extension record on a different page. Over time, records may acquire multiple extensions on different pages. Reducing the number of extension records may improve performance, especially for scanning long records, and free the record numbers used by extension records.

Understanding the constraints database

The constraints database is a dynamic hashed database that resides in a few virtual storage pages, with any extra space needed residing in CCATEMP. The database contains primary pages and overflow pages, also called extension records. When a primary page fills up, additional records go onto overflow pages that are chained from the primary page. When a primary page becomes too full, a process is invoked that splits the page into two pages. This process continues up to the maximum allowed number of primary pages, at which point the splitting stops and the overflow page chain is allowed to grow as large as required.

As the constraints database contracts, this process is reversed. Those pages with only a few records are merged together or compacted. This process continues until only one page remains.

Combining extension records

The data compactor reduces the number of extensions by combining several extensions into one. The data compactor first tries to use a page from the reuse queue for the new extension record. If no page is suitable, the compactor uses an unused page in Table B, increasing the BHIGHPG parameter. The logical order of extensions in a record is preserved. After a new extension record is written, the old ones are deleted, and the space on the corresponding pages can be reused.

Depending on the length of an extension record and the parameter settings for page reuse, the data compactor may grow or shrink the reuse queue. The COMPACTB command uses the reuse queue, adding eligible pages and deleting used pages, as would any updating program.

For example, you might see the following message, which indicates that all free (unused) pages allowed for compaction have been used:

M204.2677: ALL FREE PAGES ALLOWED FOR COMPACTION HAVE BEEN USED. COMMAND COMPACTB ENDS.

  • If the FREE parameter of the COMPACTB command was set to 100 (%), then all unused pages are exhausted, and the file manager must increase the Table B size.
  • If some unused pages remain in Table B (the difference between BSIZE and BHIGHPG), then the file manager must increase the FREE parameter.

This is the same approach Model 204 uses when allocating a new record or a new extension: it saves time for the next user and speeds up the allocation. The compactor makes this process very visible, because it scans many more pages on the reuse queue and may require much more free space on a page.

The data compactor does not change the order of the fields within a record.

Implementation of the COMPACTB command led to the implementation of the BLDREUSE command, anticipating that you might want to ensure that as many pages as possible are available on the reuse queue for COMPACTB processing to use.
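For example, an illustrative sequence (the file name is hypothetical, and the FREE value is only an example) that maximizes the pages available before compaction:

```
* PAYROLL is an illustrative file name
OPEN PAYROLL
* Put as many eligible pages as possible on the reuse queue
BLDREUSE
* Compact extension chains, allowing the compactor to use
* up to 50% of the unused (free) pages
COMPACTB FREE 50
```
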

Recovery processing and the data compactor

Recovery supports the addition and deletion of extension records as part of a record update, as well as the creation of journal entries for moving extension records. Type 6 journal entries track moved extension records.

COMPACTB data compaction

COMPACTB processing compacts Table B and improves performance. It includes the following features:

  • Support for files with Table X
  • A DELETE option to physically delete logically deleted records

COMPACTB processing can be used to avoid frequent file reorganizations needed to reduce the number of extension records and can also be used to reclaim space occupied by logically deleted records — sometimes referred to as dirty data.

Recovery of the compaction process is fully supported.

Improved compaction for all files

If a Table B base record has one and only one extension record and the base record page has sufficient free space, then the data compactor will combine the extension record with the base record and delete the extension. For a base record with multiple extensions, the compactor will try to combine the extension records into fewer extensions, but will not attempt to combine them with the base record.

Compaction for files with Table X

COMPACTB processing supports files defined with a Table X. Compaction for files defined with Table X has better performance, because compaction operates on a page basis instead of a record basis and locks fewer resources, especially the existence bit map.

Physical delete of logically deleted records

A logically deleted record remains physically stored in the file. The corresponding existence bit is zero (off) in the Existence Bit Pattern (EBP), but otherwise all fields, data, and index entries are present and consume space. The record, however, cannot be accessed, because its existence bit is zero.

The data compactor can physically delete logically deleted records in files. It does this by checking each record on a Table B page against the EBP to verify that the record exists. For records that are physically present but that the EBP indicates have been logically deleted, the compactor deletes the physical records, if requested.

The DELETE option of the COMPACTB command specifies that the process must physically delete logically deleted records, whether the physical records are in Table B (base and extensions) or the physical records are in Table B with extensions in Table X.

Space and record numbers freed by physically deleting logically deleted records are reusable.

You can use the DELETE option with all other options. For example:

COMPACTB FROM 15 TO 23 FREE 20 DELETE

The data compactor finds logical records with a nonzero length within the indicated record number range (15 to 23 in the previous example) that do not exist in the EBP, and it deletes the data and index entries for such records. This example also releases 20% of the unused (free) pages to the data compactor for use as new extension records. For files with lock pending updates (LPU) disabled, the data compactor places an exclusive enqueue on the entire file; in this case, a warning message is issued.

For files with many logically deleted records, processing the DELETE option may be CPU and I/O intensive. The following message, printed at the completion of data compactor processing, states the total number of deleted records:

M204.2754: NUMBER OF DELETED LOGICALLY DELETED RECORDS: count

All statistical information from compaction is presented as Model 204 messages, as shown in the following COMPACTB command outputs.

COMPACTB using the FREE option

COMPACTB FREE 85
*** M204.2749: FILE filename NUMBER OF BASIC RECORDS PROCESSED: 36000
*** M204.2750: FILE filename NUMBER OF EXTENSION RECORDS BEFORE COMPACTION: 110001
*** M204.2751: FILE filename NUMBER OF EXTENSION RECORDS AFTER COMPACTION: 35991
*** M204.2752: FILE filename NUMBER OF NOT PROCESSED (LOCKED) RECORDS: 0
*** M204.2753: FILE filename NUMBER OF FREE PAGES USED: 1026

COMPACTB using the DELETE option

COMPACTB DELETE
*** M204.2749: FILE filename NUMBER OF BASIC RECORDS PROCESSED: 35000
*** M204.2750: FILE filename NUMBER OF EXTENSION RECORDS BEFORE COMPACTION: 106952
*** M204.2751: FILE filename NUMBER OF EXTENSION RECORDS AFTER COMPACTION: 34991
*** M204.2752: FILE filename NUMBER OF NOT PROCESSED (LOCKED) RECORDS: 0
*** M204.2753: FILE filename NUMBER OF FREE PAGES USED: 755
*** M204.2754: FILE filename NUMBER OF DELETED LOGICALLY DELETED RECORDS: 1000

Dynamic data compactor for Table E

Functionality

For Model 204 files where FILEORG X'100' is not set, Large Objects are stored as consecutive chunks of Table E pages. When Large Objects are created and deleted frequently, gaps can occur between objects that may not be reused due to their small size. The COMPACTE command lets you compact Table E by grouping gaps together, thus reducing Table E fragmentation. To find usable gaps that may be compacted, the Table E map must be analyzed.

The Table E compactor can combine orphan spaces in Table E without a file reorganization and can run without exclusive use of the file. When processing finds a gap, the Large Object that follows the gap is switched with the gap: the object moves left, concentrating objects at the beginning of Table E, while the gap moves right, concentrating free space at the end of Table E. Although a Large Object may be pointed to by one and only one record, different fields in the same record may point to different Large Objects.

Considerations for use

Some compactions may be counterproductive. For example, if a segment has 49 objects, each 1000 pages in size, and 49 small gaps totaling 152 pages, then moving 49,000 pages to reclaim a modest 152 pages is inefficient. On the other hand, for objects with an average size of 1-100 pages, compacting a hundred 1-page gaps is beneficial.

The TABLEE command, like the TABLEB command, reports Table E usage statistics: the number of gaps and total gap size. Because compaction is heavily I/O and CPU intensive, you should compact Table E only when you can expect substantial results.

For files with a large Table E and very large objects (thousands of pages), take care to prevent unnecessary page movement.

The compactor analyzes Table E driven by the bitmap pages, one Table E segment of 49152 pages at a time. Table E contains not only object pages but also bitmap pages. The compactor's implementation has the following limitations:

  • Bitmap pages (allocated one per segment) are not moved, so the worst result of compaction is two gaps per segment.
  • Objects residing in more than one segment are not moved.

Using the TABLEE and COMPACTE commands

To compact Table E effectively, Rocket Software recommends the following sequence:

  • Run a TABLEE command with the SEG option to identify segments with a large number of gaps.
  • Run the COMPACTE command for the segments of interest.
  • Run another TABLEE command for the compacted segments to check the results.
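In practice, the recommended sequence might look like the following sketch; consult the TABLEE and COMPACTE command references for the segment-selection option spellings:

```
* Report per-segment Table E usage: number of gaps and total gap size
TABLEE SEG
* Compact the segments that show many small gaps
* (segment-selection options per the COMPACTE command reference)
COMPACTE
* Verify the results for the compacted segments
TABLEE SEG
```
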

COMPACTE back out and recovery

No back out capability is provided for Table E compaction. To facilitate recovery, the compactor writes preimages of all the pages of each Large Object that is subject to being moved, so you may need to increase the checkpoint data set size. In the worst case, almost all pages in Table E may be preimaged. The journal data set size increase is much smaller: about 50 bytes are written per object moved. If a problem happens during compaction, base the recovery action on the error messages:

  • For error messages generated while analyzing Table E (messages 2809, 2810, 2818, 2819, 2821), the file must be regenerated.
  • For error messages generated while moving an object (messages 2811, 2823), a normal file recovery should be adequate.

COMPACTE performance

Table E compactor processing is highly I/O and CPU intensive. When gaps combine and grow in size, page-by-page constraints checking can be quite expensive. Using the EXCL option lets you avoid constraints checking, but the entire file is unavailable to other users for the duration of the compaction.
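For example, when exclusive use of the file is acceptable, the EXCL option skips the page-by-page constraints checking:

```
* Other users cannot access the file until the command completes
COMPACTE EXCL
```
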

COMPACTE and checkpoints

The COMPACTE command runs as one long transaction. After MAXPR pages have been read, processing stops, the transaction ends, and a checkpoint is attempted. At this time, processing also checks whether the user is being bumped, is exceeding limits such as I/O or CPU slices, or whether a higher-priority user needs to run. These checks happen only after an object has been moved, so if a very long object (hundreds of pages) is moved, the transaction or subtransaction checkpoint may be delayed or prevented.

COMPACTE results

COMPACTE
*** M204.2811: FILE filename NUMBER OF MOVED OBJECTS: 180
*** M204.2812: FILE filename NUMBER OF MOVED PAGES: 12
*** M204.2813: FILE filename NUMBER OF RECORD LOCKING CONFLICTS: 0
*** M204.2814: FILE filename NUMBER OF MULTISEGMENT OBJECTS: 5