COMPACTB command

Summary

Privileges: File Manager

Function: Combine as many extension records as possible into one extension record for the current or specified file.

Note: An automatic optimization occurs if a base record has only one extension record, and the combined base plus extension record will fit on the page currently occupied by the base record. In that case, the result of compaction is a single base record and the elimination of the extension.
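
As a hypothetical illustration: if a base record is 2000 bytes long, its single extension record is 1500 bytes, and at least 1500 bytes are free on the page occupied by the base record, the combined 3500-byte record fits on that page, so compaction leaves a single base record and no extension.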

Syntax

COMPACTB [FROM ssss][TO eeee][FREE nn][MAXE nn] [DELETE]

Where:

  • FROM ssss indicates the Table B page number at which the compactor starts looking for extensions. The default is zero.
  • TO eeee indicates the ending logical record number at which the compactor stops processing.
  • FREE nn indicates the percentage of unused or free pages (Table B or Table X) that can be used by the data compactor for new extension records. The default is 10 percent. The percentage of free pages is calculated as follows (a worked example appears after this list):
    • Table B: ((BSIZE - BHIGHPG) / BSIZE) * 100
    • Table X: ((XSIZE - XHIGHPG) / XSIZE) * 100
  • MAXE nn specifies the percentage of a page size (6144 bytes) that defines the maximum extension record size eligible for compaction. Larger extensions are not moved. The default is 80 percent.
  • DELETE specifies that logically deleted records are to be physically deleted. By default, logically deleted records are not deleted.
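
For example, using hypothetical parameter values: a file with BSIZE 1000 and BHIGHPG 850 has a Table B free-page percentage of ((1000 - 850) / 1000) * 100 = 15 percent.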

Usage

  • The data compactor is a CPU- and I/O-intensive process, so do not compact data at your site during high load periods. To avoid monopolizing system resources, the data compactor checks for a BUMP command or for long request conditions at the following intervals:
    • When compacting files with Table X, the check for a bump or for long request conditions is done when one of the following milestones (whichever comes first) is reached:
      • 30 compacted records
      • 30 processed Table B pages
      • 30 logically deleted records
      After the check, the counter is reset and processing continues until the next time one of the milestones is reached.
    • When compacting files with no Table X, COMPACTB allows higher priority users to run at every 30 records read, whether the records are compacted or not.
  • The data compactor commits at every record compacted.

    If your system crashes, only one record compaction might be lost. Data is never lost. The worst that might happen when the system crashes is that after recovery processing, one record, and only one, would have either of these:

    • The old extension chain with the new extension chain being orphaned.
    • The new extension chain with the old extension chain being orphaned.

    You cannot reclaim space for orphaned chains without a file reorganization. There is no backout for the data compactor.

  • For each compaction, the compactor takes preimages of all the pages it will change and writes a journal record containing all the compacted extensions; this might require that you increase the checkpoint and journal data set sizes.
  • There is no restriction on record length or number of extension records for the data compactor, as long as there is enough free space on pages to hold the compacted records.
  • The COMPACTB command maintains the same logical order (which is visible to programs) of extension records in the compacted records as in the original record. The physical order of extension records (that is, the order of the page numbers) might differ.

Setting the DELETE argument

To use the DELETE argument of the COMPACTB command for a file, that file must already have a Table X defined for it: XSIZE is greater than zero in the file CREATE arguments.
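
For example, here is a minimal command sketch using the TEST1 file from the example later on this page (assuming you have file manager privileges on it). The first command confirms that the file has a Table X; the second runs the compactor with the DELETE argument:

> IN TEST1 VIEW XSIZE
> IN TEST1 COMPACTB FROM 0 TO 9999999 DELETE

If VIEW reports XSIZE as 0, the file was created without a Table X, and the DELETE argument cannot be used for it.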

Setting the MAXE argument

The larger an extension, the less likely it is to be combined with other extensions, because the largest single extension record cannot be larger than a page. When extension chains are very long and contain mainly very short extensions, a smaller MAXE setting might produce better results more quickly. You might want to test various MAXE settings to find which page-size percentage is most advantageous for your data.
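
For example, with the default MAXE 80 and a 6144-byte page, only extensions of at most about 6144 * 0.80 = 4915 bytes are eligible for compaction; larger extensions are not moved.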

Running the data compactor

The data compactor tries to lock records one at a time. Records with extensions that are subject to compaction are locked exclusively for as long as compaction of the record takes place. If the record lock is not available, the record is skipped. Avoid running the data compactor alongside applications that lock large numbers of records for a long time.

Reviewing data compactor statistics

At the end of processing, the compactor prints the following statistics:

M204.2749: NUMBER OF BASIC RECORDS PROCESSED: nnnn
M204.2750: NUMBER OF EXTENSION RECORDS BEFORE COMPACTION: nnnn
M204.2751: NUMBER OF EXTENSION RECORDS AFTER COMPACTION: nnnn
M204.2752: NUMBER OF NOT PROCESSED (LOCKED) RECORDS: nnnn
M204.2753: NUMBER OF FREE PAGES USED: nnnn
M204.2754: NUMBER OF DELETED LOGICALLY DELETED RECORDS: nnnn

Example: file statistic changes

> IN TEST1 VIEW BHIGHPG,EXTNADD,EXTNDEL,BQLEN
BHIGHPG   39   TABLE B HIGHEST ACTIVE PAGE
EXTNADD   42   EXTENSION RECORDS ADDED
EXTNDEL   15   EXTENSION RECORDS DELETED
BQLEN      2   TABLE B QUEUE LENGTH
> IN TEST1 COMPACTB FROM 0 TO 9999999 FREE 100 MAXE 100
NUMBER OF BASIC RECORDS PROCESSED: 20
NUMBER OF EXTENSION RECORDS BEFORE COMPACTION: 27
NUMBER OF EXTENSION RECORDS AFTER COMPACTION: 20
NUMBER OF NOT PROCESSED (LOCKED) RECORDS: 0
NUMBER OF FREE PAGES USED: 2
> IN TEST1 VIEW BHIGHPG,EXTNADD,EXTNDEL,BQLEN
BHIGHPG   41   TABLE B HIGHEST ACTIVE PAGE
EXTNADD   49   EXTENSION RECORDS ADDED
EXTNDEL   29   EXTENSION RECORDS DELETED
BQLEN      5   TABLE B QUEUE LENGTH

The value of the BQLEN parameter changed. When the original extensions were deleted, a page became suitable for reuse and was placed on the Reuse Queue. In this example, unused pages were used for the new extensions. If new extensions are instead placed on a page from the Reuse Queue, BQLEN might decrease.

Example: compacting extensions

A single record has nine extension records with the following lengths. The lengths of the extensions in this example were chosen arbitrarily to illustrate how and when extensions are combined or not combined.

Ext.1   Ext.2   Ext.3   Ext.4   Ext.5   Ext.6   Ext.7   Ext.8   Ext.9
   40    1200    2400    3200    4300    2300      60      90     120

The chain is reduced to four parts:

  • Part 1: Extensions 1, 2, and 3 have a combined length of 3640 that compacts into one extension record.
  • Part 2: Extension 4 is not moved, because combined with the previous extensions it would not fit on a page.
  • Part 3: Extension 5 is not moved, because it cannot be combined with either Extension 4 or Extension 6.
  • Part 4: Extensions 6, 7, 8, and 9 have a combined length of 2570 that compacts into one extension record.

After compacting, there are four extensions instead of nine.
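
Working through the page-size arithmetic behind this grouping (page size 6144 bytes):

40 + 1200 + 2400 = 3640 (fits on one page)
3640 + 3200 = 6840 (exceeds 6144, so Extension 4 is left in place)
3200 + 4300 = 7500 and 4300 + 2300 = 6600 (both exceed 6144, so Extension 5 is left in place)
2300 + 60 + 90 + 120 = 2570 (fits on one page)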