Field design
<p>
ORDERED CHARACTER fields use the standard EBCDIC collating sequence. If a field is ORDERED CHARACTER, all values of the field are stored in the Ordered Index in EBCDIC collating sequence order.</p>
 
=====ORDERED NUMERIC fields=====
<p>
   [<b>LRES</b>ERVE <var class="term">r</var>] [<b>NRES</b>ERVE <span class="term">n</span>]
</p>
<blockquote class="note"><b>Note:</b>
<p>
The default designation for a field is <var>NON-ORDERED</var>. Ordered Index tree parameters do not apply for a <var>NON-ORDERED</var> field. </p>
<p>
<var>CHARACTER</var> and <var>NUMERIC</var> order types are described above.</p>
<p>
<var>SPLITPCT</var>, <var>IMMED</var>, <var>LRESERVE</var>, and <var>NRESERVE</var> parameters are described in [[Table D (File architecture)#Ordered Index spacing parameters|Ordered Index spacing parameters]]. </p>
</blockquote>
 
=====Redefining ORDERED fields=====
<p>
   
   
=====CHUNK attribute=====
However, as of Model 204 version 7.5, you can improve the efficiency of data range retrieval on <var>ORDERED NUMERIC</var> fields by defining the fields with the <var>[[DEFINE FIELD command#CHUNK attribute|CHUNK]]</var> attribute.
   
   
The <var>CHUNK</var> attribute defines a subrange ("OI chunk") of the Ordered Index data range, so the desired data can be found with fewer scans of the Ordered Index entries.
   
   
<var>CHUNK</var> fields are automatic fields and therefore require that <var>[[FILEORG parameter|FILEORG]]</var> X'100' be set for the file.
   
   
The <var>CHUNK</var> attribute is followed by a number, defining the size of the OI chunk, and then FOR <var class="term">chunkTargetFieldName</var>, where <var class="term">chunkTargetFieldName</var> is the existing <var>ORDERED NUMERIC</var> field from which you want to retrieve data in OI chunks.
   
   
So, to illustrate, the field definitions might look something like the following (where YYYYMM is the chunk field and YYYYMMDD is the target field discussed below):
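<p class="code">DEFINE FIELD YYYYMMDD WITH ORDERED NUMERIC
DEFINE FIELD YYYYMM WITH ORDERED NUMERIC CHUNK 100 FOR YYYYMMDD
</p>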
<p>
A <var>CHUNK</var> field is automatically maintained based on updates to the target (FOR) field (here, YYYYMMDD). The stored value is the value of the target field rounded down to a multiple of the CHUNK size. So, for example, if you stored a value of 20121225 into YYYYMMDD, the underlying code would automatically store the value 20121200 into YYYYMM. Similarly, if you stored 20121203 into YYYYMMDD, a 20121200 value would also be stored into YYYYMM.
</p>
<p>
When doing a range find, Model&nbsp;204 decomposes the find into range finds based on the <var>CHUNK</var> field and the target field. 
So, for example, a find for values between 20121211 and 20130205 would be decomposed into a find for:
</p>
<ul>
<li>YYYYMMDD between 20121211 and 20121299 (ignoring for the moment issues with non-integer values and demonstrating that <var>CHUNK</var> doesn't really know or care if the field is a date)</li>

<li>OR YYYYMM between 20130100 and 20130100 (or more precisely, exactly equal to 20130100)</li>
Results would often be much better than that. For example, it probably would not be atypical for a date range to land on month boundaries, resulting in improvements closer to a factor of 30. And, as the range got bigger and bigger, more and more of the range processing would occur on the CHUNK field, with additional improvement in processing.
</p>
 
=====Multiple CHUNK fields=====
<p>
To get even better performance, you could define multiple <var>CHUNK</var> fields for a target field. The definitions might look something like the following (the field names are illustrative; what matters is the set of chunk sizes):
</p>
<p class="syntax">DEFINE FIELD YYYYMMDD WITH ORDERED NUMERIC
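DEFINE FIELD YYYYMMD0 WITH ORDERED NUMERIC CHUNK 10 FOR YYYYMMDD
DEFINE FIELD YYYYMM WITH ORDERED NUMERIC CHUNK 100 FOR YYYYMMDD
DEFINE FIELD YYYY WITH ORDERED NUMERIC CHUNK 10000 FOR YYYYMMDD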
</p>
<p>
The OI chunk size for a <var>CHUNK</var> field must be a greater-than-1 integral multiple of all smaller OI chunk sizes for the same target field. For example, specifying "7" as the first <var>CHUNK</var> size in the above example would be invalid because 100 is not an integral multiple of 7.</p>
<p>
Note that you could reference the <var>CHUNK</var> fields directly in non-updating statements, as in this example, which finds the records in the December 2012 chunk:</p>
   
   
<p class="code">fd to %recset
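   YYYYMM = 20121200
end find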
</p>
<p>
In any case, the number and size of the <var>CHUNK</var> fields allow you to trade off the overhead of doing updates against the efficiency of doing range finds. </p>
 
====Choosing between ORDERED and FRV====
<p>
<!--Caution: <div>s above-->

<p>The <var>BLOB</var> and <var>CLOB</var> attributes describe [[Record (File architecture)#Large objects|Large Object data]] that can be used to store long strings in the database, for example, long XML strings. This means that <var class="product">Model&nbsp;204</var> User Language threads can exchange Large Object data with a partner process by way of the Universal Buffer and/or the [[MQ/204]] Interface. The Universal Buffer is a one-per-user, temporary storage area that, like the [[MQ/204 data conversion and handling#Data handling|MQ buffer]], automatically expands to accommodate its data contents.</p>
<ul>
<li>With the Universal Buffer, Large Object data can be sent and received using SOUL <var>READ IMAGE</var> and <var>WRITE IMAGE</var> statements, or <var>[[Program Communication facilities#.24SOCKET function|$SOCKET]]</var> <var>READ</var> and <var>WRITE</var> calls.</li>

<li>With the MQ/204 Interface, Large Object data can be sent and received by <var>MQPUT</var> and <var>MQGET</var> statements, and stored and retrieved from the database with SOUL <var>STORE</var>, <var>ADD</var>, and assignment statements using the <var>BUFFER</var> reserved area name.</li>
</ul>

<p class="note"><b>Note:</b> Files created in V6R1 and later are not compatible with earlier versions of <var class="product">Model&nbsp;204</var>, because of Large Object support and the concomitant changes to the FPL page.</p>
   
   
====Introducing Large Object field types====
</ul>

<p class="note"><b>Note:</b> Because Large Object fields require that Table&nbsp;E be present in a file, you cannot simply add a Large Object field to an existing file created in a pre-6.1 version of Model&nbsp;204. You would need to rebuild the existing file: download the data and procedures, recreate the file to establish Table E, define the Large Object fields you want to incorporate, and reload the data.</p>

====Limitations of Large Object support====
====Handling Large Object data====
<p>
The <var>BLOB</var> and <var>CLOB</var> attributes are supported as follows:</p>
<ul>
<li>You can create, open, and recover a file with a <var>BLOB</var> or <var>CLOB</var> attribute. The Large Object field can be multiply occurring. </li>

<li>The <var>DELETE <i>fieldname</i></var> statement supports Large Object data. This statement frees the Table&nbsp;B and Table&nbsp;E data.</li>

<li>You can store and add Large Object data to the file you have created from the Universal Buffer, BUFFER. Using an assignment statement, you can transfer Large Object data from a field into the Universal Buffer.</li>

<li>You can set the number of Table E pages during file creation with <code>[[ESIZE parameter|ESIZE]]=<i>n</i></code>.</li>

<li>You can view the <var>EHIGHPG</var> parameter to determine the highest number of Table E pages in use.</li>

<li>You can modify the size of Table E with an <var>INCREASE</var> or <var>DECREASE</var> command.</li>

<li>The <code>DELETE FIELD <i>fieldname</i></code> command supports the Large Object field type, so you can delete a Large Object field name. However, the space in Table&nbsp;B and Table&nbsp;E is not automatically available for reuse. </li>

<li>The [[$LobLen]](<i>lob-field-name</i>) function is available for your use.</li>

<li>[[Dictionary/204 overview|Dictionary/204]] and the Cross-Reference facility support Large Object field types.</li>

<li>NULL Large Object data is supported.</li>

<li>Field-level security is supported.</li>
<li>A string field cannot be converted to a CLOB via PAI unload followed by FLOD/FILELOAD.
  <ul>
  <li>The new CLOB field must be populated, from the STRING field, using a SOUL program and the universal buffer.</li> 
  <li>Or, if using Fast/Unload, a STRING field can be converted to a CLOB during UAI/LAI. See [[Fast/Unload_BLOB/CLOB_processing_considerations#NEW_statement_option_for_Lobs|NEW statement option for Lobs]] for more details.</li>
  </ul>
</li>
</ul>
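<p>
For example, you can check Table E usage at any point with a <var>VIEW</var> command such as:</p>
<p class="code">VIEW ESIZE EHIGHPG EPGSUSED
</p>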

====BLOB and CLOB compatibility with other attributes====
<p>
See the [[Field attribute combinations]] page for the attributes that are not allowed with <var>BLOB</var> or <var>CLOB</var>. Use of conflicting attributes produces the following message:</p>
<p class="code">M204.0411: Conflicting attributes: ([BLOB | CLOB]) and <var class="term">attribute2</var>
</p>

====<b id="Using the MINLOBE attribute"></b>Using the MINLOBE (MLBE) attribute====
<p class="note"><b>Note:</b> The <var>MINLOBE</var> attribute is available as of Model&nbsp;204 version 7.5.</p>
<p>
The <var>MINLOBE</var> attribute defines the minimum size of a <var>BLOB</var> or <var>CLOB</var> field value that will be stored in Table&nbsp;E.</p>
<p>
You can use the <var>MINLOBE</var> attribute only with a BINARY LARGE OBJECT (BLOB) or a CHARACTER LARGE OBJECT (CLOB) field in a <var>[[FILEORG parameter|FILEORG]]</var> X'100' file. The <var>MINLOBE</var> attribute cannot be paired with the <var>OCCURS</var> attribute. You can redefine the <var>MINLOBE</var> attribute value.</p>

<b>Syntax</b>
</p>
<p>
The default and smallest value for <var>MINLOBE</var> is 0, which means that all values of the field are stored in Table&nbsp;E. The maximum value of <var>MINLOBE</var> is 200.</p>
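<p>
For example, a definition along these lines (the field name is illustrative) keeps values of 200 bytes or shorter out of Table&nbsp;E:</p>
<p class="code">DEFINE FIELD XMLDOC WITH CLOB MINLOBE 200
</p>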
<p>
If a field has a <var>MINLOBE</var> attribute greater than 0:</p>
<ul>
<li>And the value of the field is not longer than <var>MINLOBE</var>, then the value is stored in Table B or Table X, instead of Table E.</li>

<li>The SOUL <var>CHANGE</var> statement is not allowed, and the following error is displayed:
<p class="code">M204.2869 Change refused: MINLOBE > 0 for File <i>filename</i> field <i>fieldname</i></p>
<p>
Any <var>CHARACTER LARGE OBJECT</var> or <var>BINARY LARGE OBJECT</var> field that does not use the <var>CHANGE</var> statement benefits from setting <code>MINLOBE 200</code>.</p>
</li>

<li>The <var>[[Large Object field processing for non-FILEORG X'100' files#Adding Large Object fields|RESERVE]]</var> clause is ignored on the SOUL <var>ADD</var> statement.</li>
</ul>

====Storing and updating LOBs====
<p>
All large object data (LOBs) in a <var>FILEORG</var> X'100' file are chained. There are four bytes per Table&nbsp;E page overhead for chained LOBs. The pages used by a chained LOB are not contiguous.</p>
<p>
Handling LOBs in <var>FILEORG</var> X'100' files also has the following effects:</p>
<ul>
<li>The <var>RESERVE</var> clause is ignored in LOB field <var>ADD</var> statement processing, as well as in <var>STORE RECORD</var> statement processing of fieldname=value pairs.
<p>
Consequently, the <var>CHANGE</var> statement does not fail because of insufficient reserved space. If the <var>CHANGE</var> statement requires that a LOB field be extended, it is.</p>
</li>

<li>The value of the <var>[[EHIGHPG parameter|EHIGHPG]]</var> parameter is always one less than the high water mark of the number of pages used to hold LOBs. (Unless none were ever added, in which case it is zero, not -1).</li>

<li>The value of the <var>[[EPGSUSED parameter|EPGSUSED]]</var> parameter is always the number of pages currently being used to hold LOBs.</li>

<li>The <var>[[COMPACTE command|COMPACTE]]</var> command does not process <var>FILEORG</var> X'100' files, just as it does not process a file created in V6R1 or earlier. Thus, issuing a <var>COMPACTE</var> command for a <var>FILEORG</var> X'100' file produces an error message.</li>

<li>The <var>[[TABLEE command|TABLEE]]</var> command effectively does a <code>[[VIEW command|VIEW]] ESIZE EHIGHPG EPGSUSED</code> for a <var>FILEORG</var> X'100' file. Consequently there are no <var>TABLEE</var> overhead pages in a <var>FILEORG</var> X'100' file.</li>
</ul>

If FLOAT is not specified when a field is defined, the STRING attribute is assumed.</p>
   
   
====Defining floating point fields====
<p>
Field values are stored in floating-point by specifying the FLOAT attribute in the DEFINE FIELD command and supplying a LENGTH of 4, 8, or 16. The length supplied determines the precision of the floating-point field being defined. For example, to define a long precision field, you might specify something like:</p>
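<p class="code">DEFINE FIELD <i>fieldname</i> WITH FLOAT LENGTH 8
</p>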
<table>
<tr class="head">
<th>Type of floating-point field</th>
<th>Occupies</th>
<th>Holds a maximum of</th>
<th>Model&nbsp;204 maintains a <br>maximum precision of</th>
</tr>
   
   
</tr>
</table>

====Acceptable values====
<p>
The floating-point data type uses IBM hardware's floating-point representation.</p>
A value of negative zero (X'800...0') cannot be stored in a preallocated FLOAT field. </p>

====FLOAT with NUMERIC VALIDATION files====
<p>
If you define a FLOAT field in a NUMERIC VALIDATION file model file ([[FILEMODL parameter|FILEMODL]]=X'01'), <var class="product">Model&nbsp;204</var> provides numeric data type validation. Trying to store non-conforming data in a FLOAT field in a NUMERIC VALIDATION file causes <var class="product">Model&nbsp;204</var> to cancel the request with the following message:</p>
<p class="code">M204.2124: VALUE SPECIFIED VIOLATES FLOAT DATA TYPE VALIDATION IN RECORD <i>recno</i>, <br>           FIELD IGNORED: <i>fieldname</i> = <i>value</i>
</p>

====Compatibility with other attributes====
<p>
STRING, BINARY, NUMERIC RANGE, and INVISIBLE cannot be specified for a FLOAT field. </p>
<p>
If, however, you have a field that is used only to retrieve records, and space is a consideration, you might want to define the field as INVISIBLE.</p>
<p class="note"><b>Note:</b>
If the INVISIBLE attribute is not specified when the field is defined, the VISIBLE attribute is assumed. </p>
<b>Using the VISIBLE attribute</b>
A field that has the INVISIBLE attribute, on the other hand, takes up no storage space in Table B and cannot be printed, noted, or sorted. These fields can be added, deleted, or changed in the logical record.</p>
<p>In a library catalog file, for example, each record can have an AUTHOR field that contains the author's full name. In addition, there can be a LAST NAME field that contains only the author's last name. This creates a convenient index to the catalog and relieves the user of the burden of knowing the entire name when making retrieval requests. Because the author's last name already is part of the (VISIBLE) AUTHOR field, some space can be saved in Table B by selecting the INVISIBLE attribute for LAST NAME.</p>
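<p>
For example, a pair of definitions along these lines would provide the full-name field plus an index-only last-name field (the attribute choices shown are illustrative; the INVISIBLE field is given a retrieval attribute, here KEY, so that it can be used in finds):</p>
<p class="code">DEFINE FIELD AUTHOR
DEFINE FIELD LAST NAME WITH INVISIBLE KEY
</p>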
<b>Reorganizing INVISIBLE fields</b>
<p>
Because INVISIBLE fields are not stored in Table B, they require special care if the file is ever reorganized. When designing a file, be sure to take reorganization requirements for these fields into consideration. See the <i>File reorganization and table compaction</i> page, [[File reorganization and table compaction#Reorganizing INVISIBLE fields|Reorganizing INVISIBLE fields]] section, for more information.  </p>
 
<b>Maintaining INVISIBLE fields</b>
<p>
A more precise rule governing the choice of the CODED attribute is difficult to arrive at, since it must first be determined (based on the number and length of the field values, and on the number of records) whether any disk space is actually saved by the CODED attribute. Then, any space saved must be weighed against the extra time taken by CODED retrievals. And the relative importance of disk space and time is likely to vary with the application as well as with the installation.</p>
<p>
The formulas in [[File size calculation in detail]] show more explicitly the effects of the CODED field attribute on Table A and B size.</p>
<p>
Field value encoding is entirely transparent to the user. Data is returned exactly as it was entered. Codes are system-generated, internal codes that have no meaning to a user.  </p>
<!-- Obviously, these were formerly <h3>s; the first has refs within this page; who knows about refs from other pages? -->
<div id="Preallocated fields"></div><div id="OCCURS attribute"></div>
===Preallocated fields (OCCURS attribute)===
<p>
Preallocation means that space is reserved in each new record added to a file. As occurrences of preallocated fields are stored in a record, they are placed in the reserved space. Space is used even for occurrences that have not been stored, whereas non-preallocated fields use no space for missing occurrences. Preallocation also imposes restrictions on the number and length of values that can be stored in a record. A field is preallocated if its description includes an OCCURS clause. </p>
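<p>
For example, a definition like the following (the field name and sizes are illustrative) reserves room in every record for three 12-byte occurrences:</p>
<p class="code">DEFINE FIELD PHONE WITH OCCURS 3 LENGTH 12
</p>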
<p>
[[File size calculation in detail]] provides specific rules for the amount of space used by preallocated and non-preallocated fields. Use these rules to determine whether or not to preallocate a particular field.  </p>
<b>Using an OCCURS clause</b>
<p>
</ul>
<p>
The <var>DEFAULT-VALUE</var> attribute is available as of Model&nbsp;204 version 7.5. For more information about <var>DEFAULT-VALUE</var> fields that are members of a field group, see [[Processing multiply occurring fields and field groups#Handling references to missing occurrences|Handling references to missing occurrences]]. </p>
<p>
The value of the <var>STORE-DEFAULT</var> attribute determines whether the <var>DEFAULT-VALUE</var> is physically stored on the record or if it is just used as the default value when the field is missing:</p>
</table>

<p class="note"><b>Note:</b> Neither <var>STORE-NULL NONE</var> nor <var>STORE-NULL LIT</var> is allowed with <var>DEFAULT-VALUE</var>.</p>

<p>The <var>DEFAULT-VALUE</var> attribute allows values as long as 31 bytes. An attempt to use more results in:</p>
'1234567890123456789012345678901234567890' TOO BIG, MUST BE &lt;32 BYTES
</p>
<p class="note"><b>Note:</b> The minimum length of a <var>DEFAULT-VALUE</var> value is 1 byte. A null string value is not allowed. </p>
<p>
The value of <var>DEFAULT-VALUE</var> is not constrained by attributes such as <var>LEQ</var>, <var>LLE</var>, and <var>LGE</var>. 
<li><var>ALL</var></li>
</ul>
====STORE-DEFAULT example====
<p>
The following example shows the effect of <var>DEFAULT-VALUE</var> on fields defined with the different <var>STORE-DEFAULT</var> types (<var>ALL</var>, <var>NONE</var>, and <var>LITERAL</var>). 
The <var>STORE-DEFAULT</var> options determine how the field will be physically stored if the <var>DEFAULT-VALUE</var> is stored in the field. </p>
<p>
This example shows what is physically stored in the record when the <var>DEFAULT-VALUE</var> is stored for each of the fields defined above for <code>FIELDGROUP X</code>. The <var>DEFAULT-VALUE</var> defined for each is <code>NONAME</code>. First, the actual <var>DEFAULT-VALUE</var> of <code>NONAME</code> is added by using a variable, then by using the literal <code>'NONAME'</code>. Then a field group occurrence is added where the field is missing. </p>
<p class="code">ADD.X: SUBROUTINE
   IN FILEX  FOR RECORD NUMBER %CURREC
       * add first field group occurrence using the variable %NAME:
       %NAME='NONAME'
       ADD FIELDGROUP X
           NAME_SD_ALL  = %NAME
       END ADD
       * add second field group occurrence using the literal 'NONAME':
       ADD FIELDGROUP X
           NAME_SD_NONE = 'NONAME'
</p>
<p class="output"># NAME_SD_NONE:  NAME_SD_LIT:  NAME_SD_ALL:
1 NONAME        NONAME        NONAME
2 NONAME        NONAME        NONAME
3 NONAME        NONAME        NONAME
</p>
<p class="note"><b>Note:</b> The same behavior occurs whether the field is <var>AT-MOST-ONE</var>, as in this example, or <var>EXACTLY-ONE</var>. However, within fieldgroup context, a <var>PAI</var> of an <var>EXACTLY-ONE</var> field always shows a value. The actual physical storage on the record is the same for both attributes. </p>
</blockquote>
====STORE-NULL example====
<p>
The options on the <var>STORE-NULL</var> attribute react similarly to those of <var>STORE-DEFAULT</var>. When the fields are defined as follows:</p>
<p class="code">DEFINE FIELDGROUP X
   DEFINE FIELD NAME_SN_NONE  WITH FIELDGROUP X AND ORD AT-MOST-ONE      -
<p>
And the code used to store the field groups is:</p>
<p class="code">IN FILEX  [[Record loops#FOR RECORD NUMBER processing|FRN]] %CURREC
   * add first occurrence using the variable %NAME:
   %NAME=''
       NAME_SN_ALL  = %NAME
   END ADD
   * add second occurrence using the literal &apos;':
   ADD FIELDGROUP X
       NAME_SN_NONE = &apos;'
       NAME_SN_LIT  = &apos;'
       NAME_SN_ALL  = &apos;'
   END ADD
   * add third occurrence with missing values:
       FEO FIELDGROUP X
         PRINT OCCURRENCE IN FEO AND              -
               NAME_SN_NONE AND NAME_SN_LIT AT 18 -
               AND NAME_SN_ALL AT 32
       END FOR
   END FOR
<ul>
<li>If a new field violates the AT-MOST-ONE rules, and there is no ON FCC unit, the request is canceled with the following error message:
<p class="code">M204.2119: AT-MOST-ONE CONSTRAINT VIOLATION IN RECORD <var class="term">recordnumber</var>, 
          FIELD IGNORED: <var class="term">fieldname</var>=<var class="term">value</var>
</p></li>


<li>The following error message is issued for each violation:
<p class="code">M204.2119: AT-MOST-ONE CONSTRAINT VIOLATION IN RECORD <var class="term">recordnumber</var>, 
          FIELD IGNORED: <var class="term">fieldname</var>=<var class="term">value</var>
</p></li>
</ol>
The $UPDSTAT function takes no arguments.      </p>
<p>
See information about the [[#AT-MOST-ONE.2C_REPEATABLE.2C_and_EXACTLY-ONE_attributes|AT-MOST-ONE and REPEATABLE attributes]].</p>
<b>UNIQUE and ORDERED fields</b>
<p>
If you attempt to define a UNIQUE key under any of these circumstances, you receive an error message and the field is not defined.</p>
   
   
===<b id="contentConstr"></b>Content constraints===
   
   
====Setting a pattern for a field value: the LIKE attribute====
<p>
The value stored for the field must be a date/time value that satisfies the constraint. For example:</p>
<p class="code">DEFINE FIELD BIRTHDATE WITH DATETIME-GE 18900101 DATETIME-LT 20100101</p>
<p>
The BIRTHDATE field is required to have a date/time value greater than or equal to January 1, 1890 and less than January 1, 2010. As with any other date/time values, the values can be extended with HH, HHMM, HHMMSS, HHMMSSX, HHMMSSXX, and so on.</p>
</p>
<p>
If this request follows:</p>
<p class="code">BEGIN
IN POLICIES STORE RECORD
</p>
<p>
The result is:</p>
<p class="output">A_CONCAT_TEST1 = 1ST+VALUE+
A_CONCAT_TEST2 = 2ND+VALUE+
A_CONCAT_TEST3 = 1ST/+VALUE/++2ND+VALUE+
<p>
This is the output:</p>
<p class="output">A_CONCAT_TEST1 = 1ST/VALUE+
A_CONCAT_TEST2 = 2ND+VALUE+
A_CONCAT_TEST3 = 1ST//VALUE/++2ND+VALUE+
</p>
<p>
If you do not want the separator or the escape character in the concatenated field, you can enforce this by assigning the value <code>CANCEL</code> as the escape character as follows:</p>
<p class="code">DEFINE FIELD A_CONCAT_TEST1 WITH CONCATENATION-OF -
             A_CONCAT_TEST1 WITH                  -
</p>
<p>
In that case, the concatenation is ignored, and the STORE RECORD results in the following:</p>
<p class="output">A_CONCAT_TEST1 = 1ST/VALUE+
A_CONCAT_TEST2 = 2ND+VALUE+
</p>
<p class="note"><b>Note:</b> There are no error messages associated with this condition.</p>
<p>
If you specify the ESCAPE CANCEL attribute for a field and the SEPARATOR character for that field is in one of the field values for the concatenated string, the update is cancelled. For an example, see [[Record retrievals#Limitations when using ESCAPE CANCEL|Limitations when using ESCAPE CANCEL]].</p>

===SEPARATOR (SEP) attribute===
<p>
If you define a field with the SEP attribute, you must supply a value. Otherwise, the following error is issued:</p>
<p class="code">DEFINE MAKE_MODEL WITH CONCATENATION-OF MAKE WITH MODEL -
AND ORD SEP ESC CANCEL
<b></b>*** 1 M204.0405: INVALID DEFINE OPTION: SEP ESC
</p>
<p>
If you define a field without the SEP attribute, it defaults to X'00', as shown in the following code:</p>
<p class="code">DEFINE MAKE_MODEL WITH CONCATENATION-OF MAKE WITH MODEL -
AND ORD ESC CANCEL
   
   
D FIELD (ABBREV) MAKE_MODEL
  (DEF NFRV NKEY NCOD STR NNR VIS UP ORD CHAR LRES 15 -
  NRES 15 SPLT 50 IMM 1 NUNIQ ONE CAT MAKE WITH MODEL AND -
  SEP X'00' ESC CANCEL)
</p>
<p>
The SEPARATOR character cannot match the ESCAPE character. The SEPARATOR character is always displayed on DISPLAY FIELD output.</p>
<p>
If the SEPARATOR NONE attribute is specified on a field definition, the ESCAPE attribute is not allowed. See [[Record retrievals#Limitations when using SEPARATOR NONE|Limitations when using SEPARATOR NONE]] on the "Record retrievals" page. </p>
   
   
===Defining concatenated fields===
<p>
Using the attributes described in the preceding subsections, you can define these concatenated fields: </p>
<p class="code">IN FILE REDSOX DEFINE FIELD COLMOD WITH CONCATENATION-OF COLOR -
WITH MODEL AND SEP C'-' ESC C'+'
SEP C'+' ORDERED INVISIBLE
</p>
<blockquote class="note">
<p><b>Note:</b> A field in a concatenated field value does not have a terminating separator set only if it is:</p>
<ul>
<li>Last field in the concatenation, or</li>
<li>Fixed length (LENGTH-EQ set) and the default value has the same length as LENGTH-EQ.</li>
</ul></blockquote>
   
   
==<b id="Automatic Fields"></b><b id="autoFlds"></b>Automatic fields==

<p class="note"><b>Note:</b> Automatic fields are available as of Model&nbsp;204 version 7.5, and they require the <var>[[FILEORG parameter|FILEORG]]</var> X'100' setting.</p>
<p>
<var class="product">Model&nbsp;204</var> lets you define a field whose value is automatically maintained. For example, a field might count occurrences of another field so that every store or delete of the field occurrence changes the count in the automatic field. An automatic field is defined with one of the following attributes: </p>
</ul>
<p>
The value of an automatic field is updated at the start of a transaction by <var class="product">Model&nbsp;204</var>, and you cannot set it explicitly by a program. Any valid update causes the appropriate time and user stamps to be updated, even, for example, a <code>DELETE FOO(8)</code> when there are no occurrences of <code>FOO</code> in the record.</p>
<p>
Once you define an automatic value for a field, you cannot redefine the automatic value.</p>
<p>
SOUL <var>Add</var>, <var>Insert</var>, <var>Delete</var>, and <var>Store Record</var> statements are rejected with a compile error if they reference an automatic field:
</p>
<p class="syntax">M204.2862: ATTEMPT TO UPDATE AUTOMATIC FIELD <span class="term">fieldname</span>
</p>
<p>
If a field name variable (<var>FNV</var>) used on an update statement resolves to an automatic field, the request is cancelled with the following error:
</p>
<p class="syntax">M204.2863: ATTEMPT TO DO <span class="term">updateOperation</span> FOR AUTOMATIC FIELD: <span class="term">fieldname</span></p>
   
   
On an update statement in group context, the [[M204.2863]] error is also issued if the field being updated is discovered to be an automatic field in one of the group's files at evaluation time.
   
   
===Automatic field context===
...</P>
<ul>
<li>Any update to any field in a record will automatically update <code>RECORD.UPDT.TIME</code>. </li>

<li>Any update to any field in an occurrence of the <code>ADDRESS</code> field group will cause the <code>ADDRESS.UPDT.TIME</code> for that occurrence to be updated. (And, of course, the record level <code>RECORD.UPDT.TIME</code> as well.) </li>
</ul>
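<p>
Definitions for tracking fields like the ones in this example might look something like the following (the field names and the choice of tracking attribute are illustrative):</p>
<p class="code">DEFINE FIELD RECORD.UPDT.TIME WITH UPDATE-TIME
DEFINE FIELD ADDRESS.UPDT.TIME WITH UPDATE-TIME FIELDGROUP ADDRESS
</p>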
   
   
===Rules for the definition of automatic fields===
   
   
====CHUNK attribute====
<p>
Fields defined with the CHUNK field attribute cannot be redefined or deleted.</p>
   
   
====COUNT-OCCURRENCES-OF (CTO) attribute====
<p>
The COUNT-OCCURRENCES-OF (CTO) attribute is not allowed for fields defined as REPEATABLE.</p>
Because one of the major advantages of CTO fields is to avoid unnecessary record scans, it is highly recommended that the field be preallocated. Otherwise the field is stored and updated in Table B in the normal way.</p>
<p>
See [[#cto|CTO fields]] for details on its use.</p>
   
   
====CREATE* and UPDATE* attributes====
<p>
The CREATE-TIME, CREATE-TIMEUTC, CREATE-USER, UPDATE-TIME, UPDATE-TIMEUTC, and UPDATE-USER automatic fields capture the add/update user/time as of the start of the transaction. Any of these attributes can be added to empty or non-empty fields (using a <var>REDEFINE</var> command). The value is maintained by <var class="product">Model&nbsp;204</var> processing, so you cannot change the value.</p>
<li>CREATE-USER</li>
</ol>

<div id="Counting occurrences of a field"></div>
===<b id="cto"></b>CTO fields: counting occurrences of a field===
<!--Caution: <div> above-->
 
<p>
To create an automatic count field, you use the COUNT-OCCURRENCES-OF (CTO) keyword on a field definition, followed by the name of the field or field group for which the count is automatically maintained. For example, if you have field group DRUG, the count field definition might look something like this:</p>
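<p class="code">DEFINE FIELD DRUGCT WITH COUNT-OCCURRENCES-OF DRUG
</p>
<p>
Because one of the main benefits of a CTO field is avoiding record scans, you would typically also preallocate it with an OCCURS clause, as recommended in the rules above.</p>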
<p>
The count can then be referenced like any other field value; for example:</p>
<p class="code">%DRUGCT = DRUGCT
</p>

====Automatic optimization with COUNT OCCURRENCES OF fields====
<p>
The use of CTO fields automatically optimizes many of the record scan processes:</p>
<li>[[Processing multiply occurring fields and field groups#FOR EACH OCCURRENCE OF loops|FOR EACH OCCURRENCE OF]] loops.
<p>
If a CTO field exists for the field or field group that is being looped over via an FEO or FAO statement, the value of that field limits the number of times the loop is processed, which reduces the need to scan to the end of record for every FEO or FAO statement. Note that when a record is updated or might be updated mid-loop, the normal <var class="product">Model&nbsp;204</var> concerns about subscript handling in these statements need to be considered.</p></li>

<li>FOR FIELDGROUP clause
<li>FOR FIELDGROUP clause
Line 1,998: Line 2,003:
In all cases, the greatest benefit from the CTO field optimization is when a record contains no occurrences of the requested field or field group. <var class="product">Model&nbsp;204</var> immediately determines that no occurrence is on the record and stops scanning the record.</p>
In all cases, the greatest benefit from the CTO field optimization is when a record contains no occurrences of the requested field or field group. <var class="product">Model&nbsp;204</var> immediately determines that no occurrence is on the record and stops scanning the record.</p>


<b>Defining an efficient CTO field</b>
====Defining an efficient CTO field====
<p>
<p>
The counter fields are always physically present in the record. Using a DEFAULT-VALUE of zero for such a field defeats its purpose as you need to scan the entire record to determine whether to use the default value. When you issue a STORE RECORD statement, all CTO fields are stored as 0 in the record. For non-OCCURS, fields are always the first variable length fields in the record.</p>
The counter fields are always physically present in the record. Using a DEFAULT-VALUE of zero for such a field defeats its purpose as you need to scan the entire record to determine whether to use the default value. When you issue a STORE RECORD statement, all CTO fields are stored as 0 in the record. For non-OCCURS, fields are always the first variable length fields in the record.</p>
Line 2,009: Line 2,014:
<p>
<p>
CTO fields can even be in field groups and zeros are stored for CTO fields when the ADD FIELDGROUP or INSERT FIELDGROUP statement is processed.</p>
CTO fields can even be in field groups and zeros are stored for CTO fields when the ADD FIELDGROUP or INSERT FIELDGROUP statement is processed.</p>
====Displaying CTO fields====
<p>
By default, the PRINT ALL INFORMATION (PAI), AUDIT ALL INFORMATION (AAI), PRINT ALL FIELD GROUP INFORMATION (PAFGI), and AUDIT ALL FIELDGROUP INFORMATION (AAFGI) statements do not display CTO (counter) fields. When you are using PAI [INTO] to reorganize your files or to copy records, the automatic fields are likely to cause problems.</p>
 
===Tracking updates using automatic fields===
<p>
Versions 7.5 and higher of <var class="product">Model&nbsp;204</var> can track the date/time when a record was created or updated, and what user ID created or updated it.
This tracking can be accomplished by using one of the following field tracking attributes: </p>
<table>
All of the automatic fields that contain date/time information also have the DATETIME attribute. </p>
<p>
You cannot update tracking fields with SOUL.</p>
<p>
A record or field group can contain only a single field of each update tracking type (CREATE-USER, UPDATE-USER, CREATE-TIME, UPDATE-TIME). An attempt to create a second one results in an error message.</p>
</p>
<p>
The <var>UPDATE</var> tracking fields for the records are updated when any field in the record changes and the create tracking fields are added to the record when it is initially stored. </p>
 
====Tracking fields for field groups====
<p>
Automatic tracking fields can also provide a date/time and user stamp of the creator and last update of each occurrence of a field group.</p>
<p>
If an <var>UPDATE-TIME</var> tracking field is defined as a part of field group <code>X</code> as shown below, then each occurrence of that field group will contain the tracking field:</p>
<p class="code">DEFINE FIELD MY_UPDATE_TIME WITH FIELDGROUP X AND ORD NUM UPDATE-TIME
</p>
 
====<b id="dtValues"></b>DATETIME (DT) field values====
 
This section explains the format and other considerations for fields with the <var>DATETIME</var> attribute, <b>whether they are automatic update tracking fields or not</b>.
 
A <var>DATETIME</var> value is a date of the format:
 
<p class="code">YYYYMMDDHHMISSXXXXXX</p>
 
Of such a 20-digit value:
<ul>
<li>The <b>date</b> portion (<code>YYYYMMDD</code>) is required; therefore the minimum length is 8 digits. It must specify a valid date. </li>
 
<li>The <b>time</b> portion (up to 12 digits) is optional; any of the leading portion (hours, minutes, seconds, and decimal fraction-of-seconds) can be specified. It must specify a valid time: if <code>HH</code>, <code>MI</code>, or <code>SS</code> is specified, they must each be two digits. Additionally, up to 6 decimal fraction-of-seconds digits may be specified (that is, up to 1 microsecond). </li>
</ul>
 
====Setting a date/time field====
<p>
The <var>DATETIME</var> field attribute indicates the format of the data stored in Table B.</p>
<p>
Once you define a field as <var>DATETIME</var>, you assign a string to it in <code>YYYYMMDDHHMMSSXXXXXX</code> format:</p>
<p class="code">IN FILE FOO DEFINE FIELD WHEN WITH DATETIME
...
BEGIN
   IN FILE FOO STORE RECORD
</p>
<p>
If you attempt to store the field with invalid data such as <code>WHEN = '72'</code>, the request is cancelled and you receive an error:</p>
<p class="code">***  1  CANCELLING REQUEST: M204.2865: ATTEMPT TO ADD INVALID DATETIME VALUE: WHEN = 72
</p>
<p>
As most timestamps do not require microseconds, you can specify 10ths, 100ths, 1,000ths, 10,000ths and 100,000ths of a second. The value is simply padded on the right with zeros. For date/time field <code>WHEN</code>, the following are all equivalent:</p>
<p class="code">ADD WHEN = '20090704134706000000'
ADD WHEN = '2009070413470600000'
</p>
 
====Storing dates or date/times efficiently====
<p>
<var>DATETIME</var> fields use seven bytes to store their value (plus four bytes overhead, if the field is not preallocated). A binary representation is used to store the 20-character full date format so that it can be held in seven bytes. </p>
<p>
The shortest valid format for a field with the <var>DATETIME</var> attribute is <code>YYYYMMDD</code>. If the values you are storing are simple dates, they can be stored in four bytes as binary values, three fewer bytes per value than a date/time field. However, for anything with a time in it (even just hours), a <var>DATETIME</var> field takes less space than the alternative field types (<var>FLOAT</var> or <var>STRING</var>).</p>
 
====Automatic validation of date/times====
<p>
In addition to the compact storage of date/time stamps, <var>DATETIME</var> fields also provide automatic validation of the values you attempt to store in them: you cannot store numbers into them that are not valid dates. For example, you cannot store <code>'9999111'</code> into a <var>DATETIME</var> field, nor can you store <code>'20070931'</code> (because September does not have 31 days), nor can you store <code>'20070229'</code> (because 2007 is not a leap year, so February only has 28 days).</p>
<p>
Additional <var>DATETIME</var>-formatted fields also provide for [[Field design#Setting a date/time value constraint|field content validation]] for date/time fields. </p>
 
<p class="note"><b>Note:</b> It is possible to set a <var>DEFAULT-VALUE</var> for a <var>DATETIME</var> field to an invalid date/time value, such as an underscore (<code>_</code>). That value will not be stored in the database, but it would be the value returned for a reference to a missing <var>EXACTLY-ONE</var> <var>DATETIME</var> field value.</p>
<p>
The value returned for <var>DATETIME</var> fields is always in this format:</p>
<p class="code">YYYYMMDDHHMMSSXXXXXX
</p>
<p>
If you are interested in only part of the time stamp, you can use either of these: </p>
<ul>
<li>The <var>$Substr</var> function, to get the part you want. </li>
 
<li>A %variable, specified to be the length of the part of the time stamp you are interested in. 
So, to use <code>%when</code> to hold a date/time in <code>YYYYMMDDHHMMSS</code> format: </p>
<p class="code">%when is string len 14 
... 
%when = WHEN
</p></li>
</ul>
<p>
<var>DATETIME</var> values can be indexed as <var>ORDERED CHARACTER</var> or <var>ORDERED NUMERIC</var>. If indexed as <var>ORDERED NUMERIC</var>, only the first 15 digits (<code>YYYYMMDDHHMMSSX</code>) are indexed. If <var>ORDERED CHARACTER</var>, it is indexed as the string representation of the date/time value in the full <code>YYYYMMDDHHMMSSXXXXXX</code> format.</p>
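<p>
For example, a minimal sketch of a date/time field indexed for range retrievals (reusing the <code>WHEN</code> field from the earlier examples):</p>
<p class="code">DEFINE FIELD WHEN WITH DATETIME ORDERED NUMERIC
</p>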
<p class="note"><b>Note:</b> Making <var>DATETIME</var> fields <var>KEY</var> fields is not allowed.</p>
 
====Automatic update tracking field considerations====


=====Automatic DATETIME field values=====
<p>
Automatic <var>DATETIME</var> fields are set to the date/time of the start of the update processing, so all records, fields, and field groups updated by a single update transaction get the same time stamp. Furthermore, the code that starts an update unit ensures that the date/time stamp of each update unit is unique.</p>
<p>
The <code>M204.0173</code> and <code>M204.0172</code> messages include the time of the update as follows:</p>
<p class="code">M204.0173: START OF UPDATE 12 AT 10:11:00.51 
</p>
<p>
and:</p>
<p class="code">M204.0172: END OF UPDATE 12 AT 10:11:00.56 
</p>
 
====Defining various UPDATE-USER fields====
<p>
To indicate whether an update tracking field applies to a record or field group, specify or do not specify the containing field group on the field definition. For example:</p>
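<p>
A minimal sketch of the record-level form, inferred from the <code>WHO</code> field described just below:</p>
<p class="code">DEFINE FIELD WHO WITH UPDATE-USER
</p>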
<p>
The <code>WHO</code> field contains the user ID of the last user that updated anything in the record. The following example illustrates defining an <var>UPDATE-USER</var> field in a field group:</p>
<p class="code">DEFINE FIELD DR.WHO WITH UPDATE-USER FIELDGROUP DR
</p>
<p>
The <code>DR.WHO</code> field contains the user ID of the last user that updated an occurrence of field group <code>DR</code>.</p>
 
====Defining various UPDATE-TIME fields====
<p>
You can specify an update tracking field as <code>FIELDGROUP *</code>, which means that all field groups and the primary record use the same update tracking field. For example:</p>
<p class="code">DEFINE FIELD WHEN WITH UPDATE-TIME FIELDGROUP *
</p>
<p>
The <code>WHEN</code> field is placed in the primary record and each field group to track the last update time. If a file has a <code>FIELDGROUP *</code> update tracking field, and a field group or record has a different comparable update tracking field, the field group or the record-specific tracking field applies to the field. For example:</p>
<p class="code">DEFINE FIELD WHEN WITH UPDATE-TIME FIELDGROUP *
   
</p>
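<p>
The record-specific and field group-specific definitions referenced in the list below might be sketched as follows (the field names are taken from the list itself):</p>
<p class="code">DEFINE FIELD MAIN.WHEN WITH UPDATE-TIME
DEFINE FIELD FOO.WHEN WITH UPDATE-TIME FIELDGROUP FOO
</p>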
<ul>
<li>The field <code>WHEN</code> would be updated for changes to any field group occurrences in field groups that do not have their own <var>UPDATE-TIME</var> field defined.</li>
 
<li>The field <code>MAIN.WHEN</code> would be set for any update to any field in the record.</li>
 
<li>The field <code>FOO.WHEN</code> would be set in field group <code>FOO</code> for any update to a field group occurrence.</li>
</ul>
<p>
The <var>UPDATE-TIME</var> field in a field group that has a nested field group will be updated when the nested field group is updated, even though no fields in the containing field group were updated.</p>
 
===System-wide approach to adding a date-time stamp field to your files=== 
<p>
You can establish a single date-time stamp field with the <var>[[DTSFN parameter|DTSFN]]</var> parameter. The <var>DTSFN</var> field is used in all files in your Online that have the X'10' bit set in their <var>[[FOPT parameter|FOPT]]</var> parameter. The date-time stamp feature puts a date-time stamp on each record that was updated in a transaction. You can then use the data in the <var>DTSFN</var> field in a user-written application to find and process all the rows of data that were changed.</p>
<p>
An application that processes these records can track the <var class="term">date-time stamp</var> field values that have been processed to date, or it might take another action, including updating the date-time stamp field.</p>
<p>
To install and use this functionality, see [[Setting up the date-time stamp field]]. Once installed, the date-time stamp feature works only in files created in V6R1 or later. See also [[Adding a date-time stamp field]].</p>
   
<p class="note"><b>Note:</b> There is no support in [[PQO|PQO/204]] for the date-time stamp feature.</p>
   
====Difference between the system-wide date-time stamp field and an UPDATE-TIME field====
Besides the obvious difference that the date-time stamp field will (and must) have the same name in every file in which it is used, the following are the notable differences between the two approaches:</p>
<ul> 
<li>The <var>UPDATE-TIME</var> attribute can be set to track updates to physical field groups, as well as at the record level. </li>
 
<li>The <var>UPDATE-TIME</var> attribute captures the time at the start of an update unit. The date-time stamp field captures the time at the commit point. </li>
 
<li>The date-time stamp field updates can be [[Adding a date-time stamp field#Suspending date-time stamp updates|temporarily suspended]]. </li>
</ul>
 
===FIELDGROUP attribute=== 
<p>
There is no default; the attribute can be abbreviated as <var>FG</var>. <var>FIELDGROUP</var> is available as of Model&nbsp;204 version 7.5.</p> 
<p>
The <var>FIELDGROUP</var> attribute specifies the name of the field group that the defined field is associated with (contained in). Once you define a <var>FIELDGROUP</var> value for a field, you cannot redefine the <var>FIELDGROUP</var> value.</p> 
<p>
The <var>FIELDGROUP</var> attribute cannot refer to a field group name that has not yet been defined to the file. The <var>[[DEFINE FIELDGROUP command|DEFINE FIELDGROUP]]</var> command must occur before the <var>[[DEFINE FIELD command|DEFINE FIELD]]</var> command.</p>
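<p>
For example, a minimal sketch of the required ordering (the field group DRUG appears earlier on this page; the field name DOSAGE is assumed for illustration):</p>
<p class="code">DEFINE FIELDGROUP DRUG
DEFINE FIELD DOSAGE WITH FIELDGROUP DRUG
</p>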
The <var>FIELDGROUP</var> attribute does not allow: 
<ul>
<li>Record security </li>
<li>Use for <var>SORT</var> or <var>HASH</var> file </li>
<li>1NF file model </li>
</ul>
 
The <var>FIELDGROUP</var> attribute can be used with the <var>STORE-NULL LITERAL</var> attribute.
   
===Using FIELDGROUP * ===
<p>
The <var>FIELDGROUP *</var> attribute means that the field will be included into all field groups. The <var>EXACTLY-ONE</var> attribute conflicts with the <var>FIELDGROUP *</var> attribute.</p>
 
<div id="Defining a LEVEL clause"></div>
==<b id="LEVEL attribute"></b>LEVEL attribute - prevent unauthorized field usage==
To secure a field against unauthorized access, include a LEVEL clause in the field's description. [[Model 204 security features#Field-level security|Field-level security]] discusses field-level security and describes the types of field access: SELECT, READ, UPDATE, and ADD.
<p>
Field-level security has a negligible impact on both performance and storage usage.</p>
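<p>
A minimal sketch of such a definition (the field name SALARY and the level number are arbitrary illustrations; see the field-level security page for how level numbers map to user access):</p>
<p class="code">DEFINE FIELD SALARY WITH LEVEL 3
</p>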


Overview

The field structures in Model 204 are at the heart of providing applications which perform the functions that you require in the most efficient manner possible.

This page discusses exactly how the data structures should be defined.

Designing records, field groups, and fields

When setting up a Model 204 file, you must first decide what fields the records will contain. A record in a personnel file, for instance, might contain a social security number field, a last name field, a spouse's name field, and so on.

Variable record formats

The record in Model 204 is a very loose concept. Physically, it is simply a collection of fields (data items).

Any field in the Table A dictionary can appear in any record in the file. It is quite possible that you create a file with many different logical record types with only a few (or even none) appearing in all of the records.

Thus, each record is not required to have a value for every field, so record lengths are variable, and it can be argued that the entire concept of a record "format" does not apply to Model 204 files. If an employee's address is unknown or doesn't exist in a particular logical record, for example, the address field is left out of the record altogether. The exception to this is preallocated fields: if they are defined for a file, space is reserved in each record for all such fields, even if they would never exist for that logical record.

Multiply occurring fields

Most types of fields can be multiply occurring, that is, have more than one value per record. For example, the CHILD field can appear no times, once, or several times in one record.

Field descriptions and attributes

When you describe a field in a Model 204 file with the DEFINE FIELD or REDEFINE FIELD command, you specify a field description that consists of the field name followed by a list of field attributes. These attributes determine how a field can be accessed and how it is stored internally. This section describes the attributes supported for Model 204 fields.

Within a record, there is usually no restriction on the number of different fields that can have the same attribute (the exception being the key fields for Sorted and Hash Key files). For example, several fields within the same record can have ORD CHAR indices built for them.

Attribute assignments

The attribute assignments apply whether the field was defined separately or as part of a field group.

If you do not assign a particular attribute to a field, the default attribute is assigned. For example, if you do not assign the KEY attribute to a field definition, the field is assigned NON-KEY.

If a required attribute is not assigned a value, the default value is used. For example, if you define a CONCATENATION-OF field, which requires the SEPARATOR attribute, and you do not enter the SEPARATOR attribute, the default value of X'00' is used as the SEPARATOR value.

Multiply occurring fields

For most field attributes there is almost no restriction on the number of times a particular field can occur within a record. This makes file/record design very flexible. If you have a record which has child details as repeating fields or groups, for example, any number can be supported.

Singly occurring fields

There are a few attributes which require a field to have no more than one occurrence per record:

  • NUMERIC RANGE
  • Sort key fields
  • Hash key fields

Others require a field to have no more than one occurrence per record or field group (depending on the context in which they are defined):

  • AT-MOST-ONE
  • EXACTLY-ONE

Indexing fields

Model 204 files can become quite large, and so it is almost always preferable to directly access the record or records which have particular characteristics. As part of your design process, you should have identified which field(s) are likely to be used for direct retrieval of sets of records.

For example, if you have a personnel file, you are likely to have a unique identifier for each employee, and it is also likely that you will often want to process only the record or records with that ID.

Where retrievals are attempted against non-indexed fields, a direct search of Table B must be done, and there is a danger that the request will be cancelled because MBSCAN is exceeded.

Model 204 supports a few different ways to implement these direct searches, and the recommended default choice for indexing is the ORDERED attribute.

ORDERED and NON-ORDERED attributes

Fields used in retrieval specifications should normally be defined as ORDERED (unless one of the other indexing methods, described below, is chosen). NON-ORDERED, the default, means that all retrieval requests need to scan the records directly in Table B to find the selected records.

For the architectural approach to the implementation of the ordered index, refer to understanding the Ordered Index.

Choosing an order type

An ORDERED field can have an index with either CHARACTER or NUMERIC ordering. This is called its order type. The field's order type and updating characteristics are declared with the DEFINE command. You can specify only one order type for a field. ORDERED NUMERIC fields contain numeric values stored as standardized floating-point numbers in one section of the index and non-numeric values in EBCDIC order in another section.

ORDERED CHARACTER fields

ORDERED CHARACTER fields use the standard EBCDIC collating sequence. If a field is ORDERED CHARACTER, all values of the field are stored in the Ordered Index in EBCDIC collating sequence order.

ORDERED NUMERIC fields

A string field value is considered NUMERIC if it consists of digits, with one optional decimal point, and an optional plus or minus sign at the beginning of the string. The value of the number cannot be greater than 10 to the power 75 or less than 10 to the power -75.

If an ORDERED NUMERIC field is also FLOAT, exponential format is allowed. Values for BINARY, FLOAT (8), and FLOAT (16) fields are rounded to 15 significant digits. Values for FLOAT (4) fields are rounded to 6 significant digits.

An invalid value is stored as an unconverted string.

In ORDERED NUMERIC fields, leading zeros are ignored. For example, 001 and 1 are considered equivalent and are maintained in the ORDERED NUMERIC portion of the B-Tree as the same value. For fields that require a meaningful leading zero (such as zip code), use ORDERED CHARACTER.

Values of an ORDERED NUMERIC field that do not satisfy the definition of NUMERIC are stored in a separate section of the Ordered Index and processed in EBCDIC order. During FOR EACH VALUE and FOR EACH RECORD IN ORDER BY FIELD processing of an ORDERED NUMERIC field, these values are processed after the valid NUMERIC values unless you restrict the range of values.

Default order types

If a field is defined as ORDERED without specifying which type of ordering to create, a default order type is used. The default depends on whether the field is STRING, BINARY, or FLOAT:

For...               The default order type is...
STRING               ORDERED CHARACTER
BINARY or FLOAT      ORDERED NUMERIC

ORDERED field definition

The ORDERED attribute has options that allow you to determine the type of ordering the Ordered Index is to have, as well as some of the characteristics of the Ordered Index tree as it expands or is rebuilt. These options are described in the discussions of order type and Ordered Index tree structure that follow.

Defining ORDERED field attributes

The DEFINE command is used to specify a field's attributes. The format of the DEFINE command with the ORDERED attribute options is:

Syntax

DEFINE FIELD fieldname WITH ORDERED [CHARACTER | NUMERIC] [SPLITPCT s] [IMMED i] [LRESERVE r] [NRESERVE n]

Note: The default designation for a field is NON-ORDERED. Ordered Index tree parameters do not apply for a NON-ORDERED field.

CHARACTER and NUMERIC order types are described above.

SPLITPCT, IMMED, LRESERVE, and NRESERVE parameters are described in Ordered Index spacing parameters.

Redefining ORDERED fields

To change the type of ordering of an ORDERED field, use the REDEFINE command followed by the new ordering type, thereby deleting the old type. To change the spacing characteristics, specify the current parameter (LRESERVE, NRESERVE, SPLITPCT, or IMMED) with a new value. This takes effect only for fields added after the change.
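For example, a sketch of switching an existing field's order type with the REDEFINE FIELD command (the field name AMOUNT is just an illustration):

REDEFINE FIELD AMOUNT WITH ORDERED NUMERIC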

Choosing between ORDERED and NUMERIC RANGE

The ORDERED attribute has several advantages over the NUMERIC RANGE attribute, and as such should be the normal choice. Specifically:

  • Ordered Index allows for multiple occurrences of the same numeric field in a record. The NUMERIC RANGE attribute issues an error message whenever an attempt is made to store more than one occurrence of the same NUMERIC RANGE field in the same record.
  • Cost of updating an ORDERED field is lower than that of a NUMERIC RANGE field. The NUMERIC RANGE field requires multiple attributes to be maintained to describe the characteristic of a number. The cost of an update to the Ordered Index is always proportional to the depth of the tree.
  • In most cases, especially with large numbers, the Ordered Index uses less space to maintain the index information.
  • Ordered Index allows for more accurate range requests, with numbers representable by IBM double-word floating-point numbers.
Range retrievals

Although range retrievals are usually faster for ORDERED fields than for NUMERIC RANGE fields, the size of the range interval and the density of the distribution of values over that range are the principal determining factors.

In the past, NUMERIC RANGE retrievals have been faster for very large ranges with densely distributed data values.

CHUNK attribute

However, as of Model 204 version 7.5, you can improve the efficiency of data range retrieval on ORDERED NUMERIC fields by defining the fields with the CHUNK attribute.

The CHUNK attribute defines a subrange ("OI chunk") of the Ordered Index data range, so the desired data can be found with fewer scans of the Ordered Index entries.

CHUNK fields are automatic fields and therefore require that FILEORG X'100' be set for the file.

The CHUNK attribute is followed by a number, defining the size of the OI chunk, and then FOR chunkTargetFieldName, where chunkTargetFieldName is the existing ORDERED NUMERIC field from which you want to retrieve data in OI chunks.

So, to illustrate:

DEFINE FIELD YYYYMMDD WITH ORDERED NUMERIC
DEFINE FIELD YYYYMM WITH ORDERED NUMERIC INVISIBLE CHUNK 100 FOR YYYYMMDD

A CHUNK field is automatically maintained based on updates to the target (FOR) field (here, YYYYMMDD). The stored value is the value of the target field rounded down to the CHUNK size. So, for example, if you stored a value of 20121225 into YYYYMMDD, the underlying code would automatically store the value 20121200 into YYYYMM. Similarly, if you stored 20121203 into YYYYMMDD, a 20121200 value would also be stored into YYYYMM.

When doing a range find, Model 204 decomposes the find into range finds based on the CHUNK field and the target field. So, for example, a find for values between 20121211 and 20130205 would be decomposed into a find for:

  • YYYYMMDD between 20121211 and 20121299 (ignoring for the moment issues with non-integer values and demonstrating that CHUNK doesn't really know or care if the field is a date)
  • OR YYYYMM between 20130100 and 20130100 (or more precisely, exactly equal to 20130100)
  • OR YYYYMMDD between 20130200 and 20130205

This reduces the number of scanned ordered index entries by a factor of 2. Results would often be much better than that. For example, it probably would not be atypical for a date range to land on month boundaries, resulting in improvements closer to a factor of 30. And, as the range got bigger and bigger, more and more of the range processing would occur on the CHUNK field, with additional improvement in processing.

Multiple CHUNK fields

To get even better performance, you could define multiple CHUNK fields for a target field:

DEFINE FIELD YYYYMMDD WITH ORDERED NUMERIC
DEFINE FIELD YYYYMMD WITH INVISIBLE ORDERED NUMERIC CHUNK 10 FOR YYYYMMDD
DEFINE FIELD YYYYMM WITH INVISIBLE ORDERED NUMERIC CHUNK 100 FOR YYYYMMDD
DEFINE FIELD YYYY WITH INVISIBLE ORDERED NUMERIC CHUNK 10000 FOR YYYYMMDD

The OI chunk size for a CHUNK field must be a greater-than-1 integral multiple of all smaller OI chunk sizes for the same target field. For example, specifying "7" as the first CHUNK size in the above example would be invalid because 100 is not an integral multiple of 7.

Note that you could reference the CHUNK fields directly in non-updating statements, as shown in this example:

fd to %recset
   yyyy eq %year
end find

In any case, the number and size of the CHUNK fields allows you to trade off the overhead of doing updates against the efficiency of doing range finds.

Choosing between ORDERED and FRV

The ORDERED attribute is usually a better alternative to the FRV attribute. Although FRV fields with small numbers of values might perform faster in value loop processing, they lose this advantage if the found values are required to be in order, and as the number of values increases. FRV ordered retrievals require additional sorting. For retrievals in ascending or descending order, ORDERED fields are more efficient. For (presumably rare) right-adjusted character strings in order, ORDERED fields require additional sorting.

Compatibility of ORDERED with other field attributes

Use the following guidelines when specifying the ORDERED attribute:

  • A field cannot be both ORDERED and FRV.
  • A field cannot be both ORDERED NUMERIC and NUMERIC RANGE.
  • A field can be both KEY and ORDERED (but serves little or no purpose).
  • A field can be both INVISIBLE and ORDERED.
  • A field can be both DEFERRABLE and ORDERED. If the field is DEFERRABLE, the index updates can be written out to a deferred update data set (see DEFERRABLE and NON-DEFERRABLE attributes for details).
  • A UNIQUE field must be ORDERED.
  • Hash and sort key fields and record security fields can be ORDERED.
  • Fields which are defined as KEY can be converted to ORD with no code changes required (as direct retrievals will work with either type of ORD field).

For an architectural view of the ordered index, see the Ordered Index.

KEY and NON-KEY attributes

The KEY and NON-KEY attributes govern whether a hash index is created for a field. If KEY is specified, an index is created for the field, and retrieval requests access the index for that field. If NON-KEY is specified, every record in the table is examined. NON-KEY is the default. This section provides information about space and performance for each of these attributes.

KEY attribute

When storing fields that have the KEY attribute, Model 204 makes special entries in the hash index. During retrieval, Model 204 goes directly to the appropriate index entry to find which records satisfy the selection criteria, without searching through other records in the file. This makes retrieval based on KEY fields extremely fast.

However, these index entries must be updated when User Language statements or Host Language Interface functions add, change, or delete fields. Processing might take slightly longer for KEY fields than for NON-KEY fields. In addition, the space occupied by the index entries can be considerable, especially in large files and in fields that take on many values.
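A minimal sketch of a KEY field definition (the field name SSN, suggested by the personnel-file example earlier on this page, is just an illustration):

DEFINE FIELD SSN WITH KEY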

NON-KEY attribute

No index entries are maintained in Tables C and D for NON-KEY fields. When a retrieval is performed based on the value of NON-KEY fields, the data in Table B must be searched directly. The direct search can have a significantly adverse effect on performance, if a large number of records must be searched.

However, the cost of the direct search can be greatly reduced in some cases where both KEY and NON-KEY fields are specified in the retrieval conditions. Model 204 reduces the number of records to be searched directly by performing the indexed selection first. Records that are definitely eliminated, based on the KEY conditions, are not searched directly.

Choosing between KEY and NON-KEY

Keep the following guidelines in mind when deciding to designate a field as KEY or NON-KEY:

  • If space is not an overriding consideration, designate fields that are used frequently for retrievals of equality as KEY.
  • If a field is updated frequently and only occasionally used for retrieval, define it as NON-KEY.
  • If a field is used frequently for retrievals and updated frequently, the choice is less obvious. If you reserve sufficient space in Tables C and D for adding inversion entries, the field initially can be defined as NON-KEY and changed later with the REDEFINE command.

FOR EACH VALUE (FRV) and NON-FRV attributes

The FRV attribute is useful when creating fields used in value loop processing. If the FRV attribute is not specified when a field is defined, the NON-FRV attribute is assumed. If this attribute is set, the field is usually assigned the KEY attribute as well.

Table space for FRV fields

Every value of an FRV field is automatically encoded in Table A, whether or not the value is stored in Table B (that is, INVISIBLE). In addition to more Table A space, FRV fields require slightly more Table C and Table D space than ordinary key fields. Because of these additional space requirements, the FRV attribute is most frequently defined for fields in which the number of different values does not exceed approximately 200.

Using the FRV attribute

The FRV attribute behaves in a special way when combined with the BINARY attribute. This is discussed in BINARY attribute.

Fields cannot be defined with the FRV attribute in First-Normal Form (1NF) files.

FRV attribute, FRV User Language statement

Value loops

User Language provides a special construction called a value loop that executes a set of statements once for each different value of an indicated field. The loop is initiated by the FOR EACH VALUE statement; the field must have the FRV or ORDERED attribute. The FOR EACH VALUE statement has Host Language equivalents in IFFDV and IFGETV. A value loop is illustrated in the following example:

Example

EACH.STATE: FOR EACH VALUE OF STATE
GET.RECS:      FIND ALL RECORDS FOR WHICH
                  STATE = VALUE IN EACH.STATE
               END FIND
TOT.RECS:      COUNT RECORDS IN GET.RECS
PRINT.CT:      PRINT VALUE IN EACH.STATE -
                  WITH COUNT IN TOT.RECS AT COLUMN 20
            END FOR

On the first pass through the loop, a value of STATE, such as DELAWARE, is selected. The last statement prints DELAWARE and the number of records with the field-name-equals-value pair: STATE = DELAWARE. On the next pass, another value is chosen, such as IOWA, and so on until all values of the STATE field have been processed exactly once.

Using the IN ORDER option

The IN ORDER option of the FOR EACH VALUE statement specifies that the values of an FRV KEY field be presented in an ordered sequence: ascending, descending, numerical, or right-adjusted. This option improves performance by avoiding Online sorts of a large number of records, because only the values are sorted instead of a large found set of records.

Value loop processing for ORDERED fields

You can also use value loop processing on fields with the ORDERED attribute. When the FOR EACH VALUE statement with the IN ORDER option is used for NON-FRV, ORDERED fields, ORDERED retrievals can be significantly faster than FRV ordered retrievals. See ORDERED and NON-ORDERED attributes for a description of the ORDERED attribute.

Value loop processing can be made more flexible and efficient by using pattern matching.

NUMERIC RANGE and NON-RANGE attributes

Prior to the introduction of ORDered NUMeric fields, Model 204 permitted range retrieval (a field value numerically equal to, less than, greater than, or between certain values) with the NUMERIC RANGE attribute. This attribute indexes the component parts of a numeric value and then joins them, to provide rapid numeric range retrievals without a direct search of the records.

If the NUMERIC RANGE attribute is not specified when a field is defined, the NON-RANGE attribute is assumed.

It is recommended that, for any new fields being defined, the ORDERED NUMERIC attribute be used in preference to NUMERIC RANGE.

The NUMERIC RANGE attribute cannot be set where the product (ASTRPPG * ATRPG) is greater than 4000 (available in FILEORG X'100' files).

NUMERIC RANGE retrieval specifications

The following retrieval specifications, with their symbolic operator equivalents in parentheses, can be used:

fieldname IS value
fieldname IS EQ (=) value
fieldname IS LESS THAN (LT or <) value
fieldname IS GREATER THAN (GT or >) value
fieldname IS BETWEEN (GT or >) value AND (LT or <) value
fieldname IS GREATER THAN (GT or >) value AND LESS THAN (LT or <) value
fieldname IS LESS THAN (LT or <) value AND GREATER THAN (GT or >) value

Examples

The following examples show numeric retrieval specifications:

ALL: FIND ALL RECORDS FOR WHICH
        AMOUNT IS 53.00
        DATE IS BETWEEN 050500 AND 050532
     END FIND

These specifications retrieve records for which AMOUNT has a value of 53.00 (or 53 or 053, and so on) and DATE has a value in May 2005.

NUMERIC RANGE retrieval fields

The values of NUMERIC RANGE retrieval fields can be either positive or negative numbers that have as many as ten digits on either side of the decimal point (20 digits maximum). When such fields are stored, Model 204 makes many entries in Tables C and D as well as some special entries in the field name (attribute) section of Table A. These entries are quite complex, and NUMERIC RANGE retrieval fields require both more storage space and more time to store and update than do KEY fields.

In deciding whether to define a field as NUMERIC RANGE or NON-RANGE (and NON-ORDERED), use the same choice criteria described in KEY and NON-KEY attributes, keeping in mind the additional overhead caused by the extra index entries for RANGE fields.

Interpreting nonnumeric values

If a NUMERIC RANGE field is stored with a value that Model 204 cannot interpret as a number, the system stores a special index entry for the record. The field name of this special entry consists of the NUMERIC RANGE retrieval field name with ++ appended. The entry has a value of NON NUMERICAL. Thus, the following User Language retrieval retrieves all records with nonnumerical values of AGE:

ALL: FIND ALL RECORDS FOR WHICH
        AGE++ = NON NUMERICAL
     END FIND

This can be useful in locating bad input data.

Compatibility with other attributes

You can specify as many NUMERIC RANGE retrieval fields in each record as you need. However, a particular NUMERIC RANGE retrieval field cannot be multiply occurring within any one record.

You cannot specify the NUMERIC RANGE attribute in the following circumstances:

  • With any field defined as FLOAT
  • For any field in a 1NF file

For a table showing invalid attribute combinations, see the DEFINE FIELD command. If you are defining field attributes with FILEMGMT, you will not be allowed to specify conflicting attributes.

Data typing

The following attributes are described individually in the next sections. They represent the way the field is physically stored. Only one of these can be assigned to any field.

  • STRING
  • BINARY
  • BLOB, CLOB, MINLOBE
  • FLOAT
  • DATETIME
  • UTF-8 or UTF-16
  • VISIBLE, INVISIBLE

STRING attribute

STRING is the default attribute for all fields. If your data is character or alphanumeric, such as names and addresses, specify a data type of STRING. STRING fields are also useful if you plan to store mixed data types in an image.

The length of a STRING field is limited to 255 bytes, because the first byte of a STRING data field is a count byte: that is, a byte that contains the length of the rest of the string.

BINARY attribute

Fields whose values are numeric integers can alternatively be compressed and stored as 4-byte binary numbers with the BINARY attribute. Specifying BINARY saves Table B space if the average value of the field is from 4-9 digits long.

Acceptable BINARY values

To be compressed successfully, the value must be 1-9 digits long and can contain a minus sign. It cannot contain a plus sign, a decimal point, or leading zeros. If the value cannot be compressed, Model 204 refers to the CODED/NON-CODED field attribute (which the BINARY attribute usually overrides) to decide how to store the value.

BINARY and FRV fields

The BINARY attribute overrides the FRV attribute, except for values that cannot be compressed. If the field cannot be compressed, it will be stored in Table A in either the FEW-VALUED or MANY-VALUED pages depending on the field definition. For instance, AGE has been defined with the BINARY and FRV attributes and takes on the values:

23, 04, -34578, 935., 20, +8754, and TWO

The statement FOR EACH VALUE OF AGE retrieves only the values:

04, 935., +8754 and TWO

If BINARY is not specified when a field is defined, the STRING attribute is assumed.

BINARY fields and NUMERIC VALIDATION files

If a BINARY field is defined in a NUMERIC VALIDATION file model file (FILEMODL = X'01'), Model 204 provides numeric data type validation. If you try to store non-numeric or other non-conforming (that is, non-compressible) data in a BINARY field in a NUMERIC VALIDATION file, Model 204 cancels the request with the following message:

M204.2123: VALUE SPECIFIED VIOLATES BINARY DATA TYPE VALIDATION IN RECORD recno,
FIELD IGNORED: fieldname = value

BLOB, CLOB, and MINLOBE attributes

The BLOB and CLOB attributes describe Large Object data that can be used to store long strings in the database, for example, long XML strings. This means that Model 204 User Language threads can exchange Large Object data with a partner process by way of the Universal Buffer and/or the MQ/204 Interface. The Universal Buffer is a one-per-user, temporary storage area that, like the MQ buffer, automatically expands to accommodate its data contents.

  • With the Universal Buffer, Large Object data can be sent and received using SOUL READ IMAGE and WRITE IMAGE statements, or $SOCKET READ and WRITE calls.
  • With the MQ/204 Interface, Large Object data can be sent and received by MQPUT and MQGET statements, stored and retrieved from the database with SOUL STORE, ADD, and assignment statements using the BUFFER reserved area name.

Note: Files created in V6R1 and later are not compatible with earlier versions of Model 204, because of Large Object support and the concomitant changes to the FPL page.

Introducing Large Object field types

The Large Object field types include the following Character Large Object (CLOB) and Binary Large Object (BLOB) attributes.

  • CLOBs are character strings.
  • BLOBs are binary strings. No translation is performed when the object is sent or received.

Note: Because Large Object fields require that Table E be present in a file, you cannot simply add a Large Object field to an existing file created in a pre-6.1 version of Model 204. You would need to rebuild the existing file by downloading the data and procedures, recreating the file to establish Table E, defining the Large Object fields you want to incorporate, and loading the data.

Limitations of Large Object support

  • Currently there is a limit of 2,147,483,647 bytes (or two gigabytes) of data per Large Object.
  • For CLOBs the translation of ASCII to EBCDIC and back is handled by the design of your application.
  • For BLOBs no translation is performed.
  • The User Language INSERT statement is not supported.

Handling Large Object data

The BLOB and CLOB attributes are supported as follows:

  • You can create, open, and recover a file with a BLOB or CLOB attribute. The Large Object field can be multiply occurring.
  • The DELETE fieldname statement supports Large Object data. This statement frees the Table B and Table E data.
  • You can store and add Large Object data to the file you have created from the Universal Buffer, BUFFER. Using an assignment statement you can transfer Large Object data from a field into the Universal Buffer.
  • You can set the number of Table E pages during file creation with ESIZE=n.
  • You can view the EHIGHPG parameter to determine the highest number of Table E pages in use.
  • You can modify the size of Table E with an INCREASE or DECREASE command.
  • The DELETE FIELD fieldname command supports the Large Object field type, so you can delete a Large Object field name. However, the space in Table B and Table E is not automatically available for reuse.
  • The $LobLen(lob-field-name) function is available for your use (a usage sketch follows this list).
  • Dictionary/204 and the Cross-Reference facility support Large Object field types.
  • NULL Large Object data is supported.
  • Transaction back out is supported for Large Object field types.
  • Field-level security is supported.
  • A string field cannot be converted to a CLOB via PAI unload followed by FLOD/FILELOAD.
    • The new CLOB field must be populated, from the STRING field, using a SOUL program and the universal buffer.
    • Or, if using Fast/Unload, a STRING field can be converted to a CLOB during UAI/LAI. See NEW statement option for Lobs for more details.
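
A minimal usage sketch of $LobLen inside a record loop (the field names DOCTYPE and LOBDOC are assumed for illustration):

GET.DOCS: FIND ALL RECORDS FOR WHICH
             DOCTYPE = XML
          END FIND
LOOP:     FOR EACH RECORD IN GET.DOCS
             PRINT $LobLen(LOBDOC)
          END FOR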

BLOB and CLOB compatibility with other attributes

See the Field attribute combinations page for the attributes that are not allowed with BLOB or CLOB. Use of conflicting attributes produces the following message:

M204.0411: Conflicting attributes: ([BLOB | CLOB]) and attribute2

Using the MINLOBE (MLBE) attribute

Note: The MINLOBE attribute is available as of Model 204 version 7.5.

The MINLOBE attribute defines the minimum size of a BLOB or CLOB field value that will be stored in Table E.

You can use the MINLOBE attribute only with a BINARY LARGE OBJECT (BLOB) or a CHARACTER LARGE OBJECT (CLOB) field in a FILEORG X'100' file. The MINLOBE attribute cannot be paired with the OCCURS attribute. You can redefine the MINLOBE attribute value.

Syntax

MINLOBE nnn

The default and smallest value for MINLOBE is 0, which means that all values of the field are stored in Table E. The maximum value of MINLOBE is 200.
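
For example, a sketch of a CLOB field definition that keeps short values out of Table E (the field name XMLDOC is just an illustration; the file must be FILEORG X'100'):

DEFINE FIELD XMLDOC WITH CLOB MINLOBE 200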

If a field has a MINLOBE attribute greater than 0:

  • And the value of the field is not longer than MINLOBE, then the value is stored in Table B or Table X, instead of Table E.
  • The SOUL CHANGE statement is not allowed, and the following error is displayed:

    M204.2869 Change refused: MINLOBE > 0 for File filename field fieldname

    Any CHARACTER LARGE OBJECT or BINARY LARGE OBJECT field that does not use the CHANGE statement benefits from setting MINLOBE 200.

  • The RESERVE clause is ignored on the SOUL ADD statement.

Storing and updating LOBs

All Large Object (LOB) data in a FILEORG X'100' file is chained. Chained LOBs incur an overhead of four bytes per Table E page. The pages used by a chained LOB are not contiguous.

Handling LOBs in FILEORG X'100' files also has the following effects:

  • The RESERVE clause is ignored during ADD statement processing of a LOB field, as well as during STORE RECORD statement processing of fieldname=value pairs.

    Consequently, the CHANGE statement does not fail because of insufficient reserved space. If the CHANGE statement requires that a LOB field be extended, it is.

  • The value of the EHIGHPG parameter is always one less than the high-water mark of the number of pages used to hold LOBs (unless none were ever added, in which case it is zero, not -1).
  • The value of the EPGSUSED parameter is always the number of pages currently being used to hold LOBs.
  • The COMPACTE command does not process FILEORG X'100' files, just as it does not process a file created in V6R1 or earlier. Thus, issuing a COMPACTE command for a FILEORG X'100' file produces an error message.
  • The TABLEE command effectively does a VIEW ESIZE EHIGHPG EPGSUSED for a FILEORG X'100' file. Consequently there are no TABLEE overhead pages in a FILEORG X'100' file.

FLOAT attribute

Any field whose values are always, or almost always, numeric might be more efficiently stored in a floating-point representation than in character form. Using FLOAT can also reduce the time spent in data type conversions for fields used frequently in User Language or Host Language Interface computations.

Very large or very small numbers, or numbers with many significant digits, occupy considerably less space in floating-point form.

Floating-point fields are fully supported by User Language, the Host Language Interface, and the File Load utility. For more information about storing values in floating-point fields, see the SOUL Floating point conversion, rounding, and precision rules.

Note: If FLOAT is not specified when a field is defined, the STRING attribute is assumed.

Defining floating point fields

Field values are stored in floating-point by specifying the FLOAT attribute in the DEFINE FIELD command and supplying a LENGTH of 4, 8, or 16. The length supplied determines the precision of the floating-point field being defined. For example, to define a long precision field, specify:

Syntax

DEFINE FIELD fieldname WITH FLOAT LEN 8

You can define three types of floating-point fields through the FLOAT and LENGTH attributes:

Type of floating-point field    Occupies    Holds a maximum of       Model 204 maintains a maximum precision of
Short precision                 4 bytes     6 significant digits
Long precision                  8 bytes     15 significant digits    15 significant digits
Extended precision              16 bytes    31 significant digits    15 significant digits

Acceptable values

The floating-point data type uses IBM hardware's floating-point representation.

A value of negative zero (X'800...0') cannot be stored in a preallocated FLOAT field.

FLOAT with NUMERIC VALIDATION files

If you define a FLOAT field in a file with the NUMERIC VALIDATION file model (FILEMODL=X'01'), Model 204 provides numeric data type validation. Trying to store non-conforming data in a FLOAT field in a NUMERIC VALIDATION file causes Model 204 to cancel the request with the following message:

M204.2124: VALUE SPECIFIED VIOLATES FLOAT DATA TYPE VALIDATION IN RECORD recno,
FIELD IGNORED: fieldname = value

Compatibility with other attributes

STRING, BINARY, NUMERIC RANGE, and INVISIBLE cannot be specified for a FLOAT field.

Storing invalid numeric data: FLOAT or ORDERED NUMERIC

The valid range of floating point numbers is:

1E-74 through 7.2370E+75

Model 204 enforces storage rules for floating point numbers outside the valid range:

  • If 7.2371E75 is stored in a FLOAT field, the value is treated as an invalid floating point number and is stored as a data string '7.2371E75'.
  • 1E-75 is treated as an invalid floating point number and stored as a data string '1E-75'.

If an invalid value is stored for a field with the ORDERED NUMERIC attributes, the value stored in the Ordered Index is the data string. That data string is stored on the Invalid Numeric Data portion of the Ordered Index. This changes the output order of these values when the ordering is done against the Ordered Index, for example, when using the User Language IN ORDER BY or FOR EACH VALUE clauses.

DATETIME (DT) attribute

The DATETIME attribute specifies the format of the data stored in Table B, not its function. The other date/time attributes are automatically updated fields generated at the time of record or field group creation or update. A non-automatic field can have a DATETIME format, but all the automatic date/time fields are stored in DATETIME format. The DATETIME attribute requires at least eight significant digits.

Once you define a DATETIME value for a field, you cannot redefine the DATETIME value.

UTF-8 and UTF-16 attributes

Note: The UTF-8 and UTF-16 attributes are available as of Model 204 version 7.5.

The UTF-8 (or UTF8) attribute indicates that data is stored in UTF-8 format and is treated as unicode data inside SOUL programs. You can define a UTF-8 field as follows:

DEFINE FIELD UTFDATA WITH [UTF-8 | UTF8]
DEFINE FIELD UTFBLOB WITH BLOB [UTF-8 | UTF8]

The UTF-8 attribute is a constraint that rejects UTF-8 encoding of Unicode characters outside the Basic Multilingual Plane (BMP).

UTF-8 is the only field constraint that is valid for CLOB or BLOB fields.

The UTF-16 (or UTF16) attribute indicates that data is stored in UTF-16 format and is treated as unicode data inside SOUL programs. You can define a UTF-16 field as follows:

DEFINE FIELD UTFDATA WITH [UTF-16 | UTF16]
DEFINE FIELD UTFBLOB WITH BLOB [UTF-16 | UTF16]

Usage notes

  • Since UTF-8 or UTF-16 fields behave like unicode variables in SOUL requests, printing fields with unicode characters that cannot be converted to displayable EBCDIC results in those characters being character-entity encoded, just as they are encoded when printing unicode variables. This is also true when printing is done via a PAI statement. For example, a UTF-8 field containing the unicode value of "I like apple π" would be printed as "I like apple &#x03C0;". The ampersand symbol (&) is always encoded as &amp; when printed.
  • Just as with unicode variables, assigning an EBCDIC value to a unicode field results in the EBCDIC value being converted to unicode and then to UTF-8 or UTF-16 for storage. Similarly, assigning a UTF-8 or UTF-16 field to an EBCDIC variable results in the UTF-8 or UTF-16 value being converted to unicode and then translated to EBCDIC. In either EBCDIC-to-unicode or unicode-to-EBCDIC translation, errors result in request cancellation.
  • Unless the preponderance of data stored in UTF-8 or UTF-16 fields has unicode values greater than U+07FF (most likely if most of the data contains Asian language characters), UTF-8 is a better choice for storing unicode data, since UTF-16 requires two bytes for every unicode character while UTF-8 only requires one byte for many common characters and only two bytes for unicode characters up to U+07FF.
  • UTF-8 and UTF-16 fields can be LOB fields or they can be STRING fields (stored in table B).
  • If a UTF-8 or UTF-16 field is defined as an OCCURS field, the length indicates the length in bytes, not the length in characters. Since UTF-8 encoding can use anywhere from one to three bytes to represent a single unicode character in the BMP (the only unicode characters supported by SOUL), using UTF-8 fields as OCCURS fields can be tricky: setting the defined length requires an educated guess about the types of unicode characters that will be stored in the field, or (probably) oversizing the UTF-8 field to three bytes per character. In the latter case, it makes much more sense to define such a field as UTF-16, which requires exactly two bytes for each character.

Usage notes on mapping

  • Characters outside the BMP are extremely unlikely in business applications.
  • An EBCDIC blank (X'40') is a valid UTF-8 character.
  • The EBCDIC numerics (X'F0' through X'F9') correspond to UTF-8 bytes used for mapping characters outside the BMP. Lowercase characters generally fall into the continuation-byte range, and uppercase characters fall into the sequence-start range. Because of this, a short, mixed-case sequence, such as Tom or Bo, might slip past UTF-8 validation. However, most attempts to store EBCDIC text into a UTF-8 field will fail, and therefore catch application errors.

Creating UTF-8 data

You can create UTF-8 data using the SOUL function $Ascii, because 7-bit ASCII is a subset of UTF-8. For example:

ADD UTFFIELD = $Ascii('Hello')

VISIBLE and INVISIBLE attributes

In general, you want to be able to retrieve fields and then display or print them, or use the field values in a report. These are fields that you want defined as VISIBLE.

If, however, you have a field that is used only to retrieve records, and space is a consideration, you might want to define the field as INVISIBLE.

Note: If the INVISIBLE attribute is not specified when the field is defined, the VISIBLE attribute is assumed.

Using the VISIBLE attribute

Once an application retrieves a record with a User Language FIND statement or a Host Language IFFIND call, it generally processes one or more fields in User Language PRINT, NOTE, or SORT statements, arithmetic expressions, or Host Language IFGET calls. For example:

CHECK.SALARY: IF SALARY GT 1000*AGE THEN
   PRINT NAME
END IF

A field referred to by the PRINT statement must have the VISIBLE attribute in its field description. This means that the field-name-equals-value pair is actually stored in the logical record in Table B.

Using the INVISIBLE attribute

A field that has the INVISIBLE attribute, on the other hand, takes up no storage space in Table B and cannot be printed, noted, or sorted. These fields can be added, deleted, or changed in the logical record.

In a library catalog file, for example, each record can have an AUTHOR field that contains the author's full name. In addition, there can be a LAST NAME field that contains only the author's last name. This creates a convenient index to the catalog and relieves the user of the burden of knowing the entire name when making retrieval requests. Because the author's last name already is part of the (VISIBLE) AUTHOR field, some space can be saved in Table B by selecting the INVISIBLE attribute for LAST NAME.
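
A minimal sketch of such definitions (following the example above; an INVISIBLE field must also be KEY, NUMERIC RANGE, or ORDERED, so LAST NAME is made a KEY field here):

DEFINE FIELD AUTHOR WITH VISIBLE
DEFINE FIELD LAST NAME WITH INVISIBLE KEY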

Reorganizing INVISIBLE fields

Because INVISIBLE fields are not stored in Table B, they require special care if the file is ever reorganized. When designing a file, be sure to take reorganization requirements for these fields into consideration. See the File reorganization and table compaction page, Reorganizing INVISIBLE fields section, for more information.

Maintaining INVISIBLE fields

INVISIBLE fields must also have the KEY, NUMERIC RANGE, or ORDERED attribute, and they are normally used only to retrieve records.

The User Language FILE RECORDS statement and the Host Language Interface IFFILE function can be used to rapidly add an INVISIBLE KEY field name = value pair to an entire set of records. For example:

ALL.RECS: FIND ALL RECORDS FOR WHICH
   NAME = SMITH OR SMYTHE OR SMYTH
END FIND
FILE: FILE RECORDS IN ALL.RECS UNDER HOMONYM = SMYTH

Model 204 makes entries in Tables C and D to allow retrievals with the specification HOMONYM = SMYTH but does not change Table B. For further information about adding, changing, or deleting INVISIBLE fields, see the SOUL description of the INVISIBLE attribute.

Table B considerations

The values of VISIBLE fields can be stored in the logical record in Table B in one of four formats depending upon the selection of STRING/BINARY, CODED/NON-CODED, or FLOAT field attributes. The choice affects space requirements and, except for a slight increase in the time required for updates and some types of retrievals (see CODED and NON-CODED attributes), is transparent to the user.

The values of fields that have the INVISIBLE attribute are not stored in Table B, and physical storage attributes cannot be specified in their descriptions.

Using INVISIBLE fields in 1NF files

The First-Normal form relational model cannot be enforced for INVISIBLE fields. However, INVISIBLE fields are useful within Model 204 for list processing and to support SQL multicolumn keys (see the Model 204 SQL Server User's Guide for more information). Therefore, the INVISIBLE field attribute is allowed, but INVISIBLE fields are not mapped to SQL-accessible columns, and there are other limitations with SQL DML operations against INVISIBLE fields, as described in the SQL Server User's Guide. The following rules and restrictions apply to INVISIBLE fields in 1NF files:

  • Because the INVISIBLE and AT-MOST-ONE attributes are incompatible, INVISIBLE fields must be defined as REPEATABLE.
  • You cannot REDEFINE a field as INVISIBLE within a 1NF file; the INVISIBLE and REPEATABLE attributes can be specified only with a DEFINE command. In the case of a record security key field, use the INITIALIZE command to specify the field as INVISIBLE and REPEATABLE.
  • INVISIBLE fields in 1NF files must also be ORDERED or KEY (NUMERIC RANGE is not legal in a 1NF file and results in an error message).

Compatibility with other attributes

The INVISIBLE attribute cannot be specified for a field defined with the FLOAT or AT-MOST-ONE attributes.

Data content

In addition to the data type (above), there are additional attributes which further control how the data is stored and manipulated:

  • CODED, NON-CODED
  • OCCURS
  • LENGTH
  • PAD
  • FEW-VALUED, MANY-VALUED
  • UPDATE IN PLACE, UPDATE AT END
  • DEFAULT-VALUE
  • STORE-DEFAULT, STORE-NULL
  • AT-MOST-ONE, REPEATABLE and EXACTLY-ONE
  • DEFERRABLE/NON-DEFERRABLE

CODED and NON-CODED attributes

The CODED and NON-CODED attributes let you choose between saving disk space (with CODED) or speeding up update and retrieval time (with NON-CODED). This section discusses how these attributes work.

Note: If the CODED attribute is not specified when a field is defined, the NON-CODED attribute is assumed.

NON-CODED attribute

If a field is designated NON-CODED without an OCCURS clause (see the following discussion), its value is stored in Table B as a simple character string that has an additional byte to indicate its length.

NON-CODED is the best choice for any field whose values have an average length of 1-3 bytes whether or not they are compressible numbers.

CODED attribute

Choose the CODED option to save Table B space where the values of the field are not suitable for BINARY compression and where their average length is four or more characters.

When a value is stored in a field that has the CODED attribute, the character string is stored in Table A (the internal file dictionary), and a 4-byte value code pointing to that character string is stored in the logical record in Table B. Space is saved when several records contain the same value: the string is stored only once in Table A, and only the 4-byte code is stored in each of those records in Table B.

Choosing CODED vs. NON-CODED

The coding and decoding of these value codes takes time, which can slow down updates and retrievals. If speed is important and if disk space is relatively plentiful, do not choose the CODED attribute.

If disk space is somewhat tight, choose the CODED attribute for fields where the values are long and where there are a small number of values relative to the number of records in the file. A STATE field is a prime example.

A more precise rule governing the choice of the CODED attribute is difficult to arrive at since first it must be determined (based on the number and length of the field values, and on the number of records) if any disk space is actually saved by the CODED attribute. Then, any space saved must be weighed against the extra time taken by CODED retrievals. And the relative importance of disk space and time are likely to vary with the application as well as with the installation.

The formulas in File size calculation in detail show more explicitly the effects of the CODED field attribute on Table A and B size.

Field value encoding is entirely transparent to the user. Data is returned exactly as it was entered. Codes are system-generated, internal codes that have no meaning to a user.

Preallocated fields (OCCURS attribute)

Any VISIBLE field can be a preallocated field. However, a VISIBLE field that appears a fixed number of times in most of the records in a file, with fairly uniform lengths for the values, is most suited for preallocation in Table B. Preallocated fields, if selected appropriately, require less Table B space than other fields and can be updated and retrieved slightly faster.

Preallocation means that space is reserved in each new record added to a file. As occurrences of preallocated fields are stored in a record, they are placed in the reserved space. Space is used even for occurrences that have not been stored, whereas non-preallocated fields use no space for missing occurrences. Preallocation also imposes restrictions on the number and length of values that can be stored in a record. A field is preallocated if its description includes an OCCURS clause.

File size calculation in detail provides specific rules for the amount of space used by preallocated and non-preallocated fields. Use these rules to determine whether or not to preallocate a particular field.

Using an OCCURS clause

A field is preallocated if its description includes an OCCURS clause. The OCCURS clause indicates the number of occurrences of the field that are preallocated or reserved in each Table B record. From 1 to 255 occurrences can be specified. Any number of values, up to the specified limit, n, can be stored in the record.

Only as many occurrences as have been stored can be retrieved. Space allocated for unstored values is wasted.

If OCCURS is not included in a field description, the field still can occur more than once in a record. Space for the occurrences is not preallocated. There is no limit to the number of values that can be stored for a non-preallocated field.

Restrictions for preallocated fields

The OCCURS option can be specified only for fields that can be preallocated with a fixed length in Table B. This includes only fields that are defined as CODED, BINARY, or LENGTH m (see LENGTH and PAD clauses). If a field is defined as BINARY but does not contain an OCCURS clause, values that cannot be compressed to a binary representation still can be stored in the field. If a field is defined as both BINARY and OCCURS, only compressible values can be stored in it.

Model 204 does not allow incompressible values to be stored in such fields. To avoid such compression problems, include CODED with BINARY and OCCURS in the field description.

Displaying the set of preallocated fields

The DISPLAY RECORD command will show all of the preallocated fields in a file. This information is stored on a page in Table D where it is used to create the preallocated field space at the beginning of a record when it is initially stored.

Note: Because Model 204 cannot distinguish values padded with X'00' from zero-length values explicitly stored in the file, Model 204 does not allow zero-length values to be explicitly stored in preallocated fields.

OCCURS cannot be specified on a field group field.

LENGTH and PAD clauses

Using the LENGTH clause

Use the LENGTH clause to specify the length of FLOAT fields, as well as NON-CODED preallocated (OCCURS) string fields.

LENGTH must be specified for NON-CODED preallocated string fields: the LENGTH clause must be included in the description of such a field to set its maximum length. The length must be in the range 1 through 255. The specified length is preallocated for the number of occurrences specified by the OCCURS clause.
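
As an illustration (the field name and sizes are hypothetical), a NON-CODED, preallocated string field holding one nine-byte value per record might be defined like this:

DEFINE FIELD SSN WITH NON-CODED OCCURS 1 LENGTH 9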

LENGTH with FLOAT fields: LENGTH is required with FLOAT to indicate the precision of the floating-point field being defined. The LENGTH options in this instance are 4, 8, or 16. (See FLOAT attribute.)

Specifying the LENGTH clause: The LENGTH clause limits the length of values that can be stored in the field. Model 204 does not store values that are longer than the specified length. The LENGTH clause does not impose any minimum length. Values that are shorter than the specified length are padded with the PAD character to the maximum length as they are stored in Table B and are returned to their original length as they are retrieved. See the description of padding in LENGTH and PAD clauses.

Field occurrences that have been preallocated but not stored contain zero-length values.

Using the PAD clause

You can include a PAD clause in a field description to select the character that is used to pad field values that are shorter than the length specified in the LENGTH clause (see LENGTH and PAD clauses) for the field. If no PAD character is specified in a field description, the default character, X'00', is used.

Padding characters are invisible to the user and are not included in a retrieved value. Make sure that the padding character does not appear as part of a field value. If it does, incorrect results can occur.

FEW-VALUED and MANY-VALUED attributes

If a field that has the CODED or FRV attribute is expected to take on fewer than about 50 different values, choose the FEW-VALUED attribute. Otherwise, define the field as MANY-VALUED.

This option determines whether the value string is stored in the FEW-VALUED or MANY-VALUED section of Table A, and is invalid for fields that are neither CODED nor FRV. Keep the number of values stored in the FEW-VALUED section small to ensure that this section is kept in core during retrieval and update operations. This practice reduces the number of disk accesses and increases retrieval speed.

Note: If the FEW-VALUED attribute is not specified when a field is defined, the MANY-VALUED attribute is assumed.
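
For example, a hypothetical DEPARTMENT field that takes on only a few dozen lengthy values might be defined with both attributes (a sketch, not taken from the product examples):

DEFINE FIELD DEPARTMENT WITH CODED FEW-VALUED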

UPDATE IN PLACE (UP) and UPDATE AT END (UE) attributes

Model 204 provides the following field attributes that control the way that the value of a field occurrence is changed in Table B:

If this attribute is specified...    Then a change in the value of a field occurrence...
UPDATE IN PLACE                      Does not change its position relative to other occurrences of the same field in Table B. This attribute is the default.
UPDATE AT END                        Is accomplished by deleting the existing occurrence and adding a new one following the others.

Choosing an update option

Specify either of these options as part of a field description. The UPDATE AT END approach can be useful in certain data aging applications: for example, in applications in which the first occurrence of a field represents the oldest update to the file, and the last occurrence is the most recent update. UPDATE IN PLACE generally is more efficient.

The choice of update option determines the position of a field occurrence within a record only in relation to other occurrences of the same field. The ordering of different fields within a record is determined by a variety of factors, of which the update method is only one. The order of fields within a record normally is not under the control of the file manager.

SOUL update statements other than CHANGE are not affected by the choice of an update option. The ADD and STORE statements always cause values to be added as the last occurrences in a record. The DELETE statement always removes the deleted value while maintaining the relative ordering of the remaining occurrences.

Compatibility with other attributes

The values of fields that have the INVISIBLE attribute are not stored in Table B, and UPDATE IN PLACE or UPDATE AT END cannot be specified in their descriptions.

DEFAULT-VALUE (DV) attribute

DEFAULT-VALUE specifies the value of the field if the value was not explicitly added in the STORE RECORD, ADD FIELDGROUP, or INSERT FIELDGROUP initial data block.

A field defined with a DEFAULT-VALUE (DV) attribute value:

  • Must also be defined with the EXACTLY-ONE or the AT-MOST-ONE attribute.
  • Must be contained in a file that has the FILEORG parameter X'100' bit set.

The DEFAULT-VALUE attribute is available as of Model 204 version 7.5. For more information about DEFAULT-VALUE fields that are members of a field group, see Handling references to missing occurrences.

The value of the STORE-DEFAULT attribute determines whether the DEFAULT-VALUE is physically stored on the record or if it is just used as the default value when the field is missing:

STORE-DEFAULT value    Effect on record storage
ALL                    Always store the DEFAULT-VALUE on the record.
NONE                   Never store the DEFAULT-VALUE on the record.
LITERAL                Store the DEFAULT-VALUE if it was literally entered on the Store statement (not entered as a variable).

STORE-NULL has the same options as STORE-DEFAULT, but they are applied to storing nulls:

STORE-NULL value    Effect on null storage
ALL                 Always store a null value on the record.
NONE                Never store a null value on the record.
LITERAL             Store the null value if it was literally entered on the Store statement (not entered as a variable).

Note: Neither STORE-NULL NONE nor STORE-NULL LIT is allowed with DEFAULT-VALUE.

The DEFAULT-VALUE attribute allows values as long as 31 bytes. An attempt to use more results in:

M204.2851: DEFAULT-VALUE VALUE '1234567890123456789012345678901234567890' TOO BIG, MUST BE <32 BYTES

Note: The minimum length of a DEFAULT-VALUE value is 1 byte. A null string value is not allowed.

The value of DEFAULT-VALUE is not constrained by attributes such as LEQ, LLE, and LGE. For example, the entire default value is returned for a field that has been added, even if it exceeds the LEQ setting.
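
For example, assuming a FILEORG X'100' file (with transaction back out active, as AT-MOST-ONE requires) and a hypothetical STATUS field, a definition that supplies 'NEW' whenever no value is explicitly stored might look like this:

DEFINE FIELD STATUS WITH AT-MOST-ONE DEFAULT-VALUE 'NEW'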

STORE-DEFAULT (SD) and STORE-NULL (SN) attributes

The STORE-DEFAULT and STORE-NULL attributes specify whether to physically store the default value and the null value, respectively, for the field in each record. These attributes are available as of Model 204 version 7.5.

You can redefine the STORE-DEFAULT and/or STORE-NULL attribute values.

Note: A field with the STORE-DEFAULT attribute must also have the DEFAULT-VALUE attribute.

The STORE-DEFAULT and STORE-NULL attributes offer these options:

  • LITERAL (the default)
  • NONE
  • ALL

STORE-DEFAULT example

The following example shows the effect of DEFAULT-VALUE on fields defined with the different STORE-DEFAULT types (ALL, NONE, and LITERAL). Field NAME_SD_NONE is defined with the STORE-DEFAULT option NONE, field NAME_SD_LIT with the option LITERAL, and field NAME_SD_ALL with the option ALL. The DEFAULT-VALUE for all of the fields is NONAME.

In this example, the fields belong to a field group, but the STORE-DEFAULT (and STORE-NULL) attribute can also be assigned to a field that is not part of a field group. The only requirement is that the field cannot be defined as REPEATABLE; that is, it must be AT-MOST-ONE or EXACTLY-ONE.

DEFINE FIELDGROUP X
DEFINE FIELD NAME_SD_NONE WITH FIELDGROUP X AND ORD AT-MOST-ONE -
   STORE-DEFAULT NONE DEFAULT-VALUE 'NONAME'
DEFINE FIELD NAME_SD_LIT WITH FIELDGROUP X AND ORD AT-MOST-ONE -
   STORE-DEFAULT LITERAL DEFAULT-VALUE 'NONAME'
DEFINE FIELD NAME_SD_ALL WITH FIELDGROUP X AND ORD AT-MOST-ONE -
   STORE-DEFAULT ALL DEFAULT-VALUE 'NONAME'

There are basically two ways to store a field in a record:

  • Using a literal:

    STORE RECORD
       FIELD.A = 'ABC'
    END STORE

  • Using a variable:

    %X = 'ABC'
    STORE RECORD
       FIELD.A = %X
    END STORE

Additionally, a field might simply be missing (in the following example, the record has a FIELD.B, but FIELD.A is missing):

STORE RECORD
   FIELD.B = 'XXX'
END STORE

The STORE-DEFAULT options determine how the field will be physically stored if the DEFAULT-VALUE is stored in the field.

This example shows what is physically stored in the record when the DEFAULT-VALUE is stored for each of the fields defined above for FIELDGROUP X. The DEFAULT-VALUE defined for each is NONAME. First, the actual DEFAULT-VALUE of NONAME is added by using a variable, then by using the literal 'NONAME'. Then a field group occurrence is added where the field is missing.

ADD.X: SUBROUTINE
IN FILEX FOR RECORD NUMBER %CURREC
*  add first field group occurrence using the variable %NAME:
   %NAME='NONAME'
   ADD FIELDGROUP X
      NAME_SD_NONE = %NAME
      NAME_SD_LIT = %NAME
      NAME_SD_ALL = %NAME
   END ADD
*  add second field group occurrence using the literal 'NONAME':
   ADD FIELDGROUP X
      NAME_SD_NONE = 'NONAME'
      NAME_SD_LIT = 'NONAME'
      NAME_SD_ALL = 'NONAME'
   END ADD
*  add third occurrence of the field group with all values missing:
   ADD FIELDGROUP X
*     none of these fields added to this occurrence
   END ADD
END FOR

A subsequent PAI of the field group shows what is physically stored on the record:

The field defined with...    is...
STORE-DEFAULT NONE           Never stored on the record.
STORE-DEFAULT ALL            Always stored on the record.
STORE-DEFAULT LITERAL        Not stored in the first occurrence, because a %variable was used. It was stored in the second occurrence, because the field was added using a literal string.

The PAI output:

\X = 1
NAME_SD_ALL = NONAME
/X = 1
\X = 2
NAME_SD_LIT = NONAME
NAME_SD_ALL = NONAME
/X = 2
\X = 3
/X = 3

Note: When the record above is printed using the following request:

FR
   PRINT '# NAME_SD_NONE: NAME_SD_LIT: NAME_SD_ALL:'
   FEO: FEO FIELDGROUP X
      PRINT OCCURRENCE IN FEO AND NAME_SD_NONE AND -
         NAME_SD_LIT AT 16 AND NAME_SD_ALL AT 30
   END FOR
END FOR

the DEFAULT-VALUE is printed whether or not the field is physically stored on the record:

# NAME_SD_NONE: NAME_SD_LIT: NAME_SD_ALL:
1 NONAME       NONAME        NONAME
2 NONAME       NONAME        NONAME
3 NONAME       NONAME        NONAME

Note: The same behavior occurs whether the field is AT-MOST-ONE, as in this example, or EXACTLY-ONE. However, within fieldgroup context, a PAI of an EXACTLY-ONE field always shows a value. The actual physical storage on the record is the same for both attributes.

STORE-NULL example

The options on the STORE-NULL attribute behave like those of STORE-DEFAULT. When the fields are defined as follows:

DEFINE FIELDGROUP X
DEFINE FIELD NAME_SN_NONE WITH FIELDGROUP X AND ORD AT-MOST-ONE -
   STORE-NULL NONE
DEFINE FIELD NAME_SN_LIT WITH FIELDGROUP X AND ORD AT-MOST-ONE -
   STORE-NULL LITERAL
DEFINE FIELD NAME_SN_ALL WITH FIELDGROUP X AND ORD AT-MOST-ONE -
   STORE-NULL ALL

And the code used to store the field groups is:

IN FILEX FRN %CURREC
*  add first occurrence using the variable %NAME:
   %NAME = ''
   ADD FIELDGROUP X
      NAME_SN_NONE = %NAME
      NAME_SN_LIT = %NAME
      NAME_SN_ALL = %NAME
   END ADD
*  add second occurrence using the literal '':
   ADD FIELDGROUP X
      NAME_SN_NONE = ''
      NAME_SN_LIT = ''
      NAME_SN_ALL = ''
   END ADD
*  add third occurrence with missing values:
   ADD FIELDGROUP X
   END ADD
END FOR

The resulting PAI is:

\X = 1
NAME_SN_ALL =
/X = 1
\X = 2
NAME_SN_LIT =
NAME_SN_ALL =
/X = 2
\X = 3
/X = 3

And this request:

BEGIN
FR
   PRINT '# NAME_SN_NONE: NAME_SN_LIT: NAME_SN_ALL:'
   FEO: FEO FIELDGROUP X
      PRINT OCCURRENCE IN FEO AND -
         NAME_SN_NONE AND NAME_SN_LIT AT 18 -
         AND NAME_SN_ALL AT 32
   END FOR
END FOR
END

Results in nulls displayed for each field:

# NAME_SN_NONE: NAME_SN_LIT: NAME_SN_ALL:
1
2
3

AT-MOST-ONE, REPEATABLE, and EXACTLY-ONE attributes

You can use the AT-MOST-ONE, REPEATABLE, and EXACTLY-ONE attributes, which are mutually exclusive, to define an individual field or a field that is part of a field group definition.

Note: The EXACTLY-ONE attribute is available as of Model 204 version 7.5.

Using the AT-MOST-ONE attribute

The AT-MOST-ONE attribute allows you to ensure that there is only one occurrence of a field in any record, without preallocating space for the field.

In addition, the AT-MOST-ONE attribute is required for fields in First-Normal Form (1NF) files. AT-MOST-ONE and REPEATABLE fields in 1NF files are discussed in this section.

Note: In non-1NF files, if the AT-MOST-ONE attribute is not specified when a field is defined, the REPEATABLE attribute is assumed.

If a field is defined with the AT-MOST-ONE attribute, Model 204 prevents multiple occurrences of that field in any given record. The AT-MOST-ONE attribute is useful if, for example, you know that there will be one variable-length field per record, or if you have a field that is often left blank but does not have more than one value per record (such as SPOUSE NAME).

Unlike fields with the OCCURS attribute, AT-MOST-ONE fields are not specifically preallocated. The AT-MOST-ONE attribute lets you store these kinds of records without wasting the space required by OCCURS.
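
A minimal sketch, using the SPOUSE NAME example above (the containing file must have transaction back out active, as described in "AT-MOST-ONE and transaction back out files" below):

DEFINE FIELD SPOUSE NAME WITH AT-MOST-ONE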

Using the REPEATABLE attribute

If a field is not defined as AT-MOST-ONE, then it is REPEATABLE. Fields with the REPEATABLE attribute can accept multiple occurrences of the field in the record. Unlike the OCCURS attribute, REPEATABLE does not preallocate fields, nor does REPEATABLE let you specify the number of occurrences of the field.

Using the EXACTLY-ONE (EXONE) attribute

As the name suggests, the EXACTLY-ONE attribute in a field definition means that the field always has exactly one occurrence in its record or field group context after the initial data block for the record or field group (STORE RECORD, ADD FIELDGROUP, or INSERT FIELDGROUP). If no explicit value is stored for the field, its value is either the DEFAULT-VALUE for the field or, if no DEFAULT-VALUE was specified, the null string ('').

You cannot add, insert, or delete an EXACTLY-ONE field, although you can change an EXACTLY-ONE field.

A field that has both the EXACTLY-ONE and DEFAULT-VALUE attributes conflicts with the STORE-NULL NONE attribute.

The EXACTLY-ONE attribute is also not allowed with the STORE-NULL LITERAL attribute.

EXACTLY-ONE fields with a field group

The EXACTLY-ONE attribute for a field within a field group means that the field can occur only once inside a single field group occurrence. However, since field groups can occur multiple times in a field group record, an EXACTLY-ONE field inside a field group can occur multiple times inside a field group record.

For more information about EXACTLY-ONE fields in a field group, see Updating fields in a field group.

Defaults for AT-MOST-ONE, REPEATABLE and EXACTLY-ONE fields

REPEATABLE is the default except for 1NF files, where AT-MOST-ONE is required on all fields, except INVISIBLE. See VISIBLE and INVISIBLE attributes for information.

The AT-MOST-ONE attribute is required on all fields in a 1NF file because multiply occurring fields violate relational First-Normal Form. The one exception to this rule is for INVISIBLE/REPEATABLE fields in 1NF files. See AT-MOST-ONE in First Normal Form files for more information.

The EXACTLY-ONE attribute is the default frequency of occurrence for a field defined within a field group.

Enforcing AT-MOST-ONE constraints

Model 204 ensures that AT-MOST-ONE field constraints are not violated for the following User Language requests and the Host Language and FLOD counterparts:

  • ADD
  • INSERT
  • STORE RECORD

Before the indexes are updated and a new field is added or inserted into the Model 204 record, all existing fields in the record are checked to ensure that the new field does not create a second occurrence:

  • If a new field violates the AT-MOST-ONE rules, and there is no ON FCC unit, the request is canceled with the following error message:

    M204.2119: AT-MOST-ONE CONSTRAINT VIOLATION IN RECORD recordnumber, FIELD IGNORED: fieldname=value

  • Model 204 validates a field if you REDEFINE it to be AT-MOST-ONE.
  • If a violation occurs and there is an ON FCC unit, Model 204 processes the ON unit.

In the Host Language Interface, the 202 return code is passed back to Model 204; in FLOD, the file is marked logically inconsistent. However, Model 204 does not cancel the request in either case.

AT-MOST-ONE and transaction back out files

Transaction back out must be active for files when defining AT-MOST-ONE fields. If you try to define an AT-MOST-ONE field in a non-TBO file, the DEFINE fails and you receive an error message.

Once the field is defined, you can turn Transaction back out off in non-1NF files. However, Model 204 customer support strongly recommends that you keep Transaction back out set in files containing fields that can cause constraint conflicts (currently AT-MOST-ONE and UNIQUE). If you turn Transaction back out off in a file with an AT-MOST-ONE or UNIQUE field, Model 204 sends you a warning message and a "Do You Really Want To?" prompt.

Using FLOD or FILELOAD with AT-MOST-ONE fields

When you use FLOD or FILELOAD to load or modify records that have AT-MOST-ONE fields, Model 204 ensures that the new or modified records do not violate AT-MOST-ONE constraints. Model 204 detects and reports all violations of AT-MOST-ONE in one pass of the input data. If Model 204 finds any fields that violate the constraints, the following occurs:

  1. The error is reported and the new or duplicate field is rejected.
  2. FLOD continues processing the input data.
  3. At the end of the run, the file is marked "logically inconsistent;" that is, the FISTAT parameter is set to X'40'. See Logically inconsistent files for more information.
  4. The following error message is issued for each violation:

    M204.2119: AT-MOST-ONE CONSTRAINT VIOLATION IN RECORD recordnumber, FIELD IGNORED: fieldname=value

AT-MOST-ONE in First Normal Form (1NF) files

Using OCCURS 1 in 1NF files: For 1NF files, where AT-MOST-ONE is required, use OCCURS 1 to save processing time if you are storing a large number of records in which the field is always exactly the same length (such as a Social Security Number).

INVISIBLE fields in 1NF files: The First-Normal form relational model cannot be enforced for INVISIBLE fields. However, they are useful within Model 204 for list processing and to support SQL multicolumn keys. (See the Model 204 SQL Server User's Guide for more information.) Therefore, the INVISIBLE field attribute is allowed, but INVISIBLE fields are not mapped to SQL-accessible columns. The following rules and restrictions apply to INVISIBLE fields in 1NF files:

  • Because the INVISIBLE and AT-MOST-ONE attributes are incompatible, INVISIBLE fields must be defined as REPEATABLE.
  • You cannot REDEFINE a field as INVISIBLE within a 1NF file; you can specify the INVISIBLE and REPEATABLE attributes only with a DEFINE command. In the case of a record security key field, specify the field as INVISIBLE and REPEATABLE using the INITIALIZE command.
  • INVISIBLE fields in 1NF files must also be ORDERED or KEY (NUMERIC RANGE is not legal in a 1NF file and results in an error message).

Defining AT-MOST-ONE and REPEATABLE fields in non-1NF files

You can DEFINE or REDEFINE a field to be AT-MOST-ONE in any non-1NF Model 204 file created after Release 9.0.

When you redefine a field to AT-MOST-ONE that does not currently have the OCCURS 1 attribute, Model 204 scans Table B to verify that no multiple occurrences of the field exist in any record. If any field violates the AT-MOST-ONE constraint, the REDEFINE fails and the following error message is displayed for each violation:

M204.2122: AT-MOST-ONE CONSTRAINT VIOLATION FOR FIELD fieldname IN RECORD recordnumber

Using the AT-MOST-ONE or the OCCURS 1 attribute

Although both the AT-MOST-ONE and OCCURS 1 attributes prevent more than one occurrence of a field in a record, they achieve their goal in different ways. For non-1NF files, use either the AT-MOST-ONE or the OCCURS 1 attribute. Using both causes redundant processing. Understanding the advantages of each should help you decide which attribute you want to use.

AT-MOST-ONE attribute:

  • Does not preallocate the field. Therefore, if the field values between records are of very different lengths or if you do not know the length of the longest record, use AT-MOST-ONE.
  • Allows you, with the ON FCC unit, to correct violations before the record is stored. If you do not use an ON FCC unit, then a constraint violation cancels the request. Therefore, if it is important to allow users to correct violations, or it is important that the request not be canceled, use AT-MOST-ONE.

OCCURS 1 attribute:

  • Preallocates space for the field in every record. If you are storing a large number of records and the length of the value is generally the same, pre-allocating the field with OCCURS 1 can save you processing time.
  • Cancels the request when a violation occurs. With OCCURS 1, you cannot use an ON FCC unit to correct violations.

Compatibility with other field attributes

AT-MOST-ONE is not compatible with the following field attributes:

  • INVISIBLE
  • OCCURS greater than 1
  • REPEATABLE

    REPEATABLE is compatible with all field attributes except AT-MOST-ONE.

Coordinating AT-MOST-ONE and UNIQUE attributes

Both AT-MOST-ONE and UNIQUE are examples of field-level constraints. Although the functions of the UNIQUE and AT-MOST-ONE attributes appear similar, their actual operations differ:

This attribute Affects...
UNIQUE Value of the field
AT-MOST-ONE Number of fields per record

For example, if a Social Security field within an EMPLOYEE file is both UNIQUE and AT-MOST-ONE:

This attribute Ensures that...
UNIQUE Social security number for every employee is different.
AT-MOST-ONE Each employee has only one social security number.

Coordinating the use of the AT-MOST-ONE and the UNIQUE attributes increases the integrity and accuracy of your records.

DEFERRABLE and NON-DEFERRABLE attributes

The DEFERRABLE and NON-DEFERRABLE attributes let you specify whether the updates to the index (Tables C and D) for a KEY ORDERED field can be deferred; that is, stored or updated at a later time.

The DEFERRABLE attribute is recommended and is the default.

Note: If the NON-DEFERRABLE attribute is not specified when a field is defined, the DEFERRABLE attribute is assumed.

Choosing between DEFERRABLE and NON-DEFERRABLE

Under normal circumstances, the storing and updating of records in the Model 204 file is done all at once. That is, Tables B, C, D, E, and X are changed simultaneously. When there is a high volume of updates (for example, a Batch2 process adding large numbers of records), efficiency and space reductions can be gained by deferring the updates to Tables C and D.

Under some circumstances, it might be necessary to override this deferral of updates to the index for particular fields. For instance, consider a situation in which new records are added Online to a personnel file. You might be willing to wait a day for the ability to retrieve the new records. The updates to Tables C and D are deferred until a batch processing window (and the application of deferred updates does require exclusive access to the file), when they are done in a batch run. However, when adding new records, you want to ensure that a record containing a duplicate Social Security number (for example) is not added to the file.

You can define the Social Security number field as NON-DEFERRABLE and write a procedure that retrieves any records already in the file that have the new Social Security number and displays an error message if one is found. If updates to the index on this field have been deferred, you would not find a duplicate added earlier in the same day. By selecting the NON-DEFERRABLE attribute for the Social Security number field, you can ensure that Tables C and D are updated immediately.
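
A sketch of such a duplicate check in User Language (the SSN field name and the %NEW.SSN variable are hypothetical):

DUP.CHECK: FIND ALL RECORDS FOR WHICH
   SSN = %NEW.SSN
END FIND
DUP.CT: COUNT RECORDS IN DUP.CHECK
IF COUNT IN DUP.CT GT 0 THEN
   PRINT 'DUPLICATE SOCIAL SECURITY NUMBER - NOT ADDED'
END IF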

Note: To provide immediate updating in BATCH204 runs, special action must be taken. See "Overriding deferred updates" for more information.

You can also detect duplicate Social Security numbers by defining the field with the UNIQUE and ORDERED attributes. See UNIQUE and NON-UNIQUE attributes.

Compatibility with other attributes

The DEFERRABLE/NON-DEFERRABLE attribute is invalid for a field that has these attributes: BLOB, CLOB, NON-KEY, NON-ORDERED, and NON-RANGE.

For a table showing invalid attribute combinations, see the DEFINE FIELD command. If you are defining field attributes with FILEMGMT, you will not be allowed to specify conflicting attributes.

Field and content constraints

You can set up constraints to prevent "invalid" updates to the file. The updates might be invalid because permitting them would violate rules about the field (for example, a second occurrence of an AT-MOST-ONE field). Or you can set up "content" constraints to prevent the storing of values outside those permitted.

In addition, you can use the LEVEL attribute to prevent use of a field by unauthorized users.

Field constraints

ON FIELD CONSTRAINT CONFLICT (ON FCC) unit

To help control processing with UNIQUE and AT-MOST-ONE fields, use the User Language ON FCC unit. An ON FCC unit is invoked whenever a field-level constraint conflict occurs, such as an attempt to add a second occurrence of an AT-MOST-ONE field.

An ON FCC unit is used with a number of $functions that can help you determine exactly where the field constraint conflict has occurred. The $UPDSTAT function is discussed below. Refer to the SOUL wiki pages for information on other $functions for ON FCC.

$UPDSTAT function

The $UPDSTAT function is used in conjunction with the ON FIELD CONSTRAINT CONFLICT unit and the field attributes that impose field-level constraints: UNIQUE and AT-MOST-ONE. You must use $UPDSTAT in the ON FCC unit when writing procedures for files that have (or might have in the future) more than one type of field-level constraint defined.

If $UPDSTAT is invoked from an ON FCC unit following the detection of a field-level constraint conflict, it returns a numeric value denoting the type of conflict that has occurred:

Value Meaning
0 No violation occurred
1 Uniqueness violation occurred
2 AT-MOST-ONE violation occurred

The $UPDSTAT function takes no arguments.
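
A minimal sketch of an ON FCC unit that uses $UPDSTAT to report which kind of conflict occurred (the messages and surrounding request logic are illustrative only):

ON FIELD CONSTRAINT CONFLICT
   IF $UPDSTAT EQ 1 THEN
      PRINT 'UNIQUENESS VIOLATION DETECTED'
   ELSEIF $UPDSTAT EQ 2 THEN
      PRINT 'AT-MOST-ONE VIOLATION DETECTED'
   END IF
END ON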

See information about the AT-MOST-ONE and REPEATABLE attributes.

UNIQUE and ORDERED fields

Because Model 204 uses the Ordered Index to process UNIQUE values, fields defined as UNIQUE must also be defined as ORDERED. For more information about the Ordered Index see ORDERED and NON-ORDERED attributes.

UNIQUE and NON-UNIQUE attributes

Defining a field with the UNIQUE attribute allows Model 204 to ensure that a given field name = value pair occurs only once in a file. A Social Security number field is one example that might benefit from having the UNIQUE attribute. The UNIQUE attribute specifies that there can be only one occurrence of a given field name = value pair in the file; duplicate occurrences are not allowed.

When storing a field specified as UNIQUE, Model 204 first queries the Ordered Index, looking for an identical field-name-equals-value pair. If it finds an identical value, Model 204 issues an error message and does not perform the update. This is true both within a record for multiply occurring fields and between records.

If there is not a uniqueness conflict, that is, if the field-name-equals-value pair being stored is unique, Model 204 completes the update.

Note: If the UNIQUE attribute is not specified when a field is defined, the NON-UNIQUE attribute is assumed.

Unique values in multiply occurring fields

In a multiply-occurring field, Model 204 ensures that each occurrence of the field is unique.

Deleting UNIQUE fields

Because UNIQUE values use the Ordered Index, do not use the DELETE ALL RECORDS statement for deleting records. Use DELETE RECORD instead.

Compatibility with other attributes

A field can be specified as UNIQUE only with the following restrictions:

  • UNIQUE fields must be specified as ORDERED.
  • UNIQUE fields cannot be specified as DEFERRABLE.
  • UNIQUE fields cannot be defined in non-TBO (Transaction back out) files.

If you attempt to define a UNIQUE key under any of these circumstances, you receive an error message and the field is not defined.
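
For example, a hypothetical Social Security number field satisfying these restrictions might be defined as follows (a sketch; the file must be a TBO file):

DEFINE FIELD SSN WITH UNIQUE ORDERED CHARACTER NON-DEFERRABLE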

Content constraints

Setting a pattern for a field value: the LIKE attribute

The LIKE attribute is available as of Model 204 release 7.5.

Setting a field pattern lets you specify a pattern that a field value must conform to; otherwise, the value cannot be stored. The patterns you can specify are the Model 204 LIKE patterns.

For example:

IN FILE PITCHERS DEFINE FIELD G WITH LIKE '@@@####'

This example specifies that field G must start with three alphabetic characters followed by four numeric characters. An attempt to store a field value that does not conform to the field's pattern results in request cancellation.

You can limit a field value to a list of values. For example, if a field must have a value of 'M' or 'F', you could code:

IN FILE PITCHERS DEFINE FIELD SEX WITH LIKE 'M,F'

And if you want a BALL field to have a value 'SPLITTER', 'CURVE', 'KNUCKLE', or 'CHANGE UP' you could code:

IN FILE PITCHERS DEFINE FIELD BALL WITH LIKE -
   'SPLITTER,CURVE,KNUCKLE,CHANGE UP'

The maximum pattern length is 47 characters. The maximum default value length is 31 characters.

As with the IS LIKE operator in SOUL, the patterns are always applied to numbers after conversion to a string.

You cannot use the LIKE attribute to define an automatic field.

Range constraints

Range constraints are available as of Model 204 version 7.5.

You can set range constraints on fields using the range attributes. Each set of range attributes comprises three or four attributes that you can use to establish a range for field length values, integer values, float values, or date/time stamp values. The types of range attributes are mutually exclusive. For example, you cannot define a field with both the LENGTH-GE and INTEGER-LE attributes.

The range constraints are self-editing: if you define a field with LENGTH-LE 6 and LENGTH-GE 8, the definition is rejected and an error message is issued.

When a range constraint is redefined, it replaces the existing field constraint.

Note: The range constraints do not have to match the data type of the stored field. That is, you can have a date/time constraint for a STRING field or an integer constraint for a FLOAT field, and so on.

Defining fields with length constraints

Length constraint attribute    Range
LENGTH-EQ                      Number from 0 to 255 that indicates the required length of a field
LENGTH-GE                      Number from 0 to 255 that indicates the minimum length of a field
LENGTH-LE                      Number from 0 to 255 that indicates the maximum length of a field

The following field definitions illustrate using the length constraint attributes, LENGTH-EQ, LENGTH-GE, and LENGTH-LE, with each other and/or with other attributes.

IN FILE REDSOX DEFINE FIELD A WITH LENGTH-EQ 8 BINARY OCCURS 1
IN FILE REDSOX DEFINE FIELD B WITH LENGTH-GE 4 FLOAT LEN 8
IN FILE REDSOX DEFINE FIELD C WITH LENGTH-LE 10
IN FILE REDSOX DEFINE FIELD D WITH LENGTH-LE 10 LENGTH-GE 3
IN FILE REDSOX DEFINE FIELD E WITH LENGTH-LE 6 LENGTH-GE 6 BINARY
IN FILE REDSOX DEFINE FIELD F WITH LENGTH-LE 255 LENGTH-GE 0
IN FILE REDSOX DEFINE FIELD COLOR WITH EXACTLY-ONE

Note: The length constraints apply to the string representation of non-string types as they would be stored in the database. This means that the length of a FLOAT field with value 34.000 would be 2 because after being stored in the database it will return 34 on a retrieval.

LENGTH-EQ (LEQ) attribute

The LENGTH-EQ attribute is followed by a number from 0 to 255 that indicates the required length of a field.

The LENGTH-EQ attribute is not allowed for automatic fields.

LENGTH-GE (LGE) attribute

The LENGTH-GE attribute is followed by a number from 0 to 255 that indicates the minimum length of a field.

The LENGTH-GE attribute is not allowed for automatic fields. You can pair the LGE attribute with the LLE attribute, creating a length range. However, if the resulting range is not valid, an error message is issued.

LENGTH-LE (LLE) attribute

The LENGTH-LE attribute is followed by a number from 0 to 255 that indicates the maximum length of a field.

The LENGTH-LE attribute is not allowed for automatic fields. You can pair the LLE attribute with the LGE attribute, creating a length range. However, if the resulting range is not valid, an error message is issued.

Setting an integer value constraint

For integer values, you can define a field with:

Integer attribute Specifies that the integer value must be...
INTEGER-GE Greater than or equal to the value that follows
INTEGER-GT Greater than the value that follows
INTEGER-LE Less than or equal to the value that follows
INTEGER-LT Less than the value that follows

The value stored for a field must be an integer with a value that satisfies the constraint. For example:

DEFINE FIELD NCHILDREN WITH INTEGER-LE 100

The NCHILDREN field can have an integer value of less than or equal to 100.

Note: An integer constraint will not accept a value that would be stored with a decimal place. If you have an integer constraint INTEGER-GE 10 and try to store value 10.0, it will fail because 10.0 is not an integer.

INTEGER-GE (INTGE), INTEGER-GT (INTGT), INTEGER-LE (INTLE) and INTEGER-LT (INTLT) attributes

The integer range attributes (INTGE, INTGT, INTLE, and INTLT) establish a range of acceptable integer values for a field. These attributes can be defined singly or as a pair on the field. If a record is added with an integer value that is not within the range, the record is rejected.

If you specify another type of range attribute on the same field, such as DATETIME-LE or FLOAT-GE, an error message is issued.

Setting a float value constraint

For float values you can define a field with:

Float attribute Specifies that the float value must be...
FLOAT-GE Greater than or equal to the value that follows
FLOAT-GT Greater than the value that follows
FLOAT-LE Less than or equal to the value that follows
FLOAT-LT Less than the value that follows

The value stored for a field must be a floating point value that satisfies the constraint. For example:

DEFINE FIELD WEIGHT WITH FLOAT-GT 10.5

The WEIGHT field must have a float value greater than 10.5.

FLOAT-GE (FLTGE), FLOAT-GT (FLTGT), FLOAT-LE (FLTLE) and FLOAT-LT (FLTLT) attributes

The float range attributes (FLTGE, FLTGT, FLTLE, and FLTLT) establish a range of acceptable float values for a field. These attributes can be defined singly or as a pair on the field. If a record is added with a float value that is not within the range, the record is rejected.

If you specify another type of range attribute on the same field, such as DATETIME-LE or INTEGER-GE, an error message is issued.

Setting a date/time value constraint

For date/time values you can define a field with:

Date/time attribute Specifies that the date/time value must be...
DATETIME-GE Later than or the same as the date/time value that follows
DATETIME-GT Later than the date/time value that follows
DATETIME-LE Earlier than or equal to the date/time value that follows
DATETIME-LT Earlier than the date/time value that follows

The value stored for a field must be a date/time value that satisfies the constraint. For example:

DEFINE FIELD BIRTHDATE WITH DATETIME-GE 18900101 DATETIME-LT 20100101

The BIRTHDATE field is required to have a date/time value greater than or equal to January 1, 1890 and less than January 1, 2010. As with any other date/time values, the values can be extended with HH, HHMM, HHMMSS, HHMMSSX, HHMMSSXX, and so on.

DATETIME-GE (DTGE), DATETIME-GT (DTGT), DATETIME-LE (DTLE), and DATETIME-LT (DTLT) attributes

The date/time range attributes (DTGE, DTGT, DTLE, and DTLT) establish a range of acceptable dates for the value of the field. If a record is added with a date that is not within the range, the record is rejected. You can specify a single date/time range attribute or a pair of them on a field.

If you specify another type of range attribute on the same field, such as FLOAT-LE or INTEGER-GE, an error message is issued.

Concatenated fields

Note: The concatenated fields feature is available as of Model 204 version 7.5.

A concatenated field is a field whose value is built from the values of other fields, which are listed with the CONCATENATION-OF attribute. Fields listed after CONCATENATION-OF:

  • Must be either AT-MOST-ONE, EXACTLY-ONE, or OCCURS 1 (any mixture of these field types is allowed)
  • Must all be in the same field group, or must all not be in any field group
  • Cannot have the FIELDGROUP *, BLOB, or CLOB attribute
  • Cannot have any of the range attributes

The following syntax is used to define a concatenated field with a separator and an escape character:

Syntax

DEFINE FIELD fieldname WITH CONCATENATION-OF -
   field1 WITH field2 [WITH field3 [WITH field4 -
   [AND . . .]]] -
   [SEPARATOR (char | X'hex' | NONE)] -
   [ESCAPE (char | X'hex' | CANCEL)] [OtherAttributes . . .]

The SEPARATOR character appears between the field values that comprise the concatenated field. Up to eight fields can be defined for a concatenated field.

A concatenated field value can exceed 255 bytes once separator and escape characters are added; in that case, the request is cancelled.

Concatenated fields have the following field attribute support.

Concatenated field attributes Purpose
CONCATENATION-OF (CAT) Lists the fields that make up the concatenated field
ESCAPE (ESC) Specifies a character that is used to escape separator characters that occur in a field that requires a terminating separator character
SEPARATOR (SEP) Specifies the character that terminates fields that do not have a fixed length and are not the last field in the concatenation

If a required attribute is not assigned a value, the default value is used. For example, if you define a CONCATENATION-OF field, which requires the SEPARATOR attribute, and you do not enter the SEPARATOR attribute, the default value of X'00' is used as the SEPARATOR value.

See the "Record retrievals" topic for more information on record retrieval of concatenated fields, including the use of EQ WITH.

CONCATENATION-OF (CAT) attribute

The CONCATENATION-OF (CAT) attribute is followed by a list of fields that make up that concatenated field. At least two are required and up to eight are allowed. The field names must be separated by the word WITH and subsequent field attributes must be separated from the list by the word AND or by a comma. Each field specified in the list of concatenated fields must have the EXACTLY-ONE, AT-MOST-ONE, or OCCURS 1 attribute specified.

If a field to be concatenated is an OCCURS 1 field, define its LENGTH attribute as the total of the longest possible concatenation (including separators) to avoid a "too long" result.

If LENGTH is less than 255, a "too long" result produces message M204.0738.

If LENGTH is 255, a "too long" result produces M204.2872, which cancels the request.

ESCAPE (ESC) attribute

The ESC attribute is followed by one of the following:

  • A single character in C'char' or X'hex' format that specifies the character used to escape separator characters that occur in a field requiring a terminating separator character. In addition to escaping the separator character, the escape character can also escape itself.
  • CANCEL, which means that an attempt to store a separator character in a field that requires a terminating separator results in request cancellation.

The default ESCAPE character is X'01'.

Once you define an ESCAPE value for a field, you cannot redefine the ESCAPE value.

The ESCAPE character cannot match the SEPARATOR character. The ESCAPE character is always displayed on DISPLAY FIELD output.

The ESCAPE attribute is not allowed on a field definition if the SEPARATOR NONE attribute is specified.

Using the ESCAPE attribute

The ESC character is inserted into a concatenated field when the real data being added to the database contains the SEPARATOR or the ESCAPE character itself. When reading in stored values, programs use the ESC character to differentiate between those special characters and normal data that happens to include those special characters. When the processing encounters the ESCAPE character in a concatenated field, it knows that the next character is real data.

Note: The ESCAPE character applies to all fields except the last field in the concatenation. The programmer is expected to know how many components comprise the concatenation; therefore, special characters occurring in the last component must be real data.

Consider the following field definitions:

DEFINE FIELD A_CONCAT_TEST1 WITH AT-MOST-ONE
DEFINE FIELD A_CONCAT_TEST2 WITH AT-MOST-ONE
DEFINE FIELD A_CONCAT_TEST3 WITH CONCATENATION-OF -
   A_CONCAT_TEST1 WITH -
   A_CONCAT_TEST2 AND -
   SEPARATOR C'+' -
   ESCAPE C'/'

If this request follows:

BEGIN
IN POLICIES STORE RECORD
   A_CONCAT_TEST1 = 1ST+VALUE+
   A_CONCAT_TEST2 = 2ND+VALUE+
END STORE
FRN $CURREC
   NP PAI
END FOR
END

The result is:

A_CONCAT_TEST1 = 1ST+VALUE+
A_CONCAT_TEST2 = 2ND+VALUE+
A_CONCAT_TEST3 = 1ST/+VALUE/++2ND+VALUE+

The plus signs in the stored records are intentionally part of the data. For the concatenated field, Model 204 stores the escape character preceding the plus signs in the data for all but the last field in the concatenation. This assumes that the programmer knows how many fields there are in the concatenation and therefore knows that any separator characters in the last field are part of the data.

In storing the escape character immediately before any plus signs that are part of the data proper, Model 204 tells the programmer that they are to be read as data. If there were any escape characters in the data proper, Model 204 puts an escape character before them as well, as in the following example:

BEGIN
IN POLICIES STORE RECORD
   A_CONCAT_TEST1 = 1ST/VALUE+
   A_CONCAT_TEST2 = 2ND+VALUE+
END STORE
FRN $CURREC
   NP PAI
END FOR
END

This is the output:

A_CONCAT_TEST1 = 1ST/VALUE+
A_CONCAT_TEST2 = 2ND+VALUE+
A_CONCAT_TEST3 = 1ST//VALUE/++2ND+VALUE+

If you do not want the separator or the escape character in the concatenated field, you can enforce this by assigning the value CANCEL as the escape character as follows:

DEFINE FIELD A_CONCAT_TEST3 WITH CONCATENATION-OF -
   A_CONCAT_TEST1 WITH -
   A_CONCAT_TEST2 AND -
   SEPARATOR C'+' -
   ESCAPE CANCEL

In that case, the concatenation is ignored, and the STORE RECORD results in the following:

A_CONCAT_TEST1 = 1ST/VALUE+
A_CONCAT_TEST2 = 2ND+VALUE+

Note: There are no error messages associated with this condition.

If you specify the ESCAPE CANCEL attribute for a field and the SEPARATOR character for that field is in one of the field values for the concatenated string, the update is cancelled. For an example, see Limitations when using ESCAPE CANCEL.

SEPARATOR (SEP) attribute

The SEP attribute is followed by one of the following options:

  • A single character in C'char' or an X'hex' format that specifies the character that terminates fields that do not have a fixed length and are not the last field in the concatenation. The separator character is always used between field values, regardless of any length constraints on the component fields.
  • NONE, which means that the fields in the concatenation are concatenated end to end with no separator character, regardless of their length.

If you define a field with the SEP attribute, you must supply a value. Otherwise, the following error is issued:

DEFINE MAKE_MODEL WITH CONCATENATION-OF MAKE WITH MODEL -
   AND ORD SEP ESC CANCEL
*** 1 M204.0405: INVALID DEFINE OPTION: SEP ESC

If you define a field without the SEP attribute, it defaults to X'00', as shown in the following code:

DEFINE MAKE_MODEL WITH CONCATENATION-OF MAKE WITH MODEL AND ORD ESC CANCEL
D FIELD (ABBREV) MAKE_MODEL
MAKE_MODEL (DEF NFRV NKEY NCOD STR NNR VIS UP ORD CHAR LRES 15 -
   NRES 15 SPLT 50 IMM 1 NUNIQ ONE CAT MAKE WITH MODEL AND -
   SEP X'00' ESC CANCEL)

Once you define a SEPARATOR value for a field, you cannot redefine the SEPARATOR value.

The SEPARATOR character cannot match the ESCAPE character. The SEPARATOR character is always displayed on DISPLAY FIELD output.

If the SEPARATOR NONE attribute is specified on a field definition, the ESCAPE attribute is not allowed. See Limitations when using SEPARATOR NONE on the "Record retrievals" page.
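
As a minimal sketch, a concatenation with no separator might be defined as follows (this example assumes the MAKE and MODEL fields used earlier on this page):

DEFINE MAKE_MODEL WITH CONCATENATION-OF MAKE WITH MODEL AND SEPARATOR NONE

Because SEPARATOR NONE is specified, no ESCAPE attribute can be included in this definition.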

Defining concatenated fields

Using the attributes described in the preceding subsections, you can define these concatenated fields:

IN FILE REDSOX DEFINE FIELD COLMOD WITH CONCATENATION-OF COLOR -
   WITH MODEL AND SEP C'-' ESC C'+'
IN FILE REDSOX DEFINE FIELD MODCOL WITH CONCATENATION-OF MODEL -
   WITH COLOR AND SEP C'$' ESC C'*'
IN FILE REDSOX DEFINE FIELD MODCOLI WITH CONCATENATION-OF MODEL -
   WITH COLOR AND SEP X'40' ESC C'*' INVISIBLE ORDERED
IN FILE REDSOX DEFINE FIELD MCY WITH CONCATENATION-OF MODEL -
   WITH COLOR WITH YEAR AND SEP X'40' ESC C'*' INVISIBLE ORDERED
IN FILE REDSOX DEFINE FIELD COLCNT WITH CONCATENATION-OF COLOR -
   WITH COUNTRY AND SEP X'40' ESC C'*' INVISIBLE ORDERED
IN FILE REDSOX DEFINE FIELD CNTYR WITH CONCATENATION-OF YEAR -
   WITH COUNTRY AND SEP X'40' ESC C'*' INVISIBLE ORDERED

Defining concatenated fields in a field group context

IN FILE REDSOX DEFINE FIELD BATTERS
IN FILE REDSOX DEFINE FIELDGROUP STARTERS
IN FILE REDSOX DEFINE FIELD NTRIP WITH CTO STARTERS AND EXACTLY-ONE
IN FILE REDSOX DEFINE FIELD A WITH AT-MOST-ONE FIELDGROUP STARTERS AND -
   ORDERED CHARACTER
IN FILE REDSOX DEFINE FIELD B WITH AT-MOST-ONE FIELDGROUP STARTERS AND -
   KEY
IN FILE REDSOX DEFINE FIELD C WITH AT-MOST-ONE FIELDGROUP STARTERS AND ORDERED NUMERIC
IN FILE REDSOX DEFINE FIELD AB WITH FIELDGROUP STARTERS AND -
   CONCATENATION-OF A WITH B AND SEP C'/'
IN FILE REDSOX DEFINE FIELD ABO WITH FIELDGROUP STARTERS AND -
   CONCATENATION-OF B WITH A AND SEP C'+' ORDERED
IN FILE REDSOX DEFINE FIELD ABI WITH CONCATENATION-OF B WITH A AND -
   SEP C'+' ORDERED INVISIBLE

Note: A field in a concatenated field value does not have a terminating separator only if it is:

  • The last field in the concatenation, or
  • Fixed length (LENGTH-EQ set) and the default value has the same length as LENGTH-EQ.

Automatic fields

Note: Automatic fields are available as of Model 204 version 7.5, and they require the FILEORG X'100' setting.

Model 204 lets you define a field whose value is automatically maintained. For example, a field might count occurrences of another field, so that every store or delete of an occurrence of that field changes the count in the automatic field. An automatic field is defined with one of the following attributes:

  • CHUNK
  • COUNT-OCCURRENCES-OF
  • CREATE-TIME
  • CREATE-TIMEUTC
  • CREATE-USER
  • UPDATE-TIME
  • UPDATE-TIMEUTC
  • UPDATE-USER

The value of an automatic field is updated at the start of a transaction by Model 204; you cannot set it explicitly in a program. Any valid update causes the appropriate time and user stamps to be updated, even if it does not actually change the record; for example, DELETE FOO(8) when there are no occurrences of FOO in the record.

Once you define an automatic value for a field, you cannot redefine the automatic value.

SOUL Add, Insert, Delete, and Store Record statements are rejected with a compile error if they reference an automatic field:

M204.2862: ATTEMPT TO UPDATE AUTOMATIC FIELD fieldname

If a field name variable (FNV) used on an update statement resolves to an automatic field, the request is cancelled with the following error:

M204.2863: ATTEMPT TO DO updateOperation FOR AUTOMATIC FIELD: fieldname

On an update statement in group context, the M204.2863 error is also issued if the field being updated is discovered to be an automatic field in one of the group's files at evaluation time.

Automatic field context

Automatic fields can operate in a record or field group context. For example, with the following set of fields:

...
DEFINE FIELDGROUP ADDRESS
...
DEFINE FIELD RECORD.UPDT.TIME (DATETIME UPDT-TIME)
DEFINE FIELD ADDRESS.LINE.1 (SN NONE FG ADDRESS)
...
DEFINE FIELD ADDRESS.UPDT.TIME (DATETIME FG ADDRESS AND UPDT-TIME)
...

  • Any update to any field in a record will automatically update RECORD.UPDT.TIME.
  • Any update to any field in an occurrence of the ADDRESS field group will cause the ADDRESS.UPDT.TIME for that occurrence to be updated. (And, of course, the record level RECORD.UPDT.TIME as well.)

Rules for the definition of automatic fields

CHUNK attribute

Fields defined with the CHUNK field attribute cannot be redefined or deleted.

COUNT-OCCURRENCES-OF (CTO) attribute

The COUNT-OCCURRENCES-OF (CTO) attribute is not allowed for fields defined as REPEATABLE.

Because one of the major advantages of CTO fields is to avoid unnecessary record scans, it is highly recommended that the field be preallocated. Otherwise the field is stored and updated in Table B in the normal way.

See CTO fields for details on its use.

CREATE* and UPDATE* attributes

The CREATE-TIME, CREATE-TIMEUTC, CREATE-USER, UPDATE-TIME, UPDATE-TIMEUTC, and UPDATE-USER automatic fields capture the add/update user/time as of the start of the transaction. Any of these attributes can be added to empty or non-empty fields (using a REDEFINE command). The value is maintained by Model 204 processing, so you cannot change the value.
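
For example, a minimal sketch of adding update tracking to an existing field with a REDEFINE command (the LAST.CHANGED field name is hypothetical):

REDEFINE FIELD LAST.CHANGED WITH UPDATE-TIME

From that point on, Model 204 maintains the field value; programs can read it but cannot set it.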

Fields with these automatic attributes cannot be defined as REPEATABLE (the default is AT-MOST-ONE).

See Tracking updates using automatic fields for details on their use.

Displaying automatic fields

You can specify the AT-MOST-ONE or EXACTLY-ONE attribute for an automatic field. These attributes affect the order in which the fields are displayed on a PAI statement. EXACTLY-ONE fields are displayed in the order they were defined, whereas AT-MOST-ONE fields are displayed in the order they are stored in Table B.

Currently Table B stores the following automatic fields in the order shown:

  1. UPDATE-TIME
  2. UPDATE-USER
  3. CREATE-TIME
  4. CREATE-USER

CTO fields: counting occurrences of a field

To create an automatic count field you use the COUNT-OCCURRENCES-OF (CTO) keyword on a field definition, followed by the name of the field or field group for which the count is automatically maintained. For example, if you have field group DRUG you could define an automatic count field as follows:

DEFINE FIELDGROUP DRUG
DEFINE FIELD DRUGCT WITH COUNT-OCCURRENCES-OF DRUG

You cannot define a CTO field in a file that has had records already added to it.

You could also make the automatic count field a binary field:

DEFINE FIELD DRUGCT WITH CTO DRUG AND BINARY OCCURS 1
DEFINE FIELD DRUGCT WITH COUNT-OCCURRENCES-OF DRUG AND -
   BINARY

Note: AND is a separator between a field or field group name and other field attributes.

An automatic count field can also be an OCCURS 1 field:

DEFINE FIELD DRUGCT WITH COUNT-OCCURRENCES-OF DRUG AND BINARY OCCURS 1

An automatic count field could even be ORDERED, KEY or NUMERIC RANGE:

DEFINE FIELD DRUGCT WITH CTO DRUG AND BINARY OCCURS 1 ORDERED NUMERIC

You can find all records with N occurrences of field group DRUG, or you can find those that have more or fewer than some number of occurrences.
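
As a minimal sketch, assuming DRUGCT is defined with ORDERED NUMERIC as shown above (the statement labels are arbitrary), a request might count the records that have more than two DRUG occurrences:

BEGIN
MANY.DRUGS: FIND ALL RECORDS FOR WHICH
   DRUGCT IS GT 2
END FIND
CT: COUNT RECORDS IN MANY.DRUGS
PRINT COUNT IN CT
END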

Limitations on COUNT-OCCURRENCES-OF (CTO) fields

Fields defined with the CTO attribute cannot also have the UPDATE AT END attribute. You cannot define a CTO field that counts occurrences of UPDATE AT END fields.

You cannot use a CTO field to count occurrences of OCCURS fields, even if the OCCURS count is greater than 1.

Also, you cannot define a CTO field in a file that has had records already added to it.

There is a maximum of one CTO field per field or field group.

Retrieving the value of a CTO field

You can retrieve values of CTO fields via the field name (as with any VISIBLE field), as in:

PRINT DRUGCT

or

%DRUGCT = DRUGCT

Automatic optimization with COUNT-OCCURRENCES-OF fields

The use of CTO fields automatically optimizes many of the record scan processes:

  • FOR EACH OCCURRENCE OF loops.

    If a CTO field exists for the field or field group that is being looped over via an FEO or FAO statement, the value of that field limits the number of times the loop is processed, which reduces the need to scan to the end of the record for every FEO or FAO statement. Note that when a record is updated, or might be updated, mid-loop, the normal Model 204 concerns about subscript handling in these statements still apply. (A usage sketch follows this list.)

  • FOR FIELDGROUP clause

    Before processing a FOR FIELDGROUP loop, Model 204 checks whether the requested occurrence is in the record based on the CTO field. If the User Language request is for occurrence 3, but the CTO field indicates only two occurrences in the record, Model 204 stops processing the FOR loop.

  • Field extraction

    Before extracting a field Model 204 checks whether the requested occurrence is in the record based on the CTO field. If the User Language request is occurrence 3, but the CTO field indicates only two occurrences in the record, Model 204 sets the result to null (or NOT PRESENT).

    Note that in the case of the subscripted field optimization:

    FOR %I FROM 1 TO %N
       %FOO = FOOFIELD(%N)
    END FOR

    Model 204 checks only the number of occurrences on the first iteration of the loop.

In all cases, the greatest benefit from the CTO field optimization is when a record contains no occurrences of the requested field or field group. Model 204 immediately determines that no occurrence is on the record and stops scanning the record.
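
The following is a minimal sketch of an FEO loop that benefits from this optimization. It assumes a hypothetical repeatable field NICKNAME and a CTO field NICKNAMECT defined with COUNT-OCCURRENCES-OF NICKNAME AND ORDERED NUMERIC, in a file context that is already established:

BEGIN
HAVE.NAMES: FIND ALL RECORDS FOR WHICH
   NICKNAMECT IS GT 0
END FIND
FOR EACH RECORD IN HAVE.NAMES
   NAMES: FOR EACH OCCURRENCE OF NICKNAME
      PRINT VALUE IN NAMES
   END FOR
END FOR
END

Because NICKNAMECT already holds the occurrence count, Model 204 does not have to scan to the end of each record to determine when the inner loop is finished.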

Defining an efficient CTO field

The counter fields are always physically present in the record. Using a DEFAULT-VALUE of zero for such a field defeats its purpose, as you would need to scan the entire record to determine whether to use the default value. When you issue a STORE RECORD statement, all CTO fields are stored as 0 in the record. Non-OCCURS CTO fields are always the first variable-length fields in the record.

Model 204 customer support recommends that you define a CTO field as BINARY OCCURS 1, as these require only four bytes per record, whereas the minimum length of a variable-length field is four bytes (for a compressed 0) and at least five bytes for any non-zero value. A CTO field inside a field group cannot be an OCCURS field.

The absolute maximum value for CTO fields is currently X'3FFFFFFF' or 1,073,741,823 since BINARY fields use the high order two bits to distinguish values from coded values.

You cannot update CTO fields; attempts are caught at compile-time for explicit file-context references and at run-time for group-context and field-name variables.

CTO fields can also be defined within field groups; zeros are stored for CTO fields when an ADD FIELDGROUP or INSERT FIELDGROUP statement is processed.

Displaying CTO fields

By default, the PRINT ALL INFORMATION (PAI), AUDIT ALL INFORMATION (AAI), PRINT ALL FIELD GROUP INFORMATION (PAFGI), and AUDIT ALL FIELDGROUP INFORMATION (AAFGI) statements do not display CTO (counter) fields. When you are using PAI [INTO] to reorganize your files or to copy records, the automatic fields are likely to cause problems.

Under other circumstances, you can use an argument on these statements to indicate that you do want automatic fields to be printed.

Tracking updates using automatic fields

Versions 7.5 and higher of Model 204 can track the date/time when a record was created or updated, and what user ID created or updated it. This tracking can be accomplished by using one of the following field tracking attributes:

Tracking attribute Abbreviation Tracks the...
CREATE-TIME CRTM Moment the record was created using machine time
CREATE-TIMEUTC CRTMU Moment the record was created using Coordinated Universal Time
CREATE-USER CRUS User ID that created the record
UPDATE-TIME UPTM Moment the record was updated using machine time
UPDATE-TIMEUTC UPTMU Moment the record was updated using Coordinated Universal Time
UPDATE-USER UPUS User ID that updated the record

All of the automatic fields that contain date/time information also have the DATETIME attribute.

You cannot update tracking fields with SOUL.

A record or field group can contain only a single field of each update tracking type (CREATE-USER, UPDATE-USER, CREATE-TIME, UPDATE-TIME). An attempt to define a second one results in an error message.

You can define update tracking fields (CREATE-TIME, UPDATE-TIME, CREATE-USER, UPDATE-USER) in an already populated file, even an update tracking field for the main records or a field group that already has occurrences. You can also define a DEFAULT-VALUE for the field, which defaults to a null string. This lets you define such fields with or without a reorganization.

Tracking fields for records

The file itself can have a single field defined for each of the tracking fields.

An attempt to define a second field of the same type on the file results in an error message, for example:

DEFINE FIELD RECORD.CREATE.TIME WITH CREATE-TIME
DEFINE FIELD RECORD.CREATE-TIME2 WITH CREATE-TIME
*** M204.2866: FILE ALREADY HAS A CREATE-TIME FIELD

The UPDATE tracking fields for a record are updated when any field in the record changes, and the CREATE tracking fields are added to the record when it is initially stored.

Tracking fields for field groups

Automatic tracking fields can also provide a date/time and user stamp of the creator and last update of each occurrence of a field group.

If an UPDATE-TIME tracking field is defined as a part of field group X as shown below, then each occurrence of that field group will contain the tracking field:

DEFINE FIELD MY_UPDATE_TIME WITH FIELDGROUP X AND ORD NUM UPDATE-TIME

DATETIME (DT) field values

This section explains the format and other considerations for fields with the DATETIME attribute, whether they are automatic update tracking fields or not.

A DATETIME value is a date of the format:

YYYYMMDDHHMISSXXXXXX

Of such a 20-digit value:

  • The date portion (YYYYMMDD) is required; therefore, the minimum length is 8 digits. It must specify a valid date.
  • The time portion (up to 12 digits) is optional; any leading part of it (hours, minutes, seconds, and decimal fractions of a second) can be specified. It must specify a valid time: if HH, MI, or SS is specified, each must be two digits. Additionally, up to 6 decimal fraction-of-a-second digits may be specified (that is, down to microsecond precision).

Setting a date/time field

The DATETIME field attribute indicates the format of the data stored in Table B.

Once you define a field as DATETIME, you assign a string to it in YYYYMMDDHHMMSSXXXXXX format:

IN FILE FOO DEFINE FIELD WHEN WITH DATETIME
...
BEGIN
IN FILE FOO STORE RECORD
   WHEN = '20100704134706435163'
END STORE
END

If you attempt to store the field with invalid data such as WHEN = '72', the request is cancelled and you receive an error:

*** 1 CANCELLING REQUEST: M204.2865: ATTEMPT TO ADD INVALID DATETIME VALUE: WHEN = 72

As most timestamps do not require microseconds, you can specify 10ths, 100ths, 1,000ths, 10,000ths and 100,000ths of a second. The value is simply padded on the right with zeros. For date/time field WHEN, the following are all equivalent:

ADD WHEN = '20090704134706000000'
ADD WHEN = '2009070413470600000'
ADD WHEN = '200907041347060000'
ADD WHEN = '20090704134706000'
ADD WHEN = '2009070413470600'
ADD WHEN = '200907041347060'
ADD WHEN = '20090704134706'

In addition, you can leave off seconds, minutes or hours so the following are all equivalent:

ADD WHEN = '20090704000000'
ADD WHEN = '200907040000'
ADD WHEN = '2009070400'
ADD WHEN = '20090704'

which are also, of course, equivalent to

ADD WHEN = '20090704000000000000'

Storing dates or date/times efficiently

DATETIME fields use seven bytes to store their value (plus four bytes overhead, if the field is not preallocated). A binary representation is used to store the 20-character full date format so that it can be held in seven bytes.

The shortest valid format for a field with the DATETIME attribute is YYYYMMDD. If the values you are storing are simple dates, they can be stored in four bytes as binary values, three fewer bytes per value than a date/time field. However, for anything with a time in it (even just hours), a DATETIME field takes less space than the alternative field types (FLOAT or STRING).
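
For example, a minimal sketch of storing a pure date in a 4-byte binary field (the HIREDATE field name is hypothetical):

DEFINE FIELD HIREDATE WITH BINARY OCCURS 1
...
ADD HIREDATE = 20090704

The trade-off is that such a field gets none of the automatic validation or time-portion support described below for DATETIME fields.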

Automatic validation of date/times

In addition to the compact storage of date/time stamps, DATETIME fields also provide automatic validation of the values you attempt to store in them — you cannot store numbers into them that are not valid dates. For example, you cannot store '9999111' into a DATETIME field, nor can you store '20070931' (because September does not have 31 days), nor can you store '20070229' (because 2007 is not a leap year, so February only has 28 days).

DATETIME-formatted fields thus also provide content validation for date/time fields.

Note: It is possible to set the DEFAULT-VALUE for a DATETIME field to an invalid date/time value, such as a hyphen (-). That value will not be stored in the database, but it would be the value returned for a reference to a missing EXACTLY-ONE DATETIME field value.

The value returned for DATETIME fields is always in this format:

YYYYMMDDHHMMSSXXXXXX

If you are interested in only part of the time stamp, you can use either of these:

  • The $Substr function, to get the part you want.
  • A %variable, specified to be the length of the part of the time stamp you are interested in.

    So, to use %when to hold a date/time in YYYYMMDDHHMMSS format:

    %when is string len 14
    ...
    %when = WHEN

DATETIME values can be indexed as ORDERED CHARACTER or ORDERED NUMERIC. If indexed as ORDERED NUMERIC, only the first 15 digits (YYYYMMDDHHMMSSX) are indexed. If ORDERED CHARACTER, it is indexed as the string representation of the date/time value in the full YYYYMMDDHHMMSSXXXXXX format.
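
For example, a minimal sketch of a DATETIME field indexed for numeric range retrieval (WHEN matches the earlier example; pairing it with ORDERED NUMERIC here is just an illustration):

IN FILE FOO DEFINE FIELD WHEN WITH DATETIME ORDERED NUMERIC

With this definition, only the first 15 digits (YYYYMMDDHHMMSSX) of each value participate in the Ordered Index.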

Note: Making DATETIME fields KEY fields is not allowed.

Automatic update tracking field considerations

Automatic DATETIME field values

Automatic DATETIME fields are set to the date/time of the start of the update processing, so all records, fields, and field groups updated by a single update transaction get the same time stamp. Furthermore, the code that starts an update unit ensures that the date/time stamp of each update unit is unique.

The M204.0173 and M204.0172 messages include the time of the update as follows:

M204.0173: START OF UPDATE 12 AT 10:11:00.51

and:

M204.0172: END OF UPDATE 12 AT 10:11:00.56

Defining various UPDATE-USER fields

To indicate whether an update tracking field applies to a record or field group, specify or do not specify the containing field group on the field definition. For example:

DEFINE FIELD WHO WITH UPDATE-USER

The WHO field contains the user ID of the last user that updated anything in the record. The following example illustrates defining an UPDATE-USER field in a field group:

DEFINE FIELD DR.WHO WITH UPDATE-USER FIELDGROUP DR

The DR.WHO field contains the user ID of the last user that updated an occurrence of field group DR.

Defining various UPDATE-TIME fields

You can specify an update tracking field as FIELDGROUP *, which means that all field groups and the primary record all use the same update tracking field. For example:

DEFINE FIELD WHEN WITH UPDATE-TIME FIELDGROUP *

The WHEN field is placed in the primary record and in each field group to track the last update time. If a file has a FIELDGROUP * update tracking field, and a field group or the record also has its own comparable update tracking field, the field group-specific or record-specific tracking field applies instead. For example:

DEFINE FIELD WHEN WITH UPDATE-TIME FIELDGROUP *
DEFINE FIELD MAIN.WHEN WITH UPDATE-TIME OCCURS 1
DEFINE FIELD FOO.WHEN WITH UPDATE-TIME FIELDGROUP FOO

  • The field WHEN would be updated for changes to any field group occurrences in field groups that do not have their own UPDATE-TIME field defined.
  • The field MAIN.WHEN would be set for any update to any field in the record.
  • The field FOO.WHEN would be set in field group FOO for any update to a field group occurrence.

The UPDATE-TIME field in a field group that has a nested field group will be updated when the nested field group is updated, even if no fields in the containing field group were updated.

System-wide approach to adding a date-time stamp field to your files

You can establish a single date-time stamp field with the DTSFN parameter. The DTSFN field is used in all files in your Online that have the X'10' bit set in their FOPT parameter. The date-time stamp feature puts a date-time stamp on each record that was updated in a transaction. You can then use the data in the DTSFN field in a user-written application to find and process all the rows of data that were changed.

An application that processes these records can track the date-time stamp field values that have been processed to date, or it might take another action, including updating the date-time stamp field.

To install and use this functionality, see Setting up the date-time stamp field. Once installed, the date-time stamp feature works only in files created in V6R1 or later. See also Adding a date-time stamp field.

Note: There is no support in PQO/204 for the date-time stamp feature.

Difference between the system wide date-time stamp field and an UPDATE-TIME field

Besides the obvious difference that the date-time stamp field will (and must) have the same name in every file in which it is used, the following are the notable differences between the two approaches:

  • The UPDATE-TIME attribute can be set to track updates to physical field groups, as well as at the record level.
  • An UPDATE-TIME field captures the time at the start of an update unit. The date-time stamp field captures the time at the commit point.
  • The date-time stamp field updates can be temporarily suspended.

Assigning fields to field groups

FIELDGROUP attribute

The FIELDGROUP attribute has no default and can be abbreviated as FG. It is available as of Model 204 version 7.5.

The FIELDGROUP attribute specifies the name of the field group that the defined field is associated with (contained in). Once you define a FIELDGROUP value for a field, you cannot redefine the FIELDGROUP value.

The FIELDGROUP attribute cannot refer to a field group name that has not yet been defined to the file. The DEFINE FIELDGROUP command must occur before the DEFINE FIELD command.

Syntax

FIELDGROUP [fieldgroupname | *]
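
For example, a minimal sketch of the required ordering (the ADDRESS field group and CITY field are hypothetical):

DEFINE FIELDGROUP ADDRESS
DEFINE FIELD CITY WITH FIELDGROUP ADDRESS

Reversing the two commands would produce an error, because the CITY definition would refer to a field group that does not yet exist.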

The FIELDGROUP attribute does not allow:

  • Record security
  • Use in a SORT or HASH file
  • The 1NF file model

The FIELDGROUP attribute can be used with the STORE-NULL LITERAL attribute.

Using FIELDGROUP *

The FIELDGROUP * attribute means that the field will be included in all field groups. The EXACTLY-ONE attribute conflicts with the FIELDGROUP * attribute.

LEVEL attribute - prevent unauthorized field usage

To secure a field against unauthorized access, include a LEVEL clause in the field's description. The Field-level security topic discusses field-level security and describes the types of field access: SELECT, READ, UPDATE, and ADD.

Field-level security has a negligible impact on both performance and storage usage.

Field naming

The rules for field and field group naming are contained in Field names.