File sizing introduction
Overview - Two approaches to File Sizing
After designing the data structures you are implementing (see field and Repeating Field Group design), there are two ways for a file manager to approach the calculation of file sizes:
You can take the ad-hoc approach, making sizing estimates and then either:
- iteratively loading a sampling of records to verify them
- using the development and testing process to make final sizing decisions
or, of course, a combination of both.
Alternatively, you can do a detailed analysis of the data you expect the file to contain and try to derive precise sizes for the Model 204 tables, which is a laborious process.
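If you take the ad-hoc route, a starting CREATE FILE stream might look like the following minimal sketch. The file name and every parameter value are hypothetical rough estimates (for example, BSIZE chosen as the expected record count divided by an assumed BRECPPG); they are a starting point to be refined, not recommended settings.

 * All names and values below are invented rough estimates, to be refined
 * by loading a sampling of records during development and testing.
 CREATE FILE CLAIMS
 PARAMETER ATRPG=1,ASTRPPG=400
 PARAMETER BSIZE=5000,BRECPPG=20,BRESERVE=60
 PARAMETER CSIZE=50
 PARAMETER DSIZE=2000
 END

Loading a representative sample of records then shows how far off these guesses are, and most of the sizes can be adjusted before the file reaches production.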
Choosing an Approach
Most Model 204 file managers use the ad-hoc design approach. Often there is already a production file with similar characteristics to the new file you are creating. Simply copying its parameters as a starting point is a quick way to get a file ready for development and testing.
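For example (the file name is invented), you can open the similar production file and display its parameters, then reuse them as the starting values for the new file; a minimal sketch, assuming the file is already allocated and you have the privileges to open it:

 OPEN ORDERS12
 * Display the file parameters, including the table sizes,
 * to copy as starting values for the new file's CREATE stream.
 VIEW FPARMS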
Moreover, most of the sizes can easily be changed dynamically, so extreme precision is not critical.
However, the detailed Model 204 file sizing rules do provide knowledge that a file manager should be grounded in. To use (or simply understand) the sizing rules, see File Size Calculation in Detail.
Critical, Up Front Decisions
There are, however, certain decisions which are more difficult to fix and so should be as 'correct' as possible, as early as possible:
Because the Internal File Dictionary is hashed, it can only be resized by reorganizing/recreating the file. This requires an outage of the Online, and so should be avoided.
You can look at the detailed calculation rules at Sizing Table A or, given how small Table A is compared to the other tables, simply round your rough estimate up for ATRPG and round ASTRPPG down (so that fewer field definitions fit on each Table A page).
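As a purely hypothetical illustration of that rounding: if you expect roughly 700 field definitions and assume about 300 attribute strings comfortably fit on a Table A page, then 700 / 300 is about 2.3 pages, so you might round ATRPG up to 3 and round ASTRPPG down to 300. In the CREATE stream that would appear as something like:

 PARAMETER ATRPG=3,ASTRPPG=300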
Like the Internal File Dictionary, Table C is hashed and so cannot be dynamically resized.
The easiest way to make sure that this is not an issue is to define no KEY or NUMERIC RANGE fields: make them ORDERED instead. This has the associated advantage, if you use FILEORG X'100' files, of permitting up to 32000 field names in a file.
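A hypothetical sketch of that choice (the field name is invented, and the exact DEFINE FIELD syntax and attribute defaults should be checked against the DEFINE FIELD documentation); the two definitions are alternatives, and you would use only one of them:

 * Alternative 1: KEY is indexed in hashed Table C (sized by CSIZE, not dynamically resizable)
 DEFINE FIELD ACCTID (KEY)
 * Alternative 2: ORDERED is indexed in the Ordered Index in Table D, which can grow dynamically
 DEFINE FIELD ACCTID (ORDERED CHARACTER)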
Number of Datasets
You can dynamically add datasets to a Model 204 file.
The reason it is better not to have to is that there may be JCL containing file references that would need to be updated at the same time.
Unless space is at a premium, it is a good idea to define the dataset(s) larger than you need, which gives you the ability to increase the Tables, automatically or manually, without issue.
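For example (file name and numbers invented), you can check how full Table B is and, provided the allocated dataset(s) have spare room, add pages with the INCREASE command; a minimal sketch:

 OPEN CLAIMS
 * Compare the highest active Table B page (BHIGHPG) with the current size (BSIZE)
 VIEW BSIZE,BHIGHPG
 * Add 500 pages to Table B; this assumes the allocated dataset(s) still have room
 INCREASE TABLEB 500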