File sizing introduction
Revision as of 19:56, 16 December 2013
Overview - Two approaches to File Sizing
After designing the data structures you are implementing (see field and field group design) there are two ways for a file manager to approach the calculation of file sizes:
You can take the ad-hoc approach, by making sizing estimates and either:
- iteratively load a sampling of records to verify
- use the development and testing process to make final sizing decisions
or, of course, a combination of both.
Alternatively, you can do a detailed analysis of the data you expect the file to contain and try to derive precise sizes for the Model 204 tables, a laborious process.
Choosing an Approach
Most Model 204 file managers use the ad-hoc design approach. Often there is already a production file with similar characteristics to the new file you are creating. Simply copying its parameters as a starting point is a quick way to get a file ready for development and testing.
Moreover, most of the sizes can be easily changed dynamically, so an extreme level of precision is not overly important.
However, it is valuable for a file manager to be grounded in the principles of Model 204 file size calculation. So, to use (or just understand) the sizing rules, see File size calculation in detail.
Critical, Up Front Decisions
There are, however, certain decisions which are more difficult to fix and so should be as 'correct' as possible, as early as possible:
Because the Internal File Dictionary is hashed, it can only be resized by reorganizing or recreating the file. This requires an outage of the Online, and so should be avoided.
You can look at the detailed calculation rules at Sizing Table A or, given how small Table A is compared to the other tables, simply round up your rough estimate for ATRPG and round down your estimate for ASTRPPG (to fit fewer field definitions on each Table A page).
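As a rough illustration of that rule of thumb, the arithmetic can be sketched in Python. This is only a sketch: the helper name and the single page of slack are illustrative assumptions, and it assumes (per the description above) that ATRPG counts Table A attribute pages while ASTRPPG is the number of field definitions held per page.

```python
import math

def rough_atrpg(num_fields, astrppg):
    """Rough-cut Table A sizing sketch: pages needed to hold
    num_fields field definitions at astrppg definitions per page,
    plus one spare page of headroom (an illustrative choice,
    not a Model 204 rule)."""
    return math.ceil(num_fields / astrppg) + 1

# e.g. 450 field definitions at 100 per page -> 6 pages
```

Lowering the astrppg argument, as the text suggests, only increases the page count, which is the safe direction for a parameter that is awkward to change later.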
Like the internal file dictionary, Table C is hashed and so can not be dynamically changed.
The easiest way to make sure that this is not an issue is not to define any KEY or NUMERIC RANGE fields: make them ordered instead. This has the associated advantage, if you use FILEORG x'100' files, of permitting up to 32,000 field names in a file.
Number of Datasets
You can dynamically add datasets to a Model 204 file.
The reason it is better not to have to is that there may be JCL containing file references that would need to be updated at the same time.
Unless space is at a premium, it is a good idea to define the dataset(s) larger than you need, which gives you the ability to increase the tables, automatically or manually, without issue.
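The padding idea above can be sketched numerically. The function name and the 50% headroom factor below are purely illustrative assumptions, not Model 204 recommendations; the point is simply that allocating above the estimate leaves room for the tables to grow in place.

```python
import math

def padded_pages(estimated_pages, headroom=0.5):
    """Sketch of over-allocation: pad an estimated table size so it
    can be expanded without adding datasets. The 50% default slack
    is an arbitrary illustrative choice."""
    return math.ceil(estimated_pages * (1 + headroom))

# e.g. an estimate of 1000 pages -> allocate 1500
```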