Model 204 configurations and operating environments
This page provides an overview of Model 204 for the person managing Model 204 system resources. It summarizes the structure of the Model 204 database management system, the basic Model 204 configurations, and the hardware and software environments required to run Model 204.
For a list of all the Model 204 system management topics, see Category:System management.
About Model 204
Model 204 is a complete database management system that provides facilities for the creation, control, query, and maintenance of database files. Data intensive batch and online application systems can be developed with Model 204's self-contained SOUL language (formerly called User Language) and embedded TP monitor. Application languages, such as Assembler, COBOL, PL/I, and FORTRAN can communicate with Model 204 functions through the Model 204 Host Language Interface. Model 204 supports SQL queries using the Horizon and CRAM interfaces.
Key elements of the Model 204 operating environment include the following:
- Sequential system and work files provide system input and output.
- Tables store information necessary for running specific requests.
- Runtime parameters handle execution-time and system-input-stream specifications.
- User parameters control the environment of each online user.
A Model 204 database consists of files that utilize large fixed-length physical pages of 6184 bytes each. Each file is logically composed of four tables in which indexes and data are maintained. Entry order, unordered, sorted, and hash key file structures are supported.
Model 204 system management
Rocket Software recommends that each installation designate a person to be the Model 204 System Manager. The System Manager's responsibilities include:
- Installing and maintaining the Rocket Model 204 software
- Setting up and maintaining the OS or DOS JCL or CMS EXECs and Model 204 parameters for the Model 204 configurations used at an installation
- Setting up system security
- Acting as a liaison with Rocket Technical Support
The System Manager's position is usually a part-time job in a small to medium size database environment. Often the System Manager also has the responsibilities of the File Manager, also known as the Model 204 database administrator. If you are the System Manager only, Rocket Software strongly recommends that you study the Model 204 files topics as well as the system management topics, because you will be working closely with the File Manager.
Model 204 configurations
The basic Model 204 configurations are listed below. Various configurations, such as IFAM2 and ONLINE, can be combined to meet site requirements.
- BATCH204: Handles a single user in batch mode.
- ONLINE: Supports a batch user and a number of online users.
- IFAM1: Supports host language calls to Model 204 files from a single user. The host language program and Model 204 run in the same region, partition, or virtual machine.
- IFAM2: Supports host language calls to Model 204 files from multiple users. Each program operates in its own region, partition, or virtual machine.
- IFAM4: Supports host language calls to Model 204 files from multiple users. Programs are run as separate tasks in a single region, partition, or virtual machine.
- BATCH2: Establishes a User Language connection to a Model 204 ONLINE running in a separate region.
Model 204 uses the same basic system architecture, file architecture, and User Language with each compatible operating system. Unique system characteristics that must be considered when running Model 204 are discussed, where appropriate, throughout this manual.
Model 204 is compatible with all versions of z/OS, z/VM, and z/VSE currently supported by IBM. For details, see Model 204 system requirements.
The Model 204 ONLINE configurations for each operating system are explained in detail in the appropriate Rocket Model 204 Installation Guides. Refer also to Defining the runtime environment (CCAIN), Defining the user environment (CCAIN), and Controlling system operations (CCAIN) for information on system and user commands and parameters.
Model 204 runs in an IBM mainframe environment with z/OS, z/VM, and z/VSE operating systems.
Model 204 runs on all mainframes that support the z/Architecture Principles of Operation. Mainframes with ESA/390 architecture are supported from the 9672 RA4 (Generation 3) and Multiprise 2000 and up.
MP/204 is an optional enhancement to Model 204 that provides full support for multiprocessor hardware configurations under z/OS.
31-bit addressing is also supported in all operating environments. When running in 31-bit mode, Model 204 allocates several data structures above the 16-megabyte line.
Model 204 supports 64-bit real storage and captured UCBs allocated above the 16-megabyte line for the z/OS and z/VM operating systems.
Direct Access Storage Devices (DASD)
Model 204 supports standard IBM DASD, including the 337x, 9332, and 9335 device types.
A shared DASD environment is one in which disk volumes may be shared between operating systems. Model 204 is supported under the three IBM operating systems: z/OS, z/VM, and z/VSE. A number of customers run Model 204, concurrently, under at least two of those three systems.
Many customers run Model 204 in the z/OS LPAR (Logical PARtition) environment where different versions of z/OS share disk volumes. In these systems, Model 204 jobs, ONLINE or BATCH, may be running concurrently in any of the defined LPARs and may be sharing disk volumes between those LPARs.
Model 204 file integrity in these multi-operating-system environments has always been provided through the Shared DASD Enqueue list, stored and maintained in the File Parameter List (FPL) page of each Model 204 file. Maintaining the Shared DASD Enqueue list, and therefore file integrity in a multi-operating-system environment, is totally dependent on the use of the device RESERVE macro, which provides exclusive access to a device. Therefore, any Model 204 job running under any operating system has exclusive access to a Model 204 file during file open. File open is the event that adds, updates, or removes entries in the Shared DASD Enqueue list. These entries indicate which Model 204 job, virtual machine, or partition has the file open and whether that instance of Model 204 has share (read-only) or exclusive (update) access to the file. You can display these entries using the ENQCTL command.
It is therefore critical for Model 204 file integrity that the device RESERVE macro be allowed to function on all devices shared in a multi-operating-system environment where Model 204 jobs might run.
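For illustration, here is a hedged sketch of displaying a file's enqueue entries; the file name VEHICLES is an assumption, and the full syntax is described in ENQCTL command for shared DASD enqueue control:

```
ENQCTL VEHICLES
```

Issued with just a file name, the command lists the shared DASD enqueue entries recorded in that file's FPL page.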
ENQCTL command enhanced
The ENQCTL command now lists all entries in the shared DASD enqueue list that reside in a file's File Parameter List (FPL), rather than the previous maximum of eight entries.
Formerly, when a file could not be opened because of shared DASD enqueueing conflicts, at most eight entries were listed with the M204.0582 message. With the enhancement to the ENQCTL command, the M204.0582 message is issued as many times as necessary to list all entries.
Cautions using Global Resource Serialization
The IBM Global Resource Serialization (GRS) feature available under z/OS may be configured to suppress the use of the device RESERVE macro. If this suppression is enabled, Model 204 file integrity cannot be guaranteed and data will become corrupted.
For this reason, it is imperative that the GRS Resource Name List (RNL) be configured to allow the RESERVE macro to be passed, in the channel program constructed by Model 204, to any device where a Model 204 file is allocated in a shared DASD environment.
For additional information regarding Model 204 files and shared DASD, please see:
- ENQCTL command for shared DASD enqueue control
- Shared DASD locking conflicts
- CCATEMP in system recovery
64-bit architecture support
Model 204 runs in 64-bit mode on z/OS and z/VM systems.
Understanding above the bar storage
In 64-bit mode, z/OS and z/VM can create an address space that has storage above the 2-gigabyte address, known as "the bar." Address spaces with above the bar storage consist of three areas:
- 0-2 gigabyte range: programs and data (below the bar)
- 2-4 gigabyte range: unavailable for any purpose (the bar itself)
- Above 4 gigabytes: above the bar storage
Set the IBM MEMLIMIT system option
To implement above the bar storage, IBM requires that you set a limit on how much of that virtual storage each address space can use. This limit is called MEMLIMIT. If you do not set MEMLIMIT, the system default is 0, meaning no address space can use above the bar virtual storage. To allocate Model 204 data structures, such as the buffer pool, above the bar, MEMLIMIT must be set appropriately when running this release.
IBM provides several options to override the system default. Use one of the following options when you install and run Model 204:
- The MEMLIMIT parameter in the SMFPRMxx parmlib member
- The MEMLIMIT keyword on the JOB or EXEC JCL statement
- The IEFUSI installation exit
z/OS customers running on z/OS 1.10 and later are afforded a default of 2 gigabytes of above the bar storage per address space, replacing the previous default of 0. Customers running at this operating system level therefore do not need to set MEMLIMIT or REGION=0M to acquire above the bar storage.
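As an example, here is a hedged sketch of requesting above the bar storage on a JCL EXEC statement; the job and program names are assumptions, and your site's standards may dictate other values:

```
//M204JOB  JOB (ACCT),'MODEL 204',CLASS=A
//* Allow up to 4 gigabytes of above the bar virtual
//* storage for this address space.
//ONLINE   EXEC PGM=ONLINE,REGION=0M,MEMLIMIT=4G
```

A site-wide default can instead be established with the MEMLIMIT parameter in the SMFPRMxx parmlib member or enforced through the IEFUSI exit.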
Refer to IBM documentation about MEMLIMIT and limiting above the bar storage use in z/Architecture to implement the option that best meets your site requirements.
MEMLIMIT controls only the amount of virtual storage above the bar; it does not control the use of real storage.
Model 204 entities in above the bar storage
The following Model 204 entities can be accessed directly in above the bar storage:
- Pages from Tables A, B, E, and X
- Index, procedure, and existence bit map pages from Table D
- CCATEMP pages with found sets, screens and images, and transaction backout log
- Disk buffers above the bar (the number is specified by the NUMBUFG parameter)
- Saved compiled APSY procedures
- CCASERVR for swapped out users
- FTBL (as of version 7.4)
- GTBL, NTBL, QTBL (as of version 7.5)
- RTBL, KTBL, STBL, VTBL, XTBL, ITBL, FSCB, TTBL, HTBFRS (as of version 7.6)
Handling 64-bit statistics
To support very long running Model 204 regions, Rocket Software has modified the capacity of statistical counters by increasing the size of some statistics and also exploiting 64-bit processing where appropriate. For any in-house or third-party support applications that process statistical counters, you will need to review the statistics generated.
Because some of the statistics fields are now doublewords, check Using system statistics for the new layout of the System, Final, and Partial statistics. In addition, Disk Buffer Monitor, MP/204, and File statistics have been updated.
Look at your in-house or third-party support applications to see if you need to make changes because of the increased length of some of the statistics. Make any changes necessary to your applications, then reassemble with this new release.
Even if your in-house or third-party support applications do not refer to any of these double word statistics, you must reassemble your applications since all statistics offsets have changed.
Model 204 storage
The Model 204 program acquires all its working storage space dynamically. Working storage includes server areas, resource locking tables, buffer space, and control blocks.
The region or partition in which Model 204 runs must be large enough to contain the Model 204 code and the dynamically acquired working storage space.
Information about tracking dynamic storage allocation is provided in [[Performance monitoring and tuning]].
z/VSE storage considerations
The following considerations apply to a z/VSE environment:
- Model 204 allocates storage in the partition GETVIS area.
When running in z/VSE, the partition must be large enough to hold the executable phase (program) and have enough GETVIS area for Model 204 to allocate working storage.
- If the SIZE parameter on the JCL EXEC statement is missing, the size of the GETVIS area is equal to the default size (48K, unless defined in the IPL procedures).
- If there is insufficient storage, the SIZE parameter should specify partition space approximately equal to the program size, leaving a small (not less than 48K for z/VSE) partition GETVIS area.
In the case of an ONLINE system or an IFAM1 job, 48K of GETVIS area is not enough for an efficient system.
The SIZE parameter is strongly recommended for an ONLINE environment.
- The SIZE parameter is required when using IFAM1. If a PL/I Host Language program is used, Model 204 must take its dynamic storage from the partition GETVIS area through a SIZE parameter specification.
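For example, here is a hedged sketch of a z/VSE EXEC statement; the phase name ONLINE is an assumption. SIZE=AUTO sets the program area to the size of the phase, leaving the rest of the partition available as GETVIS area:

```
// EXEC ONLINE,SIZE=AUTO
```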
CMS disk considerations
Native CMS versions of Model 204 support files on variable-format disks and CMS-format disks. The disks can be partial- or full-volume minidisks, or dedicated volumes.
File location on the disks affects performance. Put all Model 204 files on variable-format disks for optimum access and performance.
- Use variable-format disks for files that experience high levels of activity. The Model 204 CMS interface supports asynchronous I/O operations through SIO-level logic and associated interrupt handling facilities. As a result, significant overlap between I/O and processing is achieved when variable-format disks are used.
Files on variable-format disks must be preallocated. A primary allocation is required. Secondary extents can be specified to permit limited extension of the file. File allocation information is recorded in the Volume Table Of Contents (VTOC).
- CMS-format disks affect performance by increasing I/O service time.
- Additional arm motion and rotational delay caused by logical blocks of data spanning multiple physical blocks on a CMS-format disk extend the duration of I/O operations.
- Synchronous DASD I/O operations (suspension of the virtual machine until the I/O operation completes) increase I/O service time.
Files on CMS-format disks do not require preallocation. Files increase dynamically as data is added, and are restricted in size only by the space available on the minidisk. New files are created automatically the first time they are referenced. Each CMS-formatted minidisk has its own Master File Directory (MFD) that contains allocation information for each file. When all the files are closed, the MFD maintained in virtual storage is transferred to disk. The CMS volume on which files exist is recorded in the MFD.
Related Rocket Software products
As the Model 204 system manager, you might be required to manage the following Rocket Software products, which can access Model 204 data.
Analytics/204 is a PC-based visual data analysis tool that helps business users get to know, access, and trust their data, without the need for IT assistance. Working in a Windows-based point-and-click environment, business users can:
- Acquire a better understanding of the data
- Create subsets or segments of the data for targeted analytical or operational purposes
- Display or export the data in whatever format is required to support decision-making
Connect★ and Horizon
Connect★ offers a client/server environment. On the client side, people use the PC software already familiar to them. On the server side is Model 204, ensuring high-performance database management. Connect★ unites the two through SQL. Alternatively, using Rocket Software's Remote Command Line facility, desktop applications can access Model 204 data through Rocket Software's SOUL language.
JDBC for Model 204
The JDBC for Model 204 product enables you to access Model 204 using the Java language. JDBC for Model 204 specifically supports Connect★.
MP/204 enables a single copy of Model 204 to leverage multiprocessor configurations on IBM or compatible mainframes running z/OS. The result: significant performance enhancements in both system throughput and response times by utilizing parallel CPU processing.
MQ/204 provides SOUL extensions to manipulate MQSeries message queues and receive and send data throughout your enterprise. Application developers are increasingly turning to Message Queuing Middleware (MQM) for program-to-program communication. MQM is especially useful for supporting applications that are distributed across different platforms and have only intermittent or long-distance network connections between them.
Parallel Query Option/204
With Parallel Query Option/204 (PQO) you can raise the performance of Model 204 by a factor of two, three, four or more. PQO lets you break up massive queries into smaller pieces, which are then searched and sorted simultaneously in parallel. PQO finds answers faster because the requests access smaller sets of data. Requesters get their answers sooner and other users' queries don't wait as long in the queue.
You can share data between production, test, and historical data regions. You can access Model 204 data residing on one region from any number of other Model 204 regions. Furthermore, PQO lets you distribute Model 204 regions within the same machine, on different machines, or across different IBM operating systems.