MQ/204 architecture and environment
WebSphere MQ architecture
The IBM WebSphere MQ product manages queues of messages. Like an e-mail system, WebSphere MQ lets an application put and get messages on WebSphere MQ queues; the sender and recipient do not have to be active at the same time.
The WebSphere MQ architecture figure shows WebSphere MQ providing one or more queue managers, which are system processes. Each queue manager controls one or more queues. To access a queue, an application connects to the appropriate queue manager and then opens the desired queue.
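The connect-then-open flow, and the fact that sender and recipient need not be active at the same time, can be sketched with a toy Python model (all class and method names here are illustrative, not the WebSphere MQ API):

```python
from collections import deque

class ToyQueue:
    """A toy message queue: put and get need not happen at the same time."""
    def __init__(self):
        self._messages = deque()

    def put(self, message):
        self._messages.append(message)

    def get(self):
        # Returns None when the queue is empty (a real MQGET can wait instead).
        return self._messages.popleft() if self._messages else None

class ToyQueueManager:
    """A toy queue manager controlling one or more named queues."""
    def __init__(self, name):
        self.name = name
        self._queues = {}

    def open_queue(self, queue_name):
        # Opening creates the queue on first use in this sketch.
        return self._queues.setdefault(queue_name, ToyQueue())

# Connect to the appropriate queue manager, then open the desired queue.
qmgr = ToyQueueManager("QM1")
q = qmgr.open_queue("APP.REQUEST")
q.put("order 42")   # the sender can finish and disconnect...
print(q.get())      # ...before the recipient reads the message
```

The point of the sketch is only the decoupling: the message sits on the queue between the put and the get, so the two applications never have to run simultaneously.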
Types of WebSphere MQ implementation for z/OS
Although WebSphere MQ for z/OS also comes with TSO and CICS options, MQ/204 implements the z/OS batch option of WebSphere MQ, which has the following implications:
- The z/OS batch option supports multiple queue managers.
- The z/OS batch option does not support two-phase commits.
z/OS performance considerations
System programmers for z/OS may observe delayed release of CSA storage by IBM WebSphere MQ.
MQ/204 environment requirements
The MQ/204 environment requires that you run the following minimum versions of software:
- Model 204 Version 6 Release 1.0
- WebSphere MQ Version 5.x
- MQMD Version 2
MQ/204 architecture
WebSphere MQ for z/OS provides one or more queue managers, which are system processes. Each queue manager controls one or more queues. To access a queue, an application connects to the appropriate queue manager and then opens the desired queue.
As shown in the following figure, operating-system subtasks issue WebSphere MQ API calls on behalf of MQ/204: Model 204 issues gets and puts to a WebSphere MQ application via z/OS system subtasks.
MQ/204 requires z/OS subtasks because all calls to WebSphere MQ are synchronous. A set of z/OS subtasks performs all needed communication with WebSphere MQ, so other Model 204 users can continue to work and the Model 204 main task remains free to process other users.
Subtask management
WebSphere MQ API calls make use of a pool of operating-system subtasks to communicate. The size of this pool is governed by the parameters MQINTASK (initial size of subtask pool) and MQMXTASK (maximum size of subtask pool). During system initialization, a pool of MQINTASK subtasks is allocated. Additional subtasks are allocated dynamically as needed during Online execution, up to a maximum of MQMXTASK subtasks.
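The pool-sizing behavior described above can be sketched in a few lines of Python (a minimal model; the class and method names are illustrative, and only the MQINTASK and MQMXTASK parameters come from the text):

```python
import itertools

class SubtaskPool:
    """Sketch of a subtask pool sized by MQINTASK and MQMXTASK."""
    def __init__(self, mqintask, mqmxtask):
        self.mqmxtask = mqmxtask
        self._ids = itertools.count(1)
        # MQINTASK subtasks are allocated during system initialization.
        self.free = [next(self._ids) for _ in range(mqintask)]
        self.in_use = set()

    def acquire(self):
        if self.free:
            subtask = self.free.pop()
        elif len(self.in_use) + len(self.free) < self.mqmxtask:
            # Allocate an additional subtask dynamically, up to MQMXTASK.
            subtask = next(self._ids)
        else:
            return None   # pool exhausted; the caller must wait or fail
        self.in_use.add(subtask)
        return subtask
```

With `SubtaskPool(2, 3)`, two subtasks exist at startup, a third is created on demand, and a fourth request finds the pool exhausted.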
Each subtask can be in one of the following states:
| State | Description |
|---|---|
| Free and unconnected | Not being used, and not connected to a queue manager. |
| In use | In use by a user thread. |
| Free and connected | Available and connected to a queue manager. This state permits keeping connections to queue managers active over multiple uses of a subtask, as a performance optimization. |
Subtask allocation
When an application needs a connection to a new queue manager, MQ/204 uses the algorithm shown in the following figure to determine how to assign a subtask.
MQ/204 continues to allocate new subtasks according to this algorithm until MQMXTASK subtasks are assigned. Once the maximum number of subtasks is allocated, an application waits up to MQWAIT milliseconds for an existing subtask to become available. If no subtask frees up before the wait time expires, a "no-subtasks-available" error is returned to the SOUL program in the $STATUS return code.
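Since the algorithm figure itself is not reproduced here, the following Python sketch shows one plausible reading of the assignment steps; the names (`Pools`, `assign_subtask`, the return code) are illustrative assumptions, and only MQMXTASK and MQWAIT come from the text:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subtask:
    qmgr: Optional[str] = None   # queue manager this subtask is connected to

class Pools:
    """The three subtask pools from the state table."""
    def __init__(self, mqmxtask):
        self.mqmxtask = mqmxtask
        self.free_connected = []
        self.free_unconnected = []
        self.in_use = []

    def total(self):
        return (len(self.free_connected) + len(self.free_unconnected)
                + len(self.in_use))

NO_SUBTASKS_AVAILABLE = 1   # stand-in for the $STATUS error code

def assign_subtask(pools, qmgr, mqwait_ms):
    """Assumed assignment order: reuse a warm connection, reuse a cold
    subtask, allocate a new one, otherwise wait up to MQWAIT ms."""
    deadline = time.monotonic() + mqwait_ms / 1000.0
    while True:
        # 1. Prefer a free subtask already connected to this queue manager.
        for s in pools.free_connected:
            if s.qmgr == qmgr:
                pools.free_connected.remove(s)
                pools.in_use.append(s)
                return s, 0
        # 2. Otherwise reuse a free, unconnected subtask (connect it first).
        if pools.free_unconnected:
            s = pools.free_unconnected.pop()
            s.qmgr = qmgr
            pools.in_use.append(s)
            return s, 0
        # 3. Allocate a new subtask while under the MQMXTASK ceiling.
        if pools.total() < pools.mqmxtask:
            s = Subtask(qmgr=qmgr)
            pools.in_use.append(s)
            return s, 0
        # 4. Pool exhausted: wait up to MQWAIT ms for a subtask to free up.
        if time.monotonic() >= deadline:
            return None, NO_SUBTASKS_AVAILABLE
        time.sleep(0.001)
```

With MQMXTASK reached and a wait of 0 ms, the function returns the error code immediately, mirroring the $STATUS behavior described above.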
Subtask freeing
When freeing a subtask, MQ/204 tries to keep its queue manager connection, removing it from the in-use pool and adding it to the free-and-connected pool. MQ/204 disconnects a subtask from the queue manager and adds it to the free-and-unconnected pool only if the queue manager is stopped.
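This freeing rule can be sketched as follows, using hypothetical in-use / free-and-connected / free-and-unconnected lists (the function and class names are illustrative, not MQ/204 internals):

```python
from typing import Optional

class Subtask:
    def __init__(self, qmgr: Optional[str]):
        self.qmgr = qmgr   # queue manager this subtask is connected to

class Pools:
    def __init__(self):
        self.in_use = []
        self.free_connected = []
        self.free_unconnected = []

def free_subtask(pools, subtask, qmgr_stopped=False):
    """Free a subtask, keeping its queue manager connection when possible."""
    pools.in_use.remove(subtask)
    if qmgr_stopped:
        # Queue manager is stopped: disconnect and drop to the cold pool.
        subtask.qmgr = None
        pools.free_unconnected.append(subtask)
    else:
        # Keep the connection warm for reuse by a later request.
        pools.free_connected.append(subtask)
```

Keeping the connection is the common path; the disconnect branch runs only when the queue manager is stopped.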
If the user is bumped and the MQ operation has not finished, the MQ subtask performing the operation is placed in the "delayed detach" state. The count of available subtasks is decremented and all associations with the user are removed. When the operation has finished, the subtask is detached and the MQDELDTP PST (pseudo subtask) runs to free the related storage areas.