
Published September 23, 2020

A philosophical (possibly even a theological) discussion of IBM MQ’s maximum queue depth

I was having a discussion the other day about IBM MQ's maximum queue depth, and it got me thinking about why the limit exists at all. The basic purpose of a queue is to hold the messages that producers create until consumers can process them.

There are different reasons why messages accumulate on a queue.

• A sudden spike in requests when concert tickets, the latest game console or new phones go on sale
• A difference between times when the producer and consumers run

It is perfectly natural to have messages on a queue. The question is “how many should be allowed before the queue is considered full?”

It is interesting that on z/OS, the default maximum queue depth is 999,999,999, or, rounding up slightly, one billion messages. I think that qualifies as quite a few messages. With default alerting configured, the high-depth threshold is 80 percent, so MQ would only notify you once this queue held roughly 800 million messages.

On the other hand, distributed MQ has a default maximum queue depth of 5,000 with a warning at just 4,000. That is quite a difference.
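To see where a particular queue stands relative to these thresholds, here is a minimal sketch using the pymqi Python client; the queue manager QM1, channel DEV.APP.SVRCONN, connection address, and queue ORDERS.IN are placeholders, and a local queue that can be opened for inquiry is assumed.

```python
import pymqi
from pymqi import CMQC

# Connection details and names are placeholders; adjust for your environment.
qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'localhost(1414)')
queue = pymqi.Queue(qmgr, 'ORDERS.IN', CMQC.MQOO_INQUIRE)

max_depth = queue.inquire(CMQC.MQIA_MAX_Q_DEPTH)         # MAXDEPTH
cur_depth = queue.inquire(CMQC.MQIA_CURRENT_Q_DEPTH)     # CURDEPTH
high_pct = queue.inquire(CMQC.MQIA_Q_DEPTH_HIGH_LIMIT)   # QDEPTHHI, in percent

print('ORDERS.IN: %d of %d messages (%d%% full); high-depth threshold is %d%%'
      % (cur_depth, max_depth, 100 * cur_depth // max_depth, high_pct))

queue.close()
qmgr.disconnect()
```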

If you have read some of my previous discussions, you know that I like to look back at history to explain things. In this case, when MQ came out, distributed computers were considerably smaller than the mainframe. Today, distributed systems are more powerful than the mainframes of 25 years ago. So why is the default still so small? IBM generally avoids changing default behavior in order to maintain compatibility, so it is still what it was.

But the question remains: what is the right limit? Is it the very large mainframe approach, the ultra-conservative distributed approach, or somewhere in the middle?

Before proposing an answer, I would like to share another bit of history. In the early days of MQ, applications sat reasonably close to their queue managers. What do I mean by that? Well, they probably ran on the same server and interacted with something nearby, like a CICS region or an application server. Now let's consider what happens when the queue gets full. If the application is putting the message locally, it gets a queue-full error and has to handle it. There are a few choices about what to do next. It could retry the put for a while. If that wasn't successful, it had to do something else: it could fail and trigger a backout, but that would just cause the transaction to fire off again and hit the same queue-full condition. Eventually, it either has to fail completely, write the message to some overflow queue (which hopefully isn't also full), or return an error so that someone could be notified.
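To make that decision tree concrete, here is a minimal sketch of one such strategy, assuming pymqi and hypothetical queue names (ORDERS.IN as the target, ORDERS.OVERFLOW as the spill-over queue): retry the put a few times with a short backoff, then fall back to the overflow queue.

```python
import time
import pymqi
from pymqi import CMQC

def put_with_retry(qmgr, message, target='ORDERS.IN',
                   overflow='ORDERS.OVERFLOW', retries=5, delay=0.5):
    """Try to put a message; on MQRC_Q_FULL, retry, then spill to an overflow queue."""
    target_q = pymqi.Queue(qmgr, target)
    for attempt in range(retries):
        try:
            target_q.put(message)
            target_q.close()
            return target
        except pymqi.MQMIError as e:
            if e.reason != CMQC.MQRC_Q_FULL:
                raise                          # some other error; let the caller decide
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    target_q.close()

    # Still full after retrying: spill to the overflow queue (which may itself be full).
    overflow_q = pymqi.Queue(qmgr, overflow)
    overflow_q.put(message)
    overflow_q.close()
    return overflow

# Usage (connection details are placeholders):
# qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'localhost(1414)')
# put_with_retry(qmgr, b'order payload')
```

Whether to retry, spill over, or fail outright is an application decision; the point is that someone has to write and test this code, which is exactly the burden a generous maximum depth avoids.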

This became more complex when applications began to be distributed across multiple systems. In that case, when the MQ channel could not deliver a message because the target queue was full, it placed the message on the dead letter queue (DLQ). Once there, it would sit until it expired, or until a DLQ handler or administrator put it back where it belonged (maybe). Now consider today's architecture, where the application is a microservice in a chain of events, perhaps triggered when you order a new toaster on your phone. A queue-full condition is about the last thing you want that application to be dealing with. There's no one for it to easily notify, and you aren't going to wait for an admin to intervene in your purchase. Of course, there are other errors the application has to handle, but this is one that could be avoided: just make the queue large enough to handle the maximum number of messages that could ever arrive.
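If messages are landing on the DLQ, visibility matters. Here is a minimal sketch of browsing it with pymqi just to count what is sitting there; SYSTEM.DEAD.LETTER.QUEUE is a common default name, but your queue manager's DLQ and connection details may differ.

```python
import pymqi
from pymqi import CMQC

# Connection details and the DLQ name are assumptions; adjust for your environment.
qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'localhost(1414)')
dlq = pymqi.Queue(qmgr, 'SYSTEM.DEAD.LETTER.QUEUE',
                  CMQC.MQOO_BROWSE | CMQC.MQOO_FAIL_IF_QUIESCING)

gmo = pymqi.GMO()
gmo.Options = CMQC.MQGMO_BROWSE_NEXT | CMQC.MQGMO_NO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING
md = pymqi.MD()

count = 0
while True:
    try:
        dlq.get(None, md, gmo)          # browse only; messages stay on the DLQ
        count += 1
        md.MsgId = CMQC.MQMI_NONE       # reset so the next browse moves forward
        md.CorrelId = CMQC.MQCI_NONE
    except pymqi.MQMIError as e:
        if e.reason == CMQC.MQRC_NO_MSG_AVAILABLE:
            break
        raise

print('Messages sitting on the DLQ: %d' % count)
dlq.close()
qmgr.disconnect()
```

Counting is the easy part; a DLQ handler (runmqdlq on distributed MQ) or an administrator still has to decide what to do with those messages.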

This gets back to the core discussion of why there is a limit at all. The point of a limit is to avoid over-allocating resources if that many messages really do land on the queue. There are three resources in play: the CPU spent on housekeeping for a large queue of messages, and the memory and disk required to hold them. Only recently did MQ consider the disk-size side of this equation. After all, 5,000 10 MB messages take far more space than 5,000 10-byte messages, but originally all you could limit was the count. With recent versions of MQ, you can also set a maximum amount of disk a queue can consume. That means deciding how big it should be, but I would argue the approach should be the same: allocate enough to handle the extremes. MQ has also introduced CAPEXPRY, which caps the expiry time that messages put to the queue can carry, so old messages don't linger forever.
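As a back-of-the-envelope illustration of why the count alone is not enough, the worst-case disk footprint is simply the depth multiplied by the maximum message size; the depths and sizes below are just the examples from the paragraph above, not recommendations.

```python
def worst_case_queue_bytes(max_depth, max_msg_bytes):
    """Upper bound on queue storage: every slot filled with a maximum-size message."""
    return max_depth * max_msg_bytes

# 5,000 messages of 10 bytes vs 5,000 messages of 10 MB
small = worst_case_queue_bytes(5_000, 10)                  # ~50 KB
large = worst_case_queue_bytes(5_000, 10 * 1024 * 1024)    # ~50 GB

print(f'{small:,} bytes vs {large:,} bytes')
```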

So perhaps we will see a transition to a very large maximum queue depth combined with a more practical limit on queue data size. Clearly, physical limits still apply, but disk and memory are a lot cheaper than designing an application around an arbitrary sizing. This is consistent with some of the more recent entries in the messaging space, like Kafka, which bounds a topic by data size rather than by message count. In Kafka's case, however, it's the oldest entries that fall off into the bit bucket when the limit is reached. A recent RFE for MQ asked for a similar type of behavior.
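For comparison, here is a rough sketch of what that size-based bound looks like on the Kafka side, using the confluent-kafka Python client to set a topic's retention.bytes; the broker address, topic name, and 1 GB cap are placeholders, and the limit applies per partition.

```python
from confluent_kafka.admin import AdminClient, ConfigResource

# Broker address, topic name, and the 1 GB cap are placeholders.
admin = AdminClient({'bootstrap.servers': 'localhost:9092'})

# retention.bytes is enforced per partition: once a partition's log grows past
# the cap, Kafka deletes its oldest segments (retention.ms does the same by age).
resource = ConfigResource(ConfigResource.Type.TOPIC, 'orders',
                          set_config={'retention.bytes': str(1024 ** 3)})

# Note: alter_configs replaces the resource's whole non-default config set;
# newer clients offer incremental_alter_configs for single-key changes.
for res, future in admin.alter_configs([resource]).items():
    future.result()  # raises if the broker rejects the change
```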

But regardless of which technology is being used, or whether the maximum queue depth or size is small or large, it is important to track message behavior so that you can identify trends and spot anything abnormal. Nastel Navigator X is specifically suited for working with these environments.

Start Optimizing Your Integration Infrastructure with meshIQ!