---
title: "ActiveMQ Slow Consumer: Detection, Strategy & Prevention Guide"
date: 2026-05-01
author: "TheFrameGuy"
featured_image: "https://www.meshiq.com/wp-content/uploads/blog_ActiveMQ-Slow-Consumer-Detection_30042026.jpg"
categories:
  - name: "Apache ActiveMQ®"
    url: "/sort-by/active-mq.md"
  - name: "Middleware Optimization"
    url: "/sort-by/middleware-optimization.md"
  - name: "Observability"
    url: "/sort-by/observability.md"
  - name: "SaaS"
    url: "/sort-by/saas.md"
tags:
  - name: "monitoring"
    url: "/sort-by/tag/monitoring.md"
---

# ActiveMQ Slow Consumer: Detection, Strategy & Prevention Guide

Picture a single consumer application that falls behind on a busy non-durable topic. Within minutes, every other application publishing to any topic on that broker begins experiencing producer flow control blocks. Their teams start escalating. The slow consumer’s team has no idea that their consumer is the cause.

This is the ActiveMQ slow consumer problem in its most damaging form: a local performance issue in one consumer application becomes a broker-wide incident that affects every producer. Understanding why this happens, and how to prevent and remediate it, is essential operational knowledge for anyone running ActiveMQ at enterprise scale.

This post covers the complete slow consumer lifecycle: why slow consumers cause broker-wide degradation, how to approach ActiveMQ slow consumer detection before they become incidents, and how to configure both Apache ActiveMQ® and Apache Artemis™ with the right ActiveMQ slow consumer strategy for your workload.

## Why One Slow Consumer Can Freeze Your Entire Broker

### The Memory Accumulation Chain

In ActiveMQ, non-durable topic messages must be held in memory until all active subscribers have acknowledged them. If Consumer A processes 10,000 messages/second and Consumer B processes only 100 messages/second on the same topic, the broker must keep the last 9,900 un-acknowledged messages in memory for Consumer B, even though Consumer A has long since processed them.

At scale, this accumulation is rapid. A topic receiving 5,000 messages/second with one consumer processing at 500 messages/second accumulates 4,500 messages per second in broker memory. With 1KB messages, that is 4.5MB per second filling the broker’s memoryUsage allocation.

When memoryUsage is exhausted (the default limit is 64MB, which fills in roughly 14 seconds at this rate), the broker activates producer flow control. Producers are blocked until the slow consumer frees memory by acknowledging messages. Every producer on the broker is affected, regardless of which destination they are publishing to, because memoryUsage is a broker-level limit shared across all destinations.
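
That accumulation arithmetic is worth running against your own message rates. A minimal, self-contained sketch using the figures from the example above (5,000 messages/second in, 500 out, 1KB messages, the 64MB default limit):

```java
public class FlowControlFillTime {
    public static void main(String[] args) {
        int inboundPerSec = 5_000;      // messages/second arriving on the topic
        int consumedPerSec = 500;       // messages/second the slow consumer acks
        double msgSizeMb = 1.0 / 1000;  // ~1KB average message size
        double memoryLimitMb = 64;      // default memoryUsage allocation

        int retainedPerSec = inboundPerSec - consumedPerSec;  // 4,500 msgs/s held
        double fillRateMb = retainedPerSec * msgSizeMb;       // ~4.5 MB/s
        double secondsToBlock = memoryLimitMb / fillRateMb;   // ~14 s to flow control

        System.out.printf("Retained: %d msgs/s, fill rate: %.1f MB/s, "
            + "flow control in ~%.0f s%n", retainedPerSec, fillRateMb, secondsToBlock);
    }
}
```

Swap in your own rates and your configured memoryUsage limit to estimate how much warning time your monitoring actually has.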

### The Asymmetry Problem

The damage is asymmetric: the slow consumer is the cause, but the producers feel the pain. The fast consumers in the same system are not directly degraded by the slow consumer, but they are degraded when the blocked producers stop sending messages. A slow consumer on a market data feed can cause delays in an unrelated order processing queue simply because the producer for the order queue is blocked by broker-wide flow control.

This asymmetry makes root cause identification difficult. The operator who monitors the order processing queue sees a sudden slowdown in message ingestion. Nothing in the order processing queue itself appears wrong. The slow consumer causing the problem is on a completely different destination, managed by a different team.

## Slow Consumer Behavior: Topics vs. Queues

The slow consumer problem manifests differently depending on destination type. Understanding this distinction determines which handling strategy applies.

### Non-Durable Topics: The Memory Crisis

On non-durable topics, the broker must hold messages for every subscriber until each one acknowledges. A slow subscriber forces the broker to retain a growing message backlog in RAM. This is the scenario described above: broker memory fills, producer flow control triggers, and the entire broker degrades.

**Broker-side protection**: pendingMessageLimitStrategy (ActiveMQ) and slow-consumer-threshold (Artemis). These are the right tools for non-durable topic slow consumers.

### Durable Topics and Queues: The Backlog Problem

For durable topics and queues, messages are persisted to disk rather than held only in memory. A slow consumer does not directly exhaust the broker’s RAM, but it causes a different set of problems:

- Message backlog grows in the persistent store (KahaDB or Artemis journal)
- KahaDB cannot garbage-collect acknowledged messages as long as unacknowledged messages exist in the same journal file; the slow consumer “pins” journal files open, causing KahaDB disk usage to grow without bound
- The DLQ receives messages after redelivery attempts expire, creating a secondary backlog problem

For durable destination slow consumers, the primary responses are: reduce prefetch to limit the consumer’s local buffer, add consumer instances to distribute load, and monitor DLQ accumulation rate. We covered the DLQ side of this relationship in our **[Dead Letter Queue Management Guide](https://www.meshiq.com/blog/activemq-dead-letter-queue-management/)**.

## ActiveMQ: Detection via JMX

Before configuring any handling strategy, ActiveMQ slow consumer detection starts with per-subscription statistics exposed via JMX, specifically on the **TopicSubscriptionViewMBean** for topic subscriptions and the **QueueSubscriptionViewMBean** for queue subscriptions.

### Key JMX Attributes for Slow Consumer Detection

| Attribute | MBean | What It Tells You |
|---|---|---|
| PendingQueueSize | TopicSubscriptionViewMBean | Messages waiting in the broker’s dispatch buffer for this consumer. Growing continuously = slow consumer |
| DispatchedCounter | Both | Total messages dispatched to this consumer since the connection. Compare the rate vs. fast consumers |
| DiscardedCount | TopicSubscriptionViewMBean | Messages discarded because the consumer was too slow. Non-zero = pendingMessageLimitStrategy is active and evicting |
| MessageCountAwaitingAcknowledge | QueueSubscriptionViewMBean | Messages dispatched to this consumer but not yet acknowledged. Chronically high = slow consumer |
| PrefetchSize | Both | Consumer’s configured prefetch — context for interpreting pending queue depth |

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// JMX detection script — identify slow consumers by PendingQueueSize.
// Adjust host/port to your broker's JMX connector.
JMXServiceURL url = new JMXServiceURL(
    "service:jmx:rmi:///jndi/rmi://broker-host:1099/jmxrmi");
MBeanServerConnection connection =
    JMXConnectorFactory.connect(url).getMBeanServerConnection();

ObjectName brokerPattern = new ObjectName(
    "org.apache.activemq:type=Broker,brokerName=*," +
    "destinationType=Topic,destinationName=*," +
    "endpoint=Consumer,clientId=*,consumerId=*");

Set<ObjectName> consumers = connection.queryNames(brokerPattern, null);
for (ObjectName consumer : consumers) {
    long pending = ((Number) connection.getAttribute(consumer, "PendingQueueSize")).longValue();
    long discarded = ((Number) connection.getAttribute(consumer, "DiscardedCount")).longValue();
    String clientId = (String) connection.getAttribute(consumer, "ClientId");

    if (pending > 1000) { // threshold appropriate to your workload
        System.out.printf("SLOW CONSUMER DETECTED: %s — pending: %d, discarded: %d%n",
            clientId, pending, discarded);
    }
}
```

### **meshIQ Console: Continuous Slow Consumer Visibility**

Writing and scheduling JMX detection scripts handles point-in-time ActiveMQ slow consumer detection, but continuous monitoring requires persistent metric collection and alerting on rate changes. meshIQ Console surfaces PendingQueueSize and DiscardedCount per subscription in a live dashboard, with configurable thresholds that trigger alerts before a slow consumer exhausts broker memory.

## ActiveMQ: Configuring pendingMessageLimitStrategy

The pendingMessageLimitStrategy controls the maximum number of messages the broker retains for a non-durable topic consumer beyond its prefetch buffer. Once this limit is reached, older messages are discarded as new messages arrive, and the slow consumer receives the most recent messages, but gaps emerge in its stream.

This is an intentional tradeoff: message loss for the slow consumer in exchange for protecting broker memory for all other consumers. It is the correct ActiveMQ slow consumer strategy for high-volume market data, telemetry feeds, and other workloads where receiving the most recent data matters more than receiving every historical message.

### Strategy 1: Constant Limit

```xml
<!-- activemq.xml — constant pending message limit per consumer -->
<destinationPolicy>
  <policyMap>
    <policyEntries>

      <!-- High-volume market data: aggressive limit, accept gaps -->
      <policyEntry topic="PRICES.>" producerFlowControl="true"
                   topicPrefetch="100">
        <pendingMessageLimitStrategy>
          <!-- Keep max 10 messages above prefetch per slow consumer -->
          <constantPendingMessageLimitStrategy limit="10"/>
        </pendingMessageLimitStrategy>
        <messageEvictionStrategy>
          <!-- Evict oldest messages with lowest priority first -->
          <oldestMessageWithLowestPriorityEvictionStrategy/>
        </messageEvictionStrategy>
      </policyEntry>

      <!-- Order events: no discarding — every message must be delivered -->
      <policyEntry topic="ORDERS.>" producerFlowControl="true">
        <pendingMessageLimitStrategy>
          <!-- -1 = never discard; rely on producer flow control instead -->
          <constantPendingMessageLimitStrategy limit="-1"/>
        </pendingMessageLimitStrategy>
      </policyEntry>

      <!-- Default for all other topics -->
      <policyEntry topic=">" producerFlowControl="true">
        <pendingMessageLimitStrategy>
          <constantPendingMessageLimitStrategy limit="1000"/>
        </pendingMessageLimitStrategy>
      </policyEntry>

    </policyEntries>
  </policyMap>
</destinationPolicy>
```

### Strategy 2: Prefetch Rate Multiplier

```xml
<!-- Keep 2× the consumer's prefetch limit as the pending buffer -->
<pendingMessageLimitStrategy>
  <prefetchRatePendingMessageLimitStrategy multiplier="2.0"/>
</pendingMessageLimitStrategy>
```

The prefetchRatePendingMessageLimitStrategy dynamically scales the limit based on each consumer’s prefetch size. A consumer with prefetch=100 would get a limit of 200; a consumer with prefetch=32766 would get a limit of 65532. This proportional approach is useful when consumers in your topic namespace have intentionally different prefetch sizes, and you want the pending limit to scale accordingly.

### The Critical Prefetch Interaction

This is the most important operational detail about any ActiveMQ slow consumer strategy using pendingMessageLimitStrategy: it only applies to messages above the consumer’s prefetch buffer.

For Apache ActiveMQ® topic consumers, the default prefetch is 32,766 messages. A constantPendingMessageLimitStrategy limit="10" with the default prefetch of 32,766 is essentially useless; the slow consumer will accumulate 32,776 messages before any eviction occurs.

The fix: reduce the topic prefetch alongside configuring the pending message limit.

```xml
<!-- topicPrefetch reduced from the default 32766 -->
<policyEntry topic="PRICES.>"
             topicPrefetch="100"
             producerFlowControl="true">
  <pendingMessageLimitStrategy>
    <constantPendingMessageLimitStrategy limit="10"/>
  </pendingMessageLimitStrategy>
</policyEntry>
```

With topicPrefetch=”100″ and limit=”10″, the maximum pending buffer per consumer is 110 messages, a limit that actually protects broker memory from unbounded accumulation.

This interaction is the most commonly missed configuration detail in slow consumer handling. Teams configure a pendingMessageLimitStrategy without reducing prefetch, see no change in broker memory behavior under load, and conclude the feature doesn’t work.
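
To make the interaction explicit: the effective per-consumer cap is prefetch plus pending limit, which is why the same limit behaves so differently under the two prefetch settings discussed above.

```java
public class EffectivePendingCap {
    // pendingMessageLimitStrategy only applies above the prefetch buffer,
    // so the real per-consumer message cap is prefetch + limit.
    static int effectiveCap(int prefetch, int pendingLimit) {
        return prefetch + pendingLimit;
    }

    public static void main(String[] args) {
        // Default topic prefetch with limit=10: cap is 32,776 — no real protection
        System.out.println(effectiveCap(32_766, 10));
        // Tuned prefetch with the same limit: cap is 110 — memory is bounded
        System.out.println(effectiveCap(100, 10));
    }
}
```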

### Message Eviction Strategy

When the pending limit is reached, which message gets evicted? The default implementation evicts the oldest message in the pending buffer. Two alternative strategies are available:

- **oldestMessageWithLowestPriorityEvictionStrategy**: Evicts the oldest message among those with the lowest priority. This preserves high-priority messages in the buffer even if they are older, ideal for workloads where message priority is meaningful (order routing, alert notifications).
- **Custom eviction strategy**: Implement MessageEvictionStrategy to apply application-specific logic. For example, market data where price updates for different instruments have different importance levels, or IoT telemetry where reading criticality varies by sensor type.

## Apache ActiveMQ®: The Prefetch-Only Approach for Queues

For queue consumers that are slow, pendingMessageLimitStrategy does not apply: queue messages persist to disk and are not held in broker RAM in the same way. The slow consumer handling approach for queues is different:

**Reduce prefetch.** A slow queue consumer with prefetch=1000 holds 1000 messages in its local buffer that other fast consumers cannot access. With prefetch=1, messages are dispatched one at a time, enabling other consumers to participate in processing immediately.
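
Prefetch can often be tuned without touching broker XML. In Apache ActiveMQ®, the client can set it on the connection URI or per destination; a sketch (the broker host and queue name here are placeholders):

```
# Connection-level: every queue consumer created from this factory gets prefetch=1
tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=1

# Destination-level: override prefetch for consumers of one destination
queue://ORDERS.INBOUND?consumer.prefetchSize=1
```

The connection-level option suits applications whose consumers are uniformly slow; the destination option targets a single problematic queue without affecting the rest of the application.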

**Add consumer instances.** If one consumer is slow due to processing complexity, adding concurrent consumers is usually the right architectural response. The per-message processing load is distributed, and the overall consumption rate increases proportionally.

**Monitor MessageCountAwaitingAcknowledge.** If this value grows continuously, the consumer is not processing fast enough. If it grows then shrinks cyclically, the consumer may be experiencing GC pauses or transient blocking; investigate at the application level.

## Apache Artemis™: slow-consumer-threshold and slow-consumer-policy

Artemis takes a fundamentally different approach to slow consumer handling. Rather than a message eviction strategy, Artemis uses a rate-based detection mechanism that fires a policy action when a consumer falls below a configurable message consumption rate, making it a distinct ActiveMQ slow consumer strategy from Apache ActiveMQ®’s buffer-based approach.

### Configuration in address-settings

```xml
<!-- broker.xml — Artemis slow consumer detection -->
<address-settings>

  <!-- High-volume feeds: detect non-durable slow consumers
       (NOTIFY here; switch to KILL only where message loss is acceptable) -->
  <address-setting match="prices.#">
    <!-- Consumer must acknowledge at least 10 messages/second -->
    <slow-consumer-threshold>10</slow-consumer-threshold>
    <!-- Unit: MESSAGES_PER_SECOND (default), MESSAGES_PER_MINUTE, MESSAGES_PER_HOUR -->
    <slow-consumer-threshold-measurement-unit>MESSAGES_PER_SECOND</slow-consumer-threshold-measurement-unit>
    <!-- KILL: disconnect; NOTIFY: send management notification -->
    <slow-consumer-policy>NOTIFY</slow-consumer-policy>
    <!-- Check interval in seconds — must be >= 2× max processing time per message -->
    <slow-consumer-check-period>30</slow-consumer-check-period>
  </address-setting>

  <!-- Critical transactional queues: detect but never kill -->
  <address-setting match="orders.#">
    <slow-consumer-threshold>1</slow-consumer-threshold>
    <slow-consumer-threshold-measurement-unit>MESSAGES_PER_MINUTE</slow-consumer-threshold-measurement-unit>
    <slow-consumer-policy>NOTIFY</slow-consumer-policy>
    <slow-consumer-check-period>120</slow-consumer-check-period>
  </address-setting>

  <!-- Global fallback: disabled by default -->
  <address-setting match="#">
    <slow-consumer-threshold>-1</slow-consumer-threshold>
    <slow-consumer-policy>NOTIFY</slow-consumer-policy>
    <slow-consumer-check-period>5</slow-consumer-check-period>
  </address-setting>

</address-settings>
```

### KILL vs. NOTIFY: Choosing the Right Policy

The choice between KILL and NOTIFY is a consequential architectural decision for your ActiveMQ slow consumer strategy, not just a configuration preference.

**KILL** terminates the slow consumer’s entire broker connection. Three critical implications:

1. **Connection scope**: If the slow consumer’s connection hosts multiple JMS sessions (multiple consumers and producers on the same connection factory instance), killing it for slow consumption kills all of them. Any producers sharing that connection will also be disconnected.
2. **Non-durable subscription cleanup**: For non-durable JMS subscribers, killing the connection removes the subscription and all its buffered messages from the broker. This frees server resources immediately, which is the primary use case for KILL.
3. **Message loss**: Messages that were in the slow consumer’s server-side queue at the time of the KILL are gone. For non-durable consumers this is the intentional, accepted cost of using KILL. Never use KILL for durable consumers or queues where message loss is unacceptable.

**NOTIFY** sends a CONSUMER\_SLOW management notification without disconnecting. Your application or monitoring infrastructure receives this notification and can take its own action: alert the operations team, trigger application-level scaling, or log for trend analysis. NOTIFY is the safer default for most production workloads: it provides detection without the risk of killing related sessions on the same connection.
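
If you choose NOTIFY, something must actually subscribe to the notifications. Artemis publishes them to its management notification address and tags each one with a type header, so a subscriber can filter for slow-consumer events. The key pieces, worth verifying against your broker version:

```
Notification address:  activemq.notifications
                       (configurable via <management-notification-address> in broker.xml)
Type header:           _AMQ_NotifType
Selector for this case: _AMQ_NotifType = 'CONSUMER_SLOW'
```

A durable monitoring consumer on that address, using the selector above, turns NOTIFY from a silent broker-side event into an actionable alert.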

### **slow-consumer-check-period: The Sizing Rule**

The check period must be at least 2× the maximum expected time for a consumer to process a single message. If you set slow-consumer-threshold=1 (one message per minute) and slow-consumer-check-period=5 (check every 5 seconds), a consumer that legitimately processes one message per minute will be flagged as slow on almost every check, producing false positives.

For a threshold of 1 message per minute, the check period should be at least 120 seconds to allow the consumer a full processing cycle before evaluation. The Apache documentation explicitly notes this relationship.

## Apache ActiveMQ® vs. Apache Artemis™ Slow Consumer Handling: Side-by-Side

| Dimension | Apache ActiveMQ® | Apache Artemis™ |
|---|---|---|
| Detection mechanism | JMX: PendingQueueSize, DiscardedCount on TopicSubscriptionViewMBean | Rate-based: slow-consumer-threshold checked every slow-consumer-check-period seconds |
| Configuration location | destinationPolicy → policyEntry → pendingMessageLimitStrategy | address-settings → address-setting |
| Response: non-durable topics | Discard older messages (eviction); consumer continues | NOTIFY or KILL; consumer may be disconnected |
| Response: durable topics/queues | No eviction (use prefetch tuning + add consumers) | NOTIFY or KILL |
| Prefetch interaction | Critical: must reduce prefetch to make eviction effective | Less critical: rate-based, not buffer-based detection |
| Granularity | Per-destination-pattern via wildcards | Per-address-setting via wildcards |
| KILL semantics | Not available | Disconnects the entire connection; affects shared sessions |
| Default behavior | No protection; slow consumers accumulate unbounded | slow-consumer-threshold=-1 (disabled); no protection |

One critical commonality: both Apache ActiveMQ® and Apache Artemis™ have slow consumer protection disabled by default. Neither broker ships with an ActiveMQ slow consumer strategy pre-configured; it must be explicitly added for every deployment.

## Architectural Prevention: Designing Consumer Topologies That Don’t Slow Down

Configuration is the remediation layer. Architecture is the prevention layer. These patterns make slow consumer problems less likely to arise and less severe when they do.

### Pattern 1: Consumer Isolation via Separate Connections

The most common reason a slow consumer kills unrelated sessions is shared connections. If a slow consumer and a healthy producer share a connection factory instance, the Artemis KILL policy (or Apache ActiveMQ®’s flow control backpressure) affects both.

**Rule**: create separate ActiveMQConnectionFactory (Apache ActiveMQ®) or Apache Artemis™ ConnectionFactory instances for consumers and producers in the same application. Never share a connection between consuming and producing operations at different destinations.

### Pattern 2: Consumer Capacity Buffering

Size your consumer thread pool to handle 1.5–2× your expected peak message rate, not your average rate. A consumer thread pool that is at 100% utilization under normal load has zero headroom for processing spikes. When a spike occurs, the thread pool saturates, messages queue up in the prefetch buffer, and the consumer begins appearing slow.
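
One way to turn that sizing rule into a number is the throughput identity threads ≈ message rate × per-message processing time, scaled by headroom. A rough sketch with illustrative rates:

```java
public class ConsumerPoolSizing {
    // Threads needed ≈ peak message rate × per-message processing time,
    // multiplied by a 1.5–2× headroom factor so spikes don't saturate the pool.
    static int threadsFor(double peakMsgsPerSec, double secsPerMsg, double headroom) {
        return (int) Math.ceil(peakMsgsPerSec * secsPerMsg * headroom);
    }

    public static void main(String[] args) {
        // 200 msg/s peak, 50ms per message, 2× headroom → 20 threads
        System.out.println(threadsFor(200, 0.050, 2.0));
    }
}
```

Sizing from peak rate rather than average rate is what keeps utilization below 100% when a spike arrives.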

### Pattern 3: Separate Slow and Fast Consumer Topics

If you have a mix of latency-tolerant consumers (batch processors, overnight reporting jobs) and latency-sensitive consumers (real-time dashboards, alerting systems) subscribed to the same high-volume topic, separate them. The latency-tolerant consumers belong on a different topic (or a queue with delayed processing) where their slowness cannot create memory pressure that affects the latency-sensitive consumers.

### Pattern 4: Application-Level Backpressure

For durable queue consumers, implement back-pressure in the application rather than relying on broker-level mechanisms. If a consumer’s downstream system (database, external API) is the bottleneck, use a bounded thread pool and rate limiting at the application level to prevent queue depth growth from triggering DLQ processing.
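
A minimal JDK-only sketch of that idea: a bounded pool whose rejection policy pushes work back onto the consuming thread, so consumption slows instead of an unbounded in-memory backlog forming (pool and queue sizes are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BackpressurePool {
    // When the bounded queue fills, CallerRunsPolicy makes the JMS listener
    // thread execute the task itself, which pauses message consumption
    // instead of letting work pile up behind a slow database or external API.
    public static ThreadPoolExecutor create(int workers, int queueDepth) {
        return new ThreadPoolExecutor(
            workers, workers, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(queueDepth),
            new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = create(4, 100);
        // Inside a MessageListener: pool.execute(() -> process(message));
        pool.shutdown();
    }
}
```

With this in place, queue depth growth stops at the pool boundary and the broker’s redelivery/DLQ machinery is engaged far less often.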

## Complete Production Configuration: Apache ActiveMQ®

```xml
<!-- activemq.xml — production slow consumer handling for Classic -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="prod-broker"
        useJmx="true">

  <destinationPolicy>
    <policyMap>
      <policyEntries>

        <!-- Market data: aggressive eviction, low prefetch, accept gaps -->
        <policyEntry topic="MARKET.>" producerFlowControl="true"
                     topicPrefetch="50">
          <pendingMessageLimitStrategy>
            <constantPendingMessageLimitStrategy limit="5"/>
          </pendingMessageLimitStrategy>
          <messageEvictionStrategy>
            <oldestMessageWithLowestPriorityEvictionStrategy/>
          </messageEvictionStrategy>
        </policyEntry>

        <!-- Events requiring guaranteed delivery: no eviction, moderate prefetch -->
        <policyEntry topic="EVENTS.>" producerFlowControl="true"
                     topicPrefetch="200">
          <pendingMessageLimitStrategy>
            <!-- -1 = never evict; flow control protects memory instead -->
            <constantPendingMessageLimitStrategy limit="-1"/>
          </pendingMessageLimitStrategy>
        </policyEntry>

        <!-- Queues: low prefetch for fair distribution across consumers -->
        <policyEntry queue=">" producerFlowControl="true"
                     queuePrefetch="10"/>

        <!-- Advisory topics: never apply limits -->
        <policyEntry topic="ActiveMQ.Advisory.>"
                     producerFlowControl="false"/>

      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <!-- Appropriately sized system usage to prevent premature flow control -->
  <systemUsage>
    <systemUsage>
      <memoryUsage><memoryUsage percentOfJvmHeap="20"/></memoryUsage>
      <storeUsage><storeUsage limit="500gb"/></storeUsage>
      <tempUsage><tempUsage limit="50gb"/></tempUsage>
    </systemUsage>
  </systemUsage>

</broker>
```

## One Slow Consumer Shouldn’t Bring Down Your Entire Broker

Slow consumer protection is not a feature you enable after your first incident; it is a standard component of any production ActiveMQ configuration. Both Apache ActiveMQ® and Apache Artemis™ default to no protection, meaning every broker is one sluggish consumer away from a broker-wide performance event.

The configurations in this guide (per-destination pendingMessageLimitStrategy with appropriate prefetch tuning for Apache ActiveMQ®, and rate-based slow-consumer-threshold with a NOTIFY policy for Artemis) give your broker the protection it needs. Combined with continuous JMX monitoring via meshIQ Console, slow consumers become a routine operational alert rather than an emergency incident.

**Get your ActiveMQ slow consumer configuration reviewed by our team → [Talk to an Expert](https://www.meshiq.com/activemq-support/)**

## **Frequently Asked Questions**

**Q1: What is a slow consumer in ActiveMQ?** 

A slow consumer is a client that cannot process and acknowledge messages as fast as the broker dispatches them. For non-durable topic consumers, this forces the broker to accumulate a message backlog in memory. When memory is exhausted, producer flow control activates, blocking all producers on the broker regardless of destination, degrading the entire system.







**Q2: How do I detect a slow consumer in ActiveMQ?** 

In Apache ActiveMQ®, use JMX to inspect PendingQueueSize and DiscardedCount on TopicSubscriptionViewMBean. A continuously growing PendingQueueSize with a lagging DispatchedCounter identifies the slow consumer. In Artemis, configure slow-consumer-threshold and slow-consumer-policy=NOTIFY to receive CONSUMER\_SLOW management notifications automatically.

**Q3: What is pendingMessageLimitStrategy in ActiveMQ?**

An Apache ActiveMQ® broker policy that sets the maximum number of messages retained for a non-durable topic consumer above its prefetch buffer. Once exceeded, older messages are discarded. It must be configured alongside a reduced topicPrefetch to be effective: the default prefetch of 32,766 means 32,766 messages accumulate before any eviction applies.

**Q4: Why does a slow consumer block producers in ActiveMQ?**

Slow topic consumers force the broker to hold unacknowledged messages in the shared memoryUsage allocation. When that limit is exhausted (default 64MB), producer flow control activates for all producers on the broker, even those publishing to unrelated destinations. Slow consumer handling prevents this by either evicting old messages (Apache ActiveMQ®) or disconnecting the consumer (Apache Artemis™).

**Q5: What is the difference between Apache Artemis™ slow-consumer-policy KILL and NOTIFY?**

KILL disconnects the slow consumer’s entire connection, freeing resources but affecting all other sessions on that connection. NOTIFY sends a CONSUMER\_SLOW management notification without disconnection. Use NOTIFY as the default for safety; use KILL only for non-durable consumers where message loss is acceptable, and you have verified the connection is not shared with other sessions.