---
title: "ActiveMQ Monitoring & Alerting Setup: The Complete 2026 Guide"
date: 2026-05-07
author: "TheFrameGuy"
featured_image: "https://www.meshiq.com/wp-content/uploads/blog_activeMQ-monitoring-setup_050826.jpg"
categories:
  - name: "Apache ActiveMQ®"
    url: "/sort-by/active-mq.md"
  - name: "Middleware Optimization"
    url: "/sort-by/middleware-optimization.md"
  - name: "Monitoring"
    url: "/sort-by/monitoring.md"
  - name: "MQ"
    url: "/sort-by/mq.md"
tags:
  - name: "devops"
    url: "/sort-by/tag/devops.md"
  - name: "monitoring"
    url: "/sort-by/tag/monitoring.md"
---

# ActiveMQ Monitoring & Alerting Setup: The Complete 2026 Guide

ActiveMQ failures rarely arrive without warning: memory pressure, storage exhaustion, and consumer outages all show up in broker metrics first. The organizations that catch these signals early are the ones with continuous ActiveMQ monitoring in place. The ones responding to pages at 3 AM are relying on the web console checked manually, or not at all.

This guide establishes the complete ActiveMQ monitoring stack: the metrics that matter, the JMX infrastructure that exposes them, the Prometheus monitoring + Grafana dashboards integration that makes them continuous, the Apache Artemis™-native approach that requires no external tooling, and the alert thresholds that transform metric collection into incident prevention.

## The Metric Hierarchy: Broker, Destination, and Subscription

Before diving into tooling, understand how ActiveMQ metrics are organized. The metric hierarchy has three distinct levels, each covering different failure modes.

1. **Broker-level metrics** describe the health of the entire JVM process: total memory usage, store utilization, temp store usage, total connections, and JVM heap pressure. A broker-level alert affects everything.
2. **Destination-level metrics** (queues and topics in Apache ActiveMQ®; addresses and queues in Apache Artemis™) describe the health of individual message channels: queue depth, consumer count, enqueue/dequeue rates, average enqueue time. A destination-level alert affects the applications using that destination.
3. **Subscription-level metrics** describe individual consumer health: pending messages, dispatched messages, discarded messages. A subscription-level alert typically indicates a slow consumer or a consumer in trouble.

We covered the slow consumer detection metrics (specifically PendingQueueSize and DiscardedCount) in our [**Slow Consumer Detection & Handling**](https://www.meshiq.com/blog/activemq-slow-consumer-detection-handling/) post.

All three levels of this metric hierarchy are essential for complete coverage of ActiveMQ monitoring. Broker-level monitoring alone tells you the building is on fire but not which room. Destination-level alone misses broker-wide resource exhaustion. Subscription-level alone misses the storage cliff.

## The Essential Metrics Reference

### Apache ActiveMQ®: Broker-Level JMX Attributes (BrokerViewMBean)

| JMX Attribute | Prometheus-Style Name | Type | What It Means | Alert Threshold |
| --- | --- | --- | --- | --- |
| `MemoryPercentUsage` | `activemq_memory_pct` | Gauge | % of broker memory limit in use | Warn > 70%, Crit > 85% |
| `StorePercentUsage` | `activemq_store_pct` | Gauge | % of KahaDB store limit in use | Warn > 70%, Crit > 85% |
| `TempPercentUsage` | `activemq_temp_pct` | Gauge | % of temp store limit in use | Warn > 75% |
| `TotalConnectionsCount` | `activemq_connections_total` | Gauge | Total active broker connections | Alert if drops to 0 unexpectedly |
| `TotalConsumerCount` | `activemq_consumers_total` | Gauge | Total consumers across all destinations | Contextual |
| `TotalProducerCount` | `activemq_producers_total` | Gauge | Total producers across all destinations | Alert if drops to 0 unexpectedly |
| `TotalEnqueueCount` | `activemq_enqueues_total` | Counter | Total messages enqueued since start | Rate alert: sudden stop |
| `TotalDequeueCount` | `activemq_dequeues_total` | Counter | Total messages dequeued since start | Rate alert: rate < enqueue rate |

### Apache ActiveMQ®: Destination-Level JMX Attributes (QueueViewMBean / TopicViewMBean)

| JMX Attribute | Prometheus-Style Name | Type | What It Means | Alert Threshold |
| --- | --- | --- | --- | --- |
| `QueueSize` | `activemq_queue_size` | Gauge | Messages waiting in queue | Application-specific; alert on growth rate |
| `ConsumerCount` | `activemq_queue_consumer_count` | Gauge | Active consumers on this destination | Crit = 0 on critical queues |
| `ProducerCount` | `activemq_queue_producer_count` | Gauge | Active producers on this destination | Alert on unexpected drop to 0 |
| `EnqueueCount` | `activemq_queue_enqueue_count` | Counter | Messages enqueued to this destination | Rate alert |
| `DequeueCount` | `activemq_queue_dequeue_count` | Counter | Messages dequeued from this destination | Rate: should track EnqueueCount |
| `ExpiredCount` | `activemq_queue_expired_count` | Counter | Messages expired before delivery | Alert: any growth on critical queues |
| `MemoryPercentUsage` | `activemq_queue_memory_pct` | Gauge | % of destination memory limit in use | Warn > 60% (feeds into broker MemoryPercentUsage) |
| `AverageEnqueueTime` | `activemq_queue_avg_enqueue_ms` | Gauge | Average time messages wait before delivery | Alert on 2× baseline increase |
| `MaxEnqueueTime` | `activemq_queue_max_enqueue_ms` | Gauge | Maximum message wait time | Alert on sustained high values |

### Apache Artemis™: Native Prometheus Metric Names

Artemis organizes metrics differently: destinations are addresses containing queues, and Prometheus metric names use the `artemis_` prefix:

| Apache Artemis™ Metric | Type | Apache ActiveMQ® Equivalent | Notes |
| --- | --- | --- | --- |
| `artemis_message_count` | Gauge | QueueSize | Per queue within an address |
| `artemis_address_size` | Gauge | MemoryPercentUsage | Bytes; compare to `max-size-bytes` |
| `artemis_consumer_count` | Gauge | ConsumerCount | Per queue |
| `artemis_producer_count` | Gauge | ProducerCount | Per address |
| `artemis_messages_added` | Counter | EnqueueCount | Per queue |
| `artemis_messages_acknowledged` | Counter | DequeueCount | Per queue |
| `artemis_messages_expired` | Counter | ExpiredCount | Per queue |
| `artemis_disk_store_usage` | Gauge | StorePercentUsage | Percentage; alert > 70% |
| `artemis_routed_message_count` | Counter | — | Messages routed to at least one queue |
| `artemis_unrouted_message_count` | Counter | — | Messages with no matching queue; important for detecting misconfigured destinations |

`artemis_unrouted_message_count` has no Apache ActiveMQ® equivalent and is one of Apache Artemis™'s most useful Prometheus signals for monitoring broker health. A non-zero and rising value means messages are being published to addresses that have no queues, which is common after a misconfigured deployment.
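A corresponding alert expression is short; the 10-minute window here is an assumption to tune for your deployment:

```promql
# Fire when any messages were routed to an address with no matching queue
increase(artemis_unrouted_message_count[10m]) > 0
```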

We noted the Apache ActiveMQ® vs Apache Artemis™ architectural differences that make this metric unique in our [Apache ActiveMQ® vs Apache Artemis™: 2026 Definitive Guide](https://www.meshiq.com/blog/apache-activemq-vs-apache-artemis/).

## Prometheus Monitoring: Apache ActiveMQ® Setup (JMX Exporter)

Apache ActiveMQ® does not natively expose Prometheus-format metrics. The standard approach is the JMX Prometheus Exporter, a Java agent that runs inside the broker JVM, converts JMX MBeans to Prometheus exposition format, and serves them on a configurable HTTP port.

### Step 1: Download and Configure the Agent

```shell
# Download the latest JMX Exporter from GitHub
curl -Lo /opt/activemq/lib/jmx_prometheus_javaagent.jar \
  https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.20.0/jmx_prometheus_javaagent-0.20.0.jar
```

```yaml
# /opt/activemq/conf/prometheus-config.yml
# Map ActiveMQ JMX MBeans to Prometheus metric names
rules:
  # Broker-level metrics
  - pattern: 'org.apache.activemq<type=Broker, brokerName=(.+)><>(.+)'
    name: activemq_broker_$2
    labels:
      broker: "$1"
    type: GAUGE

  # Queue metrics
  - pattern: 'org.apache.activemq<type=Broker, brokerName=(.+), destinationType=Queue, destinationName=(.+)><>(QueueSize|ConsumerCount|ProducerCount|MemoryPercentUsage|EnqueueCount|DequeueCount|ExpiredCount|AverageEnqueueTime|MaxEnqueueTime|InflightCount)'
    name: activemq_queue_$3
    labels:
      broker: "$1"
      destination: "$2"
    type: GAUGE

  # Topic metrics
  - pattern: 'org.apache.activemq<type=Broker, brokerName=(.+), destinationType=Topic, destinationName=(.+)><>(ConsumerCount|ProducerCount|EnqueueCount|DequeueCount|ExpiredCount|MemoryPercentUsage)'
    name: activemq_topic_$3
    labels:
      broker: "$1"
      destination: "$2"
    type: GAUGE
```

### Step 2: Add the Agent to Broker Startup

```shell
# /opt/activemq/bin/env -- add to ACTIVEMQ_OPTS
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS \
  -javaagent:/opt/activemq/lib/jmx_prometheus_javaagent.jar=9779:/opt/activemq/conf/prometheus-config.yml"
```

Port 9779 is a community convention for the JMX Exporter sidecar; it can be any unused port. After restarting ActiveMQ, verify the endpoint is working:

```shell
curl http://localhost:9779/metrics | grep activemq_queue_QueueSize
# Expected output: activemq_queue_QueueSize{broker="prod-broker",destination="orders.queue"} 0.0
```

### Step 3: Configure Prometheus Scraping

```yaml
# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'activemq-classic'
    scrape_interval: 15s
    static_configs:
      - targets:
          - 'broker1.internal:9779'
          - 'broker2.internal:9779'
        labels:
          environment: 'production'
          cluster: 'main'
```

For high-cardinality environments with many destinations (hundreds of queues), consider increasing `scrape_interval` to 30s or 60s to reduce the per-scrape JMX attribute resolution load on the broker. Very high queue counts scraped every 15s can produce noticeable JMX overhead.
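Another way to cut cardinality is to drop uninteresting destination series at scrape time. A sketch using `metric_relabel_configs` (the destination patterns here are illustrative; substitute your own critical queues):

```yaml
scrape_configs:
  - job_name: 'activemq-classic'
    scrape_interval: 30s
    static_configs:
      - targets: ['broker1.internal:9779']
    metric_relabel_configs:
      # Keep broker-level series (empty destination label) plus the
      # destinations we actually alert on; drop everything else
      - source_labels: [destination]
        regex: '^$|(orders|payments|critical)\..*'
        action: keep
```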

## Prometheus Monitoring: Apache Artemis™ Setup (Native Plugin)

Artemis ships with a native Prometheus metrics plugin: no external agent, no configuration mapping file, no port-juggling. Enabling it requires a few lines in broker.xml:

```xml
<!-- broker.xml: enable native Prometheus metrics -->
<metrics>
  <plugin class-name="org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/>
</metrics>
```

After restarting, metrics are available at the broker’s management web server endpoint:

```shell
# Default Artemis management port is 8161; /metrics path is automatic
curl http://localhost:8161/metrics | grep artemis_message_count
# Expected: artemis_message_count{address="orders.queue",broker="0.0.0.0",queue="orders.queue",...} 0.0
```

Prometheus scrape configuration for Artemis:

```yaml
scrape_configs:
  - job_name: 'activemq-artemis'
    scrape_interval: 15s
    metrics_path: '/metrics'
    static_configs:
      - targets:
          - 'artemis-broker1.internal:8161'
          - 'artemis-broker2.internal:8161'
```

**Security note:** Before exposing the Apache Artemis™ management port externally, review the security hardening configuration we covered in the **[Security Hardening Guide](https://www.meshiq.com/blog/activemq-security-hardening-guide/)**. The management port hosts Jolokia (CVE-2022-41678 vector) and should be restricted to the monitoring subnet.

## Grafana Dashboards: What to Visualize

With Prometheus collecting data, Grafana dashboards translate it into actionable visibility. Grafana Cloud includes a pre-built Apache ActiveMQ integration with 5 Grafana dashboards and 4 alert rules, which is a reasonable starting point. But pre-built dashboards cover common cases; production environments benefit from destination-specific panels tailored to your workload.

### Recommended Dashboard Layout

**Row 1: Broker Health Overview**

- Memory Usage % (gauge) with threshold coloring at 70% and 85%
- Store Usage % (gauge) with threshold coloring at 70% and 85%
- Temp Usage % (gauge) threshold at 75%
- Total Connections (stat panel); compare to expected baseline
- JVM Heap Used vs. Committed (time series)

**Row 2: Critical Queue Health**

- Queue Depth over time (time series, one line per critical queue)
- Consumer Count per queue (table, red row highlight when = 0)
- Enqueue vs. Dequeue rate comparison (time series, detect divergence)
- Average Enqueue Time (time series, rising trend is the early warning signal)

**Row 3: Throughput Metrics**

- Total Enqueue Rate (per-broker, per-destination)
- Total Dequeue Rate
- Expired Message Rate (alert-worthy if non-zero on critical queues)
- DLQ Depth (separate panel for each DLQ destination)

**Row 4: Subscription Health**

- Active consumers per destination (heatmap across all queues)
- Slow consumer indicators: PendingQueueSize, DiscardedCount

### Sample PromQL Queries

```promql
# Queue depth growth rate over 5 minutes (detect accumulation trends)
# Note: QueueSize is a gauge, so use deriv() rather than rate()
deriv(activemq_queue_QueueSize{destination="orders.queue"}[5m])

# Memory usage approaching producer flow control trigger (70% threshold)
activemq_broker_MemoryPercentUsage > 60

# Destinations with zero consumers (critical alert candidate)
activemq_queue_ConsumerCount == 0

# Dequeue rate / enqueue rate ratio; below 1 means the queue is growing
rate(activemq_queue_DequeueCount[5m]) /
rate(activemq_queue_EnqueueCount[5m])

# Average enqueue time deviation from 1-hour moving average (early warning)
activemq_queue_AverageEnqueueTime /
avg_over_time(activemq_queue_AverageEnqueueTime[1h]) > 2
```

## Alert Rules: The Production Alert Playbook

Good alerts fire before the situation is critical, not when it is. The thresholds below are designed for early warning intervention, not reactive fire-fighting.

### Critical Alerts (Page Immediately)

```yaml
# Prometheus alert rules -- critical.yml
groups:
  - name: activemq_critical
    rules:

      # Consumer count drops to zero on a monitored queue
      - alert: ActiveMQConsumerCountZero
        expr: activemq_queue_ConsumerCount{destination=~"orders.*|payments.*|critical.*"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "No consumers on {{ $labels.destination }}"
          description: "Queue {{ $labels.destination }} on broker {{ $labels.broker }} has had zero consumers for 2 minutes. Messages are accumulating."

      # Store usage approaching hard limit (100% = sends fail)
      - alert: ActiveMQStoreUsageCritical
        expr: activemq_broker_StorePercentUsage > 85
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "ActiveMQ store usage critical: {{ $value }}%"
          description: "Broker {{ $labels.broker }} store is {{ $value }}% full. At 100% all persistent message sends fail."

      # Memory usage at flow control trigger point
      - alert: ActiveMQMemoryUsageCritical
        expr: activemq_broker_MemoryPercentUsage > 85
        for: 3m
        labels:
          severity: critical
        annotations:
          summary: "ActiveMQ memory critical: {{ $value }}%"
          description: "Broker {{ $labels.broker }} memory at {{ $value }}%. Producer flow control is active, blocking all producer sends."
```

### Warning Alerts (Notify the Team)

```yaml
# Prometheus alert rules -- warning.yml
groups:
  - name: activemq_warning
    rules:

      # Memory approaching producer flow control threshold
      - alert: ActiveMQMemoryUsageWarning
        expr: activemq_broker_MemoryPercentUsage > 70
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "ActiveMQ memory approaching flow control: {{ $value }}%"
          description: "cursorMemoryHighWaterMark default is 70%. Producer flow control will activate soon."

      # Store usage warning -- time to investigate disk
      - alert: ActiveMQStoreUsageWarning
        expr: activemq_broker_StorePercentUsage > 70
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "ActiveMQ store usage high: {{ $value }}%"

      # Queue depth growing for 10 minutes without draining
      - alert: ActiveMQQueueDepthGrowing
        expr: |
          deriv(activemq_queue_QueueSize[5m]) > 0
          and activemq_queue_QueueSize > 1000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Queue {{ $labels.destination }} depth growing"
          description: "Queue depth has been increasing for 10 minutes (growth rate: {{ $value }} messages/sec)."

      # Enqueue time 2x baseline -- consumer slowing down
      - alert: ActiveMQEnqueueTimeElevated
        expr: |
          activemq_queue_AverageEnqueueTime /
          avg_over_time(activemq_queue_AverageEnqueueTime[1h]) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Elevated enqueue time on {{ $labels.destination }}"
          description: "Average enqueue time is 2x the hourly baseline. Consumer may be slowing down."

      # Expired messages on critical destinations
      - alert: ActiveMQMessageExpiry
        expr: increase(activemq_queue_ExpiredCount{destination=~"orders.*|payments.*"}[5m]) > 0
        labels:
          severity: warning
        annotations:
          summary: "Messages expiring on {{ $labels.destination }}"
          description: "{{ $value }} messages expired in the last 5 minutes. Consumers are lagging behind TTL."
```

### The Metric That Predicts Incidents: AverageEnqueueTime

Most teams monitor ActiveMQ using queue depth (QueueSize) as their primary consumer health signal. Queue depth is a lagging indicator: it rises after the problem has already started. AverageEnqueueTime is the leading indicator.

When a consumer begins to slow down (a database lock, a GC pause, an external API latency spike), messages begin spending more time in the queue before delivery. QueueSize may still be near normal because the consumer is still processing, just more slowly. AverageEnqueueTime detects this trend before the backlog becomes visible in depth, giving your Grafana dashboards something to surface well before the situation escalates.

Alerting on AverageEnqueueTime &gt; 2× baseline (where baseline is the hourly moving average) gives your team a 5–15 minute head start on what would otherwise look like a sudden queue depth explosion. It is the most underused signal in a standard ActiveMQ monitoring stack.
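The baseline comparison is easy to prototype outside PromQL. A minimal Python sketch of the same 2× rule (the sample values are hypothetical):

```python
from statistics import mean

def enqueue_time_alert(samples_1h, current_ms, factor=2.0):
    """Return True when the current AverageEnqueueTime exceeds
    `factor` times the trailing 1-hour baseline."""
    baseline = mean(samples_1h)
    if baseline == 0:
        return False  # no traffic yet; nothing to compare against
    return current_ms / baseline > factor

# Hypothetical history: a healthy hour averaging ~50 ms
history = [48, 52, 50, 49, 51, 50]

print(enqueue_time_alert(history, 55))   # modest bump -> False
print(enqueue_time_alert(history, 120))  # >2x baseline -> True
```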

## JMX Direct Access: Quick Commands Without Prometheus

For environments without a Prometheus stack, or for one-off diagnostic queries, the ActiveMQ CLI provides direct metric access.

### Apache ActiveMQ®: `activemq dstat` and `activemq bstat`

```shell
# Destination statistics -- queue depth, consumer count, enqueue/dequeue rates
cd $ACTIVEMQ_HOME
bin/activemq dstat

# Output:
# Name            Queue Size  Producer #  Consumer #  Enqueue #  Dequeue #  Memory %
# orders.queue    2840        2           3           15420      12580      12
# payments.queue  0           1           2           8230       8230       0

# Broker statistics -- memory, store, temp usage
bin/activemq bstat

# For Artemis:
bin/artemis queue stat --url tcp://localhost:61616 --user admin --password admin
```
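For quick scripting on top of `dstat` output, a rough parser sketch follows. The column layout is assumed from the sample above (whitespace-separated, one header row); verify it against your broker version before relying on it:

```python
def parse_dstat(text):
    """Parse `activemq dstat`-style output into a {queue: stats} dict.
    Assumes whitespace-separated columns and a single header row."""
    stats = {}
    for line in text.strip().splitlines()[1:]:  # skip header row
        cols = line.split()
        if len(cols) < 6:
            continue
        stats[cols[0]] = {
            "queue_size": int(cols[1]),
            "producers": int(cols[2]),
            "consumers": int(cols[3]),
            "enqueues": int(cols[4]),
            "dequeues": int(cols[5]),
        }
    return stats

# Hypothetical sample, including a queue with no consumers
sample = """Name Queue_Size Producer# Consumer# Enqueue# Dequeue# Memory%
orders.queue 2840 2 3 15420 12580 12
payments.queue 0 1 2 8230 8230 0
audit.queue 150 1 0 150 0 3"""

stats = parse_dstat(sample)
for name, s in stats.items():
    if s["consumers"] == 0 and s["queue_size"] > 0:
        print(f"ALERT: no consumers on {name} (depth {s['queue_size']})")
```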

### Programmatic JMX Access

For scripted monitoring or integration with custom tooling:

```java
// (imports: javax.management.*, javax.management.remote.*, java.util.*)
// Connect to broker JMX programmatically
JMXServiceURL url = new JMXServiceURL(
    "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
Map<String, Object> env = Collections.singletonMap(
    JMXConnector.CREDENTIALS, new String[] {"admin", "admin"});
JMXConnector connector = JMXConnectorFactory.connect(url, env);
MBeanServerConnection mbsc = connector.getMBeanServerConnection();

// Query all queue MBeans across all brokers on this JVM
ObjectName brokerPattern = new ObjectName(
    "org.apache.activemq:type=Broker,brokerName=*," +
    "destinationType=Queue,destinationName=*");

Set<ObjectName> queues = mbsc.queryNames(brokerPattern, null);
for (ObjectName queue : queues) {
    long queueSize = (Long) mbsc.getAttribute(queue, "QueueSize");
    long consumerCount = (Long) mbsc.getAttribute(queue, "ConsumerCount");
    // MemoryPercentUsage is an integer attribute; go through Number to be safe
    int memPct = ((Number) mbsc.getAttribute(queue, "MemoryPercentUsage")).intValue();
    String destName = queue.getKeyProperty("destinationName");

    if (consumerCount == 0 && queueSize > 0) {
        System.out.printf("ALERT: No consumers on %s (depth: %d)%n",
            destName, queueSize);
    }
}
```

## Monitoring in a Network of Brokers

When running a [Network of Brokers](https://www.meshiq.com/blog/activemq-network-of-brokers-configuration/), each broker must be monitored independently. There is no built-in NoB-level aggregation in the ActiveMQ metrics API; you need to aggregate across brokers in your Prometheus stack and surface the results in unified Grafana dashboards.

Additional NoB-specific metrics to monitor ActiveMQ topology health:

- **networkBridges (BrokerViewMBean):** number of active bridge connections. Dropping to 0 on a broker that should have active bridges indicates a network split.
- **Per-broker TotalConnectionsCount:** sudden drops across all brokers simultaneously suggest a network-level event.
- **Cross-broker message flow:** compare enqueue rates on producer-side brokers against dequeue rates on consumer-side brokers. Growing divergence indicates a forwarding bottleneck in the NoB topology.

For multi-datacenter NoB deployments, label your Prometheus metrics by datacenter and create alert rules that fire when inbound message rate from the remote datacenter drops to zero; this detects inter-datacenter link failures before application teams notice.
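As a sketch, assuming brokers carry a `datacenter` label applied at scrape time (the label name and value here are illustrative, not part of the default exporter output):

```promql
# Inbound traffic from the remote datacenter has stopped entirely
sum(rate(activemq_queue_EnqueueCount{datacenter="dc-west"}[5m])) == 0
```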

## Monitoring Is Not Optional – It Is the Difference Between Incident and Prevention

The metrics in this guide exist in your broker right now. Every ActiveMQ deployment exposes QueueSize, MemoryPercentUsage, ConsumerCount, and AverageEnqueueTime through JMX from the moment the broker starts. The question is whether anyone is watching them.

Connecting that JMX data to Prometheus takes less than an hour following this guide. Connecting Prometheus to Grafana dashboards and defining the alert rules in this post adds another hour. After that, your team gets advance warning of memory pressure, storage exhaustion, consumer outages, and processing slowdowns, all before they become pages.

meshIQ Console provides this ActiveMQ monitoring visibility immediately, without the overhead of the Prometheus/Grafana infrastructure, and is purpose-built for Apache ActiveMQ® and Apache Artemis™ deployments, with enterprise alerting integration built in.

**Set up monitoring for your ActiveMQ environment today → Talk to an Expert**

## Frequently Asked Questions

**Q1. How do I monitor ActiveMQ?** 

ActiveMQ exposes all metrics through JMX. For continuous monitoring, use the JMX Prometheus Exporter agent (Apache ActiveMQ®) or the native Apache Artemis™ Prometheus plugin (Artemis) to serve metrics on an HTTP endpoint. Prometheus scrapes at an interval; Grafana visualizes with dashboards and alert rules. For point-in-time inspection, the built-in web console (port 8161) and the activemq dstat / bstat CLI commands provide immediate visibility without setup.

**Q2. What are the most important ActiveMQ metrics to monitor?**

The five must-have metrics are QueueSize growth rate, MemoryPercentUsage (alert before 70%), StorePercentUsage (alert before 85%), ConsumerCount (alert on zero), and AverageEnqueueTime rate-of-change (the leading indicator). ExpiredCount and DLQ depth are important secondary signals for critical queues.

**Q3. How do I set up Prometheus monitoring for ActiveMQ?** 

For Apache ActiveMQ®: add the JMX Prometheus Exporter JAR as a Java agent via ACTIVEMQ_OPTS in bin/env, with a YAML config file mapping MBean patterns to metric names. For Apache Artemis™: add a `<metrics>` plugin block to broker.xml; no external tools required. Both expose metrics at /metrics for Prometheus to scrape.

**Q4. What alert thresholds should I set for ActiveMQ?** 

Warning at MemoryPercentUsage &gt; 70%, Critical &gt; 85%. Warning at StorePercentUsage &gt; 70%, Critical &gt; 85%. Warning at TempPercentUsage &gt; 75%. Critical when ConsumerCount = 0 on any monitored queue for more than 2 minutes. Warning when AverageEnqueueTime &gt; 2× hourly baseline. Warning when ExpiredCount grows on critical destinations.

**Q5. What is the difference between Apache ActiveMQ® and Apache Artemis™ monitoring?**

Apache ActiveMQ® requires the external JMX Prometheus Exporter agent with a YAML mapping file. Artemis has a native built-in Prometheus plugin requiring one line of broker.xml. Metric names differ (QueueSize vs artemis\_message\_count). Artemis includes artemis\_unrouted\_message\_count, a diagnostic metric with no Apache ActiveMQ® equivalent that detects messages published to addresses without queues.