ActiveMQ Monitoring & Alerting Setup: The Complete 2026 Guide

meshIQ May 7, 2026

Most ActiveMQ outages are not sudden failures. They are visible in the metrics for minutes, sometimes hours, before they become incidents. A memory usage graph climbing past 60%. A queue depth that isn't draining. An enqueue time that doubled after a deployment. A consumer count that dropped from 3 to 1 at 2 AM.

The organizations that catch these early are the ones with continuous ActiveMQ monitoring in place. The ones responding to pages at 3 AM are the ones relying on the web console checked manually, or not at all.

This guide establishes the complete ActiveMQ monitoring stack: the metrics that matter, the JMX infrastructure that exposes them, the Prometheus and Grafana integration that makes them continuous, the Apache Artemis™-native approach that requires no external tooling, and the alert thresholds that transform metric collection into incident prevention.

The Metric Hierarchy: Broker, Destination, and Subscription

Before diving into tooling, understand how ActiveMQ metrics are organized. The metric hierarchy has three distinct levels, each covering different failure modes. 

  1. Broker-level metrics describe the health of the entire JVM process: total memory usage, store utilization, temp store usage, total connections, and JVM heap pressure. A broker-level alert affects everything.
  2. Destination-level metrics (queues and topics in Apache ActiveMQ®; addresses and queues in Apache Artemis™) describe the health of individual message channels: queue depth, consumer count, enqueue/dequeue rates, average enqueue time. A destination-level alert affects the applications using that destination.
  3. Subscription-level metrics describe individual consumer health: pending messages, dispatched messages, discarded messages. A subscription-level alert typically indicates a slow consumer or a consumer in trouble. 

We covered the slow consumer detection metrics (specifically PendingQueueSize and DiscardedCount) in our Slow Consumer Detection & Handling post.

All three levels of this metric hierarchy are essential for complete coverage of ActiveMQ monitoring. Broker-level monitoring alone tells you the building is on fire but not which room. Destination-level alone misses broker-wide resource exhaustion. Subscription-level alone misses the storage cliff. 
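In JMX terms on Apache ActiveMQ®, the three levels map to three MBean naming patterns (a sketch; prod-broker, orders.queue, and the consumer key properties are placeholders — exact consumer keys vary by client):

# Broker level: one MBean per broker JVM
org.apache.activemq:type=Broker,brokerName=prod-broker

# Destination level: one MBean per queue or topic
org.apache.activemq:type=Broker,brokerName=prod-broker,destinationType=Queue,destinationName=orders.queue

# Subscription level: one MBean per connected consumer
org.apache.activemq:type=Broker,brokerName=prod-broker,destinationType=Queue,destinationName=orders.queue,endpoint=Consumer,clientId=app-1,consumerId=ID_abc-123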

The Essential Metrics Reference

Apache ActiveMQ®: Broker-Level JMX Attributes (BrokerViewMBean)

| JMX Attribute | Prometheus-Style Name | Type | What It Means | Alert Threshold |
|---|---|---|---|---|
| MemoryPercentUsage | activemq_memory_pct | Gauge | % of broker memory usage limit in use | Warn > 70%, Crit > 85% |
| StorePercentUsage | activemq_store_pct | Gauge | % of KahaDB store usage limit in use | Warn > 70%, Crit > 85% |
| TempPercentUsage | activemq_temp_pct | Gauge | % of temp store limit in use | Warn > 75% |
| TotalConnectionsCount | activemq_connections_total | Gauge | Total active broker connections | Alert if drops to 0 unexpectedly |
| TotalConsumerCount | activemq_consumers_total | Gauge | Total consumers across all destinations | Contextual |
| TotalProducerCount | activemq_producers_total | Gauge | Total producers across all destinations | Alert if drops to 0 unexpectedly |
| TotalEnqueueCount | activemq_enqueues_total | Counter | Total messages enqueued since start | Rate alert: sudden stop |
| TotalDequeueCount | activemq_dequeues_total | Counter | Total messages dequeued since start | Rate alert: rate < enqueue rate |

Apache ActiveMQ®: Destination-Level JMX Attributes (QueueViewMBean / TopicViewMBean)

| JMX Attribute | Prometheus-Style Name | Type | What It Means | Alert Threshold |
|---|---|---|---|---|
| QueueSize | activemq_queue_size | Gauge | Messages waiting in queue | Application-specific; alert on growth rate |
| ConsumerCount | activemq_queue_consumer_count | Gauge | Active consumers on this destination | Crit = 0 on critical queues |
| ProducerCount | activemq_queue_producer_count | Gauge | Active producers on this destination | Alert on unexpected drop to 0 |
| EnqueueCount | activemq_queue_enqueue_count | Counter | Messages enqueued to this destination | Rate alert |
| DequeueCount | activemq_queue_dequeue_count | Counter | Messages dequeued from this destination | Rate: should track EnqueueCount |
| ExpiredCount | activemq_queue_expired_count | Counter | Messages expired before delivery | Alert: any growth on critical queues |
| MemoryPercentUsage | activemq_queue_memory_pct | Gauge | % of destination memory limit in use | Warn > 60% (feeds into broker MemoryPercentUsage) |
| AverageEnqueueTime | activemq_queue_avg_enqueue_ms | Gauge | Average time messages wait before delivery | Alert on 2× baseline increase |
| MaxEnqueueTime | activemq_queue_max_enqueue_ms | Gauge | Maximum message wait time | Alert on sustained high values |

Apache Artemis™: Native Prometheus Metric Names

Artemis organizes metrics differently: destinations are addresses containing queues, and Prometheus metric names use the artemis_ prefix:

| Apache Artemis™ Metric | Type | Apache ActiveMQ® Equivalent | Notes |
|---|---|---|---|
| artemis_message_count | Gauge | QueueSize | Per queue within an address |
| artemis_address_size | Gauge | MemoryPercentUsage | Bytes; compare to max-size-bytes |
| artemis_consumer_count | Gauge | ConsumerCount | Per queue |
| artemis_producer_count | Gauge | ProducerCount | Per address |
| artemis_messages_added | Counter | EnqueueCount | Per queue |
| artemis_messages_acknowledged | Counter | DequeueCount | Per queue |
| artemis_messages_expired | Counter | ExpiredCount | Per queue |
| artemis_disk_store_usage | Gauge | StorePercentUsage | Percentage; alert > 70% |
| artemis_routed_message_count | Counter | (none) | Messages routed to at least one queue |
| artemis_unrouted_message_count | Counter | (none) | Messages with no matching queue; important for detecting misconfigured destinations |

artemis_unrouted_message_count has no Apache ActiveMQ® equivalent and is one of Apache Artemis™'s most useful Prometheus signals for monitoring broker health. A non-zero, rising value means messages are being published to addresses that have no queues, which is common after a misconfigured deployment.
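A minimal alert expression for that signal could look like this (a sketch; tighten the window or filter by address labels to match your naming):

# Any unrouted messages in the last 5 minutes deserve investigation
increase(artemis_unrouted_message_count[5m]) > 0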

We covered the architectural differences behind this metric in our Apache ActiveMQ® vs Apache Artemis™: 2026 Definitive Guide.

Prometheus Monitoring: Apache ActiveMQ® Setup (JMX Exporter)

Apache ActiveMQ® does not natively expose Prometheus-format metrics. The standard approach is the JMX Prometheus Exporter, a Java agent that runs inside the broker JVM, converts JMX MBeans to Prometheus exposition format, and serves them on a configurable HTTP port.

Step 1: Download and Configure the Agent

# Download the latest JMX Exporter from GitHub
curl -Lo /opt/activemq/lib/jmx_prometheus_javaagent.jar \
  https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.20.0/jmx_prometheus_javaagent-0.20.0.jar

# /opt/activemq/conf/prometheus-config.yml
# Map ActiveMQ JMX MBeans to Prometheus metric names
rules:
  # Broker-level metrics
  - pattern: 'org.apache.activemq<type=Broker, brokerName=(.+)><>(.+)'
    name: activemq_broker_$2
    labels:
      broker: "$1"
    type: GAUGE

  # Queue metrics
  - pattern: 'org.apache.activemq<type=Broker, brokerName=(.+), destinationType=Queue, destinationName=(.+)><>(QueueSize|ConsumerCount|ProducerCount|MemoryPercentUsage|EnqueueCount|DequeueCount|ExpiredCount|AverageEnqueueTime|MaxEnqueueTime|InflightCount)'
    name: activemq_queue_$3
    labels:
      broker: "$1"
      destination: "$2"
    type: GAUGE

  # Topic metrics
  - pattern: 'org.apache.activemq<type=Broker, brokerName=(.+), destinationType=Topic, destinationName=(.+)><>(ConsumerCount|ProducerCount|EnqueueCount|DequeueCount|ExpiredCount|MemoryPercentUsage)'
    name: activemq_topic_$3
    labels:
      broker: "$1"
      destination: "$2"
    type: GAUGE

Step 2: Add the Agent to Broker Startup

# /opt/activemq/bin/env — add to ACTIVEMQ_OPTS
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS \
  -javaagent:/opt/activemq/lib/jmx_prometheus_javaagent.jar=9779:/opt/activemq/conf/prometheus-config.yml"

Port 9779 is a community convention for the JMX Exporter sidecar; it can be any unused port. After restarting ActiveMQ, verify the endpoint is working:
curl http://localhost:9779/metrics | grep activemq_queue_QueueSize
# Expected output: activemq_queue_QueueSize{broker="prod-broker",destination="orders.queue"} 0.0

Step 3: Configure Prometheus Scraping

# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'activemq-classic'
    scrape_interval: 15s
    static_configs:
      - targets:
          - 'broker1.internal:9779'
          - 'broker2.internal:9779'
        labels:
          environment: 'production'
          cluster: 'main'

For high-cardinality environments with many destinations (hundreds of queues), consider increasing scrape_interval to 30s or 60s to reduce the per-scrape JMX attribute resolution load on the broker. Very high queue counts with 15s scraping can produce noticeable JMX overhead.
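One low-effort way to apply this is a second scrape job with a slower interval for the destination-heavy brokers (a sketch; the job name and target are placeholders):

  # Brokers with hundreds of destinations: scrape less aggressively
  - job_name: 'activemq-classic-highcard'
    scrape_interval: 60s
    static_configs:
      - targets:
          - 'broker-bigtenant.internal:9779'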

Prometheus Monitoring: Apache Artemis™ Setup (Native Plugin)

Artemis ships with a native Prometheus metrics plugin: no external agent, no configuration mapping file, no port juggling. Enabling it requires one line in broker.xml:

<!-- broker.xml: enable native Prometheus metrics -->
<metrics>
  <plugin class-name="org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/>
</metrics>

After restarting, metrics are available at the broker’s management web server endpoint:

# Default Artemis management port is 8161; /metrics path is automatic
curl http://localhost:8161/metrics | grep artemis_message_count
# Expected: artemis_message_count{address="orders.queue",broker="0.0.0.0",queue="orders.queue",...} 0.0

Prometheus scrape configuration for Artemis:

scrape_configs:
  - job_name: 'activemq-artemis'
    scrape_interval: 15s
    metrics_path: '/metrics'
    static_configs:
      - targets:
          - 'artemis-broker1.internal:8161'
          - 'artemis-broker2.internal:8161'

Security note: Before exposing the Apache Artemis™ management port externally, review the security hardening configuration we covered in the Security Hardening Guide. The management port hosts Jolokia (CVE-2022-41678 vector) and should be restricted to the monitoring subnet.
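As a concrete form of that restriction, a host firewall rule limiting the management port to a monitoring subnet might look like this (a sketch; 10.20.0.0/24 is a placeholder for your monitoring network):

# Allow only the monitoring subnet to reach the Artemis management port
iptables -A INPUT -p tcp --dport 8161 -s 10.20.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8161 -j DROP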

Grafana Dashboards: What to Visualize

With Prometheus collecting data, Grafana dashboards translate it into actionable visibility. Grafana Cloud includes a pre-built Apache ActiveMQ integration with 5 dashboards and 4 alert rules, which is a reasonable starting point. But pre-built dashboards cover the common cases; production environments benefit from destination-specific panels tailored to your workload.
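If you provision Grafana from files rather than clicking through the UI, wiring in the Prometheus datasource takes a few lines (a sketch; the URL is a placeholder for your Prometheus server):

# grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.internal:9090
    isDefault: true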

Row 1: Broker Health Overview

  • Memory Usage % (gauge) with threshold coloring at 70% and 85%
  • Store Usage % (gauge) with threshold coloring at 70% and 85%
  • Temp Usage % (gauge) with threshold coloring at 75%
  • Total Connections (stat panel), compared to expected baseline
  • JVM Heap Used vs. Committed (time series)

Row 2: Critical Queue Health

  • Queue Depth over time (time series, one line per critical queue)
  • Consumer Count per queue (table, red row highlight when = 0)
  • Enqueue vs. Dequeue rate comparison (time series, detect divergence)
  • Average Enqueue Time (time series, rising trend is the early warning signal)

Row 3: Throughput Metrics

  • Total Enqueue Rate (per-broker, per-destination)
  • Total Dequeue Rate
  • Expired Message Rate (alert-worthy if non-zero on critical queues)
  • DLQ Depth (separate panel for each DLQ destination)

Row 4: Subscription Health

  • Active consumers per destination (heatmap across all queues)
  • Slow consumer indicators: PendingQueueSize, DiscardedCount

Sample PromQL Queries

# Queue depth growth rate over 5 minutes (deriv, since QueueSize is a gauge)
deriv(activemq_queue_QueueSize{destination="orders.queue"}[5m])

# Memory usage approaching producer flow control trigger (70% threshold)
activemq_broker_MemoryPercentUsage > 60

# Destinations with zero consumers (critical alert candidate)
activemq_queue_ConsumerCount == 0

# Dequeue rate / enqueue rate ratio — below 1 means queue is growing
rate(activemq_queue_DequeueCount[5m]) /
rate(activemq_queue_EnqueueCount[5m])

# Average enqueue time deviation from 1-hour moving average (early warning)
activemq_queue_AverageEnqueueTime /
avg_over_time(activemq_queue_AverageEnqueueTime[1h]) > 2

Alert Rules: The Production Alert Playbook

Good alerts fire before the situation is critical, not when it is. The thresholds below are designed for early warning intervention, not reactive fire-fighting.

Critical Alerts (Page Immediately)

# Prometheus alert rules — critical.yml
groups:
  - name: activemq_critical
    rules:

      # Consumer count drops to zero on a monitored queue
      - alert: ActiveMQConsumerCountZero
        expr: activemq_queue_ConsumerCount{destination=~"orders.*|payments.*|critical.*"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "No consumers on {{ $labels.destination }}"
          description: "Queue {{ $labels.destination }} on broker {{ $labels.broker }} has had zero consumers for 2 minutes. Messages are accumulating."

      # Store usage approaching hard limit (100% = sends fail)
      - alert: ActiveMQStoreUsageCritical
        expr: activemq_broker_StorePercentUsage > 85
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "ActiveMQ store usage critical: {{ $value }}%"
          description: "Broker {{ $labels.broker }} store is {{ $value }}% full. At 100% all persistent message sends fail."

      # Memory usage at flow control trigger point
      - alert: ActiveMQMemoryUsageCritical
        expr: activemq_broker_MemoryPercentUsage > 85
        for: 3m
        labels:
          severity: critical
        annotations:
          summary: "ActiveMQ memory critical: {{ $value }}%"
          description: "Broker {{ $labels.broker }} memory at {{ $value }}%. Producer flow control is active, blocking all producer sends."

Warning Alerts (Notify the Team)

# Prometheus alert rules: warning.yml
groups:
  - name: activemq_warning
    rules:

      # Memory approaching producer flow control threshold
      - alert: ActiveMQMemoryUsageWarning
        expr: activemq_broker_MemoryPercentUsage > 70
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "ActiveMQ memory approaching flow control: {{ $value }}%"
          description: "cursorMemoryHighWaterMark default is 70%. Producer flow control will activate soon."

      # Store usage warning — time to investigate disk
      - alert: ActiveMQStoreUsageWarning
        expr: activemq_broker_StorePercentUsage > 70
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "ActiveMQ store usage high: {{ $value }}%"

      # Queue depth growing for 10 minutes without draining
      - alert: ActiveMQQueueDepthGrowing
        expr: |
          deriv(activemq_queue_QueueSize[5m]) > 0
          and activemq_queue_QueueSize > 1000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Queue {{ $labels.destination }} depth growing"
          description: "Queue depth has been increasing for 10 minutes. Current depth: {{ $value }}"

      # Enqueue time 2× baseline — consumer slowing down
      - alert: ActiveMQEnqueueTimeElevated
        expr: |
          activemq_queue_AverageEnqueueTime /
          avg_over_time(activemq_queue_AverageEnqueueTime[1h]) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Elevated enqueue time on {{ $labels.destination }}"
          description: "Average enqueue time is 2× the hourly baseline. Consumer may be slowing down."

      # Expired messages on critical destinations
      - alert: ActiveMQMessageExpiry
        expr: |
          increase(activemq_queue_ExpiredCount{destination=~"orders.*|payments.*"}[5m]) > 0
        labels:
          severity: warning
        annotations:
          summary: "Messages expiring on {{ $labels.destination }}"
          description: "{{ $value }} messages expired in the last 5 minutes. Consumers are lagging behind TTL."
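To make the severity labels actionable, route them to different receivers in Alertmanager (a sketch; the receiver names, Slack webhook, and PagerDuty integration key are placeholders):

# alertmanager.yml
route:
  receiver: team-slack            # default: warnings notify the team channel
  routes:
    - matchers:
        - severity = critical
      receiver: oncall-pager      # critical alerts page immediately
receivers:
  - name: oncall-pager
    pagerduty_configs:
      - routing_key: <pagerduty-integration-key>
  - name: team-slack
    slack_configs:
      - api_url: https://hooks.slack.com/services/<webhook-path>
        channel: '#activemq-alerts'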

The Metric That Predicts Incidents: AverageEnqueueTime

Most teams monitor ActiveMQ using queue depth (QueueSize) as their primary consumer health signal. But queue depth is a lagging indicator: it rises after the problem has already started. AverageEnqueueTime is the leading indicator.

When a consumer begins to slow down (a database lock, a GC pause, an external API latency spike), messages begin spending more time in the queue before delivery. QueueSize may still look near normal because the consumer is still processing, just more slowly. AverageEnqueueTime detects the trend before the backlog becomes visible in depth, giving your Grafana dashboards something to surface well before the situation escalates.

Alerting on AverageEnqueueTime > 2× baseline (where baseline is the hourly moving average) gives your team a 5–15 minute head start on what would otherwise look like a sudden queue depth explosion. It is the most underused signal in a standard ActiveMQ monitoring stack.
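One way to make that baseline ratio cheap to alert and graph on is a Prometheus recording rule (a sketch; the rule name is a placeholder, and the metric name assumes the JMX Exporter mapping above):

groups:
  - name: activemq_baselines
    rules:
      # Ratio of current enqueue time to its 1-hour moving average
      - record: activemq_queue_enqueue_time_vs_baseline
        expr: |
          activemq_queue_AverageEnqueueTime
            / avg_over_time(activemq_queue_AverageEnqueueTime[1h])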

JMX Direct Access: Quick Commands Without Prometheus

For environments without a Prometheus stack, or for one-off diagnostic queries, the ActiveMQ CLI provides direct metric access.

Apache ActiveMQ®: activemq dstat and activemq bstat

# Destination statistics — queue depth, consumer count, enqueue/dequeue rates
cd $ACTIVEMQ_HOME
bin/activemq dstat

# Output:
# Name             Queue Size  Producer #  Consumer #  Enqueue #  Dequeue #  Memory %
# orders.queue     2840        2           3           15420      12580      12
# payments.queue   0           1           2           8230       8230       0

# Broker statistics — memory, store, temp usage
bin/activemq bstat

# For Artemis:
bin/artemis queue stat --url tcp://localhost:61616 --user admin --password admin
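Without a Prometheus stack, a shell loop over dstat gives you a crude time series for later inspection (a sketch; the interval and log path are arbitrary):

# Append a timestamped snapshot of destination stats every 30 seconds
while true; do
  echo "=== $(date -Is) ===" >> /var/log/activemq-dstat.log
  /opt/activemq/bin/activemq dstat >> /var/log/activemq-dstat.log
  sleep 30
done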

Programmatic JMX Access

For scripted monitoring or integration with custom tooling:

import java.util.Collections;
import java.util.Map;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Connect to broker JMX programmatically (the JMX user/password are placeholders)
JMXServiceURL url = new JMXServiceURL(
    "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
Map<String, String[]> credentials = Collections.singletonMap(
    JMXConnector.CREDENTIALS, new String[] {"admin", "admin"});
JMXConnector connector = JMXConnectorFactory.connect(url, credentials);
MBeanServerConnection mbsc = connector.getMBeanServerConnection();

// Query all queue MBeans on the broker
ObjectName brokerPattern = new ObjectName(
    "org.apache.activemq:type=Broker,brokerName=*," +
    "destinationType=Queue,destinationName=*");

Set<ObjectName> queues = mbsc.queryNames(brokerPattern, null);
for (ObjectName queue : queues) {
    long queueSize = (Long) mbsc.getAttribute(queue, "QueueSize");
    long consumerCount = (Long) mbsc.getAttribute(queue, "ConsumerCount");
    // MemoryPercentUsage is an int-valued attribute on the destination MBean
    int memPct = (Integer) mbsc.getAttribute(queue, "MemoryPercentUsage");
    String destName = queue.getKeyProperty("destinationName");

    if (consumerCount == 0 && queueSize > 0) {
        System.out.printf("ALERT: No consumers on %s (depth: %d)%n",
            destName, queueSize);
    }
}
connector.close();

Monitoring in a Network of Brokers

When running a Network of Brokers, each broker must be monitored independently. There is no built-in NoB-level aggregation in the ActiveMQ metrics API; you need to aggregate across brokers in your Prometheus stack and surface the results in unified Grafana dashboards.

Additional NoB-specific metrics to monitor ActiveMQ topology health:

  • networkBridges (BrokerViewMBean): number of active bridge connections. Dropping to 0 on a broker that should have active bridges indicates a network split.
  • Per-broker TotalConnectionsCount: sudden drops across all brokers simultaneously suggest a network-level event.
  • Cross-broker message flow: compare enqueue rates on producer-side brokers against dequeue rates on consumer-side brokers. Growing divergence indicates a forwarding bottleneck in the NoB topology (see the sketch after this list).
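A cross-broker divergence check in PromQL might look like this (a sketch; the broker label regexes are placeholders for however you name your producer-side and consumer-side brokers):

# Positive, sustained values mean messages enter the NoB faster than they leave
sum(rate(activemq_queue_EnqueueCount{broker=~"edge-.*"}[5m]))
  - sum(rate(activemq_queue_DequeueCount{broker=~"core-.*"}[5m]))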

For multi-datacenter NoB deployments, label your Prometheus metrics by datacenter and create alert rules that fire when the inbound message rate from the remote datacenter drops to zero; this detects inter-datacenter link failures before application teams notice.
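The remote-datacenter check can be a simple rate-goes-to-zero rule (a sketch; the datacenter label assumes you attached it in your scrape config as described above):

      # No messages arriving from dc2 for 5 minutes
      - alert: NoBRemoteDCFlowStopped
        expr: sum(rate(activemq_queue_EnqueueCount{datacenter="dc2"}[5m])) == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Inbound message flow from dc2 has stopped"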

Monitoring Is Not Optional: It Is the Difference Between Incident and Prevention

The metrics in this guide exist in your broker right now. Every ActiveMQ deployment exposes QueueSize, MemoryPercentUsage, ConsumerCount, and AverageEnqueueTime through JMX from the moment the broker starts. The question is whether anyone is watching them.

Connecting that JMX data to Prometheus takes less than an hour following this guide. Connecting Prometheus to Grafana and defining the alert rules in this post adds another hour. After that, your team gets advance warning of memory pressure, storage exhaustion, consumer outages, and processing slowdowns, all before they become pages.

meshIQ Console provides this ActiveMQ monitoring visibility immediately, without the overhead of the Prometheus/Grafana infrastructure, and is purpose-built for Apache ActiveMQ® and Apache Artemis™ deployments, with enterprise alerting integration built in.

Set up monitoring for your ActiveMQ environment today → Talk to an Expert

Frequently Asked Questions

Q1. How do I monitor ActiveMQ? 

ActiveMQ exposes all metrics through JMX. For continuous monitoring, use the JMX Prometheus Exporter agent (Apache ActiveMQ®) or the native Apache Artemis™ Prometheus plugin (Artemis) to serve metrics on an HTTP endpoint. Prometheus scrapes at an interval; Grafana visualizes with dashboards and alert rules. For point-in-time inspection, the built-in web console (port 8161) and the activemq dstat / bstat CLI commands provide immediate visibility without setup.

Q2. What are the most important ActiveMQ metrics to monitor?

The five must-have metrics are QueueSize growth rate, MemoryPercentUsage (alert before 70%), StorePercentUsage (alert before 85%), ConsumerCount (alert on zero), and AverageEnqueueTime rate-of-change (the leading indicator). ExpiredCount and DLQ depth are important secondary signals for critical queues.

Q3. How do I set up Prometheus monitoring for ActiveMQ? 

For Apache ActiveMQ®: add the JMX Prometheus Exporter JAR as a Java agent via ACTIVEMQ_OPTS in bin/env, with a YAML config file mapping MBean patterns to metric names. For Apache Artemis™: add one <metrics> plugin line to broker.xml; no external tools are required. Both expose metrics at /metrics for Prometheus to scrape.

Q4. What alert thresholds should I set for ActiveMQ? 

Warning at MemoryPercentUsage > 70%, Critical > 85%. Warning at StorePercentUsage > 70%, Critical > 85%. Warning at TempPercentUsage > 75%. Critical when ConsumerCount = 0 on any monitored queue for more than 2 minutes. Warning when AverageEnqueueTime > 2× hourly baseline. Warning when ExpiredCount grows on critical destinations.

Q5. What is the difference between Apache ActiveMQ® and Apache Artemis™ monitoring?

Apache ActiveMQ® requires the external JMX Prometheus Exporter agent with a YAML mapping file. Artemis has a native built-in Prometheus plugin requiring one line of broker.xml. Metric names differ (QueueSize vs artemis_message_count). Artemis includes artemis_unrouted_message_count, a diagnostic metric with no Apache ActiveMQ® equivalent that detects messages published to addresses without queues.
