---
title: "ActiveMQ on Kubernetes: Production Deployment Guide"
date: 2026-05-11
author: "TheFrameGuy"
featured_image: "https://www.meshiq.com/wp-content/uploads/blog_activeMQ-kubernetes_050826.jpg"
categories:
  - name: "Apache ActiveMQ®"
    url: "/sort-by/active-mq.md"
  - name: "Middleware Optimization"
    url: "/sort-by/middleware-optimization.md"
tags:
  - name: "devops"
    url: "/sort-by/tag/devops.md"
  - name: "Observability"
    url: "/sort-by/tag/observability.md"
---

# ActiveMQ on Kubernetes: Production Deployment Guide

Message brokers are stateful workloads: they maintain persistent queues, journal files, in-flight transactions, and broker-specific network topology state. The Kubernetes primitives that work beautifully for stateless applications (Deployments with ephemeral pods) are actively harmful for message brokers.

This guide covers the production-ready approach to deploying both Apache ActiveMQ® and Apache Artemis™ on Kubernetes: the correct Kubernetes objects, production-grade StatefulSet manifests with full resource management, health probe configuration, persistent volume strategy, the ArtemisCloud Operator for Apache Artemis™, Prometheus monitoring integration, and the common failure modes that trip up first-time Kubernetes messaging deployments.

## Why StatefulSet, Not Deployment, Is Mandatory

This is the most important architectural decision in ActiveMQ-on-Kubernetes: never use a Deployment for a message broker. Deployments create pods with random suffixes (broker-7b9d4-xzp2k) that change on every restart. StatefulSets create pods with stable, ordinal identities (broker-0, broker-1) that are consistent across restarts.

For ActiveMQ, stable pod identity is not cosmetic; it is functional:

1. **KahaDB lock file**: Apache ActiveMQ® acquires a file-level lock on the KahaDB data directory at startup. If a pod restarts with a new identity and no guaranteed volume binding (as happens with a Deployment), the replacement pod can attach to a KahaDB directory that is still locked by the previous pod and refuse to start.
2. **Network of Brokers DNS**: When Apache ActiveMQ® brokers connect to each other, they use static DNS names in networkConnector URIs. Predictable names such as broker-0.broker-svc and broker-1.broker-svc (from a headless Service) are required for NoB topology; random Deployment pod names cannot be pre-configured in broker XML (see the sketch after this list).
3. **Per-pod PVC binding**: StatefulSet volumeClaimTemplates create a PersistentVolumeClaim per pod that remains bound to that pod’s ordinal identity across restarts and rescheduling. broker-0 always gets data-broker-0. broker-1 always gets data-broker-1. A Deployment cannot provide this guarantee.
4. **Ordered startup and shutdown**: StatefulSets start pods in order (0 first, then 1) and terminate in reverse order (highest ordinal first). This prevents scenarios in which a backup broker starts before the primary has cleanly released the KahaDB lock, or in which both brokers try to acquire the lock simultaneously.
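
As an illustration of the Network of Brokers point above, a networkConnector in activemq.xml can target the stable per-pod DNS names that a headless Service provides. This is a minimal sketch that assumes the StatefulSet and headless Service names used later in this guide (activemq and activemq-headless in the messaging namespace); adjust the names and ports to your own topology:

```xml
<!-- Sketch: static NoB connector configured on activemq-0, pointing at activemq-1.
     The peer name is stable because it follows the headless Service pattern:
     <pod-name>.<headless-service>.<namespace>.svc.cluster.local -->
<networkConnectors>
  <networkConnector name="to-activemq-1"
                    uri="static:(tcp://activemq-1.activemq-headless.messaging.svc.cluster.local:61616)"
                    duplex="true"
                    networkTTL="2"/>
</networkConnectors>
```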

## Apache ActiveMQ® on Kubernetes: Complete StatefulSet Manifest

The following manifest deploys a production-ready single-instance Apache ActiveMQ® broker with persistent KahaDB storage, health probes, resource limits, and credential management via Secrets.

```yaml
# 1. Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: messaging
```

```yaml
# 2. Secret: broker credentials (values under stringData are base64-encoded by Kubernetes on creation)
apiVersion: v1
kind: Secret
metadata:
  name: activemq-credentials
  namespace: messaging
type: Opaque
stringData:
  admin-user: "admin"
  admin-password: "changeme-strong-password"
```

```yaml
# 3. ConfigMap: broker configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: activemq-config
  namespace: messaging
data:
  activemq.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:amq="http://activemq.apache.org/schema/core"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                               http://www.springframework.org/schema/beans/spring-beans.xsd
                               http://activemq.apache.org/schema/core
                               http://activemq.apache.org/schema/core/activemq-core.xsd">

      <broker xmlns="http://activemq.apache.org/schema/core"
              brokerName="activemq-broker"
              dataDirectory="/opt/activemq/data"
              persistent="true"
              useJmx="true"
              schedulerSupport="false"
              advisorySupport="true">

        <transportConnectors>
          <transportConnector name="openwire"
                              uri="nio://0.0.0.0:61616?maximumConnections=2000&amp;wireFormat.maxFrameSize=104857600"/>
          <transportConnector name="amqp"
                              uri="amqp://0.0.0.0:5672?maximumConnections=500"/>
          <transportConnector name="mqtt"
                              uri="mqtt+nio://0.0.0.0:1883?maximumConnections=5000"/>
        </transportConnectors>

        <persistenceAdapter>
          <kahaDB directory="/opt/activemq/data/kahadb"
                  journalMaxFileLength="67108864"
                  indexCacheSize="20000"
                  enableJournalDiskSyncs="true"
                  concurrentStoreAndDispatchQueues="true"/>
        </persistenceAdapter>

        <plugins>
          <jaasAuthenticationPlugin configuration="activemq"/>
        </plugins>

        <systemUsage>
          <systemUsage>
            <memoryUsage>
              <memoryUsage percentOfJvmHeap="70"/>
            </memoryUsage>
            <storeUsage>
              <storeUsage limit="50gb"/>
            </storeUsage>
            <tempUsage>
              <tempUsage limit="5gb"/>
            </tempUsage>
          </systemUsage>
        </systemUsage>

        <managementContext>
          <managementContext createConnector="false"/>
        </managementContext>

      </broker>
    </beans>
```

```yaml
# 4. Headless Service: stable DNS for pod-to-pod communication
apiVersion: v1
kind: Service
metadata:
  name: activemq-headless
  namespace: messaging
  labels:
    app: activemq
spec:
  clusterIP: None   # Headless: no load balancing, direct pod DNS
  selector:
    app: activemq
  ports:
    - name: openwire
      port: 61616
      protocol: TCP
```

```yaml
# 5. ClusterIP Service: stable VIP for application clients
apiVersion: v1
kind: Service
metadata:
  name: activemq-svc
  namespace: messaging
  labels:
    app: activemq
spec:
  type: ClusterIP
  selector:
    app: activemq
  ports:
    - name: openwire
      port: 61616
      targetPort: 61616
      protocol: TCP
    - name: amqp
      port: 5672
      targetPort: 5672
      protocol: TCP
    - name: mqtt
      port: 1883
      targetPort: 1883
      protocol: TCP
    - name: webconsole
      port: 8161
      targetPort: 8161
      protocol: TCP
    - name: prometheus
      port: 9779
      targetPort: 9779
      protocol: TCP
```

```yaml
# 6. StatefulSet: the broker itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: activemq
  namespace: messaging
  labels:
    app: activemq
spec:
  serviceName: activemq-headless   # Must match the headless Service name
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
      annotations:
        # Prometheus scraping annotations
        prometheus.io/scrape: "true"
        prometheus.io/port: "9779"
        prometheus.io/path: "/metrics"
    spec:
      # Anti-affinity: spread broker pods across nodes (for multi-replica NoB)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - activemq
              topologyKey: "kubernetes.io/hostname"

      containers:
        - name: activemq
          # Use the official Apache ActiveMQ image or your org's hardened image
          image: apache/activemq-classic:5.18.3
          imagePullPolicy: IfNotPresent

          ports:
            - name: openwire
              containerPort: 61616
              protocol: TCP
            - name: amqp
              containerPort: 5672
              protocol: TCP
            - name: mqtt
              containerPort: 1883
              protocol: TCP
            - name: webconsole
              containerPort: 8161
              protocol: TCP
            - name: prometheus
              containerPort: 9779
              protocol: TCP

          env:
            # JVM heap sizing: CRITICAL - must align with the memory limits below
            # Rule: -Xmx = container_memory_limit * 0.75
            # For a 2Gi limit: -Xmx1536m (1.5Gi heap, 0.5Gi JVM overhead)
            - name: ACTIVEMQ_OPTS
              value: >-
                -Xms512m
                -Xmx1536m
                -XX:+UseG1GC
                -XX:MaxGCPauseMillis=20
                -XX:+UseStringDeduplication
                -javaagent:/opt/activemq/lib/jmx_prometheus_javaagent.jar=9779:/opt/activemq/conf/prometheus-config.yml
            # Broker credentials from Secret
            - name: ACTIVEMQ_USERNAME
              valueFrom:
                secretKeyRef:
                  name: activemq-credentials
                  key: admin-user
            - name: ACTIVEMQ_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: activemq-credentials
                  key: admin-password

          # Resource requests and limits
          # RULE: request = what the broker needs under normal load
          #       limit   = container memory ceiling (JVM heap must fit within this)
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "2000m"

          # Startup probe: allows up to 5 minutes for large KahaDB journal replay
          # Fires BEFORE liveness/readiness - prevents premature crash loops
          startupProbe:
            tcpSocket:
              port: 61616
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 30   # 30 x 10s = 5 minutes maximum startup window

          # Liveness probe: restarts the pod if the broker becomes unresponsive
          # Only fires AFTER the startupProbe succeeds
          livenessProbe:
            tcpSocket:
              port: 61616
            initialDelaySeconds: 0   # Handled by the startupProbe
            periodSeconds: 30
            failureThreshold: 3
            timeoutSeconds: 5

          # Readiness probe: removes the pod from Service endpoints until the broker is ready
          # HTTP check on the web console confirms the broker is fully initialized
          readinessProbe:
            httpGet:
              path: /admin
              port: 8161
            initialDelaySeconds: 0
            periodSeconds: 10
            failureThreshold: 3
            timeoutSeconds: 5

          volumeMounts:
            # KahaDB persistent storage
            - name: data
              mountPath: /opt/activemq/data
            # Broker configuration from ConfigMap
            - name: config
              mountPath: /opt/activemq/conf/activemq.xml
              subPath: activemq.xml
              readOnly: true

      volumes:
        - name: config
          configMap:
            name: activemq-config

  # volumeClaimTemplates: creates a PVC per pod that survives pod restarts
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "standard-ssd"   # Replace with your StorageClass
        resources:
          requests:
            storage: 100Gi   # Size for KahaDB journal + index
```

```yaml
# 7. PodDisruptionBudget: protect against simultaneous node drains
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: activemq-pdb
  namespace: messaging
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: activemq
```

## The JVM Memory Trap: Why OOMKill Looks Like a Random Restart

This is the most consistently encountered ActiveMQ-on-Kubernetes failure mode, and it produces the most confusing symptoms: the broker pod is killed without any application-level error, the pod shows OOMKilled in kubectl describe, and the restart count increments silently.

What happens: the JVM starts without -Xmx explicitly set in the container (or with -Xmx set higher than the container’s memory limit). The JVM treats the container’s cgroup memory limit as the available system memory only if container awareness is enabled (-XX:+UseContainerSupport, on by default in JDK 8u191+ and JDK 10+). But even with container support, JVM off-heap memory (metaspace, native threads, the JIT code cache, and garbage-collection overhead) consumes memory beyond -Xmx.
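
To see what heap the JVM actually resolves inside the container, and what memory limit the cgroup exposes to it, you can check from within the running pod. A small diagnostic sketch, assuming the pod name activemq-0 from the manifest above, that java is on the container's PATH, and a cgroup v2 node (the commented path covers cgroup v1):

```bash
# Print the resolved max heap and container-support flag inside the broker container
kubectl exec activemq-0 -n messaging -- \
  java -XX:+PrintFlagsFinal -version | grep -E 'MaxHeapSize|UseContainerSupport'

# Compare against the memory limit the cgroup actually exposes to the container
kubectl exec activemq-0 -n messaging -- cat /sys/fs/cgroup/memory.max
# On cgroup v1 nodes the path is /sys/fs/cgroup/memory/memory.limit_in_bytes
```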

**The safe formula**:

container memory limit = JVM heap (-Xmx) + 25-30% overhead

| Container Limit | Recommended -Xmx | JVM Overhead Budget |
| --- | --- | --- |
| 1Gi | 700m | 300Mi |
| 2Gi | 1536m (1.5Gi) | 512Mi |
| 4Gi | 3072m (3Gi) | 1Gi |
| 8Gi | 6144m (6Gi) | 2Gi |

**Always set both -Xms and -Xmx**: Setting only -Xmx allows the JVM heap to start small and grow gradually, which produces intermittent GC pauses as the heap expands. Setting -Xms = -Xmx (or a reasonable initial size like 512m) eliminates growth-phase GC events. We covered the JVM GC tuning implications in our [**ActiveMQ Performance Tuning: 10x Throughput**](https://www.meshiq.com/blog/activemq-performance-tuning/) post.
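
As a sketch of that rule applied to the 2Gi container limit used in the manifest above, the heap can either be pinned explicitly or derived from the cgroup limit with the percentage flags available in JDK 8u191+/10+. Both variants are shown for illustration; use one or the other, not both:

```yaml
env:
  - name: ACTIVEMQ_OPTS
    value: >-
      -Xms1536m
      -Xmx1536m
      -XX:+UseG1GC
  # Alternative: let the JVM derive the heap from the container memory limit
  # - name: ACTIVEMQ_OPTS
  #   value: >-
  #     -XX:InitialRAMPercentage=75.0
  #     -XX:MaxRAMPercentage=75.0
  #     -XX:+UseG1GC
```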

## Health Probe Design: Avoiding the Crash Loop

The three Kubernetes probe types serve distinct purposes for a message broker:

**Startup Probe**: Fires first. Prevents liveness/readiness from running until the broker is ready. For ActiveMQ with KahaDB, startup can take minutes on a broker that is recovering from a crash (journal replay) or loading a large index. Configure failureThreshold × periodSeconds to allow the maximum realistic startup window (5-10 minutes for large deployments).

**Liveness Probe**: Determines if the broker process is alive and responding. A failing liveness probe causes the container to be killed and restarted. Use a TCP socket check on port 61616; it is cheaper than an HTTP check and directly tests the protocol endpoint that clients use.

Do NOT use the web console HTTP check for liveness: the web console’s Jetty server can be up before the JMS broker is ready, producing false-positive liveness signals.

**Readiness Probe**: Determines if the broker can accept client traffic. A failing readiness probe removes the pod from Service endpoints without restarting it. Use an HTTP GET check on the web console at port 8161 to confirm full broker initialization, including the KahaDB journal load. This is the right place for the more expensive HTTP check.

```yaml
# Complete probe configuration for Classic
startupProbe:
  tcpSocket:
    port: 61616
  initialDelaySeconds: 30   # Give the JVM 30 seconds to start before probing
  periodSeconds: 10
  failureThreshold: 30      # Allow up to 5 minutes total for startup

livenessProbe:
  tcpSocket:
    port: 61616
  periodSeconds: 30
  failureThreshold: 3
  timeoutSeconds: 5

readinessProbe:
  httpGet:
    path: /admin
    port: 8161
  periodSeconds: 10
  failureThreshold: 3
  timeoutSeconds: 5
```

**The crash loop scenario**: Without a startupProbe, if initialDelaySeconds on the livenessProbe is too low (e.g., 30s when KahaDB replay takes 3 minutes), Kubernetes fires the liveness check before the broker port is open, fails 3 consecutive times, and restarts the pod, which begins journal replay again, fails again, and loops indefinitely. The symptom is CrashLoopBackOff: a pod whose restart count keeps climbing but that never reaches Running.
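
Before loosening probes, it is worth confirming whether the restarts are probe-driven or OOMKills, since the fixes differ. A minimal triage sketch, assuming the pod name activemq-0 from the manifest above:

```bash
# Reason for the last container termination: OOMKilled points at JVM sizing, not probes
kubectl get pod activemq-0 -n messaging \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'

# Probe failures ("Startup probe failed", "Liveness probe failed") show up as events
kubectl get events -n messaging --field-selector involvedObject.name=activemq-0

# Logs from the previous (crashed) container instance
kubectl logs activemq-0 -n messaging --previous --tail=100
```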

## Storage Strategy: PVC Sizing and StorageClass Selection

### For Apache ActiveMQ® (KahaDB)

The KahaDB data directory must be on a PVC, not emptyDir. Size the PVC generously: KahaDB journal files accumulate before being reclaimed, and a full journal (100% StorePercentUsage) immediately stops all persistent message sends.

See our [**Message Persistence Strategies**](https://www.meshiq.com/blog/activemq-message-persistence-strategies/) post for the journal pinning phenomenon that causes unexpected disk growth.

```yaml
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: "standard-ssd"   # Use SSD-backed storage for the journal
      resources:
        requests:
          storage: 100Gi
```

**StorageClass requirements for KahaDB**:

- **ReadWriteOnce** (single pod, single node) is correct for a single Apache ActiveMQ® broker or an Apache ActiveMQ® Master/Slave pair, where only the active master writes.
- **SSD-backed storage** is strongly recommended: KahaDB’s enableJournalDiskSyncs=true (the default) depends on fsync performance. The difference between SSD and spinning disk is 50,000 msg/s vs. 2,000 msg/s for persistent messaging.
- **Set allowVolumeExpansion: true** in your StorageClass. Journal growth is hard to predict, and you want to be able to expand the volume without pod restarts (see the example below).
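
For reference, here is a sketch of an SSD-backed StorageClass with expansion enabled. The provisioner and parameters below assume the AWS EBS CSI driver with gp3 volumes; substitute your platform's CSI driver and parameters:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-ssd
provisioner: ebs.csi.aws.com        # Replace with your cloud's CSI provisioner
parameters:
  type: gp3
allowVolumeExpansion: true          # Lets you grow the KahaDB PVC in place
volumeBindingMode: WaitForFirstConsumer
```

With expansion enabled, growing the journal volume is an edit to the PVC's spec.resources.requests.storage (for example, kubectl edit pvc data-activemq-0 -n messaging); most CSI drivers can then resize the filesystem online, without deleting the pod's data.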

### For Apache Artemis™ (File Journal)

The Apache Artemis™ file journal should ideally have dedicated volumes for journal data and large messages:

```yaml
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: "fast-ssd"
      resources:
        requests:
          storage: 200Gi   # Journal + paging + large messages
```

For Artemis with the AIO journal, the journal directory must be on a filesystem that supports AIO (ext2/3/4, jfs, xfs). If your Kubernetes cluster’s default StorageClass provisions NFS-backed volumes, Artemis silently falls back to the NIO journal, significantly reducing throughput without any error message. Verify that your StorageClass provisions local or block storage for Apache Artemis™ journal volumes.
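
A quick way to check both sides is to look at the filesystem backing the data volume and at what the broker logged about its journal at startup. A hedged sketch, assuming the Operator-created pod artemis-broker-ss-0 from the next section; the data path depends on your image's broker instance directory, so treat it as an assumption:

```bash
# Which filesystem backs the Artemis data volume? (AIO needs ext4/xfs-style block storage, not NFS)
kubectl exec artemis-broker-ss-0 -n messaging -- df -T /home/jboss/amq-broker/data

# What did the broker log about its journal implementation at startup?
kubectl logs artemis-broker-ss-0 -n messaging | grep -i journal | head
```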

## **ActiveMQ on Kubernetes – Let Our Team Review Your Deployment**

Getting ActiveMQ right on Kubernetes means more than a working StatefulSet: it means JVM sizing that won’t OOMKill, probe configuration that handles KahaDB replay, storage classes that support your journal type, and monitoring integration that gives you visibility. meshIQ’s experts have deployed and hardened ActiveMQ on Kubernetes across regulated enterprise environments.

**[Request a Deployment Review](https://www.meshiq.com/apache-activemq/enterprise-support/)**



## Apache Artemis™ on Kubernetes: The ArtemisCloud Operator

For Apache Artemis™, the ArtemisCloud Operator (artemiscloud.io) is the recommended method for deploying to Kubernetes. It abstracts the StatefulSet, Services, and configuration management behind Kubernetes-native Custom Resource Definitions, simplifying deployment, upgrades, and lifecycle operations.

### Installing the Operator

```bash
# Install via kubectl from the official ArtemisCloud repository
kubectl create -f https://raw.githubusercontent.com/artemiscloud/activemq-artemis-operator/main/deploy/service_account.yaml -n messaging
kubectl create -f https://raw.githubusercontent.com/artemiscloud/activemq-artemis-operator/main/deploy/role.yaml -n messaging
kubectl create -f https://raw.githubusercontent.com/artemiscloud/activemq-artemis-operator/main/deploy/role_binding.yaml -n messaging
kubectl create -f https://raw.githubusercontent.com/artemiscloud/activemq-artemis-operator/main/deploy/crds/ -n messaging
kubectl create -f https://raw.githubusercontent.com/artemiscloud/activemq-artemis-operator/main/deploy/operator.yaml -n messaging

# Verify the operator is running
kubectl get pods -n messaging -l name=activemq-artemis-operator
```

### Deploying a Broker via CR

```yaml
# artemis-broker.yaml - ActiveMQArtemis Custom Resource
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: artemis-broker
  namespace: messaging
spec:
  deploymentPlan:
    # Number of broker pods (1 = single, 2+ = clustered with replication)
    size: 1
    # Image: official ArtemisCloud broker image
    image: quay.io/artemiscloud/activemq-artemis-broker-kubernetes:latest
    # Persistent storage for the journal
    persistenceEnabled: true
    storage:
      size: "100Gi"
      storageClassName: "fast-ssd"
    # Resource limits
    resources:
      requests:
        memory: "2Gi"
        cpu: "500m"
      limits:
        memory: "4Gi"
        cpu: "2000m"
    # JVM settings
    jvmMaxMemory: "3072m"   # 3Gi - within the 4Gi container limit
    # Enable Prometheus metrics
    enableMetricsPlugin: true
    # Require authenticated logins
    requireLogin: true

  # Acceptors (protocol-specific ports)
  acceptors:
    - name: all-protocols
      port: 61616
      protocols: all
      needClientAuth: false
    - name: amqp
      port: 5672
      protocols: amqp
    - name: mqtt
      port: 1883
      protocols: mqtt

  # Admin credentials (reference a Secret)
  adminUser: admin
  adminPassword: changeme-admin

  # Broker properties (equivalent to broker.xml configuration)
  brokerProperties:
    - "addressesSettings.#.maxSizeBytes=524288000"
    - "addressesSettings.#.pageSizeBytes=10485760"
    - "addressesSettings.#.addressFullMessagePolicy=PAGE"
---
# Define queues via the ActiveMQArtemisAddress CR
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
  name: orders-queue
  namespace: messaging
spec:
  addressName: orders
  queueName: orders.main
  routingType: anycast
  removeFromBrokerOnDelete: false
```
  
```bash
# Apply and watch the deployment
kubectl apply -f artemis-broker.yaml
kubectl get pods -n messaging -w
# Expected: artemis-broker-ss-0 -> Running

# Verify broker connectivity
kubectl exec artemis-broker-ss-0 -n messaging -- \
  /home/jboss/amq-broker/bin/artemis queue stat \
  --user admin --password changeme-admin \
  --url tcp://artemis-broker-ss-0:61616
```

**Key Operator benefits**:

- **Scale down with message migration:** the Operator creates a scaledown controller that migrates in-flight messages from a pod being removed to the remaining pods before deletion
- **Address/queue configuration** as code via ActiveMQArtemisAddress CRs, no JMX or web console operations required after initial deployment
- **Upgrade management**: update the image field in the CR, and the Operator orchestrates a rolling pod replacement (a minimal patch sketch follows below)
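
Both the scale-down and the upgrade flows reduce to edits of the ActiveMQArtemis CR. A minimal sketch, assuming the artemis-broker CR from above; the image tag is a placeholder, and on some CRD versions the resource may need to be addressed by its plural name (activemqartemises):

```bash
# Scale from 1 to 2 brokers; scaling back down triggers the message-migration controller
kubectl patch activemqartemis artemis-broker -n messaging --type merge \
  -p '{"spec":{"deploymentPlan":{"size":2}}}'

# Roll the broker image; the Operator replaces the StatefulSet pods
kubectl patch activemqartemis artemis-broker -n messaging --type merge \
  -p '{"spec":{"deploymentPlan":{"image":"quay.io/artemiscloud/activemq-artemis-broker-kubernetes:<new-tag>"}}}'
```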

## Exposing Protocols: Services and Ingress

Client applications within the same Kubernetes cluster connect to ActiveMQ via the ClusterIP Service on port 61616. For external clients (legacy applications, IoT devices, or cross-cluster consumers), you need either a LoadBalancer Service or a TCP Ingress.

```yaml
# LoadBalancer Service for external OpenWire access
apiVersion: v1
kind: Service
metadata:
  name: activemq-external
  namespace: messaging
  annotations:
    # For AWS: provision an NLB (Network Load Balancer) for TCP passthrough
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: activemq
  ports:
    - name: openwire-ssl
      port: 61617
      targetPort: 61617
      protocol: TCP
    - name: mqtt-ssl
      port: 8883
      targetPort: 8883
      protocol: TCP
```

**Important**: Expose TLS-encrypted ports externally, never plaintext. For OpenWire, expose port 61617 (SSL), not 61616. For MQTT, expose 8883 (TLS), not 1883. We covered the TLS configuration for all protocols in our [**Security Hardening Guide**](https://www.meshiq.com/blog/activemq-security-hardening-guide/).

For AMQP clients connecting from Azure Service Bus or other cloud platforms, ensure the LoadBalancer Service is created with an annotation that provisions a Network Load Balancer (not an Application Load Balancer): ActiveMQ protocols are TCP, not HTTP, and Application Load Balancers cannot handle them.

## Prometheus Monitoring Integration in Kubernetes

The JMX Prometheus Exporter agent (Apache ActiveMQ®) and native Prometheus plugin (Apache Artemis™) both work in Kubernetes containers with pod-level annotations that enable Prometheus auto-discovery:

```yaml
# Pod annotations for Prometheus scraping (in the StatefulSet pod template)
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9779"    # JMX Exporter port (Classic)
    prometheus.io/path: "/metrics"
```

For Prometheus Operator (kube-prometheus-stack), create a ServiceMonitor instead:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: activemq-monitor
  namespace: monitoring
  labels:
    release: prometheus   # Match your Prometheus Operator's serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - messaging
  selector:
    matchLabels:
      app: activemq
  endpoints:
    - port: prometheus
      interval: 15s
      path: /metrics
```

**We covered the full Prometheus monitoring configuration** (JMX Exporter YAML mapping, alert rules, and key metric thresholds) in our [**Monitoring & Alerting Setup**](https://www.meshiq.com/blog/activemq-monitoring-alerting-setup/) post. The Kubernetes integration adds only the ServiceMonitor or pod-annotation layer on top of the same Prometheus configuration.
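
If you run the Prometheus Operator, those alert rules can also be expressed as a PrometheusRule resource. A hedged sketch: the metric name below is purely illustrative and must match whatever your JMX Exporter mapping actually emits (it varies with the mapping YAML), so treat the expr as a placeholder:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: activemq-alerts
  namespace: monitoring
  labels:
    release: prometheus            # Match your Prometheus Operator's ruleSelector
spec:
  groups:
    - name: activemq.rules
      rules:
        - alert: ActiveMQStoreNearFull
          # Placeholder metric name - substitute the name your JMX Exporter mapping produces
          expr: activemq_store_usage_percentage > 80
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "KahaDB store usage above 80% on {{ $labels.pod }}"
```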

## Production Deployment Checklist

Before deploying ActiveMQ to a production Kubernetes cluster, verify each item:

**Kubernetes Objects:**

- [ ] StatefulSet (not Deployment) with serviceName matching the headless Service
- [ ] Headless Service (clusterIP: None) for stable pod DNS
- [ ] ClusterIP Service for intra-cluster client access
- [ ] LoadBalancer Service (TLS ports only) for external access
- [ ] volumeClaimTemplates with SSD-backed StorageClass and allowVolumeExpansion: true
- [ ] PodDisruptionBudget with minAvailable: 1
- [ ] podAntiAffinity rule to spread replicas across nodes

**Resource Configuration:**

- [ ] ACTIVEMQ_OPTS / JAVA_OPTS with explicit -Xmx ≤ 75% of container memory limit
- [ ] Memory request set to expected steady-state usage
- [ ] Memory limit set to JVM heap + 25-30% overhead buffer
- [ ] CPU limit set to allow bursting for GC events

**Health Probes:**

- [ ] startupProbe with sufficient failureThreshold × periodSeconds for KahaDB journal replay
- [ ] livenessProbe as TCP socket on OpenWire port (not HTTP)
- [ ] readinessProbe as HTTP GET on web console port 8161

**Security:**

- [ ] Broker credentials in Kubernetes Secret (not ConfigMap)
- [ ] Keystore/truststore files mounted from Secret
- [ ] TLS transport connectors configured (no plaintext on external LoadBalancer)
- [ ] Network Policy restricting broker port access to authorized namespaces (see the sketch below)
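
A minimal NetworkPolicy sketch for that last item. The namespace label messaging-client: "allowed" is an assumption used only for illustration; match it to however you label consumer namespaces:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: activemq-ingress
  namespace: messaging
spec:
  podSelector:
    matchLabels:
      app: activemq
  policyTypes:
    - Ingress
  ingress:
    # Only pods in namespaces carrying this label may reach the broker ports
    - from:
        - namespaceSelector:
            matchLabels:
              messaging-client: "allowed"
      ports:
        - protocol: TCP
          port: 61616
        - protocol: TCP
          port: 5672
        - protocol: TCP
          port: 1883
```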

**Monitoring:**

- [ ] JMX Exporter agent in pod (Apache ActiveMQ®) or enableMetricsPlugin: true (Apache Artemis™ Operator)
- [ ] ServiceMonitor or pod annotations for Prometheus scraping
- [ ] Alert rules for MemoryPercentUsage, StorePercentUsage, and ConsumerCount=0

## **Monitor Your Kubernetes ActiveMQ Deployment from Day One**

meshIQ Console integrates with ActiveMQ brokers running on Kubernetes, surfacing queue depth, consumer health, memory pressure, and JVM metrics across all broker pods in a unified dashboard, without requiring per-pod Prometheus infrastructure.

**[See It in Action](https://www.meshiq.com/apache-activemq/enterprise-support/)**



## ActiveMQ on Kubernetes Is Operational Maturity, Not Just a YAML File

Running ActiveMQ reliably on Kubernetes requires understanding how Kubernetes’s primitives interact with ActiveMQ’s stateful requirements. The StatefulSet manifest in this guide represents production-grade configuration tested against the most common failure modes: OOMKill from insufficient JVM sizing, crash loops from aggressive probes, data loss from ephemeral storage, and broker split-brain from non-ordered startup.

meshIQ provides enterprise support for ActiveMQ deployments across all environments, including Kubernetes, covering deployment architecture review, incident response, and continuous monitoring via meshIQ Console.

**Get your ActiveMQ Kubernetes deployment reviewed by our team → [Request a Deployment Review](https://www.meshiq.com/apache-activemq/enterprise-support/)**

## **Frequently Asked Questions**

**Q1. Should I use a Deployment or a StatefulSet for ActiveMQ on Kubernetes?** 

Always StatefulSet. Deployments create pods with random names and no guaranteed persistent volume binding, both of which are incompatible with reliable message persistence. StatefulSets provide stable pod names, ordered start/stop, and per-pod PVCs that survive rescheduling.

**Q2. How do I persist ActiveMQ data on Kubernetes?**

Use volumeClaimTemplates in your StatefulSet to provision a per-pod PVC. Mount it at the KahaDB directory (Apache ActiveMQ®) or journal directory (Apache Artemis™). Use an SSD-backed StorageClass with ReadWriteOnce. Never use emptyDir for a message broker; all messages are lost on pod restart.

**Q3. What is the ArtemisCloud Operator for Kubernetes?** 

A Kubernetes Operator from artemiscloud.io that manages Apache Artemis™ broker deployments via Custom Resource Definitions. Deploy a broker by creating an ActiveMQArtemis CR; the Operator handles the StatefulSet, Services, and lifecycle operations. Available at [github.com/artemiscloud/activemq-artemis-operator](https://github.com/artemiscloud/activemq-artemis-operator).

**Q4. How do I set JVM memory limits for ActiveMQ on Kubernetes?** 

Set -Xmx to approximately 75% of the container memory limit via ACTIVEMQ\_OPTS (Apache ActiveMQ®) or JAVA\_OPTS (Apache Artemis™). Without an explicit -Xmx, the JVM may exceed the container limit and be OOMKilled by Kubernetes, resulting in unexplained pod restarts with no error logs.

**Q5. How do I configure health probes for ActiveMQ on Kubernetes?** 

Use a startupProbe (TCP socket, port 61616) with a long failureThreshold window (10–30 minutes for large KahaDB stores) to prevent crash loops during journal replay. Use a livenessProbe (TCP socket) after startup succeeds. Use a readinessProbe (HTTP GET, port 8161) to confirm full broker initialization before accepting client traffic.