ActiveMQ on Kubernetes: Production Deployment Guide

meshIQ May 11, 2026

Kubernetes is now the default deployment substrate for most enterprise platform teams. But ActiveMQ on Kubernetes presents a specific challenge that pure stateless workloads do not: message brokers are stateful.

They maintain persistent queues, journal files, in-flight transactions, and broker-specific network topology state. The Kubernetes primitives that work beautifully for stateless applications (Deployments with ephemeral pods) are actively harmful for message brokers.

This guide covers the production-ready approach to deploying both Apache ActiveMQ® and Apache Artemis™ on Kubernetes: the correct Kubernetes objects, production-grade StatefulSet manifests with full resource management, health probe configuration, persistent volume strategy, the ArtemisCloud Operator for Apache Artemis™, Prometheus monitoring integration, and the common failure modes that trip up first-time Kubernetes messaging deployments.

Why StatefulSet, Not Deployment, Is Mandatory

This is the most important architectural decision in ActiveMQ-on-Kubernetes: never use a Deployment for a message broker. Deployments create pods with random suffixes (broker-7b9d4-xzp2k) that change on every restart. StatefulSets create pods with stable, ordinal identities (broker-0, broker-1) that are consistent across restarts.

For ActiveMQ, stable pod identity is not cosmetic; it is functional:

  1. KahaDB lock file: Apache ActiveMQ® acquires a file-level lock on the KahaDB data directory at startup. If a pod restarts with a new identity and a different PVC binding (as would happen with a Deployment), the new pod may find an already-locked KahaDB directory from the previous pod and refuse to start.
  2. Network of Brokers DNS: When Apache ActiveMQ® brokers connect to each other, they use static DNS names in networkConnector URIs. The predictable names broker-0.broker-svc, broker-1.broker-svc (from a headless Service) are required for NoB topology. Random Deployment pod names cannot be pre-configured in broker XML.
  3. Per-pod PVC binding: StatefulSet volumeClaimTemplates create a PersistentVolumeClaim per pod that remains bound to that pod’s ordinal identity across restarts and rescheduling. broker-0 always gets data-broker-0. broker-1 always gets data-broker-1. A Deployment cannot provide this guarantee.
  4. Ordered startup and shutdown: StatefulSets start pods in order (0 first, then 1) and terminate in reverse order (highest ordinal first). This prevents scenarios in which a backup broker starts before the primary has cleanly released the KahaDB lock, or in which both brokers try to acquire the lock simultaneously.
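The stable DNS names in point 2 come from a headless Service. A minimal sketch, assuming the Service name broker-svc and an app: activemq pod label (both illustrative, not from the original manifest):

```yaml
# Headless Service: clusterIP: None tells Kubernetes not to allocate a
# virtual IP. Instead, each StatefulSet pod gets a stable DNS record:
#   broker-0.broker-svc.<namespace>.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: broker-svc        # must match spec.serviceName in the StatefulSet
spec:
  clusterIP: None
  selector:
    app: activemq         # illustrative label; match your pod template
  ports:
    - name: openwire
      port: 61616
```

With this in place, a networkConnector URI in broker XML can reference static(tcp://broker-0.broker-svc:61616) and resolve the same pod across restarts.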

Apache ActiveMQ® on Kubernetes: Complete StatefulSet Manifest

The following manifest deploys a production-ready single-instance Apache ActiveMQ® broker with persistent KahaDB storage, health probes, resource limits, and credential management via Secrets.
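The original manifest did not survive extraction, so the sketch below reconstructs the pieces this section describes: StatefulSet, Secret-backed credentials, probes, resource limits, and a volumeClaimTemplate. Names, image tag, mount path, and sizes are illustrative assumptions, not the article's original values.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: broker
spec:
  serviceName: broker-svc            # headless Service providing stable DNS
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
        - name: activemq
          image: apache/activemq-classic:6.1.0   # illustrative tag
          ports:
            - containerPort: 61616   # OpenWire
            - containerPort: 8161    # web console
          env:
            - name: ACTIVEMQ_OPTS
              value: "-Xms1536m -Xmx1536m"       # ~75% of the 2Gi limit
            - name: ACTIVEMQ_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: activemq-credentials     # illustrative Secret name
                  key: admin-password
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"          # heap + 25-30% off-heap overhead
              cpu: "2"
          startupProbe:
            tcpSocket:
              port: 61616
            periodSeconds: 10
            failureThreshold: 60     # up to 10 min for KahaDB journal replay
          livenessProbe:
            tcpSocket:
              port: 61616
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /                # adjust for your console auth setup
              port: 8161
            periodSeconds: 10
          volumeMounts:
            - name: data
              mountPath: /opt/apache-activemq/data   # KahaDB directory
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: broker-ssd  # illustrative SSD-backed class
        resources:
          requests:
            storage: 50Gi
```

Each of these choices is unpacked in the sections that follow.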

The JVM Memory Trap: Why OOMKill Looks Like a Random Restart

This is the most consistently encountered ActiveMQ-on-Kubernetes failure mode, and it produces the most confusing symptoms: the broker pod is killed without any application-level error, the pod shows OOMKilled in kubectl describe, and the restart count increments silently.

What happens: the JVM starts without -Xmx explicitly set in the container (or with -Xmx set higher than the container’s memory limit). The JVM treats the container’s cgroup memory limit as the available system memory only if container awareness is enabled (-XX:+UseContainerSupport, on by default in JDK 8u191+ and JDK 10+). But even with container support, JVM off-heap memory (metaspace, native thread stacks, JIT code cache, and garbage-collection overhead) consumes memory beyond -Xmx.

The safe formula:

container memory limit = JVM heap (-Xmx) + 25-30% overhead

| Container Limit | Recommended -Xmx | JVM Overhead Budget |
|-----------------|------------------|---------------------|
| 1Gi             | 700m             | 300Mi               |
| 2Gi             | 1536m (1.5Gi)    | 512Mi               |
| 4Gi             | 3072m (3Gi)      | 1Gi                 |
| 8Gi             | 6144m (6Gi)      | 2Gi                 |

Always set both -Xms and -Xmx: Setting only -Xmx allows the JVM heap to start small and grow gradually, which produces intermittent GC pauses as the heap expands. Setting -Xms = -Xmx (or a reasonable initial size like 512m) eliminates growth-phase GC events. We covered the JVM GC tuning implications in our ActiveMQ Performance Tuning: 10x Throughput post.
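Applied to the 2Gi row of the table above, the relevant container spec fragment might look like this (assuming the image reads heap flags from ACTIVEMQ_OPTS, as the checklist later in this guide notes):

```yaml
# Fragment of a StatefulSet container spec: heap pinned at 1.5Gi,
# leaving ~512Mi of the 2Gi limit for metaspace, threads, and GC overhead.
resources:
  limits:
    memory: "2Gi"
env:
  - name: ACTIVEMQ_OPTS
    value: "-Xms1536m -Xmx1536m"   # -Xms = -Xmx avoids growth-phase GC pauses
```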

Health Probe Design: Avoiding the Crash Loop

The three Kubernetes probe types serve distinct purposes for a message broker:

Startup Probe: Fires first. Prevents liveness/readiness from running until the broker is ready. For ActiveMQ with KahaDB, startup can take minutes on a broker that is recovering from a crash (journal replay) or loading a large index. Configure failureThreshold × periodSeconds to allow the maximum realistic startup window (5-10 minutes for large deployments).

Liveness Probe: Determines whether the broker process is alive and responding. A failing liveness probe causes the container to be killed and restarted. Use a TCP socket check on port 61616; this is cheaper than an HTTP check and directly tests the protocol endpoint that clients use.

Do NOT use the web console HTTP check for liveness: the web console Jetty server can be up before the JMS broker is ready, causing false-positive liveness signals.

Readiness Probe: Determines whether the broker can accept client traffic. A failing readiness probe removes the pod from Service endpoints without restarting it. Use an HTTP GET check on the web console at port 8161 to confirm full broker initialization, including KahaDB journal load. This is the right place for the more expensive HTTP check.

The crash loop scenario: Without a startupProbe, if initialDelaySeconds on the livenessProbe is too low (e.g., 30s when KahaDB replay takes 3 minutes), Kubernetes fires the liveness check before the broker port is open, fails 3 consecutive times, and restarts the pod, which begins journal replay again, fails again, and loops indefinitely. The symptom is CrashLoopBackOff: a pod with an ever-increasing restart count that never reaches Running.
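A probe configuration following these rules, with illustrative timings, might look like this fragment of the container spec:

```yaml
# Startup probe gates the other two: 60 attempts x 10s = a 10-minute
# window for KahaDB journal replay before Kubernetes gives up.
startupProbe:
  tcpSocket:
    port: 61616            # OpenWire
  periodSeconds: 10
  failureThreshold: 60
livenessProbe:
  tcpSocket:
    port: 61616            # cheap TCP check; never the web console
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /                # adjust for your console's auth configuration
    port: 8161             # web console; confirms full initialization
  periodSeconds: 10
  failureThreshold: 3
```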

Storage Strategy: PVC Sizing and StorageClass Selection

For Apache ActiveMQ® (KahaDB)

The KahaDB data directory must be on a PVC, not emptyDir. Size the PVC generously: KahaDB journal files accumulate before being reclaimed, and a full journal (100% StorePercentUsage) immediately stops all persistent message sends.

See our Message Persistence Strategies post for the journal pinning phenomenon that causes unexpected disk growth.

StorageClass requirements for KahaDB:

  • ReadWriteOnce (single pod, single node) is correct for Apache ActiveMQ® single-broker or Master/Slave deployments, where only the active master writes
  • SSD-backed storage is strongly recommended; KahaDB’s enableJournalDiskSyncs=true (the default) requires fast fsync performance. The difference between SSD and spinning disk can be 50,000 msg/s vs. 2,000 msg/s for persistent messaging.
  • Set allowVolumeExpansion: true in your StorageClass. Journal growth is hard to predict, and you want to expand without pod restarts.
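An SSD-backed StorageClass meeting these requirements might look like the sketch below. The AWS EBS CSI provisioner and gp3 volume type are one concrete example; substitute your platform's equivalent.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: broker-ssd
provisioner: ebs.csi.aws.com     # AWS EBS CSI driver; cloud-specific
parameters:
  type: gp3                      # SSD-backed volume type
allowVolumeExpansion: true       # grow journal volumes without pod restarts
volumeBindingMode: WaitForFirstConsumer
```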

For Apache Artemis™ (File Journal)

The Apache Artemis™ file journal should ideally have dedicated volumes for journal data and for large messages.

For Artemis with the AIO journal, the journal directory must be on a filesystem that supports AIO (ext2/3/4, jfs, xfs). If your Kubernetes cluster’s default StorageClass provisions NFS-backed volumes, Artemis silently falls back to the NIO journal, significantly reducing throughput without any error message. Verify that your StorageClass provisions local or block storage for Apache Artemis™ journal volumes.

ActiveMQ on Kubernetes – Let Our Team Review Your Deployment

Getting ActiveMQ right on Kubernetes means more than a working StatefulSet: it means JVM sizing that won’t OOMKill, probe configuration that handles KahaDB replay, storage classes that support your journal type, and monitoring integration that gives you visibility. meshIQ’s experts have deployed and hardened ActiveMQ on Kubernetes across regulated enterprise environments.

Request a Deployment Review

Apache Artemis™ on Kubernetes: The ArtemisCloud Operator

For Apache Artemis™, the ArtemisCloud Operator (artemiscloud.io) is the recommended method for deploying to Kubernetes. It abstracts the StatefulSet, Services, and configuration management behind Kubernetes-native Custom Resource Definitions, simplifying deployment, upgrades, and lifecycle operations.

Installing the Operator
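The install commands did not survive extraction. A common path is applying the manifests shipped in the operator repository; exact file paths vary by release, so treat the commands below as a sketch and confirm against the artemiscloud.io documentation for your version:

```shell
# Fetch the operator sources and deploy its manifests into a dedicated
# namespace. The deploy/ path is release-dependent; check the repo README.
git clone https://github.com/artemiscloud/activemq-artemis-operator.git
cd activemq-artemis-operator
kubectl create namespace activemq-artemis-operator
kubectl apply -f deploy/ -n activemq-artemis-operator

# Verify the operator pod reaches Running before creating broker CRs.
kubectl get pods -n activemq-artemis-operator
```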

Deploying a Broker via CR
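The CR example did not survive extraction. A minimal ActiveMQArtemis custom resource might look like the sketch below; the name, size, and acceptor settings are illustrative.

```yaml
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: artemis-broker
spec:
  deploymentPlan:
    size: 2                    # number of broker pods in the cluster
    image: placeholder         # lets the Operator choose its default image
    persistenceEnabled: true   # journal on a PVC, not emptyDir
    messageMigration: true     # scaledown controller migrates messages
  acceptors:
    - name: all
      port: 61616
      protocols: all           # OpenWire, AMQP, MQTT, STOMP, CORE
```

Applying this CR causes the Operator to create the StatefulSet, headless Service, and per-pod PVCs described earlier.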

Key Operator benefits:

  • Scale down with message migration: the Operator creates a scaledown controller that migrates in-flight messages from a pod being removed to the remaining pods before deletion
  • Address/queue configuration as code via ActiveMQArtemisAddress CRs, no JMX or web console operations required after initial deployment
  • Upgrade management: update the image field in the CR, and the Operator orchestrates a rolling pod replacement

Exposing Protocols: Services and Ingress

Client applications within the same Kubernetes cluster connect to ActiveMQ via the ClusterIP Service on port 61616. For external clients (legacy applications, IoT devices, or cross-cluster consumers), you need either a LoadBalancer Service or a TCP Ingress.

Important: Expose TLS-encrypted ports externally, never plaintext. For OpenWire, expose port 61617 (SSL), not 61616. For MQTT, expose 8883 (TLS), not 1883. We covered the TLS configuration for all protocols in our Security Hardening Guide.

For AMQP clients connecting from Azure Service Bus or other cloud platforms, ensure the LoadBalancer Service is created with an annotation that provisions a Network Load Balancer, not an Application Load Balancer: ActiveMQ protocols are TCP, not HTTP, and Application Load Balancers cannot handle them.
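On AWS, for example, the annotation below requests a Network Load Balancer. The Service name and pod label are illustrative; note the TLS-only rule from above (61617, not 61616):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: broker-external
  annotations:
    # Requests a TCP-capable NLB rather than the default classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: activemq          # illustrative label; match your pod template
  ports:
    - name: openwire-ssl
      port: 61617          # TLS-encrypted OpenWire only; never 61616
      targetPort: 61617
```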

Prometheus Monitoring Integration in Kubernetes

The JMX Prometheus Exporter agent (Apache ActiveMQ®) and native Prometheus plugin (Apache Artemis™) both work in Kubernetes containers with pod-level annotations that enable Prometheus auto-discovery:
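The conventional annotations look like the fragment below, placed in the StatefulSet pod template. These only take effect if your Prometheus scrape config includes the standard kubernetes_sd relabeling rules, and the exporter port here is an assumption; use whatever port your JMX Exporter or metrics plugin listens on.

```yaml
# Fragment of the StatefulSet's spec.template:
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"      # metrics port; illustrative
    prometheus.io/path: "/metrics"
```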

For Prometheus Operator (kube-prometheus-stack), create a ServiceMonitor instead:
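A ServiceMonitor sketch for kube-prometheus-stack follows; the release label must match your Prometheus instance's serviceMonitorSelector, and the label and port names are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: activemq
  labels:
    release: kube-prometheus-stack   # must match serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: activemq                  # matches the metrics Service's labels
  endpoints:
    - port: metrics                  # named port on that Service
      interval: 30s
```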

We covered the full Prometheus monitoring configuration (JMX Exporter YAML mapping, alert rules, and key metric thresholds) in our Monitoring & Alerting Setup post. The Kubernetes integration adds only the ServiceMonitor or pod annotation layer on top of the same Prometheus configuration.

Production Deployment Checklist

Before deploying ActiveMQ to a production Kubernetes cluster, verify each item:

Kubernetes Objects:

  • [ ] StatefulSet (not Deployment) with serviceName matching headless Service
  • [ ] Headless Service (clusterIP: None) for stable pod DNS
  • [ ] ClusterIP Service for intra-cluster client access
  • [ ] LoadBalancer Service (TLS ports only) for external access
  • [ ] volumeClaimTemplates with SSD-backed StorageClass and allowVolumeExpansion: true
  • [ ] PodDisruptionBudget with minAvailable: 1
  • [ ] podAntiAffinity rule to spread replicas across nodes

Resource Configuration:

  • [ ] ACTIVEMQ_OPTS / JAVA_OPTS with explicit -Xmx ≤ 75% of container memory limit
  • [ ] Memory request set to expected steady-state usage
  • [ ] Memory limit set to JVM heap + 25-30% overhead buffer
  • [ ] CPU limit set to allow bursting for GC events

Health Probes:

  • [ ] startupProbe with sufficient failureThreshold × periodSeconds for KahaDB journal replay
  • [ ] livenessProbe as TCP socket on OpenWire port (not HTTP)
  • [ ] readinessProbe as HTTP GET on web console port 8161

Security:

  • [ ] Broker credentials in Kubernetes Secret (not ConfigMap)
  • [ ] Keystore/truststore files mounted from Secret
  • [ ] TLS transport connectors configured (no plaintext on external LoadBalancer)
  • [ ] Network Policy restricting broker port access to authorized namespaces

Monitoring:

  • [ ] JMX Exporter agent in pod (Apache ActiveMQ®) or enableMetricsPlugin: true (Apache Artemis™ Operator)
  • [ ] ServiceMonitor or pod annotations for Prometheus scraping
  • [ ] Alert rules for MemoryPercentUsage, StorePercentUsage, and ConsumerCount=0

Monitor Your Kubernetes ActiveMQ Deployment from Day One

meshIQ Console integrates with ActiveMQ brokers running on Kubernetes, surfacing queue depth, consumer health, memory pressure, and JVM metrics across all broker pods in a unified dashboard, without requiring per-pod Prometheus infrastructure.

See It in Action

ActiveMQ on Kubernetes Is Operational Maturity, Not Just a YAML File

Running ActiveMQ reliably on Kubernetes requires understanding how Kubernetes primitives interact with ActiveMQ’s stateful requirements. The StatefulSet manifest in this guide represents production-grade configuration tested against the most common failure modes: OOMKill from insufficient JVM sizing, crash loops from aggressive probes, data loss from ephemeral storage, and broker split-brain from non-ordered startup.

meshIQ provides enterprise support for ActiveMQ deployments across all environments, including Kubernetes, covering deployment architecture review, incident response, and continuous monitoring via meshIQ Console.

Get your ActiveMQ Kubernetes deployment reviewed by our team → Request a Deployment Review

Frequently Asked Questions

Q1. Should I use a Deployment or a StatefulSet for ActiveMQ on Kubernetes? 

Always StatefulSet. Deployments create pods with random names and no guaranteed persistent volume binding, both of which are incompatible with reliable message persistence. StatefulSets provide stable pod names, ordered start/stop, and per-pod PVCs that survive rescheduling.

Q2. How do I persist ActiveMQ data on Kubernetes?

Use volumeClaimTemplates in your StatefulSet to provision a per-pod PVC. Mount it at the KahaDB directory (Apache ActiveMQ®) or journal directory (Apache Artemis™). Use an SSD-backed StorageClass with ReadWriteOnce. Never use emptyDir for a message broker: all messages are lost on pod restart.

Q3. What is the ArtemisCloud Operator for Kubernetes? 

A Kubernetes Operator from artemiscloud.io that manages Apache Artemis™ broker deployments via Custom Resource Definitions. Deploy a broker by creating an ActiveMQArtemis CR; the Operator handles the StatefulSet, Services, and lifecycle operations. Available at github.com/artemiscloud/activemq-artemis-operator.

Q4. How do I set JVM memory limits for ActiveMQ on Kubernetes? 

Set -Xmx to approximately 75% of the container memory limit via ACTIVEMQ_OPTS (Apache ActiveMQ®) or JAVA_OPTS (Apache Artemis™). Without an explicit -Xmx, the JVM may exceed the container limit and be OOMKilled by Kubernetes, resulting in unexplained pod restarts with no error logs.

Q5. How do I configure health probes for ActiveMQ on Kubernetes? 

Use a startupProbe (TCP socket, port 61616) with a long failureThreshold window (10–30 minutes for large KahaDB stores) to prevent crash loops during journal replay. Use a livenessProbe (TCP socket) after startup succeeds. Use a readinessProbe (HTTP GET, port 8161) to confirm full broker initialization before accepting client traffic.
