---
title: "ActiveMQ Message Persistence: KahaDB, Artemis Journal & JDBC"
date: 2026-05-13
author: "TheFrameGuy"
featured_image: "https://www.meshiq.com/wp-content/uploads/blog_activeMQ-persistence_051326.jpg"
categories:
  - name: "Apache ActiveMQ®"
    url: "/sort-by/active-mq.md"
  - name: "Middleware Optimization"
    url: "/sort-by/middleware-optimization.md"
  - name: "Monitoring"
    url: "/sort-by/monitoring.md"
tags:
  - name: "devops"
    url: "/sort-by/tag/devops.md"
  - name: "monitoring"
    url: "/sort-by/tag/monitoring.md"
---

# ActiveMQ Message Persistence: KahaDB, Artemis Journal & JDBC 

Get it wrong in one direction, and your messages survive restarts, but your throughput is capped by disk sync latency. Get it wrong in the other direction, and your disk fills without warning, stopping all new persistent sends. Get the adapter selection wrong, and a slow consumer on one queue silently pins gigabytes of journal data that can never be reclaimed.

This guide covers ActiveMQ message persistence as a complete engineering discipline: the adapter options for Apache ActiveMQ® and Apache Artemis™, the KahaDB tuning parameters that matter, mKahaDB destination sharding, the Artemis journal model and the AIO vs NIO choice, JDBC use cases, and the non-persistent delivery scenarios where skipping durability is not a compromise but the right choice.

## Persistence Adapter Decision Matrix

Before configuration details, understand which adapter fits your ActiveMQ persistent messaging deployment:

| Adapter | Broker | Use Case | Performance | HA Method |
|---|---|---|---|---|
| **KahaDB** (default) | Apache ActiveMQ® | Single broker, primary choice | High | Shared filesystem or JDBC Master/Slave |
| **mKahaDB** | Apache ActiveMQ® | Multi-destination isolation, high throughput | High | Same as KahaDB, per shard |
| **JDBC (plain)** | Apache ActiveMQ® | Shared-DB HA, compliance | Low | Database row lock |
| **journaledJDBC** | Apache ActiveMQ® | Shared-DB HA with better throughput | Medium | Database row lock |
| **Artemis File Journal (NIO)** | Apache Artemis™ | All platforms, baseline | High | Replication or shared store |
| **Artemis File Journal (AIO)** | Apache Artemis™ | Linux production, max performance | Very High | Replication or shared store |
| **Artemis JDBC Store** | Apache Artemis™ | Shared-DB HA (limited feature set) | Low | Database |
| **Non-persistent** | Both | Telemetry, broadcast, real-time data | Highest | N/A (in-memory only) |
| **LevelDB** | Apache ActiveMQ® | ~~Removed in 5.17, do not use~~ | N/A | N/A |

## Apache ActiveMQ®: KahaDB Architecture and Tuning

KahaDB is Apache ActiveMQ®’s default and recommended message persistence adapter since version 5.3. Its architecture has two components: a rolling append-only journal that records all broker events sequentially, and a B-tree index that maps message IDs to journal positions for fast retrieval.

**Journal (data logs):** Messages are appended sequentially to journal data files (db-&lt;N&gt;.log) up to a configurable maximum size. Sequential writes maximize disk throughput. When a journal file has had all its messages acknowledged and consumed, it is marked deletable and eventually reclaimed. The key operational implication: a single slow consumer prevents journal reclamation. If Consumer A on orders.queue is processing slowly, its messages remain unacknowledged, and their journal entries cannot be freed, even if every other destination in the same journal file has been fully consumed. The journal file stays pinned, and disk usage grows.

**B-tree index:** A persistent index holds pointers to message locations in the journal. Portions of this index are cached in memory (controlled by indexCacheSize). Index cache misses require disk reads — for large message stores, cache size directly affects per-message retrieval latency.

### Core KahaDB Configuration

```xml
<!-- activemq.xml: production KahaDB configuration.

     journalMaxFileLength: journal file size. Increase for high-throughput
       brokers to reduce file-rotation overhead. 64MB is a good starting point
       for most production workloads; 128MB for very high throughput.
     indexCacheSize: number of B-tree index pages held in JVM heap. Default is
       10,000. Increase for large message stores to reduce index disk reads.
       Each page is ~4KB; 20,000 pages is roughly 80MB of heap.
     enableJournalDiskSyncs: true = fsync before ACK (default, durable).
       false = skip fsync (higher throughput, non-zero crash risk). Only set
       false if message loss during a broker crash is acceptable.
     concurrentStoreAndDispatchQueues: true (default) = write to journal and
       dispatch to consumer simultaneously. Can cause fragmentation under
       heavy backlog. Set false for batch-processing workloads.
     preallocationStrategy: 'zeros' fills new journal files with 0x00 before
       use, ensuring disk space is committed. Slower file creation, but
       prevents sparse-file surprises on some filesystems.
     indexWriteBatchSize: how many dirty index entries accumulate before a
       flush. Higher = faster writes, longer restart recovery after a crash. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="prod-broker"
        dataDirectory="/var/activemq/data">

  <persistenceAdapter>
    <kahaDB directory="/var/activemq/data/kahadb"
            journalMaxFileLength="67108864"
            indexCacheSize="20000"
            enableJournalDiskSyncs="true"
            concurrentStoreAndDispatchQueues="true"
            preallocationStrategy="zeros"
            indexWriteBatchSize="2000"/>
  </persistenceAdapter>

</broker>
```

### The enableJournalDiskSyncs Decision

This is the most consequential KahaDB tuning decision for ActiveMQ message persistence. With enableJournalDiskSyncs=true (default), the broker performs an explicit fsync() to the OS before sending the PERSISTENT acknowledgment back to the producer. This guarantees the message is physically on disk before the producer considers it delivered.

The cost: throughput is bounded by the disk’s sync write performance. On a spinning disk, this can be as low as 2,000–10,000 messages/second (9.7 MB/sec sync write speed was documented in a Red Hat tuning case study). On NVMe SSD, sync writes are faster, but still have a ceiling.

**When to set enableJournalDiskSyncs=false:**

- Topic messages where occasional loss during broker crash is acceptable (market data, telemetry)
- Non-critical workflow data where at-least-once re-send handles potential duplicates
- Development and testing environments

**Always keep enableJournalDiskSyncs=true for:**

- Financial transactions, order events, payment records
- Any message where loss requires manual recovery
- Compliance-regulated data workloads

We covered the disk performance benchmark context, including the 9.7 MB/sec sync vs 746 MB/sec sequential write measurement in our [**ActiveMQ Performance Tuning: 10x Throughput**](https://www.meshiq.com/blog/activemq-performance-tuning/) post.

### The Journal Pinning Problem and Its Solution: mKahaDB

The single KahaDB journal model has a fundamental limitation in message persistence when a broker hosts destinations with different consumption patterns. A slow consumer on audit.logs.queue processes at 1 message/hour. A fast consumer on orders.queue processes at 10,000 messages/second. Both share the same KahaDB journal.

**Result:** orders.queue messages cycle through journal files rapidly, but each file also contains audit.logs.queue messages that remain unacknowledged. KahaDB cannot reclaim a journal file until every message in it has been acknowledged. Even though orders.queue is healthy, the shared journal grows without bound because audit.logs.queue is pinning files.

**Solution:** mKahaDB (Multi-KahaDB), which gives each destination (or destination group) its own independent journal instance:

```xml
<!-- activemq.xml: mKahaDB destination sharding -->
<persistenceAdapter>
  <mKahaDB directory="/var/activemq/data/kahadb">
    <filteredPersistenceAdapters>

      <!-- High-throughput transactional queues: dedicated journal with disk sync -->
      <filteredKahaDB queue="orders.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="67108864"
                  enableJournalDiskSyncs="true"
                  concurrentStoreAndDispatchQueues="false"
                  indexCacheSize="15000"/>
        </persistenceAdapter>
      </filteredKahaDB>

      <!-- Audit/compliance queues: slow consumer, isolated to prevent pinning -->
      <filteredKahaDB queue="audit.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"
                  enableJournalDiskSyncs="true"
                  concurrentStoreAndDispatchQueues="true"/>
        </persistenceAdapter>
      </filteredKahaDB>

      <!-- Non-critical topics: no disk sync, highest throughput -->
      <filteredKahaDB topic="market.data.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"
                  enableJournalDiskSyncs="false"/>
        </persistenceAdapter>
      </filteredKahaDB>

      <!-- Catch-all: perDestination gives each remaining destination its own journal -->
      <filteredKahaDB perDestination="true">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>

    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```

**Critical mKahaDB caveat:** when a transaction spans destinations in different journals (which is the normal case with mKahaDB), KahaDB requires a two-phase commit to record the outcome, adding a second disk sync per cross-journal transaction. For purely intra-destination transactions, this penalty does not apply, but for workflows that transactionally consume from one queue and produce to another across different journal shards, measure the overhead before committing to this topology in production.

## **KahaDB Disk Growth You Can’t Explain?**

Unbounded KahaDB journal growth almost always traces to the journal pinning problem: a slow consumer or durable subscriber preventing file reclamation. MeshIQ’s team diagnoses and resolves KahaDB storage issues regularly and can help you architect the right mKahaDB sharding strategy for your destination topology.

[**Talk to an Expert**](https://www.meshiq.com/apache-activemq/enterprise-support/)



## Apache ActiveMQ®: JDBC Persistence

JDBC persistence stores messages in a relational database rather than in local files. It is appropriate for two specific scenarios: shared-database HA (where the database row lock serves as the HA arbitration mechanism) and compliance requirements that mandate storing message data in an auditable database.

Do not choose JDBC for performance. Plain JDBC message persistence is the slowest option in Apache ActiveMQ®. Every message written requires a database insert with full transaction commit semantics. As the Apache documentation notes: “For long-term persistence, we recommend using JDBC coupled with our high-performance journal. You can use just JDBC if you wish, but it’s quite slow.”
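For the two scenarios where plain JDBC is the right call, the configuration is a single adapter element pointing at a DataSource bean. A minimal sketch, assuming a DataSource bean with id `postgres-ds` is defined elsewhere in the XML (the bean id and the `createTablesOnStartup` setting are illustrative choices, not requirements):

```xml
<!-- activemq.xml: plain JDBC persistence (sketch; "postgres-ds" is an
     assumed DataSource bean id defined elsewhere in this file) -->
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#postgres-ds"
                          createTablesOnStartup="true"/>
</persistenceAdapter>
```

After the schema exists, `createTablesOnStartup` is typically set to false so the broker does not attempt DDL on every start.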

### JournaledJDBC: JDBC with a Performance Layer

journaledJDBC wraps plain JDBC with a local journal that absorbs write spikes and flushes to the database asynchronously. The journal handles acknowledgments at the speed of local disk, while the database receives periodic bulk flushes. This closes the performance gap significantly:

```xml
<!-- activemq.xml: journaledJDBC persistence for HA deployments -->
<persistenceAdapter>
  <!-- dataSource references the DataSource bean defined below -->
  <journaledJDBC journalLogFiles="5"
                 dataDirectory="/var/activemq/data/journal"
                 useJournal="true"
                 dataSource="#postgres-ds"/>
</persistenceAdapter>

<!-- DataSource bean: example with PostgreSQL -->
<bean id="postgres-ds"
      class="org.postgresql.ds.PGPoolingDataSource"
      destroy-method="close">
  <property name="serverName" value="db.internal.example.com"/>
  <property name="databaseName" value="activemq_store"/>
  <property name="portNumber" value="5432"/>
  <property name="user" value="activemq_svc"/>
  <property name="password" value="${jdbc.password}"/>
  <property name="maxConnections" value="5"/>
</bean>
```

In the Apache ActiveMQ® JDBC Master/Slave HA pattern, the database lock serves as the active-broker determination mechanism. We covered the full architecture, the lock timeout cascade risk, and the journaledJDBC vs plain jdbcPersistenceAdapter HA behavior in our [**High Availability Architecture Guide**](https://www.meshiq.com/blog/activemq-high-availability-architecture/).

**JDBC persistence storage monitoring:** when using JDBC, StorePercentUsage in JMX reflects the configured storeUsage limit applied to the journal layer, not the database. Monitor both the journal directory disk usage and the database table row count and size independently.
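The storeUsage limit that StorePercentUsage is computed against lives in the broker's systemUsage block. A sketch of where it is set (the 50 GB and 10 GB limits are illustrative values, not recommendations):

```xml
<!-- activemq.xml: the store limit that StorePercentUsage is measured
     against (limit values here are illustrative) -->
<broker xmlns="http://activemq.apache.org/schema/core">
  <systemUsage>
    <systemUsage>
      <storeUsage>
        <storeUsage limit="50 gb"/>
      </storeUsage>
      <tempUsage>
        <tempUsage limit="10 gb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>
</broker>
```

With JDBC persistence, treat this limit as covering the local journal layer only; the database itself still needs its own capacity monitoring.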

## Apache Artemis™: The File Journal

Apache Artemis™ replaces KahaDB with an entirely different message persistence architecture: a pure append-only file journal with automatic garbage collection and compaction. Three implementations are available:

| Journal Type | Platform | Mechanism | Performance |
|---|---|---|---|
| **NIO** | All | Java NIO with explicit fsync | High |
| **AIO** | Linux (libaio) | Kernel async I/O callback, no explicit fsync | Very High |
| **Memory Mapped** | All | OS page cache mapping (READ_WRITE) | High; benefits from huge pages |

### AIO vs NIO: The Performance Difference

AIO (Asynchronous I/O) is the highest-performing Apache Artemis™ journal option and the recommended configuration for any production Linux ActiveMQ persistent messaging deployment. With AIO, Artemis submits writes to the Linux kernel’s AIO subsystem and receives a callback when the data has physically reached disk. This eliminates the explicit fsync() call entirely; the OS batches and optimizes the physical write path.

**AIO prerequisites:**

- Linux kernel 2.6 or later
- libaio installed (apt install libaio1 or yum install libaio)
- File system: ext2, ext3, ext4, jfs, or xfs (AIO silently falls back to NIO on NFS and other unsupported filesystems)

```xml
<!-- broker.xml: Artemis journal configuration -->
<configuration xmlns="urn:activemq">
  <core xmlns="urn:activemq:core">

    <!-- Journal type: ASYNCIO (AIO, Linux only), NIO, or MAPPED -->
    <journal-type>ASYNCIO</journal-type>

    <!-- Journal directory: put on a dedicated disk, separate from OS and logs -->
    <journal-directory>/var/artemis/data/journal</journal-directory>

    <!-- Large messages stored separately from the journal -->
    <large-messages-directory>/var/artemis/data/large-messages</large-messages-directory>

    <!-- Paging directory: where excess messages go when an address
         exceeds max-size-bytes -->
    <paging-directory>/var/artemis/data/paging</paging-directory>

    <!-- Journal file size in bytes: default 10MiB. Align to disk cylinder size.
         10MiB is sufficient for most deployments; increase for very high
         throughput. -->
    <journal-file-size>10485760</journal-file-size>

    <!-- Minimum journal files pre-created at startup -->
    <journal-min-files>2</journal-min-files>

    <!-- Pool files: the journal reuses up to this many files before
         creating new ones -->
    <journal-pool-files>10</journal-pool-files>

    <!-- Transactional sync: flush the journal to disk on every transaction
         boundary. Set false only if occasional transaction loss on power
         failure is acceptable. -->
    <journal-sync-transactional>true</journal-sync-transactional>

    <!-- Non-transactional sync: flush on every persistent send/acknowledge.
         Set false for higher throughput at the risk of losing durable
         messages on crash. -->
    <journal-sync-non-transactional>true</journal-sync-non-transactional>

    <!-- Bindings journal (queue/address metadata): always NIO, low throughput -->
    <bindings-directory>/var/artemis/data/bindings</bindings-directory>

  </core>
</configuration>
```

A dedicated disk is essential for Artemis message persistence performance. The append-only journal’s throughput advantage, which minimizes disk head movement through sequential writes, is lost if the journal shares a disk with other I/O-intensive processes. Paging and large messages should ideally be on separate volumes too, as they involve different I/O patterns (random access for paging, large sequential writes for large messages).

### Apache Artemis™ Paging: Handling Memory Overflow

When an address exceeds its max-size-bytes limit, Artemis pages messages to disk rather than blocking producers (when address-full-policy=PAGE). Paged messages are stored in the paging directory and are transparently retrieved as consumers catch up.

```xml
<!-- broker.xml: paging configuration per address -->
<address-settings>
  <address-setting match="orders.#">
    <!-- Maximum address size before paging activates (200MB) -->
    <max-size-bytes>209715200</max-size-bytes>
    <!-- Page file size: how much to write per paging file (10MB) -->
    <page-size-bytes>10485760</page-size-bytes>
    <address-full-policy>PAGE</address-full-policy>
  </address-setting>
</address-settings>
```

**Paging and monitoring**: when address-full-policy=PAGE, the artemis_address_size metric reaching max-size-bytes is the signal that paging has begun. A separate monitoring alert on the paging directory disk usage catches the scenario where the paging store grows faster than consumers drain it.
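PAGE is only one of the overflow policies Artemis supports; the address-full-policy element also accepts BLOCK, FAIL, and DROP for addresses where paging to disk is the wrong behavior. A sketch under assumed address names (the `payments.#` and `telemetry.#` match patterns are illustrative):

```xml
<!-- broker.xml: alternative overflow policies (match patterns are
     illustrative examples, not part of the configuration above) -->
<address-settings>
  <!-- Critical queue: block producers instead of paging to disk -->
  <address-setting match="payments.#">
    <max-size-bytes>104857600</max-size-bytes>
    <address-full-policy>BLOCK</address-full-policy>
  </address-setting>
  <!-- Disposable telemetry: silently drop overflow messages -->
  <address-setting match="telemetry.#">
    <max-size-bytes>52428800</max-size-bytes>
    <address-full-policy>DROP</address-full-policy>
  </address-setting>
</address-settings>
```

BLOCK trades producer latency for durability; DROP and FAIL trade message survival for broker stability, so they belong only on addresses already classified as loss-tolerant.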

### Apache Artemis™ Large Messages

Messages exceeding a configurable threshold (min-large-message-size, default 100KB) are stored outside the main journal in the large-messages directory. This prevents oversized messages from consuming excessive journal file capacity and fragmenting the write stream.

```xml
<!-- broker.xml: large message threshold (100KB default) -->
<min-large-message-size>102400</min-large-message-size>
```

Large messages are streamed to the large-messages directory during send and retrieved on demand during delivery. For workloads mixing small and large messages on the same address, putting the large-messages directory on a separate volume from the journal prevents large-message I/O from competing with the journal’s sequential write pattern.

## Non-Persistent Messages: The Right Strategy for the Right Workload

DeliveryMode.NON_PERSISTENT keeps messages in broker memory only; they are lost on broker restart or crash. For the right workloads, skipping ActiveMQ message persistence entirely is the correct engineering choice, not a compromise.

**Use non-persistent delivery when:**

- **High-frequency telemetry** (sensor readings, health checks, GPS pings): a missed reading during a 2-second broker restart is recoverable; 10,000 message writes per second to disk is not cost-free
- **Real-time market data/price updates**: the latest tick is all that matters; retransmitting 10,000 stale price updates after reconnection adds noise, not value
- **Event broadcast for real-time dashboards**: if the dashboard consumer misses 50ms of events during a restart, it will resync from the next batch
- **Intra-datacenter service-to-service heartbeats**: short-lived status signals where the absence of a heartbeat is itself a signal

```java
// Sending non-persistent messages (JMS 2.0 simplified API;
// requires jakarta.jms.JMSContext and jakarta.jms.DeliveryMode)
try (JMSContext context = connectionFactory.createContext()) {
    context.createProducer()
           .setDeliveryMode(DeliveryMode.NON_PERSISTENT)
           .send(telemetryTopic, sensorReading);
}
```

**Never use non-persistent delivery for:**

- Financial transactions, order events, and payment records
- Workflow state transitions where loss creates orphaned processes
- Compliance-regulated event streams
- Any message where loss would require manual recovery or compensation

The decision matrix for persistence vs non-persistence should be a documented architecture decision per destination, not a performance optimization applied uniformly. For teams working in HIPAA, PCI-DSS, or SOC 2 regulated environments, message persistence requirements map directly to specific compliance controls.

## KahaDB Maintenance: Compaction and Recovery

### Compaction

Over time, KahaDB’s B-tree index accumulates fragmentation from deleted and expired entries. The broker runs automatic cleanup (journal file reclamation) continuously, but the index itself can grow without bound if compaction is never performed.

Trigger manual compaction via JMX:

```java
BrokerViewMBean.resetStatistics()  // resets broker counters
BrokerViewMBean.gc()               // triggers immediate journal cleanup
```

The gc() operation forces KahaDB to evaluate all journal files for reclamation. It does not compact the B-tree index itself but does free reclaimed journal files immediately rather than waiting for the scheduled cleanup interval.

### Slow Recovery After Crashes

If a broker crashes while KahaDB is mid-write, the journal will need to be replayed on the next startup. This is why indexWriteBatchSize matters: a high value (many dirty index entries accumulate before flush) means more journal replay is needed on recovery, extending restart time. For production brokers that require a restart time under 60 seconds, keep indexWriteBatchSize at the default 1,000.
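For restart-time-sensitive brokers, the relevant recovery knobs can be pinned explicitly rather than left implicit. A minimal sketch, assuming checkpointInterval (KahaDB's periodic index checkpoint in milliseconds, 5000 by default) is kept at its default rather than raised for write throughput:

```xml
<!-- activemq.xml: KahaDB settings biased toward fast crash recovery.
     Values shown are the defaults, stated explicitly. -->
<persistenceAdapter>
  <kahaDB directory="/var/activemq/data/kahadb"
          indexWriteBatchSize="1000"
          checkpointInterval="5000"/>
</persistenceAdapter>
```

The trade is symmetrical: raising either value improves steady-state write performance and lengthens journal replay after a crash.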

### Apache Artemis™ Data Retention for Replay

Apache Artemis™ 2.x introduced a Data Retention feature: the broker keeps copies of journal files up to a configurable retention period, enabling message replay from history. This is distinct from traditional persistence; it allows you to “rewind” and reprocess historical messages:

```xml
<!-- broker.xml: Artemis data retention -->
<journal-retention-directory period="2" unit="DAYS" storage-limit="10G">
  /var/artemis/data/retention
</journal-retention-directory>
```

## **Monitor KahaDB Journal Health and Storage Trends**

MeshIQ Console tracks StorePercentUsage, journal file count, and per-destination storage consumption in real time, surfacing the slow consumer journal pinning problem before it becomes a disk-full incident and alerting on anomalous storage growth trends.

[**See It in Action**](https://www.meshiq.com/request-a-demo/)



## Production Configuration Checklist

Before deploying any ActiveMQ broker to production, validate these message persistence decisions:

**Apache ActiveMQ® / KahaDB:**

- Journal directory on dedicated disk, separate from OS and log volumes
- journalMaxFileLength sized appropriately for throughput (64-128MB for high volume)
- indexCacheSize increased from the default of 10,000 for large message stores
- enableJournalDiskSyncs reviewed per-destination via mKahaDB if mixed criticality
- mKahaDB destination sharding is configured if any destination has irregular consumption patterns
- StorePercentUsage alert configured at 70% (per Monitoring & Alerting guide)
- LevelDB removed from any legacy configurations (removed in Apache ActiveMQ® 5.17)

**Apache Artemis™:**

- Journal type set to ASYNCIO on Linux deployments
- libaio installed and validated (/opt/artemis/bin/artemis check node)
- Journal, large messages, and paging directories on separate volumes for high-load deployments
- journal-pool-files sized so that journal files are reused at peak load rather than repeatedly created and deleted
- Paging configured per address with appropriate max-size-bytes and monitoring

## Persistence Is Architecture, Not Configuration

The choice between KahaDB and mKahaDB, between enableJournalDiskSyncs=true and false, between AIO and NIO, and between ActiveMQ persistent messaging and non-persistent delivery: these are architecture decisions with direct consequences for throughput, durability, recovery time, and storage growth.

The production configurations in this guide give you the foundation. The tuning parameters let you adapt them to your specific workload profile. And MeshIQ Console gives you the visibility into journal health, storage trends, and destination-level storage consumption that lets you know when your message persistence configuration needs attention before it becomes an incident.

**Get expert guidance on your ActiveMQ persistence architecture → [Talk to an Expert](https://www.meshiq.com/apache-activemq/enterprise-support/)**

## **Frequently Asked Questions**

**Q1. What persistence adapters does ActiveMQ support?** 

Apache ActiveMQ® supports KahaDB (default), mKahaDB (multi-journal sharding), and JDBC (journaledJDBC for HA). LevelDB was removed in 5.17; do not use it. Apache Artemis™ supports an append-only file journal in NIO, AIO, and Memory Mapped modes, plus a JDBC store with a limited feature set.

**Q2. How do I tune KahaDB for better performance?** 

The five highest-impact KahaDB tuning levers are: journalMaxFileLength (increase to 64-128MB), indexCacheSize (increase to 20,000+), enableJournalDiskSyncs=false for non-critical destinations, mKahaDB to isolate slow consumer destinations, and a dedicated disk volume for the journal directory.

**Q3. When should I use JDBC persistence?** 

For shared-database HA (where JDBC row lock arbitrates master/slave) or compliance requirements to persist messages in an auditable database. For all other cases, KahaDB provides better performance. Use journaledJDBC rather than raw JDBC for meaningful performance improvement.

**Q4. What is the difference between Apache Artemis™ AIO and NIO journals?** 

AIO uses the Linux libaio kernel library for async write callbacks, eliminating explicit fsync calls and providing higher throughput. NIO uses standard Java NIO with explicit fsyncs: cross-platform, with slightly lower performance. AIO requires Linux kernel 2.6+, libaio installed, and an ext/jfs/xfs filesystem. Apache Artemis™ automatically falls back to NIO if AIO prerequisites are not met.

**Q5. When should I use non-persistent messages?** 

For high-frequency telemetry, real-time data feeds, and status broadcasts, where occasional loss during broker restart is acceptable. Never for financial transactions, order events, compliance-regulated data, or any workflow where message loss requires recovery action.