ActiveMQ Network of Brokers: The Complete Configuration Guide

meshIQ April 29, 2026

Distributed enterprise applications eventually need to exchange messages across network boundaries - between datacenters, between application tiers, between geographic regions. No single broker can span all of those boundaries efficiently.

The ActiveMQ Network of Brokers is the architecture that solves this: a store-and-forward messaging fabric in which multiple broker instances collaborate to route messages from producers to consumers, regardless of which broker each is connected to.

But NoB is also where the most operationally complex ActiveMQ problems originate: messages that never reach their consumers, message loops that saturate bandwidth, consumers that handle 90% of the load while others are starved, and durable subscriber configurations that produce duplicate deliveries.

This guide covers the full NoB configuration landscape, from the first connector to multi-datacenter topologies, with the parameter-level precision that separates working deployments from ones that fail intermittently and mysteriously.

One foundational point before diving in: Network of Brokers is an Apache ActiveMQ feature. Artemis uses a different clustering model with built-in server-side load balancing.

The Fundamental Concept: Store-and-Forward vs. Replication

Before any configuration, the single most important distinction to internalize:

The Network of Brokers is NOT high-availability. A message on Broker A stays on Broker A until a consumer appears on Broker B and the message is forwarded. If Broker A crashes before that forwarding occurs, the message is inaccessible until Broker A recovers. The message has one owner, always.

This is the opposite of master/slave HA, where the same message exists simultaneously on both the master and the slave. Our High Availability Architecture Guide covers the master/slave models in depth; if you need message-level resilience, that post is the right starting point.

What NoB provides is horizontal scale and geographic routing: the ability to distribute producer and consumer workloads across multiple broker nodes, each with its own local clients, while maintaining a unified messaging namespace. A producer on Broker A in New York can send to a queue, and a consumer on Broker B in London receives it, without either client knowing or caring about the topology.
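
To make that transparency concrete, here is a minimal sketch in plain JMS, assuming two bridged brokers at the hypothetical hostnames broker-a-ny and broker-b-ldn and a queue named orders.queue. Neither client references the other broker; the network connector handles the routing.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TopologyTransparency {
    public static void main(String[] args) throws Exception {
        // Producer attaches to the New York broker.
        ConnectionFactory nyFactory =
            new ActiveMQConnectionFactory("tcp://broker-a-ny:61616");
        Connection producerConn = nyFactory.createConnection();
        producerConn.start();
        Session producerSession = producerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
            producerSession.createProducer(producerSession.createQueue("orders.queue"));

        // Consumer attaches to the London broker. Its arrival triggers an
        // advisory; the network connector then forwards on demand.
        ConnectionFactory ldnFactory =
            new ActiveMQConnectionFactory("tcp://broker-b-ldn:61616");
        Connection consumerConn = ldnFactory.createConnection();
        consumerConn.start();
        Session consumerSession = consumerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
            consumerSession.createConsumer(consumerSession.createQueue("orders.queue"));

        producer.send(producerSession.createTextMessage("order-1001"));
        TextMessage received = (TextMessage) consumer.receive(5000); // null on timeout
        System.out.println("Received across the network: " + received.getText());

        producerConn.close();
        consumerConn.close();
    }
}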

For true production resilience, each broker node in a Network of Brokers should be backed by its own master/slave HA pair. The network provides scale and routing; HA provides per-node message durability. The two architectures are complementary, not alternatives.

Core Concepts: How Forwarding Actually Works

The Network Connector

A network connector is a unidirectional bridge between two brokers. Broker A establishes a connector to Broker B. When a consumer appears on Broker B for a destination that has messages on Broker A, Broker A learns of that consumer's existence through advisory messages and begins forwarding messages on demand.

The key phrase is on demand: forwarding only happens when there is a consumer on the remote broker. This is dynamicOnly=true behavior, the standard setting in production-grade configurations (the parameter itself defaults to false). Without it, messages are forwarded to every connected broker regardless of consumer presence, which wastes I/O and fills remote broker stores with messages that may never be consumed.

Advisory Messages: The Discovery Backbone

Advisory messages are the nervous system of a dynamic Network of Brokers. When a consumer subscribes to Broker B, Broker B publishes an advisory message on the ActiveMQ.Advisory.Consumer.> topic. Broker A, which has a network connector to Broker B, receives this advisory and creates a forwarding bridge for the relevant destination.

This mechanism has a critical operational implication: advisorySupport must never be disabled on any broker participating in a dynamic NoB. Disabling it removes the consumer discovery signal, causing Broker A to never forward messages to Broker B's consumers. This is the second most common cause of messages appearing "stuck"; the first is TTL misconfiguration. Disabling advisory support is only viable for fully static NoB configurations where all destination bridging is declared explicitly, which adds significant maintenance overhead.
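
You can watch this discovery signal yourself. The sketch below, assuming a broker at the hypothetical hostname broker-b, subscribes to the consumer advisory topic; each message announces a consumer starting or stopping on some destination. The consumerCount property shown is how ActiveMQ reports the remaining consumer count on consumer advisories as I understand it; verify it against your broker version.

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AdvisoryWatcher {
    public static void main(String[] args) throws Exception {
        Connection conn =
            new ActiveMQConnectionFactory("tcp://broker-b:61616").createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // One advisory topic exists per destination, e.g.
        // ActiveMQ.Advisory.Consumer.Queue.orders.queue; the > wildcard watches them all.
        Topic advisories = session.createTopic("ActiveMQ.Advisory.Consumer.>");
        session.createConsumer(advisories).setMessageListener(msg -> {
            try {
                System.out.println(msg.getJMSDestination()
                    + " consumerCount=" + msg.getObjectProperty("consumerCount"));
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        System.in.read(); // keep the JVM alive while advisories arrive
    }
}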

Topology Patterns: Which One Fits Your Architecture

Hub-and-Spoke

The most common enterprise NoB topology. A central hub broker acts as the routing point. All spoke brokers connect to the hub. Producers on any spoke send messages; consumers on any spoke receive them via the hub.


<!-- Hub broker: no networkConnectors; spokes connect TO the hub -->
<!-- Spoke broker (identical config on each spoke) -->
<networkConnectors>
  <networkConnector
    name="spoke-to-hub"
    uri="static:(tcp://hub-broker:61616)"
    duplex="true"
    dynamicOnly="true"
    networkTTL="2"
    decreaseNetworkConsumerPriority="true"
    suppressDuplicateQueueSubscriptions="true"/>
</networkConnectors>

Why duplex="true" on the spoke: the hub needs to forward messages back to spokes when consumers are there. Without duplex, messages sent to the hub stay on the hub; the reverse flow (hub to spoke) never activates.

Why duplex on ONE side only: a duplex connector configured on the spoke creates a bidirectional bridge from the spoke's side. The hub does not need (and should not have) its own connector back to the spoke; that would create two independent bridges, doubling advisory traffic and potentially double-delivering messages. When in doubt: configure network connectors on the initiating side and use duplex=true to enable the return flow.

Mesh (Fully Connected)

Each broker connects directly to every other broker. Works well for small clusters (2-4 brokers) where every node needs to communicate with every other. It does not scale well: a full mesh of n brokers needs n(n-1)/2 links, so the connection count grows as O(n²) (6 links for four brokers, 45 for ten).


<!-- Broker A: connects to B and C -->
<networkConnectors>
  <networkConnector name="a-to-b"
    uri="static:(tcp://broker-b:61616)"
    duplex="true" dynamicOnly="true" networkTTL="1"
    decreaseNetworkConsumerPriority="true"/>
  <networkConnector name="a-to-c"
    uri="static:(tcp://broker-c:61616)"
    duplex="true" dynamicOnly="true" networkTTL="1"
    decreaseNetworkConsumerPriority="true"/>
</networkConnectors>
<!-- Broker B: connects to C only (A→B covered by A's duplex connector) -->
<!-- Broker C: no outbound connectors needed (A→C and B→C covered) -->

In a fully connected mesh, careful naming is essential. If Broker A declares a-to-b as duplex, do not also declare b-to-a on Broker B; that creates a second independent duplex bridge alongside the first, and you now have two separate bidirectional channels between A and B, each with its own advisory consumer subscriptions.

Chain (Linear / Regional)

Three or more brokers in a line. Common for geographic distribution: West Coast → Central → East Coast. Messages flow along the chain until they reach a consumer.


<!-- Broker A (West): connects forward to B -->
<networkConnectors>
  <networkConnector name="west-to-central"
    uri="static:(tcp://broker-b:61616)"
    duplex="false"
    dynamicOnly="true"
    networkTTL="2"
    decreaseNetworkConsumerPriority="true"/>
</networkConnectors>

<!-- Broker B (Central): connects forward to C -->
<networkConnectors>
  <networkConnector name="central-to-east"
    uri="static:(tcp://broker-c:61616)"
    duplex="false"
    dynamicOnly="true"
    networkTTL="2"
    decreaseNetworkConsumerPriority="true"/>
</networkConnectors>
<!-- Broker C (East): no outbound connector; consumers live here -->

Why networkTTL="2" in a 3-broker chain: TTL=1 means a message can cross only one broker boundary. A message on Broker A can only reach consumers on Broker B, never on Broker C. With TTL=2, a message on A can traverse A→B and B→C, reaching consumers on C. The TTL must equal the number of hops in your longest path.

The networkConnector Parameter Reference

These are the parameters that determine NoB behavior. Most defaults are either wrong for production or require careful evaluation for your specific topology.

| Parameter | Default | Recommended | Rationale |
|---|---|---|---|
| duplex | false | true (hub-and-spoke) | Enables bidirectional forwarding over a single connection |
| networkTTL | 1 | Match hop count | Critical: the default of 1 breaks multi-hop topologies |
| messageTTL | -1 (use networkTTL) | Match hop count | Controls message forwarding depth independently |
| consumerTTL | -1 (use networkTTL) | Match hop count | Controls subscription propagation depth |
| dynamicOnly | false | true | Forward only when a consumer exists on the remote broker |
| decreaseNetworkConsumerPriority | false | true | Prefer local consumers; only forward when local is busy |
| conduitSubscriptions | true | false for queues with multiple consumers | true collapses multiple remote consumers into one, hiding the load from the source broker |
| suppressDuplicateQueueSubscriptions | false | true | Prevents duplicate subscriptions in mesh/ring topologies |
| bridgeTempDestinations | true | Consider false | Controls temporary destination bridging; often unnecessary in NoB |

The conduitSubscriptions Trap

This is the parameter responsible for the most common NoB load-balancing problem. When conduitSubscriptions=true (the default), the source broker treats all consumers on the same remote queue as one consumer, regardless of how many are actually subscribed. It sends messages as if it is filling one queue, not load-balancing across multiple.

The result: if Broker B has three consumers on orders.queue, and Broker A has conduitSubscriptions=true, Broker A sees one remote subscription. It may forward only one-third of the messages it should, leaving two of the three consumers on Broker B perpetually idle.

Set conduitSubscriptions=false for queue destinations where you need accurate load distribution across multiple consumers. For topic subscribers, conduitSubscriptions=true is correct: it prevents message duplication when multiple remote subscribers exist for the same topic.
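
If you assemble brokers with the Java API rather than activemq.xml, the same fix can be applied programmatically. A minimal sketch, assuming an embedded BrokerService and a hypothetical remote broker at broker-b; the XML attributes map to the setters shown here:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class ConduitOffBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker-a");

        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://broker-b:61616)");
        nc.setName("a-to-b");
        nc.setDynamicOnly(true);
        nc.setDecreaseNetworkConsumerPriority(true);
        // false: each remote queue consumer stays individually visible,
        // so messages are load-balanced across all of them.
        nc.setConduitSubscriptions(false);

        broker.start();
    }
}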

Message Cycling: Why It Happens and How to Prevent It

Message cycling, where a message travels endlessly between brokers, is prevented by default through broker ID tracking: a message is never replayed back to the broker from which it came. The broker embeds its identity in forwarded messages and rejects replays. This mechanism prevents infinite loops in duplex and bidirectional configurations.

However, TTL exhaustion can still be a practical problem. If networkTTL is set higher than necessary, messages can traverse many more hops than intended, creating routing inefficiency and potential for backpressure propagation across the network.

The other risk is duplicate subscription creation in mesh topologies. When Broker A connects to B, and Broker B also connects to C, and Broker C connects back to A (forming a ring), the network may create multiple subscription paths between A and C: one direct (A→C, if configured) and one via B (A→B→C). Without suppressDuplicateQueueSubscriptions=true, both paths create active subscriptions, leading to load imbalance and, occasionally, the same message being forwarded along multiple paths.

Destination Filtering: Restricting What Gets Bridged

In production deployments, you almost never want to bridge all destinations across all connectors. Selective bridging via destination filters limits advisory traffic, prevents sensitive internal queues from being exposed across the network, and avoids routing inefficiency.


<!-- Production hub-and-spoke with destination filtering -->
<networkConnectors>
  <networkConnector name="spoke-to-hub"
    uri="static:(tcp://hub:61616)"
    duplex="true"
    dynamicOnly="true"
    networkTTL="2"
    decreaseNetworkConsumerPriority="true"
    suppressDuplicateQueueSubscriptions="true">

    <!-- Only bridge these specific destinations -->
    <dynamicallyIncludedDestinations>
      <queue physicalName="orders.>"/>
      <queue physicalName="payments.>"/>
      <topic physicalName="events.>"/>
    </dynamicallyIncludedDestinations>

    <!-- Never bridge these, even if consumers exist -->
    <excludedDestinations>
      <queue physicalName="internal.>"/>
      <topic physicalName="ActiveMQ.Advisory.>"/>
    </excludedDestinations>

  </networkConnector>
</networkConnectors>

Note on wildcards: wildcards (> and *) can only be used in excludedDestinations and dynamicallyIncludedDestinations. They cannot be used in staticallyIncludedDestinations; static inclusions require exact destination names.
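
If you configure brokers through the Java API, the static/dynamic distinction shows up as two separate setters. A hedged sketch, assuming an embedded BrokerService and hypothetical destination names:

import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.command.ActiveMQQueue;
import org.apache.activemq.network.NetworkConnector;

public class FilteredConnector {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("spoke-1");

        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://hub:61616)");

        // Statically included: bridged whether or not a consumer exists.
        // Exact names only; no wildcards.
        nc.setStaticallyIncludedDestinations(Arrays.asList(
            (ActiveMQDestination) new ActiveMQQueue("orders.deadletter")));

        // Dynamically included: bridged on consumer demand. Wildcards allowed.
        nc.setDynamicallyIncludedDestinations(Arrays.asList(
            (ActiveMQDestination) new ActiveMQQueue("orders.>")));

        broker.start();
    }
}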

Note on advisory topics: excluding ActiveMQ.Advisory.> from bridging prevents advisory messages themselves from being forwarded across the network. This is generally desirable: you do not want advisory traffic from one region flooding brokers in another region. However, verify that your discovery mechanism does not depend on advisory forwarding before applying this exclusion.

Stuck Messages: The Most Common NoB Production Problem

Stuck messages are messages that exist in a broker’s persistent store but are never delivered to a consumer, even though a consumer for that destination exists somewhere in the network. They occur when consumer topology changes faster than the NoB’s routing state can track.

The canonical scenario:

  1. Consumer C1 connects to Broker B. Broker B publishes an advisory. Broker A learns that C1 exists on B and begins forwarding messages to B.
  2. C1 disconnects from Broker B and reconnects to Broker A.
  3. Messages already forwarded to Broker B sit there with no consumer. Broker A, where C1 is now connected, has no messages; they are all on Broker B.
  4. The advisory for C1 leaving Broker B may not propagate fast enough for Broker A to stop forwarding, or it may take time for Broker A to replay messages back from B.

The result: messages stuck on Broker B, consumer on Broker A, no delivery occurring.

Fix: replayWhenNoConsumers

The conditionalNetworkBridgeFilterFactory with replayWhenNoConsumers=true tells the network bridge to replay messages back to the originating broker when a destination has messages but no active consumers:


<!-- activemq.xml: apply on all brokers in the NoB -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Enable replay for all queues that may experience consumer migration -->
      <policyEntry queue=">" enableAudit="false">
        <conditionalNetworkBridgeFilterFactory
          replayWhenNoConsumers="true"/>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>

enableAudit="false" is required alongside replayWhenNoConsumers="true". The message audit tracks message IDs to prevent duplicate delivery. When replay is enabled, the audit must be disabled for the affected destination; otherwise, the replayed message is identified as a duplicate and dropped before it reaches the consumer.

Durable Subscribers and Stuck Messages

Durable topic subscribers in a NoB carry an additional stuck-message risk, one that virtual topics largely mitigate. If a durable subscriber connects to Broker B, a producer on Broker A forwards messages to B.

If the subscriber disconnects and reconnects to Broker A without fully unsubscribing, messages accumulated on Broker B during the disconnection are stuck. The subscriber on Broker A receives new messages, but never the backlog on Broker B.

The recommended pattern for durable pub/sub in a NoB is to use virtual topics instead of native durable subscribers. Virtual topics convert each subscription into a queue, and queues work naturally with replayWhenNoConsumers=true, recovering stuck messages automatically. Do not change the name of a network connector or the brokerName when using durable subscribers; ActiveMQ uses the combination of network name and broker name to build a unique but repeatable durable subscriber name, and renaming either orphans the existing subscription.
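
A minimal sketch of the virtual topic pattern, using ActiveMQ's out-of-the-box naming convention (producers publish to VirtualTopic.>, each subscriber group consumes from Consumer.<group>.VirtualTopic.>); the broker hostname and destination names are hypothetical:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicSketch {
    public static void main(String[] args) throws Exception {
        Connection conn =
            new ActiveMQConnectionFactory("tcp://broker-a:61616").createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Each subscriber group reads its own materialized queue. The queue
        // persists across disconnects and works with replayWhenNoConsumers.
        MessageConsumer billing = session.createConsumer(
            session.createQueue("Consumer.billing.VirtualTopic.orders"));
        MessageConsumer audit = session.createConsumer(
            session.createQueue("Consumer.audit.VirtualTopic.orders"));

        // The producer publishes to the virtual topic like any other topic.
        MessageProducer producer =
            session.createProducer(session.createTopic("VirtualTopic.orders"));
        producer.send(session.createTextMessage("order-1001"));

        // Both groups receive an independent copy.
        System.out.println(((TextMessage) billing.receive(5000)).getText());
        System.out.println(((TextMessage) audit.receive(5000)).getText());

        conn.close();
    }
}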

Production Configuration: A Complete Multi-Datacenter Example

This is a real-world hub-and-spoke NoB spanning two datacenters, each with a master/slave HA pair:


<!-- DC1: Spoke Broker (activemq.xml) -->
<!-- This broker runs with a master/slave pair behind it (shared KahaDB on SAN) -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="dc1-spoke"
        useJmx="true"
        advisorySupport="true">

  <networkConnectors>
    <!-- Use masterslave:() URI so the connector survives hub failover -->
    <networkConnector
      name="dc1-to-hub"
      uri="masterslave:(tcp://hub-primary:61616,tcp://hub-secondary:61616)"
      duplex="true"
      dynamicOnly="true"
      networkTTL="2"
      messageTTL="2"
      consumerTTL="2"
      decreaseNetworkConsumerPriority="true"
      suppressDuplicateQueueSubscriptions="true"
      conduitSubscriptions="false">

      <dynamicallyIncludedDestinations>
        <queue physicalName="orders.>"/>
        <queue physicalName="payments.>"/>
        <topic physicalName="events.system.>"/>
      </dynamicallyIncludedDestinations>

      <excludedDestinations>
        <queue physicalName="internal.>"/>
      </excludedDestinations>

    </networkConnector>
  </networkConnectors>

  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">" enableAudit="false">
          <conditionalNetworkBridgeFilterFactory
            replayWhenNoConsumers="true"/>
        </policyEntry>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <persistenceAdapter>
    <kahaDB directory="/mnt/dc1-san/kahadb" journalMaxFileLength="64mb"/>
  </persistenceAdapter>

  <transportConnectors>
    <transportConnector name="nio" uri="nio://0.0.0.0:61616?maximumConnections=2000"/>
  </transportConnectors>

</broker>

Three details in this configuration that most guides omit:

  • masterslave:(...) URI for the network connector: If the hub has a master/slave HA pair, the spoke's network connector must use the masterslave:// scheme rather than static://. A static:// URI pointing only at the primary means the spoke loses its network connection when the hub fails over to its slave, even though the slave is running and the message path could otherwise continue.
  • All three TTL parameters are set explicitly: Setting only networkTTL is common, but it implicitly sets both messageTTL and consumerTTL to the same value. Setting all three explicitly removes that ambiguity for anyone reading the configuration, and allows finer-grained control if message and subscription propagation depths ever need to differ.
  • conduitSubscriptions="false" for queue destinations: For the orders.> and payments.> queues, where multiple consumers may be running on different spokes simultaneously, conduit subscriptions must be disabled so that all consumers are visible to the hub and messages are load-balanced across them correctly.

NoB vs. HA: The Definitive Clarification

This confusion surfaces in every enterprise architecture conversation involving multiple ActiveMQ brokers. The table below makes the distinction unambiguous:

| Dimension | Network of Brokers | Master/Slave HA |
|---|---|---|
| Purpose | Scale and geographic routing | Message-level durability |
| Message ownership | One broker owns each message | Both master and slave own every message |
| Broker failure impact | Messages on the failed broker are inaccessible | Slave promotes; messages remain available |
| Client reconnection | Via Failover Transport to any broker | Via Failover Transport to the new master |
| Configuration | networkConnectors + store-and-forward | Shared KahaDB / JDBC / Artemis replication |
| Use together? | Yes; each NoB node should have its own HA pair | Yes; NoB provides scale, HA provides durability |

The correct enterprise architecture for a large, resilient deployment: multiple HA pairs (master+slave per datacenter or zone), connected into a Network of Brokers for cross-datacenter routing and load distribution. 

NoB routes messages to where consumers are; HA ensures those messages survive broker failure. Neither architecture alone is sufficient for an enterprise deployment that requires both scale and resilience.
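
On the client side, the Failover Transport mentioned in the table is a connection URI concern, not a broker one. A minimal sketch, assuming two spoke brokers at hypothetical hostnames:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // The failover:() list names the brokers this client may attach to;
        // randomize=true spreads clients across them.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
            "failover:(tcp://dc1-spoke:61616,tcp://dc2-spoke:61616)"
            + "?randomize=true&initialReconnectDelay=100");
        Connection conn = factory.createConnection();
        conn.start();

        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
            session.createConsumer(session.createQueue("orders.queue"));
        // If the current broker dies, the transport reconnects to the other
        // spoke; replayWhenNoConsumers recovers messages left behind.
        consumer.setMessageListener(msg -> System.out.println("got " + msg));

        System.in.read(); // keep the JVM alive
    }
}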

Monitoring a Network of Brokers

A NoB introduces monitoring challenges that don’t exist with a single broker. The key operational metrics span the entire network, not just individual brokers:

  • Network bridge connection status: is each connector actually established and healthy?
  • Per-bridge message forward rate: are messages flowing across each connector as expected?
  • Remote consumer count per bridge: how many consumers are visible to each source broker across each connector?
  • Stuck message detection: queues with depth > 0 and consumer count = 0, per broker
  • Advisory message rate: sudden spikes indicate consumer churn that may trigger stuck messages

In a distributed NoB, these metrics need to be collected and correlated across all broker nodes simultaneously. A broker that appears healthy in isolation may be silently accumulating messages that are not being forwarded because its TTL is wrong or its advisory subscription is broken.
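
One practical way to collect bridge status is over JMX. The sketch below assumes JMX is reachable on port 1099 and that your ActiveMQ version registers bridges under the object-name pattern shown, with attribute names from the NetworkBridgeView MBean; verify both against your install. It lists every active bridge with its forward counter, and an empty result on a broker that should have connectors is itself the alarm:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BridgeMonitor {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://dc1-spoke:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();

            // One MBean is registered per active network bridge.
            ObjectName pattern = new ObjectName("org.apache.activemq:type=Broker,"
                + "brokerName=*,connector=networkConnectors,"
                + "networkConnectorName=*,networkBridge=*");
            Set<ObjectName> bridges = mbs.queryNames(pattern, null);
            if (bridges.isEmpty()) {
                System.out.println("No bridges established - is the connector down?");
            }
            for (ObjectName bridge : bridges) {
                System.out.printf("%s -> %s forwarded=%s%n",
                    mbs.getAttribute(bridge, "LocalBrokerName"),
                    mbs.getAttribute(bridge, "RemoteBrokerName"),
                    mbs.getAttribute(bridge, "DequeueCounter"));
            }
        }
    }
}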

For performance considerations specific to the NoB topology, including network consumer prefetch and decreaseNetworkConsumerPriority throughput implications, see our post on ActiveMQ Performance Tuning: 10x Throughput.

The Network of Brokers Is Powerful When Configured Correctly

A well-configured Network of Brokers is one of the most flexible enterprise messaging architectures available. It scales horizontally, routes across geographic boundaries, and adapts dynamically to consumer presence without manual configuration changes.

A misconfigured NoB is one of the most frustrating troubleshooting experiences in enterprise messaging: messages that appear to vanish, consumers that receive nothing, and brokers that seem fine individually but fail to collaborate. The difference between the two outcomes is almost always in the details: TTL values, conduit subscriptions, advisory support, and replayWhenNoConsumers configuration.

MeshIQ’s enterprise support team has designed and remediated Network of Brokers configurations at every scale, from two-broker hub-and-spoke deployments to cross-continental meshes with dozens of nodes. If your NoB isn’t behaving as designed, we can diagnose it quickly.

Get expert help with your Apache ActiveMQ NoB configuration → Talk to an Expert

Frequently Asked Questions

Q1. What is the ActiveMQ Network of Brokers?

NoB is a store-and-forward topology where multiple broker instances forward messages between themselves on demand, based on consumer presence. It enables horizontal scale and geographic distribution. Unlike master/slave HA, each message is owned by exactly one broker; if that broker fails, the message waits until it recovers.

Q2. What is the difference between NoB and master/slave HA?

NoB is for scale and routing. Each message lives on one broker and is forwarded on demand. Master/slave HA is for message durability: the same message exists on both master and slave simultaneously. The correct production architecture combines both: NoB for routing scope, HA for per-node message resilience.

Q3. How do I configure a duplex networkConnector? 

Set duplex="true" on one broker's connector element only. Duplex creates a bidirectional bridge over one connection. Configuring duplex on both ends creates two separate duplex bridges, doubling advisory traffic and risking message duplication.

Q4. Why are messages getting stuck in my ActiveMQ Network of Brokers? 

The four most common causes are: consumer migration between brokers (leaving messages on the broker the consumer left), advisorySupport disabled on any broker, networkTTL too low for the hop count of your topology, and conduitSubscriptions=true hiding the true number of remote consumers. Fix consumer migration with replayWhenNoConsumers=true on the destination policy.

Q5. What does networkTTL control in ActiveMQ? 

TTL controls how many broker hops a message or consumer subscription can traverse. The default is 1, meaning messages can only cross one boundary. In a three-broker chain, set TTL=2. Set it to the number of hops in your longest message path; higher TTLs increase advisory traffic and risk routing inefficiency.
