In this tutorial I will discuss how to tune ActiveMQ, covering the core aspects of the broker, IO, and storage tuning.
1) Do you need persistence?
Persistent delivery is about 20 times slower than non-persistent delivery. If message persistence is not critical for your applications, or even just for a single destination, consider turning it off. If you don’t want persistence at all, you can disable it easily via the XML configuration, e.g.:
<broker persistent="false"> </broker>
This will make the broker treat all received messages as non-persistent. Another option is setting the NON_PERSISTENT delivery mode on your MessageProducer:
MessageProducer producer = session.createProducer(queue);
producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
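To put the snippet above in context, here is a minimal sketch of a complete sender using non-persistent delivery; the broker URL and queue name are just examples:

// uses javax.jms.* and org.apache.activemq.ActiveMQConnectionFactory
ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);   // messages will not survive a broker restart
producer.send(session.createTextMessage("hello"));
connection.close();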
2) Use vm:// transport whenever possible
The VM transport allows clients to connect to each other inside the same JVM without the burden of network communication. The connection used is not a socket connection but relies on direct method invocations, which enables a high performance embedded messaging system.
The first client to use the VM connection will start an embedded broker. Subsequent connections will attach to that same broker. Once all VM connections to the broker are closed, the embedded broker will automatically shut down.
For example, this URI uses a vm transport with persistence set to false:
vm://broker1?marshal=false&broker.persistent=false&async=false
Please notice the parameter async=false, which can be used to switch to synchronous mode. In many cases this can help as it reduces the amount of thread context switches.
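To give an idea of how this looks from a client, here is a sketch of a consumer using the VM transport; the broker name and queue are illustrative, and the first connection starts the embedded broker:

// uses javax.jms.* and org.apache.activemq.ActiveMQConnectionFactory
ConnectionFactory factory = new ActiveMQConnectionFactory("vm://broker1?marshal=false&broker.persistent=false&async=false");
Connection connection = factory.createConnection();   // starts the embedded broker on first use
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
Message message = consumer.receive(1000);   // wait up to one second for a message
connection.close();   // closing the last VM connection shuts the embedded broker down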
3) Optimize your protocol
If you are not using the vm:// transport, then you have to make the most of your network protocol. More in detail:
You can improve the network performance of TCP sockets by setting socketBufferSize, which controls the socket buffer size in bytes, and ioBufferSize. Example:
tcp://hostA:61616?socketBufferSize=131072
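The example above only sets socketBufferSize; a URI that also tunes ioBufferSize could look like the following (the 16 KB value is purely illustrative):
tcp://hostA:61616?socketBufferSize=131072&ioBufferSize=16384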
However, note that TCP buffer sizes should be tuned according to the bandwidth and latency of your network. You can estimate your optimal TCP buffer size with the following formula: buffer_size = bandwidth * RTT
Where bandwidth is measured in bytes per second and network round trip time (RTT) is in seconds. RTT can be easily measured using the ping utility.
If you are interested in low-level network details, here’s how you calculate this parameter:
Bandwidth can be measured with OS-level tools; for example, Solaris/Linux users can use the iperf utility:
iperf -s
Server listening on TCP port 5001
TCP window size: 60.0 KB (default)
[ 4] local 172.31.178.168 port 5001 connected with 172.16.7.4 port 2357
[ ID] Interval       Transfer     Bandwidth
[ 4]  0.0-10.1 sec   6.5 MBytes   45 Mbit/sec
And here’s the RTT, roughly calculated with the ping utility:
PING proxyserver (204.228.150.3) 56(84) bytes of data.
64 bytes from proxyserver (204.228.150.3): icmp_seq=1 ttl=63 time=30.0 ms
So by multiplying the two factors, with a little math, we estimate an optimal TCP buffer size of about 165 KB:
45 Mbit/sec * 30 ms = 45e6 bits/s * 30e-3 s = 1,350,000 bits ≈ 168,750 bytes ≈ 165 KB
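Once estimated, the value (expressed in bytes) can be plugged back into the connection URI; for example, with the ~165 KB estimate above:
tcp://hostA:61616?socketBufferSize=168750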
4) Configure the Prefetch limit (Consumer Performance)
In order to achieve high performance, ActiveMQ needs to stream messages to consumers as fast as possible, so that each consumer always has a buffer of messages, in RAM, ready to process. However, this aggressive pushing of messages carries the danger of flooding a consumer, since it is typically much faster to deliver messages to a consumer than for the consumer to actually process them.
So ActiveMQ uses a prefetch limit on how many messages can be streamed to a consumer at any point in time. Once the prefetch limit is reached, no more messages are dispatched to the consumer until the consumer starts sending back acknowledgements of messages.
To change the prefetch size for all consumer types you would use a connection URI similar to:
tcp://localhost:61616?jms.prefetchPolicy.all=50
To change the prefetch size for just queue consumer types you would use a connection URI similar to:
tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1
It can also be configured on a per consumer basis using Destination Options.
Queue queue = new ActiveMQQueue("TEST.QUEUE?consumer.prefetchSize=10");
MessageConsumer consumer = session.createConsumer(queue);
If you have just a single consumer attached to a queue, you can leave the prefetch limit at a fairly large value. But if you are using a group of consumers to distribute the workload, it is usually better to restrict the prefetch limit to a very small number—for example, 0 or 1. (ref: https://access.redhat.com/documentation/en-US/Fuse_ESB_Enterprise/7.1/html/ActiveMQ_Tuning_Guide/files/GenTuning-Consumer-Prefetch.html)
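As an alternative to the URI options above, the prefetch policy can also be configured programmatically on the connection factory; the sketch below assumes a local broker and uses illustrative values:

// uses org.apache.activemq.ActiveMQConnectionFactory and org.apache.activemq.ActiveMQPrefetchPolicy
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
prefetchPolicy.setQueuePrefetch(1);   // small prefetch for a pool of competing queue consumers
factory.setPrefetchPolicy(prefetchPolicy);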
5) Flow Control (Producer Performance)
By ‘flow control‘ we mean that if the broker detects that the memory limit for the destination, or the temp- or file-store limits for the broker, have been exceeded, then the flow of messages can be slowed down. The producer will either be blocked until resources are available or will receive a JMSException.
While it is generally a good idea to keep flow control enabled in a broker, there are some scenarios in which it is unsuitable. For example, a producer may dispatch messages that are consumed by multiple consumers (e.g. a topic), but one of the consumers could fail without the broker becoming aware of it. In this case, after flow control kicks in, the producer stops producing messages until the consumer failure is discovered. This can be undesirable because there are other active consumers interested in the messages.
In order to disable flow control, you have to set the producerFlowControl attribute to false on a policyEntry element as follows:
<broker>
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry topic="MYTOPIC.>" producerFlowControl="false"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
If producer flow control is turned off, then you have to be a little more careful about how you set up your system usage limits. When producer flow control is off, the broker will accept every message that comes in, even if the consumers cannot keep up. This can be used to absorb spikes of incoming messages to a destination.
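For reference, the broker resource limits that flow control reacts to are defined in the systemUsage section of the broker configuration; the limits below are purely illustrative and should be sized to your environment:

<broker>
  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <memoryUsage limit="512 mb"/>
      </memoryUsage>
      <storeUsage>
        <storeUsage limit="8 gb"/>
      </storeUsage>
      <tempUsage>
        <tempUsage limit="4 gb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>
</broker>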
6) Async sends
ActiveMQ supports sending messages to a broker in sync or async mode. The mode used has a huge impact on the latency of the send call, and since latency is typically a major factor in the throughput that can be achieved by a producer, using async sends can increase the performance of your system dramatically.
You can set this parameter in the brokerUrl as follows:
tcp://localhost:61616?jms.useAsyncSend=true
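The same option can also be enabled in code on the connection factory; a minimal sketch (the broker URL is illustrative):

// uses org.apache.activemq.ActiveMQConnectionFactory
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
factory.setUseAsyncSend(true);   // send() returns without waiting for a broker acknowledgement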
7) Virtual topics
Virtual Destinations are a very powerful feature for your messaging applications. Topics are great for broadcasting events, but when we need to consume messages that were sent while a client was offline, the only recourse is to use a durable topic subscription. Unfortunately, durable subscriptions have a number of limitations, not the least of which is that only one JMS connection can be actively subscribed to a logical topic subscription. This means that we can’t load balance messages and we can’t have fast failover if a subscriber goes down. Virtual Destinations solve these problems, since consumers consume from a physical queue for a logical topic subscription, allowing many consumers running on many machines and threads to balance the load.
In the following XML snippet, we configure our broker to turn all topics on the broker into virtual topics. We use the wildcard syntax here, which matches every topic name a client sends messages to:
<broker>
  <destinationInterceptors>
    <virtualDestinationInterceptor>
      <virtualDestinations>
        <virtualTopic name=">" prefix="VirtualTopicConsumers.*."/>
      </virtualDestinations>
    </virtualDestinationInterceptor>
  </destinationInterceptors>
</broker>
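With the interceptor above in place, a producer keeps publishing to the plain topic while each logical subscriber consumes from its own physical queue named after the configured prefix. The topic name "orders" and the consumer name "A" below are just examples:

// uses javax.jms.*; 'session' is an open JMS Session (see the earlier examples)
MessageProducer producer = session.createProducer(session.createTopic("orders"));
// consumer "A" reads the same messages from its dedicated queue; several connections
// can share this queue to load balance the work or fail over quickly
MessageConsumer consumerA = session.createConsumer(session.createQueue("VirtualTopicConsumers.A.orders"));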
8) Tune Default Storage
The default persistence mechanism used by ActiveMQ is the KahaDB store:
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
The parameter enableJournalDiskSyncs (default: true) causes the broker to perform a disk sync (ensuring that a message has been physically written to disk) before sending the acknowledgment back to a producer. You can obtain a substantial improvement in broker performance by disabling disk syncs (setting this property to false), but this reduces the reliability of the broker somewhat.
<kahaDB directory="${activemq.data}/kahadb" enableJournalDiskSyncs="false" />
The parameter indexCacheSize (default: 10000) specifies the size of the cache in units of pages (where one page is 4 KB by default). Generally, the cache should be as large as possible, to avoid swapping pages in and out of memory. Check the size of your metadata store file, db.data, to get some idea of how big the cache needs to be.
So with the default (10000) and each page being 4 KB, you are devoting 40,000 KB (about 40 MB) to the index cache. If the db.data file is much larger than this amount, then you should consider increasing this attribute accordingly:
<kahaDB directory="${activemq.data}/kahadb" indexCacheSize="100000" />
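Putting the two attributes together, a tuned (but less crash-safe) KahaDB configuration could look like the following sketch; the values shown are the examples used above, not recommendations:

<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
          enableJournalDiskSyncs="false"
          indexCacheSize="100000"/>
</persistenceAdapter>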
9) Consider switching to the new LevelDB persistence store
The LevelDB Store is a file-based persistence database that is local to the message broker using it. It has been optimized to provide even faster persistence than KahaDB. You can configure ActiveMQ to use LevelDB as its persistence adapter, as shown below:
<broker brokerName="broker">
  <persistenceAdapter>
    <levelDB directory="activemq-data"/>
  </persistenceAdapter>
</broker>
Some of the advantages of LevelDB include:
- Append-mostly disk access patterns improve performance on rotational disks.
- Fewer disk syncs than KahaDB
- It maintains fewer index entries per message than KahaDB, which means it has a higher persistent throughput.
- Fewer index lookups needed to load a message from disk into memory
- Uses Snappy compression to reduce the on disk size of index entries
- A send to a composite destination only stores the message on disk once.
- Pauseless data log file garbage collection cycles.
- Has a ‘Replicated’ variation where it can self-replicate to ‘slave’ brokers to ensure message-level HA.