Clustering Infinispan – a complete example

Infinispan can be configured to behave in different ways. These are called caching modes, and they determine the strategy used to manage Infinispan entries across the machines designated as cluster members.

Infinispan uses the JGroups library for network communication. JGroups is a toolkit for reliable group communication: it provides cluster node discovery, point-to-point and point-to-multipoint communication, failure detection, and data transfer between cluster nodes.

You can configure Infinispan to run either on a single local JVM or in a cluster. The typical real-world scenario is cluster mode, where all nodes act as a single cache, providing a large aggregate memory heap.

Within a cluster, the cache can be configured for full replication, where all new entries and changes are replicated to every grid participant; for invalidation, where changes simply invalidate stale copies on the other nodes; or for distribution mode, where instead of full replication you specify how many replicas of each entry the cluster keeps, which provides fault tolerance without every node storing everything.
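To make the difference between these modes concrete, here is a deliberately simplified sketch in plain Java (not the Infinispan API): each "node" is just a Map, and the three put strategies show what reaches the other nodes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of the three clustered modes (NOT the Infinispan API):
// each "node" is just a Map, and we show what a put on one node does to the others.
public class CacheModesSketch {

    static List<Map<String, String>> newCluster(int size) {
        List<Map<String, String>> nodes = new ArrayList<>();
        for (int i = 0; i < size; i++) nodes.add(new HashMap<>());
        return nodes;
    }

    // Replication: the write is copied to every node
    static void replicatedPut(List<Map<String, String>> nodes, String k, String v) {
        for (Map<String, String> node : nodes) node.put(k, v);
    }

    // Invalidation: the write stays local; the other nodes just drop their stale copy
    static void invalidatingPut(List<Map<String, String>> nodes, int origin, String k, String v) {
        nodes.get(origin).put(k, v);
        for (int i = 0; i < nodes.size(); i++)
            if (i != origin) nodes.get(i).remove(k);
    }

    // Distribution: only numOwners nodes (chosen here by a naive hash) store the entry
    static void distributedPut(List<Map<String, String>> nodes, int numOwners, String k, String v) {
        int first = Math.floorMod(k.hashCode(), nodes.size());
        for (int i = 0; i < numOwners; i++)
            nodes.get((first + i) % nodes.size()).put(k, v);
    }

    public static void main(String[] args) {
        List<Map<String, String>> cluster = newCluster(3);
        replicatedPut(cluster, "k1", "v1");
        System.out.println(cluster.stream().filter(n -> n.containsKey("k1")).count()); // 3 copies

        invalidatingPut(cluster, 0, "k1", "v2");
        System.out.println(cluster.stream().filter(n -> n.containsKey("k1")).count()); // 1 copy

        distributedPut(cluster, 2, "k2", "v2");
        System.out.println(cluster.stream().filter(n -> n.containsKey("k2")).count()); // 2 copies
    }
}
```

Real Infinispan of course does much more (consistent hashing, rebalancing, sync versus async transports), but the ownership pattern of each mode is the one shown here.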

Replication is the simplest clustered mode: one or more cache instances replicate their data to all cluster nodes, so in the end every node holds the same data.

In distribution mode, you define, via configuration, the number of replicas (owners) kept for each entry in the data grid. The distribution strategy is designed for larger in-memory data grid clusters and is the most scalable data grid topology: it improves the application's performance dramatically and outperforms replication mode on traditional networks.
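A back-of-the-envelope calculation shows why distribution scales while replication does not: with numOwners copies per entry, the usable capacity grows with the cluster size. The figures below are illustrative only (heap sizes in MB, per-node overhead ignored):

```java
// Rough capacity model: replication caps the data set at one node's heap,
// while distribution divides the aggregate heap by the number of owners.
public class CapacitySketch {

    // Every node holds the full data set, so one node's heap is the limit
    static long replicatedCapacity(int nodes, long heapPerNode) {
        return heapPerNode;
    }

    // Each entry is stored numOwners times across the whole grid
    static long distributedCapacity(int nodes, long heapPerNode, int numOwners) {
        return (long) nodes * heapPerNode / numOwners;
    }

    public static void main(String[] args) {
        // 10 nodes with 512 MB each:
        System.out.println(replicatedCapacity(10, 512));      // 512
        System.out.println(distributedCapacity(10, 512, 2));  // 2560
    }
}
```

So a ten-node grid with two owners per entry stores five times more data than the same grid in replication mode, at the cost of extra network hops for reads of non-local entries.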

Developing a Client-Server application that uses an Infinispan cluster

Now you need to start multiple instances of the Server using a clustered configuration. The simplest way to do that is to start Infinispan in Domain mode: Server Groups in Domain mode use the clustered profile by default, so a cluster will be formed:

    <server-group name="cluster" profile="clustered">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="clustered-sockets"/>
    </server-group>

Now start Infinispan Server:

$ ./

If you pay attention to the Server logs, you will see that a cluster has been formed and that the Hot Rod server starts on each node:

[Server:server-one] 14:10:08,202 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-5) ISPN000078: Starting JGroups channel clustered
[Server:server-one] 14:10:08,215 INFO  [org.infinispan.CLUSTER] (MSC service thread 1-5) ISPN000094: Received new cluster view for channel cluster: [master:server-two|1] (2) [master:server-two, master:server-one]
[Server:server-one] 14:10:08,275 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-5) ISPN000079: Channel clustered local address is master:server-one, physical addresses are []
[Server:server-two] 14:10:08,557 INFO  [] (MSC service thread 1-2) DGISPN0001: Started repl cache from clustered container
[Server:server-two] 14:10:08,597 INFO  [] (MSC service thread 1-6) DGISPN0001: Started default cache from clustered container
[Server:server-two] 14:10:08,607 INFO  [org.infinispan.server.endpoint] (MSC service thread 1-4) DGENDPT10000: HotRodServer starting
[Server:server-two] 14:10:08,618 INFO  [org.infinispan.server.endpoint] (MSC service thread 1-1) DGENDPT10000:  starting
[Server:server-two] 14:10:08,629 INFO  [org.infinispan.server.endpoint] (MSC service thread 1-4) DGENDPT10001: HotRodServer listening on
[Server:server-one] 14:10:10,882 INFO  [org.infinispan.server.endpoint] (MSC service thread 1-5) DGENDPT10000: HotRodServer starting
[Server:server-one] 14:10:10,901 INFO  [org.infinispan.server.endpoint] (MSC service thread 1-5) DGENDPT10001: HotRodServer listening on

Here is a view of the Infinispan Web Console, which shows the two nodes that make up the cluster:

[Figure: Infinispan Web Console showing the two cluster nodes]

Writing the Client Application

Now we will develop a server-side application that connects to the Hot Rod server to interact with the remote cache. This application will be deployed on the WildFly application server. Here is the Producer class, which attaches to one of the cluster nodes so that the client becomes aware of the cluster topology:

package com.mastertheboss.producer;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

@ApplicationScoped
public class ApplicationBean {

	@Produces
	public RemoteCacheManager defaultRemoteCacheManager() {
		// One known node is enough: the client will discover the rest of the topology
		ConfigurationBuilder cb = new ConfigurationBuilder();
		cb.addCluster("remote-cluster").addClusterNode("server-one", 11222);
		RemoteCacheManager rcm = new RemoteCacheManager(;
		return rcm;
	}
}

We will add an EJB class to act as a service, adding some data to the cache and providing a method for querying it:

package com.mastertheboss.ejb;

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import javax.ejb.Stateless;
import javax.inject.Inject;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

import com.mastertheboss.model.Person;

@Stateless
public class ApplicationEJB {

    @Inject
    RemoteCacheManager cacheManager;

    public void add() {
        RemoteCache<String, Object> cache = cacheManager.getCache("default");
        Person p = new Person("John", "Smith");
        // Store the entry under a random key, then dump the cache content
        cache.put(UUID.randomUUID().toString(), p);
        cache.entrySet().forEach(entry -> System.out.println(entry.getValue()));
    }

    public List<Person> queryCache() {
        RemoteCache<String, Object> cache = cacheManager.getCache("default");
        List<Person> list = new ArrayList<>();
        cache.entrySet().forEach(entry -> list.add((Person) entry.getValue()));
        return list;
    }
}

We can trigger the cache initialization with a startup class that calls the add method of the ApplicationEJB:

package com.mastertheboss.ejb;

import javax.annotation.PostConstruct;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class StartupBean {

  @EJB
  ApplicationEJB ejb;

  /* Add some data in the cache */
  @PostConstruct
  public void init() {
  }
}

For the sake of completeness, we will add the model class, named Person, that will be stored in the Cache:

package com.mastertheboss.model;


public class Person implements Serializable {

	String name;
	String surname;

	public Person(String name, String surname) { = name;
		this.surname = surname;

	public String getName() {
		return name;

	public void setName(String name) { = name;

	public String getSurname() {
		return surname;

	public void setSurname(String surname) {
		this.surname = surname;

	public String toString() {
		return "Person [name=" + name + ", surname=" + surname + "]";

Now we only need a front-end component, such as a Servlet, to test our application:

package com.mastertheboss.servlet;

import java.util.List;

import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.mastertheboss.ejb.ApplicationEJB;
import com.mastertheboss.model.Person;

@WebServlet(name = "print", urlPatterns = { "/print" })
public class PrintCacheServlet extends HttpServlet {

	@EJB ApplicationEJB ejb;

	protected void doGet(HttpServletRequest request,
			HttpServletResponse response) throws ServletException, IOException {
		PrintWriter out = response.getWriter();
		out.println("<h1>Printing cache content</h1>");
		List<Person> list = ejb.queryCache();
		for (Person p : list) {

	protected void doPost(HttpServletRequest request,
			HttpServletResponse response) throws ServletException, IOException {
		// Reads and writes are handled the same way here
		doGet(request, response);


To build and compile the project, we add the Java EE API and the Infinispan Hot Rod client as dependencies in the pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns=""
    <name>Quickstart: Infinispan-WildFly</name>
            <name>Apache License, Version 2.0</name>
        <!-- Java EE APIs, provided by WildFly at runtime -->
        <!-- Hot Rod client: align this version with your Infinispan Server -->
Now we will boot WildFly with a port offset, to avoid conflicts with the Infinispan Server:

$ ./ -Djboss.socket.binding.port-offset=200

Deploy the application on WildFly and verify, by invoking the Servlet, that the cache content is printed out:

$ curl http://localhost:8280/infinispan-wildfly/print
<h1>Printing cache content</h1>
Person [name=John , surname=Smith]

You can try connecting to Infinispan Server using the CLI script and verify that, by shutting down the coordinator node of the cluster, the Cache can still be accessed transparently:

$ ./ -c
[domain@localhost:9990 /] /host=master/server-config=server-two:stop
{
    "outcome" => "success",
    "result" => "STOPPING"
}

Now access the application again to check the cache content:

$ curl http://localhost:8280/infinispan-wildfly/print
<h1>Printing cache content</h1>
Person [name=John , surname=Smith]
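The Hot Rod client makes this possible because it tracks the cluster topology and fails over to the surviving members on its own. As a rough plain-Java illustration of that idea (this is not the actual client code), a failover loop looks like:

```java
import java.util.List;
import java.util.function.Function;

// Simplified illustration of client-side failover (NOT the actual Hot Rod client):
// try each known cluster member in turn until one of them answers.
public class FailoverSketch {

    public static <T> T withFailover(List<String> servers, Function<String, T> op) {
        RuntimeException last = null;
        for (String server : servers) {
            try {
                return op.apply(server); // first reachable node wins
            } catch (RuntimeException e) {
                last = e; // node down: fall through to the next member
            }
        }
        throw last != null ? last : new IllegalStateException("no servers configured");
    }

    public static void main(String[] args) {
        // Pretend server-one (the stopped coordinator) is unreachable:
        String result = withFailover(List.of("server-one", "server-two"),
                s -> {
                    if (s.equals("server-one")) throw new RuntimeException(s + " is down");
                    return "value fetched from " + s;
                });
        System.out.println(result); // value fetched from server-two
    }
}
```

The real client additionally refreshes its server list from topology updates pushed by the cluster, so nodes added later become failover candidates too.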

That’s all. In this tutorial we have learnt how to start an Infinispan cluster and how to access it remotely from a Java EE application.