Clustering Infinispan made simple

This tutorial shows how to configure and start an Infinispan cluster. Next, we will show how to connect to the cluster remotely from a Java application.

Infinispan uses the JGroups library for its network communication. JGroups is a toolkit for reliable group communication that provides cluster node discovery, point-to-point and point-to-multipoint communication, failure detection, and data transfer between cluster nodes.

You can configure Infinispan to run either in a single local JVM or as a cluster. The typical real-world scenario is cluster mode, where all nodes act as a single cache, providing a large amount of heap memory.

In a cluster, a cache can be configured for full replication or for distribution mode.

Replication is the simplest clustered mode: one or more cache instances replicate their data to all cluster nodes, so in the end every node holds the same data.
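
As a minimal sketch, a replicated cache can be declared in the server configuration like this, using the same encoding as the cache we will create later in this tutorial (the cache name "mycache" is just a placeholder):

<replicated-cache name="mycache">
    <encoding media-type="application/x-protostream"/>
</replicated-cache>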

In distribution mode, you can define, via configuration, the number of replicas that are kept for each entry in the data grid. The distribution strategy is designed for larger in-memory data grid clusters and is the most scalable data grid topology: because each entry is stored on only a subset of the nodes, it improves the application's performance dramatically and outperforms replication mode on traditional networks.
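
The number of replicas is controlled by the owners attribute of the cache definition. A minimal sketch, with each entry stored on two nodes (the cache name is again a placeholder):

<distributed-cache name="mycache" owners="2">
    <encoding media-type="application/x-protostream"/>
</distributed-cache>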

Setting up an Infinispan cluster

First, create two folders, one for each Infinispan cluster node:

$ mkdir nodeA

$ mkdir nodeB

Then, download the latest Infinispan Server distribution from https://infinispan.org/download/

Next, unzip the Infinispan distribution in both folders:

$ unzip infinispan-server-12.1.7.Final.zip -d nodeA

$ unzip infinispan-server-12.1.7.Final.zip -d nodeB
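
After unzipping, each folder should contain the server distribution, assuming the standard archive layout:

nodeA/infinispan-server-12.1.7.Final
nodeB/infinispan-server-12.1.7.Final

The server commands below are run from within each of these server directories.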

Then, start the Infinispan cluster using the UDP JGroups stack. From the server directory under nodeA, start the first node:

$ bin/server.sh --cluster-stack=udp

Next, from the server directory under nodeB, start the second node:

$ bin/server.sh --cluster-stack=udp -Dinfinispan.socket.binding.port-offset=100

As you can see, we have used a port offset for the second node, since we are starting both nodes on the same machine. With an offset of 100, the second node listens on port 11322 (11222 + 100), which we will use later when configuring the client.

In the server console log, you will see that a new cluster view has been installed, including both Infinispan servers:

2021-09-07 12:58:48,010 INFO  (ForkJoinPool.commonPool-worker-3) [org.infinispan.SERVER] ISPN080018: Started connector HotRod (internal)
2021-09-07 12:58:48,170 INFO  (main) [org.infinispan.SERVER] ISPN080018: Started connector REST (internal)
2021-09-07 12:58:48,394 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SINGLE_PORT (default) listening on 127.0.0.1:11222
2021-09-07 12:58:48,395 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'fedora-47059' listening on http://127.0.0.1:11222
2021-09-07 12:58:48,452 INFO  (main) [org.infinispan.SERVER] ISPN080001: Infinispan Server 12.1.7.Final started in 5229ms
2021-09-07 12:58:49,118 INFO  (jgroups-6,fedora-47059) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel cluster: [fedora-47059|1] (2) [fedora-47059, fedora-25875]
2021-09-07 12:58:49,123 INFO  (jgroups-6,fedora-47059) [org.infinispan.CLUSTER] ISPN100000: Node fedora-25875 joined the cluster

Finally, add a user to both servers so that we can access the Admin Console and reach the cache remotely. Run the following command from each node's server directory:

$ ./bin/cli.sh user create admin -p "password"

Creating a sample Infinispan application

To test our Infinispan cluster, we will create a basic application which adds an Entry to a remote cache. This application is derived from a sample in the Infinispan quickstarts (https://github.com/infinispan/infinispan-simple-tutorials/tree/main).

First, add a utility class to connect to the remote Infinispan cluster:

package infinispan.demoremote;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class InfinispanUtil {

   public static final String USER = "admin";
   public static final String PASSWORD = "password";
   public static final String CACHE_NAME = "test";

   public static final String CACHE_CONFIG =
         "<distributed-cache name=\"CACHE_NAME\">\n"
         + "    <encoding media-type=\"application/x-protostream\"/>\n"
         + "</distributed-cache>";

   public static ConfigurationBuilder connectionConfig() {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      // First cluster node
      builder.addServer().host("127.0.0.1").port(11222).security()
            .authentication()
            .username(USER)
            .password(PASSWORD);
      // Second cluster node (default port plus the offset of 100)
      builder.addServer().host("127.0.0.1").port(11322).security()
            .authentication()
            .username(USER)
            .password(PASSWORD);

      // BASIC intelligence: the client does not receive cluster topology updates
      builder.clientIntelligence(ClientIntelligence.BASIC);

      // Make sure the remote cache is available.
      // If the cache does not exist, it is created from CACHE_CONFIG
      builder.remoteCache(CACHE_NAME)
            .configuration(CACHE_CONFIG.replace("CACHE_NAME", CACHE_NAME));
      return builder;
   }

   public static RemoteCacheManager connect() {
      ConfigurationBuilder builder = connectionConfig();
      RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());

      // Clear the cache in case it already exists from a previous running tutorial
      cacheManager.getCache(CACHE_NAME).clear();

      // Return the connected cache manager
      return cacheManager;
   }
}

As you can see from the code, we use the ConfigurationBuilder fluent API to create a client configuration with the server details of both Infinispan nodes. This utility class also defines the name of the cache to be used, "test".

Next, let’s code a standalone Java class which uses the InfinispanUtil class to insert an Entry:

package infinispan.demoremote;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class InfinispanRemoteCache {

   public static void main(String[] args) {
      // Connect to the server
      RemoteCacheManager cacheManager = InfinispanUtil.connect();
      // Obtain the remote cache
      RemoteCache<String, String> cache = cacheManager.getCache(InfinispanUtil.CACHE_NAME);
      // Store a value
      cache.put("key", "value");
      // Retrieve the value and print it out
      System.out.printf("key = %s\n", cache.get("key"));
      // Stop the cache manager and release all resources
      cacheManager.stop();
   }

}

Building and running the application

To build and run the example application, it is sufficient to include the Hot Rod client library, which is used to connect remotely to the Infinispan server, and a Bill of Materials (BOM) to align the Infinispan dependency versions:

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.infinispan</groupId>
			<artifactId>infinispan-bom</artifactId>
			<version>${version.infinispan}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

<dependencies>

	<dependency>
		<groupId>org.infinispan</groupId>
		<artifactId>infinispan-client-hotrod</artifactId>
	</dependency>
</dependencies>
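
Note that the ${version.infinispan} property needs to be defined in your POM; for this tutorial it matches the server version:

<properties>
	<version.infinispan>12.1.7.Final</version.infinispan>
</properties>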

Running the application will print the following in the console:

Sep 07, 2021 12:59:24 PM org.infinispan.client.hotrod.RemoteCacheManager actualStart
INFO: ISPN004021: Infinispan version: Infinispan 'Taedonggang' 12.1.7.Final
key = value

Checking our Cluster in the Admin Console

Lastly, let’s have a look at the Web console of our cluster. Access one of the cluster nodes, for example the first one at: http://127.0.0.1:11222

Log in with the credentials "admin" / "password".

As you can see from the cluster-wide statistics, there is one entry in our cache.

If you select the cache "test", you will see that the Entry we have just added is there.
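
You can also verify the entry through the server's REST API. As a quick sketch, assuming the default REST v2 endpoint, the following request should return the stored value:

$ curl -u admin:password http://127.0.0.1:11222/rest/v2/caches/test/key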

The source code for this example application is available at: https://github.com/fmarchioni/mastertheboss/tree/master/infinispan/infinispan-remote

Conclusion

We have covered how to install and configure a basic Infinispan cluster using the UDP protocol stack. As a proof of concept, we created a sample application which connected to the cluster and added an Entry to a distributed cache.
