Getting started with Infinispan Command Line Interface

Infinispan 10 features a brand new server that replaces the WildFly-based server with a smaller, leaner implementation. In this tutorial we will see how to use its Command Line Interface to connect to an Infinispan cluster.

First off, let’s check some of the highlights of Infinispan 10:

  • Reduced disk (50MB vs 170MB) and memory footprint (18MB vs 40MB at boot)
  • Simpler to configure, since it shares the configuration schema with embedded Infinispan, plus server-specific extensions
  • Single-port design: the Hot Rod, REST and management endpoints are now served through a single port (11222)
  • New REST-based API for administration
  • Security enhancements using WildFly Elytron
  • New CLI with data manipulation operations

Let’s cover the last point in this tutorial. Start by grabbing the latest stable version of Infinispan from https://infinispan.org/download/

Now unzip the server into two different locations on your drive, so we can set up a cluster of servers:

$ mkdir $HOME/node1
$ mkdir $HOME/node2

$ unzip infinispan-server-10.1.3.Final.zip -d $HOME/node1
$ unzip infinispan-server-10.1.3.Final.zip -d $HOME/node2

Now let’s check the Infinispan configuration, which is located in INFINISPAN_HOME/server/conf/infinispan.xml.

For the purpose of learning, let’s add some more caches to the default configuration. Since we unzipped two copies of the server, remember to apply the same change to the infinispan.xml of both node1 and node2, so that every cluster member defines the same caches.

We will therefore change the cache-container element from this:

   <cache-container name="default" statistics="true">
      <transport cluster="${infinispan.cluster.name}" stack="${infinispan.cluster.stack:tcp}" node-name="${infinispan.node.name:}"/>
      <metrics gauges="true" histograms="true"/>
   </cache-container>

To this:

   <cache-container name="default" default-cache="local" statistics="true">
      <transport cluster="${infinispan.cluster.name}" stack="${infinispan.cluster.stack:tcp}" node-name="${infinispan.node.name:}"/>
      <local-cache name="local"/>
      <invalidation-cache name="invalidation" mode="SYNC"/>
      <replicated-cache name="repl-sync" mode="SYNC"/>
      <distributed-cache name="dist-sync" mode="SYNC"/>
      <metrics gauges="true" histograms="true"/>
   </cache-container>

Cache containers declare one or more local or clustered caches that a Cache Manager controls. In our case we have added an invalidation cache, a replicated cache and a distributed cache.

Now let’s start both servers, using the default port for the first server and an offset of 100 for the second one:

$ cd $HOME/node1/infinispan-server-10.1.3.Final
$ ./bin/server.sh

$ cd $HOME/node2/infinispan-server-10.1.3.Final
$ ./bin/server.sh -Dinfinispan.socket.binding.port-offset=100

You should be able to see in the server console that a new cluster view has been created:

14:27:21,345 INFO  (main) [org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster
14:27:21,512 INFO  (main) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel cluster: [localhost-15456|1] (2) [localhost-15456, localhost-15666]
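
Since the Hot Rod, REST and management endpoints now share the single port 11222, you can also verify that a node is up with a plain REST call. The snippet below is only a sketch: it assumes the cache manager is exposed under the name "default" and that the REST endpoint accepts unauthenticated requests (if your security realm requires credentials, add them with curl's -u option):

$ curl http://127.0.0.1:11222/rest/v2/cache-managers/default/health

The call should return a JSON document describing the health of the cache manager and its caches.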

Now let’s launch the Infinispan CLI as follows:

$ ./bin/cli.sh

Now let’s connect to the first server:

[disconnected]> connect 127.0.0.1:11222

Once connected, we can check which caches are available. For example, the “repl-sync” cache:

[localhost-15456@cluster//containers/DefaultCacheManager]> describe caches/repl-sync
{
  "replicated-cache" : {
    "mode" : "SYNC",
    "remote-timeout" : 17500,
    "state-transfer" : {
      "timeout" : 60000
    },
    "transaction" : {
      "mode" : "NONE"
    },
    "locking" : {
      "concurrency-level" : 1000,
      "acquire-timeout" : 15000,
      "striping" : false
    },
    "statistics" : true
  }
}

Now let’s navigate into “caches/repl-sync” and add an entry to it:

[localhost-15456@cluster//containers/DefaultCacheManager]> cd caches/repl-sync
[localhost-15456@cluster//containers/DefaultCacheManager/caches/repl-sync]> put key1 value1
[localhost-15456@cluster//containers/DefaultCacheManager/caches/repl-sync]> get key1
value1

As you can see, we have added a key named “key1” with a value of “value1”. To verify that our replicated cache did its job, let’s connect to the other node of the cluster and retrieve “key1” just as we did on the first server:

[localhost-15456@cluster//containers/DefaultCacheManager/caches/repl-sync]> connect 127.0.0.1:11322

[localhost-15666@cluster//containers/DefaultCacheManager]> cd caches/repl-sync

[localhost-15666@cluster//containers/DefaultCacheManager/caches/repl-sync]> get key1

value1
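
The same entry can also be read through the REST endpoint of either node, since each node serves it on its single port (11222 and 11322 in our setup). The calls below are only a sketch, assuming the entry is stored as plain text and the REST endpoint accepts unauthenticated requests; adjust the headers or add credentials with -u otherwise:

$ curl http://127.0.0.1:11222/rest/v2/caches/repl-sync/key1
$ curl http://127.0.0.1:11322/rest/v2/caches/repl-sync/key1

Both calls should return value1.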

You can remove single cache entries using the remove command:

[localhost-56512@cluster//containers/DefaultCacheManager/caches/repl-sync]> remove --cache=repl-sync key1
[localhost-56512@cluster//containers/DefaultCacheManager/caches/repl-sync]> get key1
Not Found 
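
For completeness, single entries can also be deleted through the REST API; a rough equivalent of the remove command above, again assuming the endpoint accepts unauthenticated requests, would be:

$ curl -X DELETE http://127.0.0.1:11222/rest/v2/caches/repl-sync/key1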

On the other hand, if you want to clear all entries in a cache, then you can use the clearcache command:

[localhost-56512@cluster//containers/DefaultCacheManager/caches/repl-sync]> clearcache repl-sync

You can also create caches from scratch using the CLI, either from a cache template, such as org.infinispan.DIST_SYNC:

[localhost-56512@cluster//containers/DefaultCacheManager/caches/repl-sync]> create cache --template=org.infinispan.DIST_SYNC demo-cache
[localhost-56512@cluster//containers/DefaultCacheManager/caches/repl-sync]> cd ..
[localhost-56512@cluster//containers/DefaultCacheManager/caches]> cd demo-cache
[localhost-56512@cluster//containers/DefaultCacheManager/caches/demo-cache]> describe

{
  "distributed-cache" : {
    "mode" : "SYNC",
    "remote-timeout" : 17500,
    "state-transfer" : {
      "timeout" : 60000
    },
    "transaction" : {
      "mode" : "NONE"
    },
    "locking" : {
      "concurrency-level" : 1000,
      "acquire-timeout" : 15000,
      "striping" : false
    },
    "statistics" : true
  }
}

The other option is to use a configuration file (XML or JSON) for your cache as in this example:

[//containers/default]> create cache --file=demo_cache.xml demo-cache
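
The content of demo_cache.xml is not shown here. As a rough sketch, a minimal distributed cache definition along the lines of the container configuration we used earlier might look like the following (the exact wrapper elements accepted by the CLI may vary between versions, so check the documentation of your release):

$ cat > demo_cache.xml <<'EOF'
<infinispan>
   <cache-container>
      <distributed-cache name="demo-cache" mode="SYNC" owners="2"/>
   </cache-container>
</infinispan>
EOF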

To terminate the CLI session, just issue the quit command:

[localhost-56512@cluster//containers/DefaultCacheManager/caches/demo-cache]> quit
[ bin]$

Finally, if you want to shut down a single server or the whole cluster, you can use the following commands:

  • Use the shutdown server command to stop individual servers, for example:

    [//containers/default]> shutdown server server_hostname

  • Use the shutdown cluster command to stop all servers joined to the cluster, for example:

    [//containers/default]> shutdown cluster mycluster