How to externalize HTTP sessions on Infinispan

This article is a step-by-step guide to setting up a remote Infinispan cluster as a session store for HTTP sessions running on the WildFly application server. You can carry out equivalent steps to externalize JBoss EAP 7 sessions on Red Hat Data Grid.

Benefits of having external HTTP Sessions

You can use Infinispan as an external cache container for application data, such as HTTP sessions, in WildFly application servers. This setup has two main advantages:

  • Increased high availability: by storing HTTP sessions externally, your application data survives even if all your application server nodes crash or need a restart.
  • Increased scalability: you can scale the caching layer independently of the WildFly cluster (and vice versa). Also, if you scale up your WildFly cluster, no rehash process is needed to redistribute your session data.

To get started, you need two products available: a WildFly application server distribution and an Infinispan server distribution.

Let’s start with the Infinispan configuration.

Infinispan set up

If you are new to Infinispan, we recommend having a look at this article: Getting Started with Infinispan data grid – the right way

Firstly, we will create a user to allow remote connections from WildFly. Open a terminal and move to the bin folder of Infinispan:

$ ./cli.sh user create
Specify a username: admin
Set a password for the user: *****
Confirm the password for the user: *****

Now your server includes a user with the credentials “admin/admin”.

Finally, start the Infinispan server:

./server.sh

Configuring WildFly Remote Cache

Next, we will configure WildFly connection to the remote Infinispan server. Start by adding an outbound socket which points to the Infinispan server host and port:

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
  <!-- Other socket bindings here -->
        <outbound-socket-binding name="infinispan-server">
            <remote-destination host="localhost" port="11222"/>
        </outbound-socket-binding>

</socket-binding-group>
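If you prefer the WildFly management CLI (jboss-cli.sh) to editing the XML by hand, an equivalent command would be the following (assuming the standard-sockets socket binding group):

```
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=infinispan-server:add(host=localhost, port=11222)
```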

In the above example, we assume you are using Infinispan’s default port (11222).

Next, add a remote cache container configuration to the infinispan subsystem:

<subsystem xmlns="urn:jboss:domain:infinispan:13.0">

      <!-- Cache containers configuration here -->
    <remote-cache-container name="sessionCache" default-remote-cluster="infinispan-cluster" modules="org.wildfly.clustering.web.hotrod">

        <property name="infinispan.client.hotrod.auth_username">admin</property>
        <property name="infinispan.client.hotrod.auth_password">admin</property>
        <remote-clusters>
            <remote-cluster name="infinispan-cluster" socket-bindings="infinispan-server"/>
        </remote-clusters>
    </remote-cache-container>
</subsystem>
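The same remote cache container can also be created with the management CLI. A sketch of the equivalent batch follows; the exact resource paths may vary slightly between WildFly versions, so check them with tab completion in jboss-cli.sh:

```
batch
/subsystem=infinispan/remote-cache-container=sessionCache:add(default-remote-cluster=infinispan-cluster, modules=[org.wildfly.clustering.web.hotrod])
/subsystem=infinispan/remote-cache-container=sessionCache/remote-cluster=infinispan-cluster:add(socket-bindings=[infinispan-server])
/subsystem=infinispan/remote-cache-container=sessionCache/property=infinispan.client.hotrod.auth_username:add(value=admin)
/subsystem=infinispan/remote-cache-container=sessionCache/property=infinispan.client.hotrod.auth_password:add(value=admin)
run-batch
```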

As you can see, the above cache container relies on the org.wildfly.clustering.web.hotrod module to handle the connection over the Hot Rod protocol. Also, notice that we have included the username and password needed to connect to the remote cache.

Finally, we will configure the session settings. You can do this in multiple ways: for example, by configuring an invalidation cache in the Infinispan subsystem with the proper cache settings (this approach is discussed in the JBoss EAP 7 documentation: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/configuration_guide/configuring_high_availability#jdg_externalize_http_sessions ).

In this tutorial, we will show instead how to configure sessions through the newer distributable-web subsystem, which manages a set of HTTP session profiles that encapsulate the configuration of a distributable HTTP session manager.

In the following example configuration, we have added a hotrod-session-management named “mycache” as the default distributable web configuration:

 <subsystem xmlns="urn:jboss:domain:distributable-web:2.0" default-session-management="mycache" default-single-sign-on-management="default">
            <infinispan-session-management name="default" cache-container="web" granularity="SESSION">
                <primary-owner-affinity/>
            </infinispan-session-management>
            <hotrod-session-management name="mycache" remote-cache-container="sessionCache" granularity="SESSION">
                 <local-affinity/>
            </hotrod-session-management>
            <infinispan-single-sign-on-management name="default" cache-container="web" cache="sso"/>
            <infinispan-routing cache-container="web" cache="routing"/>
 </subsystem>

Let’s have a look at the hotrod-session-management configuration:

  • We refer to the sessionCache which we have previously defined in the infinispan subsystem.
  • We set the session granularity to SESSION, which stores all session attributes within a single cache entry. This is generally more expensive than ATTRIBUTE granularity, yet it preserves any cross-attribute object references.
  • Finally, we have set the affinity to local. Therefore, web requests will have an affinity for the server that last handled a given request. This option corresponds to traditional sticky session behavior.
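The distributable-web settings above can also be applied through the management CLI. A sketch of the equivalent commands (verify the resource paths against your WildFly version with tab completion):

```
batch
/subsystem=distributable-web/hotrod-session-management=mycache:add(remote-cache-container=sessionCache, granularity=SESSION)
/subsystem=distributable-web/hotrod-session-management=mycache/affinity=local:add()
/subsystem=distributable-web:write-attribute(name=default-session-management, value=mycache)
run-batch
```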

Testing with an application

In order to test our WildFly – Infinispan connection, any web application will do. You just need to make sure that the application is cluster-aware, i.e. its web.xml includes a distributable element:

<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd"
         version="4.0">
    <distributable/>
</web-app>

Next, deploy an example application (say helloworld.war) and start using it.

You can check the status of Infinispan caches from the CLI or the Web Console (http://127.0.0.1:11222). There you can see that the helloworld.war application has produced a distributed cache with the same name:

[Screenshot: the helloworld distributed cache in the Infinispan Web Console]

Also, if you look inside the cache (provided that metrics are enabled), you can see the cache statistics:

[Screenshot: cache statistics for the HTTP session cache]

How to reference the Cache name at application level

In our example, we have added a hotrod-session-management named “mycache” as the default distributable web configuration. You can, however, use the Hot Rod cache for a single application only. To do that, include an XML descriptor named distributable-web.xml in the WEB-INF folder of the application:

<?xml version="1.0" encoding="UTF-8"?>
<distributable-web xmlns="urn:jboss:distributable-web:2.0">
    <session-management name="mycache"/>
</distributable-web>

Marshalling with ProtoStream

In our example, you might have noticed the following WARN in the server logs:

WARN  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 15) WFLYCLINF0033: Attribute 'marshaller' is configured to use a deprecated value: LEGACY; use one of the following values instead: [JBOSS, PROTOSTREAM]

In fact, if you don’t provide a value for the marshaller attribute, the deprecated LEGACY marshaller is used by default. To marshal the entries that Infinispan stores in the remote cache, the best practice is to use the PROTOSTREAM marshaller.
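Whichever marshaller you pick, the objects you store in the HTTP session must be marshallable; with the JBOSS (and legacy) marshaller this generally means implementing java.io.Serializable. The following is a minimal sketch with a hypothetical session attribute class (CartItem is not part of the article's example application) that verifies the class survives a serialization round trip:

```java
import java.io.*;

// Hypothetical session attribute: a simple shopping-cart entry.
// Objects placed in a distributable HTTP session must be serializable
// so WildFly can ship them to the remote cache.
class CartItem implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String sku;
    private final int quantity;

    CartItem(String sku, int quantity) {
        this.sku = sku;
        this.quantity = quantity;
    }

    String getSku() { return sku; }
    int getQuantity() { return quantity; }

    // Serialize the item to bytes and read it back, as a quick sanity
    // check that the class marshals cleanly.
    static CartItem roundTrip(CartItem item) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(item);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (CartItem) in.readObject();
        }
    }
}
```

A non-serializable attribute would fail at session replication time, so a unit-level check like this catches the problem early.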

PROTOSTREAM relies on Protocol Buffers (Protobuf) which is a lightweight binary media type for structured data. As a cache encoding, Protobuf gives you excellent performance as well as interoperability between client applications in different programming languages for both Hot Rod and REST endpoints.

To use PROTOSTREAM Marshaller in your Remote Cache Container, update your Remote Cache Container as follows:

<remote-cache-container name="sessionCache" default-remote-cluster="infinispan-cluster" marshaller="PROTOSTREAM" modules="org.wildfly.clustering.web.hotrod">
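The same change can be applied with the management CLI by writing the marshaller attribute on the existing remote cache container (a reload is required afterwards):

```
/subsystem=infinispan/remote-cache-container=sessionCache:write-attribute(name=marshaller, value=PROTOSTREAM)
reload
```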

Conclusion

In this article, we covered the steps to offload your HTTP sessions to an Infinispan server and the advantages of this approach.

Found the article helpful? If so, please follow us on our social channels.