Clustering WildFly Application Server

Clustering means that when your application is running on one server, it can continue its work on another server from exactly the same point it was at, without any manual failover. This requires the application's state to be replicated across the cluster members.

The following picture shows WildFly clustering building blocks:

As you can see, the backbone of WildFly clustering is the JGroups library, which provides a reliable multicast system used by cluster members to discover each other and communicate. Next comes Infinispan, a data grid platform used by the application server to keep the application data in sync across the cluster by means of a replicated, transactional, JSR-107 compatible cache. Infinispan is used both as a cache for the standard session mechanisms (HTTP session and SFSB session data) and as an advanced caching mechanism for JPA and Hibernate objects (the second-level cache).

To get the clustering feature, WildFly provides two profiles:

  • ha: clustering of EJB and Web applications
  • full-ha: clustering of EJB, Web applications and JMS applications

The simplest way to start a cluster in standalone mode is therefore to use a configuration that is cluster-aware, for example:

$ ./standalone.sh -c standalone-ha.xml

Now, from another console, start another instance of WildFly, using a port offset to avoid clashing with the ports opened by the first server:

$ ./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100

As JGroups is a subsystem that is activated on demand, the cluster won't be formed if no clusterable applications are deployed.
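For example (a minimal sketch), a web application declares itself clusterable by marking its web.xml with the <distributable/> element:

<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <!-- marks the application as clusterable: its HTTP sessions will be replicated -->
    <distributable/>
</web-app>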

Now suppose that an EJB application is deployed on both servers: you will see that the jgroups subsystem is activated, and so is the infinispan subsystem:

2019-01-11 16:49:55,815 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-5) ISPN000078: Starting JGroups channel ejb
2019-01-11 16:49:55,815 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000078: Starting JGroups channel ejb
2019-01-11 16:49:55,822 INFO  [org.infinispan.CLUSTER] (MSC service thread 1-5) ISPN000094: Received new cluster view for channel ejb:  [standalone-1/web|1] [standalone-1/ejb, standalone-2/ejb]

Internally, the communication within the cluster is managed by JGroups in its own subsystem:

<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
    <channels default="ee">
        <channel name="ee" stack="udp" cluster="ejb"/>
    </channels>
    <stacks>
        <stack name="udp">
            <transport type="UDP" socket-binding="jgroups-udp"/>
            <protocol type="PING"/>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="UFC"/>
            <protocol type="MFC"/>
            <protocol type="FRAG3"/>
        </stack>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <socket-protocol type="MPING" socket-binding="jgroups-mping"/>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG3"/>
        </stack>
    </stacks>
</subsystem>

As you can see, JGroups provides two stacks by default, named "udp" and "tcp", each with its own protocol stack. By default, JGroups is configured to use UDP and multicast to discover members and to communicate with and verify them. If you want to switch to a pure TCP clustering solution, we recommend reading this tutorial: How to configure WildFly and JBoss EAP to use TCPPING

While JGroups handles the communication between cluster members, the session data of the cluster itself is managed by the infinispan subsystem.

<subsystem xmlns="urn:jboss:domain:infinispan:7.0">
        <cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
            <transport lock-timeout="60000"/>
            <replicated-cache name="default">
                <transaction mode="BATCH"/>
            </replicated-cache>
        </cache-container>
        <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
            <transport lock-timeout="60000"/>
            <distributed-cache name="dist">
                <locking isolation="REPEATABLE_READ"/>
                <transaction mode="BATCH"/>
                <file-store/>
            </distributed-cache>
        </cache-container>
        <cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
            <transport lock-timeout="60000"/>
            <distributed-cache name="dist">
                <locking isolation="REPEATABLE_READ"/>
                <transaction mode="BATCH"/>
                <file-store/>
            </distributed-cache>
        </cache-container>
        <cache-container name="hibernate" module="org.infinispan.hibernate-cache">
            <transport lock-timeout="60000"/>
            <local-cache name="local-query">
                <object-memory size="10000"/>
                <expiration max-idle="100000"/>
            </local-cache>
            <invalidation-cache name="entity">
                <transaction mode="NON_XA"/>
                <object-memory size="10000"/>
                <expiration max-idle="100000"/>
            </invalidation-cache>
            <replicated-cache name="timestamps"/>
        </cache-container>
</subsystem>

As you can see, two cache containers are relevant for storing the session state of your applications: the “ejb” cache holds Stateful Session Beans, while the “web” cache holds HTTP Session data across the cluster.

Clustering in Domain Mode

The same concepts apply to domain mode, which means that to get clustering capabilities we need to choose between the ha and full-ha profiles.

As you already know, in domain mode all the available and configured profiles reside in a single file, domain.xml. Each profile is then referenced by one or more server groups. So, in order to make the "main-server-group" cluster-aware, switch to the "ha" or "full-ha" profile as in this example:

<server-groups>
    <server-group name="main-server-group" profile="ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="ha-sockets"/>
    </server-group>
    <server-group name="other-server-group" profile="full-ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="full-ha-sockets"/>
    </server-group>
</server-groups>

JBoss farming service

The JBoss farming service was available in early versions of the application server, such as JBoss 4 and JBoss 5. This tutorial discusses how to configure it and the alternatives available in newer versions of the application server.

What is JBoss Farming Service?

The farming service is a hot-deploy feature available in the "all" configuration of the application server.
Using the farming service, you can deploy an application archive (a WAR, EAR, or SAR, or even an exploded archive) to the all/farm/ directory of any cluster member, and the application will be automatically deployed across all nodes in the same cluster.
If another node joins the cluster, it will pull in all farm-deployed applications in the cluster and deploy them at start-up time. Much the same way, if you delete the application from an active JBoss cluster's farm/ directory, the application will be undeployed locally and then removed from all other clustered server nodes' farm/ directories.
It's even possible to fine-tune the farming service by modifying the attributes of the farm-service.xml configuration file. This file is located in the deploy/deploy.last directory:
<bean name="FarmProfileRepositoryClusteringHandler"
      class="org.jboss.profileservice.cluster.repository.DefaultRepositoryClusteringHandler">
  
  <property name="partition"><inject bean="HAPartition"/></property>
  <property name="profileDomain">default</property>
  <property name="profileServer">default</property>
  <property name="profileName">farm</property>
  <property name="immutable">false</property>
  <property name="lockTimeout">60000</property><!-- 1 minute -->
  <property name="methodCallTimeout">60000</property><!-- 1 minute -->
  <property name="synchronizationPolicy"><inject bean="FarmProfileSynchronizationPolicy"/></property>
</bean>
  • partition is a required attribute to inject the HAPartition service that the farm service uses for intra-cluster communication.
  • profile[Domain|Server|Name] are all used to identify the server profile.
  • immutable indicates whether or not this handler allows a node to push content changes to the cluster.
  • lockTimeout defines the number of milliseconds to wait for cluster-wide lock acquisition.
  • methodCallTimeout defines the number of milliseconds to wait for invocations on remote cluster nodes.
  • synchronizationPolicy decides how to handle content additions, reincarnations, updates, or removals from nodes attempting to join the cluster or from cluster merges. The policy is consulted on the “authoritative” node, i.e. the master node for the service on the cluster. Reincarnation refers to the phenomenon where a newly started node may contain an application in its farm/ directory that was previously removed by the farming service but might still exist on the starting node if it was not running when the removal took place.

How to replace the farming service in WildFly

If you are running WildFly or JBoss EAP 6/7, the farming service is not available anymore. The recommended way to distribute applications to your cluster from a single distribution point is to switch to domain mode and use the management instruments.

This way, you will be able to deploy/undeploy your applications to a server group in much the same way you used to do with the farming service, although you will be using management instruments such as the CLI and the web console.
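For example, from a CLI connected to the domain controller, you can push a deployment to every server of a group (a sketch: the archive path and the group names are placeholders):

# deploy the archive to all servers of the main-server-group
[domain@localhost:9990 /] deploy /path/to/MyWebApp.war --server-groups=main-server-group

# undeploy it from every server group it was assigned to
[domain@localhost:9990 /] undeploy MyWebApp.war --all-relevant-server-groups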

Read more here:

How to deploy applications on WildFly

If your applications will be running on the Cloud, we suggest checking this tutorial

Building and deploying a Jakarta EE application on OpenShift

Clustering with JBoss mod_cluster

mod_cluster is an HTTP-based load balancer which greatly simplifies the setup of an HTTP cluster. It ships as a set of modules that need to be installed on the httpd server and a Service Archive Library (.sar) that needs to be deployed on WildFly.
WildFly is also able to handle front-end requests with the load-balancer profile. Check this tutorial to learn more: Configuring High Availability with WildFly
Compared with the older mod_jk, mod_cluster has the great advantage of accepting a dynamic configuration of httpd workers. This is achieved through an advertise mechanism by which httpd workers communicate lifecycle events (like startup or shutdown), thus enabling dynamic configuration of nodes.

The following picture describes a high-level view of the communication between the httpd server and a set of JBoss nodes:

As you can see, mod_cluster issues requests to mod_proxy_ajp, which forwards the AJP request to the target node of the cluster. The cluster of JBoss nodes communicates the cluster view back to mod_cluster.
Note, however, that AJP is optional: unlike mod_jk, mod_cluster can also forward connections to the application server nodes using HTTP or HTTPS. Refer to the documentation for additional information.

Installing mod_cluster

The installation of mod_cluster is slightly different depending on the release of the application server. If you are running JBoss AS 5.1.0 you need to execute all the steps listed in this tutorial. If you are running JBoss AS 6 or 7 you just need to execute steps 2 (Install Binaries) and 4 (Update Apache configuration).
First, go to the Download page and download the following items:

1) Download the Java Bundle package, which contains the Service Archive (.sar) to be deployed on JBoss.
2) Download the binaries for your platform, which contain a built-in httpd server and its modules (including mod_cluster).

1. Installing Java Bundles

From the bundle tar.gz, copy the file mod-cluster.sar into the "deploy/cluster" folder of your "all" configuration:
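A sketch of this step from the shell, assuming JBOSS_HOME points to your JBoss installation and the SAR has been extracted into the current directory:

# copy the mod_cluster SAR into the deploy/cluster folder of the "all" configuration
cp mod-cluster.sar $JBOSS_HOME/server/all/deploy/cluster/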

2. Installing Mod_cluster Binaries

You have to copy the following modules into your Apache web server installation:

  • mod_proxy.so
  • mod_proxy_ajp.so
  • mod_slotmem.so
  • mod_manager.so
  • mod_proxy_cluster.so
  • mod_advertise.so
The modules can be extracted from the archive mod-cluster-1.0.3.GA-xxx-ssl.zip and need to be copied into the APACHE_HOME/modules directory.

3. Update JBoss AS Configuration

Now open the file server.xml, which contains the embedded web server configuration and is located in the deploy/jbossweb.sar/ folder, and add the following line at the same level as the other Listeners:

<Listener className="org.jboss.web.tomcat.service.deployers.MicrocontainerIntegrationLifecycleListener"
          delegateBeanName="ModClusterService"/>
In the same file, a few lines below, add the jvmRoute attribute to the Engine element:
<Engine name="jboss.web" defaultHost="localhost"  jvmRoute="node01"> 

The next file we need to modify is jboss-beans.xml, which is located in the deploy/jbossweb.sar/META-INF folder. There we need to state that the WebServer bean depends on the mod_cluster service.

Just add the following dependency to the WebServer bean (approximately line 16):

 <depends>ModClusterService</depends> 

4. Update Apache Web Server configuration

Provided that you have copied all the modules in the previous step, all you have to do is load the modules into httpd and configure a VirtualHost to handle requests to the JBoss cluster.

In order to enable mod_cluster, the following modules need to be enabled:

LoadModule proxy_module /modules/mod_proxy.so
LoadModule proxy_ajp_module /modules/mod_proxy_ajp.so
LoadModule slotmem_module /modules/mod_slotmem.so
LoadModule manager_module /modules/mod_manager.so
LoadModule proxy_cluster_module /modules/mod_proxy_cluster.so
LoadModule advertise_module /modules/mod_advertise.so

Then, add this configuration to the bottom of your Apache’s httpd.conf file:

Listen 192.168.1.0:6666

<VirtualHost 192.168.1.0:6666>
    <Directory />
        Order deny,allow
        Deny from all
        Allow from 192.168.1.
    </Directory>

    KeepAliveTimeout 60
    MaxKeepAliveRequests 0
    ManagerBalancerName mycluster
    AdvertiseFrequency 5
</VirtualHost>

Here we assume that your Apache server listens on the IP address 192.168.1.0 and accepts requests on port 6666.

Now the configuration is complete. Restart Apache and start a set of JBoss nodes:

JBoss AS 4/5/6

run.sh -c all -b 192.168.1.10

JBoss AS 7

standalone.sh -c standalone-ha.xml -b 192.168.1.10

Verify that both Apache and JBoss correctly serve web pages:

http://192.168.1.0:6666

should serve the Apache home page ("It works!").

Now deploy a sample web application to the JBoss cluster, for example the MyWebApp context:

http://192.168.1.0:6666/MyWebApp

As you can see, mod_cluster automatically registers a context for all your web applications, unless you have explicitly excluded it in the mod_cluster.sar/META-INF/mod-cluster-jboss-beans.xml file (look for the excludedContexts property).

Using nginx with WildFly – JBoss

In this tutorial we will learn how to configure nginx, the popular load-balancing solution, in front of a cluster of WildFly or JBoss servers.

nginx is a powerful open source load-balancing solution which offers many benefits: load balancing, server health checks, HTTP/2 support, active monitoring and more. Let's start with the installation. You can install nginx on a Fedora/RHEL distribution with:

$ yum install nginx 

Let’s see a sample configuration which can be used to front two WildFly/JBoss servers running on these HTTP ports:

  • JBoss1: 8080
  • JBoss2: 8180

Here is the /etc/nginx/nginx.conf configuration:

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        location /balancer {
           proxy_set_header  Host $host;
           proxy_set_header  X-Real-IP $remote_addr;
           proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass        http://jboss;
        }
    }

    upstream jboss {
       # Sticky session
       ip_hash;

       server localhost:8080;
       server localhost:8180;
    }
}

As you can see, the JBoss servers are enlisted in the upstream section named jboss. The proxy_pass directive links the upstream servers using the http protocol. Sticky sessions are enabled by the ip_hash directive, which calculates a hash over the IP address of the client that started the connection; subsequent connections from the same client will be routed to the same server, unless the node fails. Additionally, a few headers are set on the requests forwarded by nginx.

To check your balancing, start nginx:

$ systemctl start nginx.service

If you want to enable nginx to start at each boot, also execute:

$ systemctl enable nginx.service
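One quick way to verify the balancing is to expose the address of the backend that served each request in a response header ($upstream_addr is a standard nginx variable; the X-Upstream header name is an arbitrary choice). Add the directive inside the location block, then inspect the responses with curl:

# inside "location /balancer { ... }" in nginx.conf:
add_header X-Upstream $upstream_addr always;

# then, from a shell, repeat a few times and watch the X-Upstream header alternate:
curl -I http://localhost/balancer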

Other Balancing options

The default algorithm, if you don't specify one, is round robin:

   upstream jboss {
      server localhost:8080;
      server localhost:8180;
   }

Another option is to direct the traffic to the node with the least number of active connections:

   upstream jboss {
      least_conn;
      server localhost:8080;
      server localhost:8180;
   }

If you are using NGINX Plus, you can also use least_time, which selects the server with the lowest average latency and the least number of active connections:

   upstream jboss {
      least_time header;
      server localhost:8080;
      server localhost:8180;
   }

nginx provides a great number of additional features, so I strongly advise having a look at the site's resources: https://www.nginx.com/resources/admin-guide

One particularly interesting feature is health checking, which can be used to stop sending requests to a server that is no longer considered active. Health checking can be done in two ways:

Passive Health Checking

In this case, nginx considers a server unavailable when a timeout or failure occurs during balancing. Here is, for example, how to exclude a server after 5 failed attempts within a 30-second window (the server is then kept out of rotation for 30 seconds):

   upstream jboss {
      server localhost:8080 max_fails=5 fail_timeout=30s;
      server localhost:8180 max_fails=5 fail_timeout=30s;
   }

Active Health Checking

In this case, heartbeat requests are sent to the servers to check whether they are available (active health checks via the health_check directive are an NGINX Plus feature). Let's see an example:

http {
    upstream jboss {
        zone jboss 64k;

        server localhost:8080;
        server localhost:8180;
    }

    server {
        location / {
            proxy_pass http://jboss;
            health_check interval=30 fails=3 passes=3;
        }
    }
}

In this case, a health check will be performed every 30 seconds. A JBoss server will be considered unhealthy after 3 consecutive failed health checks, and healthy again after 3 successful ones.

References: https://www.nginx.com/resources/admin-guide/load-balancer/

JBoss AS 7 HA tutorial

Please note: This tutorial has been written for JBoss AS 7 / EAP 6
If you want to read the latest articles about JBoss / WildFly HA please check the following:

Configuring High Availability with WildFly
JBoss Clustering a Web Application
Clustering EJB 3 with JBoss AS

In this tutorial we will show how to achieve high availability with your Enterprise Java Beans, using a simple clustered stateful EJB and a remote client.

Clustering stateful session beans requires JBoss AS to manage the state information. The component in charge of managing state information is Infinispan, which by default uses replication to keep the session synchronized between cluster members each time the state of a bean changes.

In this tutorial we will set up and test a standalone cluster made up of two nodes, and we will deploy our stateful EJB on both nodes by simply dropping the artifact into the deployments folder. Here's a picture of our cluster definition:

Now let's code a simple SFSB with a counter instance variable that will be used to keep track of the session:

package com.sample.ejb;

import javax.annotation.PostConstruct;
import javax.ejb.Remote;
import javax.ejb.Stateful;
import org.jboss.ejb3.annotation.Clustered;


@Stateful
@Clustered
@Remote(SampleBeanRemote.class)
public class SampleBeanRemoteImpl implements SampleBeanRemote {
    int counter=0;

    @PostConstruct
    public void init() {
        System.out.println("EJB inited!");
    }

    @Override
    public int sum() {
        counter++;
        System.out.println("Value of the counter: " + counter);
        return counter;
    }
}

And here’s the corresponding SampleBeanRemote interface:

package com.sample.ejb;

public interface SampleBeanRemote {
    public int sum();

}
Important Notice

If you are using Eclipse and JBoss Tools (3.3), the JBoss AS 7 server library does not include by default the library where the @Clustered annotation is contained. Until this issue is solved, you have to include jboss-ejb3-ext-api-2.0.0.jar manually, as shown in the following picture:

On the other hand, if you are using Maven to build your project, you need to add the following dependency in order to compile your Clustered EJB:

<dependency>
    <groupId>org.jboss.ejb3</groupId>
    <artifactId>jboss-ejb3-ext-api</artifactId>
    <version>2.0.0</version>
</dependency>

Fine, now start your cluster with a cluster-aware configuration, as in the following example:

NodeA

standalone -c standalone-ha.xml -Djboss.node.name=nodeA                    

NodeB

standalone -c standalone-ha.xml -Djboss.socket.binding.port-offset=200 -Djboss.node.name=nodeB

And deploy your EJB application to both nodes of your cluster (simply copy it into the deployments folder, if you are using a standalone cluster).

Now it's time to code the client application, which will invoke the sum() method and display the value of the counter field.

package com.sample.client;

import javax.naming.*;

import com.sample.ejb.SampleBeanRemote;
import com.sample.ejb.SampleBeanRemoteImpl;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.*;


public class RemoteEJBClient {

    public static void main(String[] args) throws Exception {
        testRemoteEJB();

    }

    private static void testRemoteEJB() throws NamingException {

        final SampleBeanRemote ejb = lookupRemoteEJB();
        int s = ejb.sum();

        System.out.println("Value of Counter " +s);

        s = ejb.sum();

        System.out.println("Value of Counter " +s);

        System.out.println("Shut down the pinned JBoss AS 7 node and press ENTER");

        pressAKey();

        s = ejb.sum();

        System.out.println("Value of Counter " +s);
    }

    private static SampleBeanRemote lookupRemoteEJB() throws NamingException {
        final Hashtable jndiProperties = new Hashtable();
        jndiProperties.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");

        final Context context = new InitialContext(jndiProperties);


        final String appName = "";
        final String moduleName = "jboss-as-ejb-remote-app";
        final String distinctName = "";
        final String beanName = SampleBeanRemoteImpl.class.getSimpleName();

        final String viewClassName = SampleBeanRemote.class.getName();
        System.out.println("Looking EJB via JNDI ");
        System.out.println("ejb:" + appName + "/" + moduleName + "/" + distinctName + "/" + beanName + "!" + viewClassName);

        return (SampleBeanRemote) context.lookup("ejb:" + appName + "/" + moduleName + "/" + distinctName + "/" + beanName + "!" + viewClassName+"?stateful");


    }


    // Compatible with Eclipse Environment
    public static void pressAKey() {
        InputStreamReader istream = new InputStreamReader(System.in);
        BufferedReader bufRead = new BufferedReader(istream);
        try {
            bufRead.readLine();
            bufRead.close();
            istream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

As you can see, after the second invocation you need to press the ENTER key: this allows you to shut down the JBoss AS 7 instance to which our client is pinned, and to verify that the session is correctly moved to the other node.

The last thing we need to add is the jboss-ejb-client.properties file, which contains the list of nodes that will be used by the client application. (Since the second node is running with a port offset of 200, its remoting port will be 4647.)

remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false

remote.connections=node1,node2

remote.connection.node1.host=localhost
remote.connection.node1.port = 4447
remote.connection.node1.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node1.username=userejb
remote.connection.node1.password=userejb123

remote.connection.node2.host=localhost
remote.connection.node2.port = 4647
remote.connection.node2.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node2.username=userejb
remote.connection.node2.password=userejb123

Ready to run!
Launch the client application and verify on the console that the EJB correctly prints out the total:

Now kill this server instance (Ctrl-C in its window will suffice) and press the ENTER key in your client shell.

Et voilà! The clustered session has been recovered by the other server node. In the next tutorial we will check high availability using a clustered web application.

How to configure WildFly and JBoss EAP to use TCPPING

TCPPING is used with TCP as the transport, and uses a static list of the cluster members' addresses.

If you are using WildFly 14 or newer, the recommended way to do that is to use the <socket-discovery-protocol /> element, which points to a set of socket bindings (one for each cluster node). This decouples the cluster definition from the JGroups configuration:

<stack name="tcpping">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <socket-discovery-protocol type="TCPPING" socket-bindings="jgroups-host-a jgroups-host-b"/>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK"/>
    <protocol type="FD_ALL"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2"/>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
</stack>

Then in your socket-binding-group define the list of cluster members:

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">

    <!-- other configuration here -->
    <outbound-socket-binding name="jgroups-host-a">
        <remote-destination host="localhost" port="7600"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="jgroups-host-b">
        <remote-destination host="localhost" port="7750"/>
    </outbound-socket-binding>
</socket-binding-group>

You can change the default channel's stack to tcpping as follows:

/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpping)
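Since this changes the JGroups configuration, the server typically ends up in the reload-required state; apply the change with a reload from the same CLI session:

reload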

Legacy TCPPING configuration

If you are using an older version of WildFly or JBoss EAP 6, you need to use properties to specify your static cluster composition. In the following example we have configured tcpping as the default stack for a cluster made up of two nodes listening on the IP addresses 192.168.10.1 and 192.168.10.2, with the default TCP port 7600:

<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="tcpping">
	<stack name="tcpping">
		<transport type="TCP" socket-binding="jgroups-tcp"/>
		<protocol type="TCPPING">
			<property name="initial_hosts">192.168.10.1[7600],192.168.10.2[7600]</property>
			<property name="port_range">10</property>
			<property name="timeout">3000</property>
			<property name="num_initial_members">2</property>
		</protocol>
		<protocol type="MERGE2"/>
		<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
		<protocol type="FD"/>
		<protocol type="VERIFY_SUSPECT"/>
		<protocol type="BARRIER"/>
		<protocol type="pbcast.NAKACK"/>
		<protocol type="UNICAST2"/>
		<protocol type="pbcast.STABLE"/>
		<protocol type="pbcast.GMS"/>
		<protocol type="UFC"/>
		<protocol type="MFC"/>
		<protocol type="FRAG2"/>
		<protocol type="RSVP"/>
	</stack>
</subsystem>

initial_hosts is a comma-delimited list of hosts to be contacted for initial membership.

num_initial_members specifies the maximum number of responses to wait for, unless the timeout has expired. The default is 2.

TCPPING will try to contact both 192.168.10.1 and 192.168.10.2, starting at port 7600 and ending at port 7600 plus the port_range, in the above example ports 7600 – 7610.

Legacy TCPPING from the CLI

Here is a script which can be used to create the TCPPING stack directly from the CLI:

connect
batch
/subsystem="jgroups"/stack="tcpping":add()
/subsystem="jgroups"/stack="tcpping":add-protocol(type="TCPPING")
/subsystem="jgroups"/stack="tcpping"/protocol="TCPPING"/property="initial_hosts":add(value="192.168.10.1[7600],192.168.10.2[7600]")
/subsystem="jgroups"/stack="tcpping"/protocol="TCPPING"/property="port_range":add(value="10")
/subsystem="jgroups"/stack="tcpping"/protocol="TCPPING"/property="timeout":add(value="3000")
/subsystem="jgroups"/stack="tcpping"/protocol="TCPPING"/property="num_initial_members":add(value="2")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="MERGE2")
/subsystem="jgroups"/stack="tcpping":add-protocol(socket-binding="jgroups-tcp-fd",type="FD_SOCK")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="FD")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="VERIFY_SUSPECT")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="BARRIER")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="pbcast.NAKACK")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="UNICAST2")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="pbcast.STABLE")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="pbcast.GMS")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="UFC")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="MFC")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="FRAG2")
/subsystem="jgroups"/stack="tcpping":add-protocol(type="RSVP")
/subsystem="jgroups"/stack="tcpping"/transport="TRANSPORT":add(socket-binding="jgroups-tcp",type="TCP")
run-batch

A close look inside Infinispan Distribution mode

Would you like to have a closer look at how Infinispan distributes data in your JBoss EAP 6 / WildFly cluster? I'll tell you how to do it.

The default algorithm used by WildFly application server for clustering is based on the Infinispan distribution. This means that cache entries are copied to a fixed number of cluster nodes (2, by default) regardless of the cluster size. Distribution uses a consistent hashing algorithm to determine which nodes will store a given entry.
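The number of copies is controlled by the owners attribute of the distributed cache. Here is a sketch, based on the "web" cache container shown earlier, raising the number of owners to 3:

<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <!-- owners: how many cluster nodes hold a copy of each entry (default is 2) -->
    <distributed-cache name="dist" owners="3">
        <locking isolation="REPEATABLE_READ"/>
        <transaction mode="BATCH"/>
        <file-store/>
    </distributed-cache>
</cache-container>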

Caches, in turn, are available to the application server in the form of cache containers. Some of them, such as the web cache and the ejb cache, are already defined in the infinispan configuration, and they can be retrieved in your application as follows:

@Resource(lookup="java:jboss/infinispan/container/web")
private CacheContainer container; 

Now we will show how to create a simple web application that stores keys in the HTTP session and then displays, in a table, the servers where each key is located and the server elected as primary.

Let’s build a simple EJB which can be used for this purpose:

import java.util.List;

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Stateless;

import org.infinispan.manager.CacheContainer;
import org.infinispan.remoting.transport.Address;

@Stateless
public class CacheInspector {

    @Resource(lookup = "java:jboss/infinispan/container/web")
    private CacheContainer container;

    private org.infinispan.Cache<String, String> cache;

    @PostConstruct
    public void init() {
        this.cache = container.getCache();
    }

    // returns the list of servers (the owners) storing a copy of the key
    public String locateServers(String key) {
        List<Address> list = this.cache.getAdvancedCache().getDistributionManager().locate(key);
        return (list != null) ? list.toString() : null;
    }

    // returns the node elected as primary owner of the key
    public String locatePrimary(String key) {
        Address primary = this.cache.getAdvancedCache().getDistributionManager().getPrimaryLocation(key);
        return (primary != null) ? primary.toString() : null;
    }
}

The locateServers method retrieves the list of servers where a particular key is stored. The number of elements in the List<Address> corresponds to the owners setting in the infinispan configuration. To keep it simple, we return the list as a String of servers.

The locatePrimary method, on the other hand, returns the node that has been elected as primary for the key.

Our application is almost ready; all we need is some "glue" to reach the EJB:

@RequestScoped
@ManagedBean
public class Bean {

	private String key;
	private String value;
	List propertyList = new ArrayList();
 
    @EJB CacheInspector ejb;
 
    // Getter/setters here

	public void save() {

		FacesContext facesContext = FacesContext.getCurrentInstance();
		HttpSession session = (HttpSession) facesContext.getExternalContext()
				.getSession(false);

		session.setAttribute(key, value);
		
		propertyList = new ArrayList();
		Enumeration e = session.getAttributeNames();
		
		while (e.hasMoreElements()) {
			String attr = (String) e.nextElement();
			Item item = new Item();
			item.setKey(attr);
			item.setValue(session.getAttribute(attr).toString());
			String location = ejb.locateServers(attr);
			String primary = ejb.locatePrimary(attr);
			item.setServers(location);
			item.setPrimary(primary);
			propertyList.add(item);
		}

	}
}

And finally, a simple view to add some key/value pairs to the HttpSession, through the save method of the Bean class:

<html xmlns="http://www.w3.org/1999/xhtml"
	xmlns:ui="http://java.sun.com/jsf/facelets"
	xmlns:h="http://java.sun.com/jsf/html"
	xmlns:f="http://java.sun.com/jsf/core"
	xmlns:c="http://java.sun.com/jsp/jstl/core">
<h:head>
	<style type="text/css">
 
   </style>
</h:head>
<h:body>
	<h2>Cache distribution demo</h2>
	<h:form id="jsfexample">
		<h:panelGrid columns="2" styleClass="default">

			<h:outputText value="Enter key:" />
			<h:inputText value="#{bean.key}" />

			<h:outputText value="Enter value:" />
			<h:inputText value="#{bean.value}" />

			<h:commandButton actionListener="#{bean.save}"
				styleClass="buttons" value="Save key/value" />
		

			<h:messages />


		</h:panelGrid>


		<h:dataTable value="#{bean.propertyList}" var="item"
			styleClass="table" headerClass="table-header"
			rowClasses="table-odd-row,table-even-row">
			<h:column>
				<f:facet name="header">Key</f:facet>
				<h:outputText value="#{item.key}" />
			</h:column>
			
			<h:column>
				<f:facet name="header">Location</f:facet>
				<h:outputText value="#{item.servers}" />
			</h:column>
 			<h:column>
				<f:facet name="header">Primary</f:facet>
				<h:outputText value="#{item.primary}" />
			</h:column>

		</h:dataTable>
	</h:form>
</h:body>
</html>

Our application is ready, so let's start a JBoss EAP 6 / WildFly cluster in HA mode with a couple of nodes. Deploy the application and start adding some entries:

As you can see from the above picture, with only two servers and owners=2 the distribution acts like replication. If we add some more nodes to the cluster, you can see that each key is available on just two nodes (out of the four nodes of the cluster):

You can use the above code as a foundation for learning about the algorithms used by Infinispan to distribute cache entries, and how they can be influenced by setting server hints in the JGroups transport section. Have fun with distribution!
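A sketch of such server hints, set on the transport element of the jgroups subsystem (the site/rack/machine values are placeholders): when these attributes are defined, the consistent hashing algorithm tries to place the copies of an entry on different machines, racks and sites.

<stack name="udp">
    <!-- site/rack/machine are topology hints used by Infinispan's consistent hashing -->
    <transport type="UDP" socket-binding="jgroups-udp" site="dc1" rack="rack1" machine="host1"/>
    <!-- ... same protocol list as before ... -->
</stack>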

EJB to EJB communication in a cluster (JBoss AS 7 – WildFly)

This tutorial covers an advanced clustering scenario: an EJB client running on a JBoss AS 7/WildFly server that communicates with another EJB running on a cluster of servers.

The purpose of this tutorial is to demonstrate that the EJB client gets a view of the cluster topology on the first lookup, and that this view is updated dynamically as the cluster topology changes. Here is a picture which shows our scenario:

Configuration on the EJB server side

The EJB server side is a cluster of JBoss AS 7/WildFly nodes. All you have to do is create an application user, so that the calls coming from the remote client can be authenticated:

What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): b

Enter the details of the new user to add.
Using realm 'ApplicationRealm' as discovered from the existing property files.
Username : ejbserver
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
. . .
To represent the user add the following to the server-identities definition <secret value="UGFzc3dvcmQxIQ==" />

Configuration on the EJB client side

The configuration to be done on the client JBoss/WildFly server is more complex, as we have to define a set of outbound connections to the remote servers:

<subsystem xmlns="urn:jboss:domain:remoting:1.1">
	<connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/>
	<outbound-connections>
		<remote-outbound-connection name="remote-ejb-connection1" outbound-socket-binding-ref="remote-connection-1" username="ejbserver" security-realm="ejb-security-realm">
			<properties>
				<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
				<property name="SSL_ENABLED" value="false"/>
			</properties>
		</remote-outbound-connection>
		<remote-outbound-connection name="remote-ejb-connection2" outbound-socket-binding-ref="remote-connection-2" username="ejbserver" security-realm="ejb-security-realm">
			<properties>
				<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
				<property name="SSL_ENABLED" value="false"/>
			</properties>
		</remote-outbound-connection>				
	</outbound-connections>
</subsystem>

. . . . .
<outbound-socket-binding name="remote-connection-1">
    <remote-destination host="192.168.10.1" port="8080"/>
</outbound-socket-binding>
<outbound-socket-binding name="remote-connection-2">
    <remote-destination host="192.168.10.2" port="8080"/>
</outbound-socket-binding>

If you are using JBoss AS 7 / EAP 6 you have to reference the default remoting port 4447, adding the port offset if one is used.

Now complete your configuration by including in the security-realm section the secret we have just created on the server:

<security-realms>
   <security-realm name="ejb-security-realm">
       <server-identities>
           <secret value="UGFzc3dvcmQxIQ=="/>
       </server-identities>
    </security-realm>
	. . .
</security-realms>

Packaging your application on the client side

The EJB client application needs to include, in its META-INF folder, the jboss-ejb-client.xml file, which contains a reference to the outbound connections:

<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.2">
    <client-context>
        <ejb-receivers>
            <remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection1" />
            <remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection2" />
        </ejb-receivers>
		
        <clusters>
            <cluster name="ejb">
                <connection-creation-options>
                    <property name="org.xnio.Options.SSL_ENABLED" value="false" />
                    <property name="org.xnio.Options.SASL_POLICY_NOANONYMOUS" value="false" />
                </connection-creation-options>
            </cluster>
        </clusters>
    </client-context>
</jboss-ejb-client>

The other key element here is the cluster reference, which points to the "ejb" cluster view running on the remote servers. This allows the EJB client proxy to collect the cluster view. Now deploy the EJB client and server applications.

Mind that if you are using JBoss AS 7 / EAP 6 you need to enable the ejb cluster channel, by either adding the @Clustered annotation on your EJBs or including a jboss-ejb3.xml file:

<jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee"
               xmlns="http://java.sun.com/xml/ns/javaee"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:c="urn:clustering:1.0"
               xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd"
               version="3.1"
               impl-version="2.0">
    <enterprise-beans>
        <session>
            <ejb-name>ClusteredEJB</ejb-name>
            <ejb-class>com.sample.ClusteredEJB</ejb-class>
            
        </session>
    </enterprise-beans>
    <assembly-descriptor>
        <c:clustering>
            <ejb-name>ClusteredEJB</ejb-name>
            <c:clustered>true</c:clustered>
        </c:clustering>
    </assembly-descriptor>
</jboss:ejb-jar>

As you can see from the following snapshot, the EJB client is balancing the load between the two servers in a cluster:

Now let’s vary the composition of the cluster by shutting down server-two and adding a new node named server-three:

As you can see, now the balancing is adjusted dynamically by the proxy sitting on the client side:

Load Balancing EJBs in WildFly

One of the main advantages of using clustered session beans is that the load can be spread across several cluster members. By default this happens on a random basis. Today we will learn how to customize session bean load balancing in WildFly and JBoss EAP 6.

By default, calls from a client to a cluster of Stateless Session Beans (SLSB) are distributed among the SLSBs in a random-like fashion. Things are different if you are using Stateful EJBs (SFSB), as discussed in the next part of the article.

As you can see from the following image, before clustering enters the picture, when an EJB is invoked a deployment node selector is used to choose the EJB receiver for an eligible server instance that can handle the invocation.

The default node selector is a random node selector. If you want, you can provide a custom selector by implementing the org.jboss.ejb.client.DeploymentNodeSelector interface. Here is an example:

package com.sample;

import java.util.concurrent.atomic.AtomicInteger;

import org.jboss.ejb.client.DeploymentNodeSelector;

public class RoundRobinDeploymentNodeSelector implements DeploymentNodeSelector {

    // thread-safe counter: the selector can be invoked concurrently
    private final AtomicInteger serverIndex = new AtomicInteger(0);

    @Override
    public String selectNode(String[] eligibleNodes, String appName, String moduleName, String distinctName) {
        String selectedNode = eligibleNodes[serverIndex.getAndIncrement() % eligibleNodes.length];
        System.out.println("Selected node: " + selectedNode);
        return selectedNode;
    }
}

The DeploymentNodeSelector can be configured in the jboss-ejb-client.properties file as follows:

deployment.node.selector=com.sample.RoundRobinDeploymentNodeSelector

Include the following dependency in order to compile your Deployment Selector:

<dependency>
	<groupId>org.jboss</groupId>
	<artifactId>jboss-ejb-client</artifactId>
</dependency>

Clustered EJB Scenario

When running in a cluster, after the first invocation (performed with the DeploymentNodeSelector) a cluster view is returned from the server, and the client creates a ClusterContext with all cluster members and the ClusterNodeSelector. From then on, this context is used.

Besides this, in a cluster scenario, an important factor is the type of EJB.

In the case of stateless session beans, subsequent invocations of the EJBs will be balanced by the ClusterNodeSelector, since there is only a weak affinity with the server that completed the first call.

In the case of stateful session beans, there will be a strong affinity with the specific node where the communication started. This means that the ClusterNodeSelector will be bypassed until a new cluster view is returned. In other words, the client is pinned to the server holding the stateful EJB, as long as that EJB instance is available.

That being said, let's see how to code a cluster node selector. This is a Java class that implements the org.jboss.ejb.client.ClusterNodeSelector interface. JBoss EAP 6 and WildFly ship with a RandomClusterNodeSelector, which randomly picks a node in the cluster.

Using the RandomClusterNodeSelector could be a pitfall, as it is a non-predictable algorithm for choosing the node that will be invoked. To overcome this, a class implementing ClusterNodeSelector can be made available on the client application side. Here is an example, which uses a round-robin node selector:

package com.sample;

import java.util.concurrent.atomic.AtomicInteger;

import org.jboss.ejb.client.ClusterNodeSelector;

public class RoundRobinNodeSelector implements ClusterNodeSelector {

    private final AtomicInteger clusterNode = new AtomicInteger(0);

    @Override
    public String selectNode(String clusterName, String[] connectedNodes, String[] availableNodes) {
        if (availableNodes.length < 2) {
            return availableNodes[0];
        }
        return availableNodes[clusterNode.getAndIncrement() % availableNodes.length];
    }
}

Now let's look at the example in detail:

The selectNode method is invoked when an EJB needs to be reached in the cluster. It receives the current cluster name, while the String[] availableNodes array contains all registered nodes that are able to handle the current EJB invocation.

Finally, connectedNodes contains all the already connected nodes and is a subset of availableNodes.

In the above example, the RoundRobinNodeSelector class implements a round-robin algorithm by cycling over an index that is incremented on each call.

Please note that we use an AtomicInteger because we need a thread-safe, atomically incremented counter; this is achieved via its getAndIncrement method.

Now let’s see how to configure the selector in two possible scenarios:

Remote Java client

For remote Java clients you have to include in your jboss-ejb-client.properties a reference to the cluster and to your selector:

remote.clusters=ejb
remote.cluster.ejb.clusternode.selector=com.sample.RoundRobinNodeSelector

If the EJBClientContext is set programmatically, the selector is configured through the same properties:

Properties p = new Properties();
p.put(.......)  // other properties
p.put("remote.clusters", "ejb");
p.put("remote.cluster.ejb.clusternode.selector", RoundRobinNodeSelector.class.getName());

EJBClientConfiguration cc = new PropertiesBasedEJBClientConfiguration(p);
ContextSelector<EJBClientContext> selector = new ConfigBasedEJBClientContextSelector(cc);
EJBClientContext.setSelector(selector);

If the remote EJB client is part of a Java EE application (let's say a servlet), the outbound connections are configured in the jboss-ejb-client.xml descriptor. The selector needs to be added to the cluster configuration as follows:

<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:jboss:ejb-client:1.2 sample.xsd">
    <client-context>
        <clusters>
            <cluster name="ejb" cluster-node-selector="com.sample.RoundRobinNodeSelector" max-allowed-connected-nodes="25" connect-timeout="5000"/>
        </clusters>
    </client-context>
</jboss-ejb-client>

Please notice that we have set the max-allowed-connected-nodes attribute to override the default, which allows just 20 nodes.

Porting JBoss EAP 5 applications to WildFly

Last but not least, we will mention where load balancing was configured in earlier versions of the application server. This was done in jboss.xml, by means of the load-balance-policy element, as follows:

<jboss>
  <enterprise-beans>
    <session>
      <ejb-name>NonAnnotationStateful</ejb-name>
      <clustered>true</clustered>
      <cluster-config>
        <partition-name>FooPartition</partition-name>
        <load-balance-policy>
          org.jboss.ha.framework.interfaces.RandomRobin
        </load-balance-policy>
      </cluster-config>
    </session>
  </enterprise-beans>
</jboss>

Therefore, if you have coded your own load-balancing policy, you can move it into a Deployment Node Selector or a Cluster Node Selector, as described in this article.

Dispatching commands on WildFly cluster

This tutorial has been updated, thanks to the suggestions received on developer.jboss.org. We thank our readers for reporting any issue found in our tutorials.

This is the second article about the WildFly clustering API. You can read the first tutorial here: Monitoring a cluster using WildFly API. In this article we will learn how to execute commands on one or all nodes of the cluster using WildFly's dispatcher API.

The Dispatcher is a standard Java pattern in which the execution of commands is delegated to a component named, in fact, the dispatcher. In WildFly terms the dispatcher is org.wildfly.clustering.dispatcher.CommandDispatcher.

A CommandDispatcher is created through the org.wildfly.clustering.dispatcher.CommandDispatcherFactory.

The advantage of using this API is that you can direct the execution of commands to all nodes of a cluster or to a single node of the cluster.

Let's see it in practice. First you need a CommandDispatcherFactory, which is in charge of creating a CommandDispatcher, and a bean that is triggered when the application is deployed and makes the CommandDispatcher available in the server, acting as glue between the application and the CommandDispatcher.

Let's look first at our factory bean:

package com.sample;

import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import org.wildfly.clustering.dispatcher.CommandDispatcher;
import org.wildfly.clustering.dispatcher.CommandDispatcherFactory;
import org.wildfly.clustering.group.Group;

@Singleton
@Startup
public class CommandDispatcherFactoryBean implements CommandDispatcherFactory {

    @Resource(lookup = "java:jboss/clustering/dispatcher/web")
    private CommandDispatcherFactory factory;

    @Override
    public <C> CommandDispatcher<C> createCommandDispatcher(Object service, C context) {
        return this.factory.createCommandDispatcher(service, context);
    }

    @Override
    public Group getGroup() {
        return this.factory.getGroup();
    }
}

As you can see, the above code is a candidate for a web application, since it uses the "web" cluster channel. Now the CommandDispatcherBean, also a singleton bean, is in charge of initializing the CommandDispatcher after deployment:

package com.sample;

import java.util.Map;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import org.wildfly.clustering.dispatcher.Command;
import org.wildfly.clustering.dispatcher.CommandDispatcher;
import org.wildfly.clustering.dispatcher.CommandDispatcherFactory;
import org.wildfly.clustering.dispatcher.CommandResponse;
import org.wildfly.clustering.group.Node;

@Singleton
@Startup
public class CommandDispatcherBean  {
    @EJB
    private CommandDispatcherFactory factory;
    private CommandDispatcher<Node> dispatcher;

    @PostConstruct
    public void init() {
        this.dispatcher = this.factory.createCommandDispatcher("CommandDispatcher", this.factory.getGroup().getLocalNode());
    }

    @PreDestroy
    public void destroy() {
        this.close();
    }

    public <R> CommandResponse<R> executeOnNode(Command<R, Node> command, Node node) throws Exception {
        return this.dispatcher.executeOnNode(command, node);
    }
   
    public <R> Map<Node, CommandResponse<R>> executeOnCluster(Command<R, Node> command, Node... excludedNodes) throws Exception  {
        return this.dispatcher.executeOnCluster(command, excludedNodes);
    }

    public void close() {
        this.dispatcher.close();
    }
}

To complete our example, we will implement an org.wildfly.clustering.dispatcher.Command with the following GarbageCollectorCommand class, which triggers a garbage collection on the JVM:

package com.sample;

import org.wildfly.clustering.dispatcher.Command;
import org.wildfly.clustering.group.Node;

public class GarbageCollectorCommand implements Command<String, Node> {

    @Override
    public String execute(Node node) {
        
       System.gc();           
       return node.getName();
    }
 
}

Running the Command on a Node

The simplest case is the dispatcher's executeOnNode method, which directs the execution to a single node of the cluster. You can imagine several scenarios for it, such as CPU-intensive operations that should be executed on the cluster node with the best available hardware. The following servlet executes the command on the first node registered in the cluster:

package com.sample;

import java.io.IOException;
import java.io.PrintWriter;

import java.util.logging.*;

import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

import org.wildfly.clustering.dispatcher.Command;
import org.wildfly.clustering.dispatcher.CommandDispatcherFactory;
import org.wildfly.clustering.dispatcher.CommandResponse;
import org.wildfly.clustering.group.Group;
import org.wildfly.clustering.group.Node;

 
@WebServlet(name = "TestCommandNode", urlPatterns = {"/TestCommandNode"})
public class TestCommandNode extends HttpServlet {

    @Resource(lookup = "java:jboss/clustering/group/web")
    private Group channelGroup;

    @Resource(lookup = "java:jboss/clustering/dispatcher/web")
    private CommandDispatcherFactory factory;

    private final Command<String, Node> command = new GarbageCollectorCommand();

    @EJB
    CommandDispatcherBean ejb;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        try {
            // Execute command on the first Node
            Node node = channelGroup.getNodes().get(0);

            CommandResponse<String> cr = ejb.executeOnNode(command, node);
            out.println(cr.get() + " executed on " + node.getName());

        } catch (Exception ex) {
            Logger.getLogger(TestCommandNode.class.getName()).log(Level.SEVERE, null, ex);
        }

        out.close();
    }


    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }


    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

}

Running the Command on the whole cluster

Running the command on all cluster nodes is not much more complicated: you execute the executeOnCluster method, passing as arguments the command and the list of excluded nodes (in our case, none):

package com.sample;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

import org.wildfly.clustering.dispatcher.Command;

import org.wildfly.clustering.dispatcher.CommandDispatcherFactory;
import org.wildfly.clustering.dispatcher.CommandResponse;
import org.wildfly.clustering.group.Group;
import org.wildfly.clustering.group.Node;

 
@WebServlet(name = "TestCommandCluster", urlPatterns = {"/TestCommandCluster"})
public class TestCommandCluster extends HttpServlet {

    @Resource(lookup = "java:jboss/clustering/group/web")
    private Group channelGroup;

    @Resource(lookup = "java:jboss/clustering/dispatcher/web")
    private CommandDispatcherFactory factory;

    private final Command<String, Node> command = new GarbageCollectorCommand();
 
    @EJB
    CommandDispatcherBean ejb;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        try {
            List<Node> nodes = channelGroup.getNodes();
            Map<Node, CommandResponse<String>> map = ejb.executeOnCluster(command);

            for (Map.Entry<Node, CommandResponse<String>> entry : map.entrySet()) {
                out.println(entry.getValue().get() + " executed on " + entry.getKey().getName());
            }

        } catch (Exception ex) {
            Logger.getLogger(TestCommandCluster.class.getName()).log(Level.SEVERE, null, ex);
        }

        out.close();
    }

 
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

 
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }
 
}

Deploying the application

Before deploying the application, you need to activate the org.wildfly.clustering.api module dependency. You can do it through the jboss-deployment-structure.xml file as follows:

<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
    <deployment>
        <dependencies>
            <module name="org.wildfly.clustering.api" services="export"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>

References: https://developer.jboss.org/wiki/ClusteringChangesInWildFly8?_sscc=t  

The source code for the NetBeans project is available at: https://github.com/fmarchioni/mastertheboss/tree/master/cluster-dispatcher