Configuring WildFly as Load Balancer for a Cluster

This tutorial shows how to load balance a cluster of WildFly servers using a WildFly front-end server configured with the load-balancer profile.

Since WildFly 9, you can use an instance of the Application Server as a mod_cluster front-end for your back-end applications. This removes the need for a native Web server such as Apache (with the mod_cluster libraries installed on it) acting as load balancer for a cluster of WildFly servers.

Here is a sample view of a cluster of WildFly back-end servers fronted by a WildFly front-end server configured to route requests using the Mod_cluster Management Protocol (MCMP):

[Figure: a WildFly load balancer routing requests to a cluster of WildFly back-end servers via MCMP]

You can configure a WildFly Load Balancer using one of the following options:

  1. Configure the Load Balancer to use Multicast (default)
  2. Configure a static list of Proxies (Load Balancers)
  3. Configure Undertow as a static load balancer (reverse proxy)

The following requirements are common to all configurations:

  • You need to configure the cluster nodes with an ha / full-ha profile for the cluster to work.
  • Besides that, make sure that you deploy a cluster-aware application on top of WildFly, that is, an application whose web.xml includes the <distributable/> element, as in the sketch below.
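
A minimal web.xml sketch containing the <distributable/> element could look like this (the application name and schema version are just examples, adapt them to your Java EE / Jakarta EE level):

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         version="3.1">
    <display-name>myapp</display-name>
    <!-- Marks the Web application as distributable so that its HTTP sessions are replicated across the cluster -->
    <distributable/>
</web-app>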

Now let’s see these options in detail.

Configuring the Load Balancer using Multicast

This is the simplest option and it works out of the box if you are using WildFly 10 (or newer) or JBoss EAP 7. Simply make sure that the WildFly Load Balancer uses the “load-balancer” profile and that the cluster nodes use the “ha” or “full-ha” profile.
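
If in doubt, you can verify which profile a server group uses from the Domain Controller CLI; for example, with the server groups used later in this tutorial:

# The result should print the profile name assigned to each server group
/server-group=balancer-group:read-attribute(name=profile)
/server-group=application-group:read-attribute(name=profile)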

Here is an excerpt from domain.xml to set up Load Balancing in your WildFly Domain:

<server-groups>
    <server-group name="balancer-group" profile="load-balancer">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="load-balancer-sockets"/>
    </server-group>
    <server-group name="application-group" profile="ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="ha-sockets"/>
        <deployments>
            <deployment name="myapp.war" runtime-name="myapp.war"/>
        </deployments>
    </server-group>
</server-groups>

Notice the profile settings for your Domain. Also, an application named “myapp.war” has been deployed to the application-group.
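
As a side note, the same deployment could also be performed from the Domain Controller CLI; the WAR path below is just an example:

# Deploys the application to all servers belonging to application-group
deploy /tmp/myapp.war --server-groups=application-group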

This configuration is paired, in host.xml, with the following server definitions:

<servers>
    <server name="balancer" group="balancer-group" auto-start="true"/>
    <server name="server-one" group="application-group" auto-start="true">
        <jvm name="default"/>
        <socket-bindings port-offset="100"/>
    </server>
    <server name="server-two" group="application-group" auto-start="true">
        <jvm name="default"/>
        <socket-bindings port-offset="200"/>
    </server>
</servers>
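
Assuming a default installation, you can start the Domain Controller along with the servers defined in host.xml using the standard domain.sh script from the installation directory:

# On Windows use bin\domain.bat instead
$ ./bin/domain.sh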

The “balancer” server is running on the default port (8080), therefore you can access the back-end Web application through this port:

$ curl http://localhost:8080/myapp

Hello there!

Configuring a static list of Proxies

In the second configuration, we will not rely on multicast to discover the front-end proxies. Instead, we will provide a static list of proxies (load balancers). The following changes are required:

On the Load Balancer server, we will disable advertising by setting the advertise-frequency attribute to zero. Also, we will undefine the advertise-socket-binding attribute:

/profile=load-balancer/subsystem=undertow/configuration=filter/mod-cluster=load-balancer:write-attribute(name=advertise-frequency,value=0)

/profile=load-balancer/subsystem=undertow/configuration=filter/mod-cluster=load-balancer:undefine-attribute(name=advertise-socket-binding)

Then, on the cluster nodes, we need to add the list of front-end WildFly servers as proxies:

/socket-binding-group=ha-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=localhost, port=8090)

/profile=ha/subsystem=modcluster/proxy=default:write-attribute(name=proxies,value=[proxy1])

As you can see, we have set as remote destination the mod_cluster management port (mcmp-management), which is 8090 by default:

<socket-binding-group name="load-balancer-sockets" default-interface="public">
    <socket-binding name="http" port="${jboss.http.port:8080}"/>
    <socket-binding name="https" port="${jboss.https.port:8443}"/>
    <socket-binding name="mcmp-management" interface="private" port="${jboss.mcmp.port:8090}"/>
    <socket-binding name="modcluster" interface="private" multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
</socket-binding-group>
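
Since both changes modify domain profiles, the servers of the affected groups need to be reloaded before they take effect. Assuming the server groups from the first example, one way to do that from the Domain Controller CLI is:

/server-group=balancer-group:reload-servers
/server-group=application-group:reload-servers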

As in our first example, you can access the back-end Web application through the load balancer’s port:

$ curl http://localhost:8080/myapp

Hello there!
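
To double-check that the cluster nodes have registered against the static proxy, you can query the modcluster subsystem of one of the running back-end servers; the host name below (“master”) is just the default of older domain configurations, adjust it to your host.xml:

# Runtime query against one of the back-end servers; host and server names are examples
/host=master/server=server-one/subsystem=modcluster/proxy=default:read-proxies-info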

Configuring Undertow as a static load balancer (reverse proxy)

To configure Undertow as a static load balancer, you need to configure a proxy handler in its subsystem. This requires completing the following steps on the server that will act as the static load balancer:

  • Add a reverse proxy handler
  • Define the outbound socket bindings for each remote host
  • Add each remote host to the reverse proxy handler
  • Add the reverse proxy location

We have published a separate tutorial that discusses this configuration in more detail: Configuring WildFly as Reverse Proxy
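
As a quick sketch of these steps, the CLI commands look roughly as follows; all names (my-handler, the back-end host names, the /myapp path) are placeholders you need to adapt:

# 1. Add a reverse proxy handler
/subsystem=undertow/configuration=handler/reverse-proxy=my-handler:add
# 2. Define the outbound socket bindings for each remote host
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host1:add(host=backend1.example.com, port=8080)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host2:add(host=backend2.example.com, port=8080)
# 3. Add each remote host to the reverse proxy handler
/subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host1:add(outbound-socket-binding=remote-host1, scheme=http, instance-id=backend1, path=/myapp)
/subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host2:add(outbound-socket-binding=remote-host2, scheme=http, instance-id=backend2, path=/myapp)
# 4. Add the reverse proxy location
/subsystem=undertow/server=default-server/host=default-host/location=\/myapp:add(handler=my-handler)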

Load Balancer configuration for old WildFly Servers

Please note that if you are using a version prior to WildFly 10.1, there is no out-of-the-box Undertow filter that connects to mod_cluster via the advertising mechanism. You will need to add the Undertow filter manually, using mod_cluster’s advertise multicast address and port (224.0.1.105:23364). Here is the batch script we need to execute on the WildFly front-end CLI:

batch
/subsystem=undertow/configuration=filter/mod-cluster=modcluster:add(management-socket-binding=http,advertise-socket-binding=modcluster)
/subsystem=undertow/server=default-server/host=default-host/filter-ref=modcluster:add
# The following is needed only if you are not running an ha profile
/socket-binding-group=standard-sockets/socket-binding=modcluster:add(port=23364,multicast-address=224.0.1.105)
run-batch

Now reload your configuration for the changes to take effect.

[standalone@localhost:9990/] reload

As a result, the modcluster filter has been added to the default-host server. Congratulations! You have just configured a WildFly server to act as Load Balancer!
