How to build and deploy MicroProfile applications on WildFly

WildFly provides support for the latest version of the MicroProfile API. This means you can combine the Jakarta EE API included in WildFly modules with the MicroProfile API to build advanced enterprise applications. In this tutorial we will see how to build applications that use the MicroProfile API.

The recommended way to build and deploy MicroProfile applications on WildFly is to import the following BOM (within the dependencyManagement section of your pom.xml):

<dependency>
    <groupId>org.wildfly.bom</groupId>
    <artifactId>wildfly-microprofile</artifactId>
    <version>${wildfly.version}</version>
    <scope>import</scope>
    <type>pom</type>
</dependency>

This keeps the versions of all your MicroProfile dependencies in sync, so you can declare them without specifying a version:

<dependency>
   <groupId>org.eclipse.microprofile.config</groupId>
   <artifactId>microprofile-config-api</artifactId>
</dependency>

On the other hand, if you prefer to align your MicroProfile dependencies with a specific MicroProfile specification rather than with the version included in WildFly, you can use the following BOM instead:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>3.3</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

If you want to see some examples of MicroProfile applications running on WildFly, we recommend the following tutorials: http://www.mastertheboss.com/category/eclipse/eclipse-microservices/

Using JWT Role Based Access Control with WildFly

WildFly 19 includes support for the MicroProfile JWT API. In this tutorial we will see how to set up and deploy a REST application which uses MicroProfile JWT for Role Based Access Control. The application will run on top of WildFly 19 and uses Keycloak as Identity and Access Management service.

Today, the most common solutions for handling security of RESTful microservices are based on solid standards such as OAuth2, OpenID Connect and JSON Web Token (JWT). In a nutshell, JWT (JSON Web Token) is a compact, URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted.

Triggering the “jwt” subsystem on WildFly

In WildFly, the JWT API is provided by means of this extension:

<extension module="org.wildfly.extension.microprofile.jwt-smallrye"/>

In the current release of WildFly there are no specific attributes you can configure in the microprofile-jwt-smallrye subsystem. However, the JWT settings required by the API can be defined in the MicroProfile configuration file (microprofile-config.properties). Let’s proceed in order. The first thing we need to do is mark a JAX-RS Application as requiring JWT RBAC.
This can be done by adding the @LoginConfig annotation to your existing @javax.ws.rs.core.Application subclass. So, here is the first element we will include in our application:

package com.mastertheboss.jwt;

import javax.annotation.security.DeclareRoles;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;
import org.eclipse.microprofile.auth.LoginConfig;

@ApplicationPath("/rest")
@LoginConfig(authMethod = "MP-JWT", realmName = "myrealm")
@DeclareRoles({"admin","user"})
public class JaxRsActivator extends Application {
    
}

Next, we will add a secure REST Endpoint, which contains the following methods:

import org.eclipse.microprofile.jwt.Claim;
import org.eclipse.microprofile.jwt.Claims;
import javax.annotation.security.DenyAll;
import javax.annotation.security.RolesAllowed;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.*;
import java.util.Set;

@Path("customers")
@ApplicationScoped
@Produces("application/json")
@Consumes("application/json")
@DenyAll
public class CustomerEndpoint {

    @Inject
    @Claim(standard = Claims.groups)
    private Set<String> groups;

    @Inject
    @Claim(standard = Claims.sub)
    private String subject;

    @GET
    @Path("goadmin")
    @RolesAllowed("admin")
    public String adminMethod() {
        return "You are logged with " +this.subject + ": " + this.groups.toString();
    }
    @GET
    @Path("gouser")
    @RolesAllowed("user")
    public String userMethod() {
        return "You are logged with " +this.subject + ": " + this.groups.toString();
    }
}

As you can see:

  • The adminMethod is allowed only to users belonging to the “admin” Role.
  • The userMethod is allowed only to users belonging to the “user” Role.

In order to return the username and the groups the user belongs to, we use the @Claim annotation, which injects individual claims from the verified token. The JWT specification lists several “registered claims” to achieve specific goals; in plain JWT all of them are optional. MicroProfile JWT, however, requires the following ones:

  • exp: This will be used by the MicroProfile JWT implementation to ensure old tokens are rejected.
  • sub: This will be the value returned from any getCallerPrincipal() calls made by the application.
  • groups: This will be the value used in any isCallerInRole() calls made by the application and any @RolesAllowed checks.
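
Besides injecting individual claims, the MicroProfile JWT API also lets you inject the whole verified token as an org.eclipse.microprofile.jwt.JsonWebToken. Here is a minimal sketch of an additional endpoint (the class name and path are illustrative and not part of the example project):

import javax.annotation.security.RolesAllowed;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("whoami")
@RequestScoped
public class WhoAmIEndpoint {

    // The verified token of the current request, injected by the MP JWT implementation
    @Inject
    JsonWebToken jwt;

    @GET
    @RolesAllowed({"admin", "user"})
    public String whoAmI() {
        // getName() returns the "upn" claim, falling back to "preferred_username" and then "sub"
        return jwt.getName() + " " + jwt.getGroups();
    }
}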

JWT settings required in your application

The next thing we need to do is configure the public key used to verify the signature of the JWT attached in the Authorization header. This is typically done in src/main/resources/META-INF/microprofile-config.properties this way:

mp.jwt.verify.publickey.location=/META-INF/keycloak-public-key.pem

Since MP JWT 1.1, the key may also be provided as a string in the mp.jwt.verify.publickey config property, or as a file location or URL in mp.jwt.verify.publickey.location.

If you are using Keycloak version 6.0.0 or newer, this is even simpler, as there is a built-in Client Scope which makes it really easy to issue tokens, and you don’t have to copy the public key into a PEM file. Instead, you point to Keycloak’s JWKS endpoint.
So, once you have defined your Keycloak realm with a Client configuration bound to it, you can just add a reference to it in the src/main/resources/META-INF/microprofile-config.properties file.

Here is how to do it in practice:

mp.jwt.verify.publickey.location=http://localhost:8180/auth/realms/myrealm/protocol/openid-connect/certs
mp.jwt.verify.issuer=http://localhost:8180/auth/realms/myrealm

In the above configuration, we have assumed that you are running a realm named “myrealm” and that Keycloak is running on localhost:8180. In the second part of this tutorial we will learn how to configure Keycloak so that it can work as authentication service for our JWT application.

Now complete the configuration by adding the following to your web.xml:

<web-app version="3.1"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">

    <context-param>
        <param-name>resteasy.role.based.security</param-name>
        <param-value>true</param-value>
     </context-param>

     <security-role>
         <role-name>admin</role-name>
         <role-name>user</role-name>
     </security-role>
</web-app>

So we have enabled Role Based Security in our application and declared the available Roles.

Compiling the project

In order to be able to compile applications using JWT, you need to include the following dependency in your application:

<dependency>
  <groupId>org.eclipse.microprofile.jwt</groupId>
  <artifactId>microprofile-jwt-auth-api</artifactId>
  <version>1.1</version>
</dependency>

In addition, your application needs the JAX-RS, CDI and Common Annotations APIs in order to build the above example:

<dependency>
  <groupId>jakarta.enterprise</groupId>
  <artifactId>jakarta.enterprise.cdi-api</artifactId>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.jboss.spec.javax.ws.rs</groupId>
  <artifactId>jboss-jaxrs-api_2.1_spec</artifactId>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.jboss.spec.javax.annotation</groupId>
  <artifactId>jboss-annotations-api_1.3_spec</artifactId>
  <scope>provided</scope>
</dependency>

Finally, we will import the jakartaee8-with-tools BOM to set the versions for these dependencies:

<dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.wildfly.bom</groupId>
        <artifactId>wildfly-jakartaee8-with-tools</artifactId>
        <version>${version.server.bom}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
</dependencyManagement>

Setting up a Keycloak Realm

In order to run our example you can import the following Realm: https://github.com/fmarchioni/mastertheboss/blob/master/micro-services/mp-jwt-demo/myrealm.json
This can be done in one step by launching Keycloak’s Docker image with the KEYCLOAK_IMPORT option to import our sample realm.
So let’s copy the file myrealm.json into a folder:

$ cp myrealm.json /tmp

Next start up Keycloak as follows:

docker run --rm  \
   --name keycloak \
   -e KEYCLOAK_USER=admin \
   -e KEYCLOAK_PASSWORD=admin \
   -e KEYCLOAK_IMPORT=/tmp/myrealm.json  -v /tmp/myrealm.json:/tmp/myrealm.json \
   -p 8180:8180 \
   -it quay.io/keycloak/keycloak:7.0.1 \
   -b 0.0.0.0 \
   -Djboss.http.port=8180 \
   -Dkeycloak.profile.feature.upload_scripts=enabled  

Now log into the Keycloak console (http://localhost:8180) with the credentials admin/admin. The “myrealm” Realm should now be available.


Within the Realm, a Client application named jwt-client has been added.


The jwt-client is configured to use openid-connect as Client Protocol, with a Confidential Access Type, and with the root URL of WildFly (http://localhost:8080) as redirect URI.
Within the Credentials tab you will find the Client secret (“mysecret”) needed to authenticate.

In order to support the OAuth 2 scope parameter, which allows a client application to request claims or roles in the access token, we have added a Client Scope named “roles”.


Lastly, the following users have been added:

  • The “admin” user (credentials: admin/test) belongs to the “admin” group
  • The “test” user (credentials: test/test) belongs to the “user” group

Testing our Application

In order to test our application, we will build a simple Test class which uses REST Assured API:

import java.io.StringReader;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

import org.junit.Test;

import io.restassured.RestAssured;
import io.restassured.http.ContentType;
import io.restassured.response.Response;

import static io.restassured.RestAssured.given;

public class SampleEndpointTest {

    String keycloakURL ="http://localhost:8180";

    @Test
    public void testJWTEndpoint() {

        String secret ="mysecret";
        RestAssured.baseURI = keycloakURL;
        Response response = given().urlEncodingEnabled(true)
                .auth().preemptive().basic("jwt-client", secret)
                .param("grant_type", "password")
                .param("client_id", "jwt-client")
                .param("username", "test")
                .param("password", "test")
                .header("Accept", ContentType.JSON.getAcceptHeader())
                .post("/auth/realms/myrealm/protocol/openid-connect/token")
                .then().statusCode(200).extract()
                .response();

        JsonReader jsonReader = Json.createReader(new StringReader(response.getBody().asString()));
        JsonObject object = jsonReader.readObject();
        String userToken = object.getString("access_token");

        response = given().urlEncodingEnabled(true)
                .auth().preemptive().basic("jwt-client", secret)
                .param("grant_type", "password")
                .param("client_id", "jwt-client")
                .param("username", "admin")
                .param("password", "test")
                .header("Accept", ContentType.JSON.getAcceptHeader())
                .post("/auth/realms/myrealm/protocol/openid-connect/token")
                .then().statusCode(200).extract()
                .response();

        jsonReader = Json.createReader(new StringReader(response.getBody().asString()));
        object = jsonReader.readObject();
        String adminToken = object.getString("access_token");

        RestAssured.baseURI = "http://localhost:8080/jwt-demo/rest/customers";

        given().auth().preemptive()
                .oauth2(userToken)
                .when().get("/gouser")
                .then()
                .statusCode(200);

        given().auth().preemptive()
                .oauth2(adminToken)
                .when().get("/goadmin")
                .then()
                .statusCode(200);

     }

}

The testJWTEndpoint method executes a POST request against the token endpoint of our Keycloak Realm. The request contains as parameters the username and the password, along with the Client ID (“jwt-client”) and its secret (“mysecret”). The REST Assured fluent API verifies that a status code of 200 is returned and finally returns the Response object.

We have then extracted the token contained in the JSON Response, under the key “access_token”.

Once we have obtained the tokens for both the “test” user and the “admin” user, we use them to invoke the available endpoints, verifying that the status code is 200.

Testing our application using curl

We can also test our application using curl, using the same logic as our REST Assured Test class:

#Get a token for a user belonging to the "user" group
export TOKEN=$(\
curl -L -X POST 'http://localhost:8180/auth/realms/myrealm/protocol/openid-connect/token' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=jwt-client' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'client_secret=mysecret' \
--data-urlencode 'scope=openid' \
--data-urlencode 'username=test' \
--data-urlencode 'password=test'  | jq --raw-output '.access_token' \
 )

The token will be saved into the environment variable TOKEN. We can verify the token content as follows:

#Display token
JWT=`echo $TOKEN | sed 's/[^.]*.\([^.]*\).*/\1/'`
echo $JWT | base64 -d | python -m json.tool

Here is our JSON Token:

{
    "acr": "1",
    "allowed-origins": [
        "http://localhost:8080"
    ],
    "aud": "account",
    "auth_time": 0,
    "azp": "jwt-client",
    "email": "tester@localhost",
    "email_verified": false,
    "exp": 1582653314,
    "family_name": "Tester",
    "given_name": "Theo",
    "groups": [
        "offline_access",
        "uma_authorization",
        "user"
    ],
    "iat": 1582653014,
    "iss": "http://localhost:8180/auth/realms/myrealm",
    "jti": "ad8715e6-17a2-4093-b7cd-ea14f60f7706",
    "name": "Theo Tester",
    "nbf": 0,
    "preferred_username": "test",
    "realm_access": {
        "roles": [
            "offline_access",
            "uma_authorization",
            "user"
        ]
    },
    "resource_access": {
        "account": {
            "roles": [
                "manage-account",
                "manage-account-links",
                "view-profile"
            ]
        }
    },
    "scope": "openid email profile",
    "session_state": "c842f816-92ba-45fd-ab1c-14bfabf49f64",
    "sub": "a19b2afc-e96e-4939-82bf-aa4b589de136",
    "typ": "Bearer"
}

Now let’s test our application using this token. At first we will attempt to invoke the REST API “/customers/gouser”, which is authorized for the “user” Role:

curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/jwt-demo-1.0.0-SNAPSHOT/rest/customers/gouser

The above call should succeed, returning the token’s subject and groups as output.

Next, we will attempt to call the “/customers/goadmin” API using our Token:

curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/jwt-demo-1.0.0-SNAPSHOT/rest/customers/goadmin

A 401 Unauthorized error should be returned, as our token does not authorize us to access an API that requires the “admin” Role.

To be able to call the “/customers/goadmin” API, we need to request a Token for the “admin” user as follows:

export TOKEN=$(\
curl -L -X POST 'http://localhost:8180/auth/realms/myrealm/protocol/openid-connect/token' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=jwt-client' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'client_secret=mysecret' \
--data-urlencode 'scope=openid' \
--data-urlencode 'username=admin' \
--data-urlencode 'password=test'  | jq --raw-output '.access_token' \
 )

Source code

The source code for this tutorial is available at: https://github.com/fmarchioni/mastertheboss/tree/master/micro-services/mp-jwt-demo

MicroProfile CLI Cheatsheet

Here is a MicroProfile CLI cheatsheet that you can use to kickstart your MicroProfile projects.

First of all, we would like to remind you that a cloud-based MicroProfile starter is available at this address: https://start.microprofile.io/

That is the simplest way to create a brand new Microprofile project for any runtime that supports it.

Anyway, if you want to automate the process or add some handy bash aliases to your environment, you should check out the MicroProfile CLI, which enables you to generate applications with curl or PowerShell, without the distraction of opening a web browser.

How to list the available Microprofile Versions in the starter:

curl https://start.microprofile.io/api/mpVersion

["MP30","MP22","MP21","MP20","MP14","MP13","MP12"]

How to list all available API Starters for all Runtime Environments in a specific Microprofile version:

curl https://start.microprofile.io/api/mpVersion/MP30

{"supportedServers":["LIBERTY","THORNTAIL_V2","HELIDON"],"specs":["CONFIG","FAULT_TOLERANCE","JWT_AUTH","METRICS","HEALTH_CHECKS","OPEN_API","OPEN_TRACING","REST_CLIENT"]}

How to create a MicroProfile project for a specified Runtime environment (include all available API)

curl -O -J 'https://start.microprofile.io/api/project?supportedServer=THORNTAIL_V2'

How to create a MicroProfile project for a specified Runtime environment (include only a subset of API)

curl -O -J 'https://start.microprofile.io/api/project?supportedServer=THORNTAIL_V2&selectedSpecs=METRICS'

How to create a MicroProfile project setting all available options

$ curl -O -J -L 'https://start.microprofile.io/api/1/project?supportedServer=LIBERTY&groupId=com.example&artifactId=myapp&mpVersion=MP22&javaSEVersion=SE8&selectedSpecs=CONFIG&selectedSpecs=FAULT_TOLERANCE&selectedSpecs=JWT_AUTH&selectedSpecs=METRICS&selectedSpecs=HEALTH_CHECKS&selectedSpecs=OPEN_API&selectedSpecs=OPEN_TRACING&selectedSpecs=REST_CLIENT'

Getting started with the MicroProfile Fault Tolerance API

The MicroProfile Fault Tolerance specification has been specifically built to make your microservices resilient to common failures like network problems or an IOException.

In this tutorial we will learn how to use some patterns like @Timeout, @Retry, @Fallback and @CircuitBreaker policies to drive an alternative result when an execution does not complete successfully.

Let’s start with the @Timeout annotation, which prevents the execution from waiting forever.

Example:

@GET
@Path("/hello")
@Timeout(250)
public String getData()
{
	randomSleep();

	return "Random string " + UUID.randomUUID().toString();
}

You can also add a @Retry and @Fallback policy to refine your fault-tolerance policy.

  • @Retry lets you retry the execution of the method in case of failure, up to a maximum number of retries
  • @Fallback lets you drive the execution to a fallback method, to avoid returning a TimeoutException.

Here is the complete example:
package com.mastertheboss.microprofile.service;

import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

import javax.annotation.PostConstruct;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import java.util.Random;
import java.util.UUID;

@Path("/")
public class SimpleRESTService {

	String cachedData;

	@PostConstruct
	public void init() {
		cachedData = UUID.randomUUID().toString();
	}
	@GET
	@Path("/hello")
	@Timeout(250)
	@Retry(maxRetries = 2)
	@Fallback(fallbackMethod = "getCachedData")
	public String getData()
	{
		randomSleep();

		return "Random string " + UUID.randomUUID().toString();
	}
	private void randomSleep() {
		try {
			Thread.sleep(new Random().nextInt(400));
		} catch (Exception e) {
            System.out.println("The application is taking too long...");

		}
	}

	public String getCachedData()
	{

		return "Random (cached) String is "+cachedData;
	}
}

Sometimes you might want an application to wait before it issues a retry request. For example, if the failure is caused by a sudden spike in requests or a loss of connectivity, waiting might decrease the chance that a previous failure occurs. In these cases, you can define a delay in the Retry policy using these parameters:

  • delay: The amount of time to wait before retrying a request. The value must be an integer greater than or equal to 0 and be less than the value for maxDuration. The default is 0.
  • delayUnit: The unit of time for the delay parameter as described by the java.time.temporal.ChronoUnit class. The default is ChronoUnit.MILLIS.
  • durationUnit: The unit of time for the maxDuration parameter, which sets the overall time limit for retrying. Like delayUnit, it is a java.time.temporal.ChronoUnit value (NANOS, MICROS, MILLIS, SECONDS, MINUTES, HOURS). The default is ChronoUnit.MILLIS.

For example:
@Retry(retryOn = TimeoutException.class,
       maxRetries = 4,
       maxDuration = 10,
       durationUnit = ChronoUnit.SECONDS,
       delay = 200, delayUnit = ChronoUnit.MILLIS)           

Finally, you can specify an Exception class that triggers a retry. You can identify more than one exception as an array of values. For example,

@Retry(retryOn = {RuntimeException.class, TimeoutException.class}).

The default is java.lang.Exception.class.
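
Putting these parameters together, here is a sketch of a method that combines @Timeout with a retry policy reacting only to the Fault Tolerance TimeoutException (the callRemoteService() helper is hypothetical and only illustrates a call that may exceed the timeout):

import java.time.temporal.ChronoUnit;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;
import org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException;

@GET
@Path("/remote")
@Timeout(250)
@Retry(retryOn = TimeoutException.class,
       maxRetries = 4,
       maxDuration = 10, durationUnit = ChronoUnit.SECONDS,
       delay = 200, delayUnit = ChronoUnit.MILLIS)
public String getRemoteData() {
    // callRemoteService() is a placeholder for a call that may take longer than 250 ms
    return callRemoteService();
}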

Using a Circuit Breaker Policy

A Circuit Breaker policy can be used to prevent repeated failures when a service is rated as not functional. There are three circuit states:

  • Closed: In normal conditions the circuit is closed. If a failure occurs, the Circuit Breaker records the event, and the requestVolumeThreshold and failureRatio parameters determine the conditions under which the breaker will transition the circuit to open.
  • Open: When the circuit is open, any call to the service will fail immediately. A delay may be configured for the Circuit Breaker. After the delay, the circuit transitions to half-open state.
  • Half-open: In half-open state, by default one trial call to the service is allowed. Should it fail, the circuit will return to open state. You can configure the number of trials with the successThreshold parameter. After the specified number of successful executions, the circuit will be closed.

A method or a class can be annotated with @CircuitBreaker, as in this example:

@CircuitBreaker(successThreshold = 10, requestVolumeThreshold = 4, failureRatio=0.75, delay = 1000)
public Service checkBalance() {
    counterForInvokingService++;
    return checkBalanceService();
}

The above code excerpt applies the Circuit Breaker policy, which is to open the circuit once 3 (4×0.75) failures occur within the rolling window of 4 consecutive invocations. The circuit will stay open for 1000 ms and then move to half-open. After 10 consecutive successful invocations, the circuit will be closed again.
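
You can also pair @CircuitBreaker with @Fallback, so that callers receive an alternative result instead of a CircuitBreakerOpenException while the circuit is open. Here is a minimal sketch based on the excerpt above (the fallback method name and its body are only an illustration):

@CircuitBreaker(successThreshold = 10, requestVolumeThreshold = 4, failureRatio = 0.75, delay = 1000)
@Fallback(fallbackMethod = "checkBalanceFallback")
public Service checkBalance() {
    counterForInvokingService++;
    return checkBalanceService();
}

// Invoked when checkBalance() fails or while the circuit is open
public Service checkBalanceFallback() {
    return lastKnownBalanceService();
}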

Setting up projects with FaultTolerance API

There are several platforms which are compatible with the Fault Tolerance API. In particular, you can run the above examples on any of the following platforms:

Thorntail:

Include the following dependencies:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.thorntail</groupId>
            <artifactId>bom-all</artifactId>
            <version>${version.thorntail}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>microprofile-fault-tolerance</artifactId>
</dependency>

Quarkus:

Include the following dependencies:

<dependencyManagement>
  <dependencies>
	  <dependency>
	    <groupId>io.quarkus</groupId>
	    <artifactId>quarkus-bom</artifactId>
	    <version>${quarkus.version}</version>
	    <type>pom</type>
	    <scope>import</scope>
	  </dependency>
  </dependencies>
</dependencyManagement>

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-fault-tolerance</artifactId>
</dependency> 

TomEE:

Include the following dependency:

<dependency>
	<groupId>org.eclipse.microprofile</groupId>
	<artifactId>microprofile</artifactId>
	<version>${version.microprofile}</version>
	<type>pom</type>
	<scope>provided</scope>
</dependency>

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/micro-services/mp-fault-tolerance

Building Microservices from WildFly applications

This tutorial is about building microservices for WildFly developers. The main objective is to discuss the key challenges and solutions around building microservices using Enterprise APIs in an enterprise application environment.

Microservice architecture is one of the most widely discussed and adopted architecture patterns in the modern IT industry. With the rise of cloud-native architecture, software applications are no longer built as monoliths, but rather as collections of services or serverless functions, leveraging technologies such as Docker and Kubernetes. Let’s see how we can turn our applications running on WildFly into services.

Option 1: Trim your WildFly Server using Galleon Layers

WildFly is an application server, therefore it can be used to deploy multiple applications in a monolithic environment. Its subsystems are designed for this purpose, so they can provide a full Java Enterprise stack for your applications. In addition, you can leverage some of the Eclipse MicroProfile features to provide essential capabilities for microservices like OpenTracing, Health checks, Metrics and Configuration:

<extension module="org.wildfly.extension.microprofile.config-smallrye"/>
<extension module="org.wildfly.extension.microprofile.health-smallrye"/>
<extension module="org.wildfly.extension.microprofile.metrics-smallrye"/>
<extension module="org.wildfly.extension.microprofile.opentracing-smallrye"/>

Starting with WildFly 16, we can use Galleon layers to control the features included in a WildFly server. A Galleon layer identifies one or more server features that can be installed on its own or in combination with other layers. For example, if you want to deliver a microservice-like application which makes use of only the jaxrs and cdi server features, then you can choose to install just these layers with Galleon. The configuration in standalone.xml would then contain only the required subsystems and their dependencies.

In this tutorial you can check more about building a WildFly server with Galleon: Provisioning WildFly with Galleon

  • Advantage: No changes are required at all in your code, nor in configuration. You also retain the management features of Domain mode.
  • Challenge: You are still running a monolithic architecture. The amount of resources used for this service will typically be higher than for a standalone microservice.

Option 2: Convert your application to a Runnable WildFly jar

WildFly bootable jars are a new way to package applications which specifically targets microservice deployments, allowing us to boot our Enterprise/MicroProfile application from a single jar file.

Thanks to the Galleon technology, it is possible to combine layers of the application server in order to produce a WildFly distribution tailored to your application needs. On top of that, a Maven plugin has been developed to combine your application and the server into a single bootable JAR.

You can read more about Runnable WildFly Jars in this tutorial: Turn your WildFly applications in bootable JARs

  • Advantage: No changes are required at all in your code. You just have to add the Maven plugin to your pom.xml.
  • Advantage: The amount of memory used by the Hollow JARs produced is significantly lower than that of a standard WildFly server, which makes it a good fit for microservice development.
  • Challenge: The project is still in Beta stage.

Option 3: Turn your application into a Quarkus application

Quarkus is a Kubernetes Native Java stack tailored for GraalVM & OpenJDK HotSpot, crafted from the best of breed Java libraries and standards.

Besides that, Quarkus gives you a massive improvement in speed and memory footprint, as it can produce native executables to be deployed on the cloud. It also includes a native hot-swapping feature that increases your productivity by several orders of magnitude. Here is a Hello World Quarkus tutorial: Getting started with QuarkusIO

In terms of migration, the first step to migrate a WildFly application to Quarkus is adding the parent BOM file:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-bom</artifactId>
    <version>${io.quarkus.version}</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>

Then you will need to add the individual dependencies that replace the WildFly dependencies:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-resteasy</artifactId>
</dependency>

In terms of configuration, Quarkus does not have an XML configuration file like WildFly, nor does it use a YAML file like Thorntail. Its configuration is based on the MicroProfile Config API, therefore an application.properties file is used instead:

quarkus.log.console.enable=true
quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c] %s%e%n
quarkus.log.console.level=TRACE

In terms of code, Quarkus fully supports the MicroProfile stack, making it a perfect fit for microservices, plus a subset of the Java Enterprise API (focusing mostly on CDI, JAX-RS, JSON-B, JSON-P). In addition, you can plug in some key extensions like Vert.x, Kafka, Kogito, Keycloak and many more (you can check them with the ‘mvn quarkus:list-extensions’ command).
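
For example, a plain JAX-RS resource that reads its settings through MicroProfile Config runs unchanged on Quarkus. Here is a minimal sketch (the class and the greeting.message property are illustrative, not part of an existing project):

package org.acme;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@Path("/hello")
public class GreetingResource {

    // Value read from application.properties through the MicroProfile Config API
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "hello from Quarkus")
    String message;

    @GET
    public String hello() {
        return message;
    }
}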

  • Advantage: Capability to create native executables.
  • Advantage: Out of the box packaging of the applications for cloud deployments.
  • Advantage: Native live coding.
  • Advantage: Perfect for designing stream-based and reactive applications.
  • Advantage: A large set of extensions is already available, including the core Enterprise APIs.
  • Challenge: Some legacy APIs (EJB, JAX-WS, JSF, CORBA) are not included in QuarkusIO. Some reengineering might be required.
  • Challenge: You have to replace Domain mode management features with OpenShift/Kubernetes management.

Option 4: Turn your application into a MicroProfile compatible solution with Thorntail (deprecated)

IMPORTANT: The Thorntail project (formerly known as WildFly Swarm) has reached End Of Life. The last Thorntail release is the 2.7.0. You are recommended to evaluate a migration plan for your applications. Check this tutorial to learn more: How to migrate Thorntail applications

This solution requires some adjustments in your pom.xml file in order to replace the Java EE dependencies with Thorntail dependencies. A Thorntail application is typically configured with a BOM file which contains the versions for all fractions:

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>bom-all</artifactId>
        <version>${version.thorntail}</version>
        <scope>import</scope>
        <type>pom</type>
      </dependency>
    </dependencies>
  </dependencyManagement>

Then declare the fractions you need:

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>jaxrs</artifactId>
</dependency>

Then, generate the uberjar and run it:

$ mvn package
$ java -jar <myapp>-thorntail.jar

In addition, you will have to turn the configuration of WildFly subsystems into YAML configuration snippets, as in this example, which configures a Datasource:

swarm:
  datasources:
    jdbc-drivers:
      org.postgresql:
        driver-class-name: org.postgresql.Driver
        xa-datasource-class-name: org.postgresql.xa.PGXADataSource
        driver-module-name: org.postgresql
    data-sources:
      ExampleDS:
        driver-name: org.postgresql
        connection-url: jdbc:postgresql://localhost:5432/postgres
        user-name: postgres
        password: postgres

One advantage of moving your application to Thorntail is the large list of fractions which greatly enhance the standard Java Enterprise API. Besides all Java EE, you can include fractions for Monitoring, the Netflix API, the full Eclipse MicroProfile stack, NoSQL data and much more!

On the other hand, you will not be able to retain the management features of WildFly in Domain mode when switching to Thorntail. One best practice is to deploy the application on a Kubernetes environment, like OpenShift, and use OpenShift HA and networking capabilities as a replacement for the WildFly Domain configuration. Here is how you can run Thorntail on OpenShift: 2 Ways to run Thorntail applications on Openshift

  • Advantage: No code changes are normally required to port your existing Enterprise applications to Thorntail.
  • Advantage: Complements the Enterprise API with a large set of popular APIs that are a perfect fit for microservices applications.
  • Challenge: The Thorntail project has reached EOL. Consider migrating to Quarkus or WildFly bootable JARs.
  • Challenge: Requires transforming your configuration from the standalone.xml file into YAML fractions.
  • Challenge: You cannot run Thorntail in Domain mode as you can the WildFly server.

That’s all! Enjoy your microservices applications with WildFly, Thorntail or Quarkus!

3 ways to create Microprofile applications

Microprofile applications can be used in a large variety of contexts. In this tutorial we will learn how to use its API in the most common runtime environments.

Configuring Microprofile API with Thorntail

Thorntail offers a modern approach to packaging and running Java EE applications, including just enough of the environment to “java -jar” your application. Most interestingly, it is fully MicroProfile compatible.

In order to use the MicroProfile API, you can just plug in the MicroProfile fractions needed by your project, omitting the version, which is defined in Thorntail’s BOM file. Here is the set of MicroProfile dependencies you can include in your applications:

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>bom-all</artifactId>
        <version>${version.thorntail}</version>
        <scope>import</scope>
        <type>pom</type>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <build>
    <finalName>demo</finalName>
    <plugins>
      <plugin>
        <groupId>io.thorntail</groupId>
        <artifactId>thorntail-maven-plugin</artifactId>
        <version>${version.thorntail}</version>
        
        <executions>
          <execution>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <dependencies>
    
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile-openapi</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile-jwt</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>opentracing</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile-fault-tolerance</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile-health</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile-restclient</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile-metrics</artifactId>
    </dependency>
  </dependencies>

Aside from that, you can also use an “umbrella” fraction named “microprofile” which brings in all of the above fractions:

<dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile</artifactId>
</dependency>

In the tutorial Managing Microprofile Health Checks you can see how to use the MicroProfile Health extension on Thorntail and deploy it on OpenShift.

Configuring Microprofile API with Quarkus

Quarkus has been designed from the ground up to be fully compatible with the MicroProfile specifications. Once you have a basic JAX-RS application, you can start adding the MicroProfile extensions you need.

First, create a basic project:

mvn io.quarkus:quarkus-maven-plugin:0.20.0:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=quarkus-demo \
    -DclassName="org.acme.Endpoint" \
    -Dpath="/secured"

Now let’s see how to add the single extensions:

Health:

$ mvn quarkus:add-extension -Dextensions=health

Related dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

Fault Tolerance:

$ mvn quarkus:add-extension -Dextensions=fault-tolerance

Related dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-fault-tolerance</artifactId>
</dependency>

Metrics:

$ mvn quarkus:add-extension -Dextensions=metrics

Related dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-metrics</artifactId>
</dependency>

OpenAPI:

$ mvnw quarkus:add-extension -Dextensions="openapi"

Related dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>

OpenTracing:

$ mvn quarkus:add-extension -Dextensions=opentracing

Related dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-opentracing</artifactId>
</dependency>

REST Client:

$ mvn quarkus:add-extension -Dextensions=rest-client

Related dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-rest-client</artifactId>
</dependency>

JWT:

$ mvn quarkus:add-extension -Dextensions=jwt

Related dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-jwt</artifactId>
</dependency>

Configuring MicroProfile with WildFly application server

The WildFly application server ships out of the box with support for the latest MicroProfile specification. The current version of WildFly includes support for the following extensions:

<extension module="org.wildfly.extension.microprofile.config-smallrye"/>
<extension module="org.wildfly.extension.microprofile.fault-tolerance-smallrye"/>
<extension module="org.wildfly.extension.microprofile.health-smallrye"/>
<extension module="org.wildfly.extension.microprofile.jwt-smallrye"/>
<extension module="org.wildfly.extension.microprofile.metrics-smallrye"/>
<extension module="org.wildfly.extension.microprofile.openapi-smallrye"/>
<extension module="org.wildfly.extension.microprofile.opentracing-smallrye"/>

Not all extensions are available in the “default” WildFly configuration, which includes just a subset of the MicroProfile API:

<extension module="org.wildfly.extension.microprofile.config-smallrye"/>
<extension module="org.wildfly.extension.microprofile.health-smallrye"/>
<extension module="org.wildfly.extension.microprofile.jwt-smallrye"/>
<extension module="org.wildfly.extension.microprofile.metrics-smallrye"/>
<extension module="org.wildfly.extension.microprofile.opentracing-smallrye"/>

You can use standalone-microprofile.xml or standalone-microprofile-ha.xml to have full MicroProfile support, or just add the extensions yourself in your configuration:

[standalone@localhost:9990 /] /extension=org.wildfly.extension.microprofile.openapi-smallrye:add()
{"outcome" => "success"}

[standalone@localhost:9990 /] /subsystem=microprofile-openapi-smallrye:add()
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-reload" => true,
        "process-state" => "reload-required"
    }
}

In order to build your MicroProfile applications on WildFly it is also recommended to include the following Bill of Materials in your pom.xml:

<dependency>
    <groupId>org.wildfly.bom</groupId>
    <artifactId>wildfly-microprofile</artifactId>
    <version>${wildfly.version}</version>
    <scope>import</scope>
    <type>pom</type>
</dependency>

The following article will provide more details on how to build Microprofile applications on WildFly: How to build Microprofile application on wildfly

If you want to see an example of Microprofile application on WildFly, it is worth starting with this tutorial Configuring Applications with Eclipse MicroProfile Config

How to handle Fault Tolerance in your Enterprise applications and Microservices

The MicroProfile Fault Tolerance specification is a way for your application to handle the unavailability of a service by using a set of different policies. In this tutorial we will see how to apply these policies in an application running on the Thorntail runtime.

Thorntail is a MicroProfile-compliant environment, therefore you can combine the power of the Enterprise API with the enhancements provided by MicroProfile. Let’s see which policies can be used to improve the resiliency of your applications using the Fault Tolerance specification available in MicroProfile:

Timeout

By using the @Timeout annotation, you can specify the maximum time (in ms) allowed to return a response in a method. Here is an example:

@GET
@Path("/json")
@Produces(MediaType.APPLICATION_JSON)
@Timeout(250)
public SimpleProperty getPropertyJSON () 
{
    SimpleProperty p = new SimpleProperty("key","value");
    randomSleep();
	return p;
}

private void randomSleep() {
 	try {  
      Thread.sleep(new Random().nextInt(500));
    }
    catch (Exception exc) {
      exc.printStackTrace();
    }	
}

You can further specify a @Fallback policy which states which fallback method should be specified in case of failure:

@GET
@Path("/json")
@Produces(MediaType.APPLICATION_JSON)
@Timeout(250)
@Fallback(fallbackMethod = "fallbackJSON")
public SimpleProperty getPropertyJSON () 
{
    SimpleProperty p = new SimpleProperty("key","value");
    randomSleep();
	return p;
}
public SimpleProperty fallbackJSON() 
{
    SimpleProperty p = new SimpleProperty("key","fallback");
	return p;
}

Retry

The @Retry annotation is essential for dealing with unstable services. MicroProfile uses @Retry to retry the execution when a certain Exception is raised, allowing a maximum number of attempts before giving up. In this example we allow up to 4 retries when a RuntimeException is thrown:

@GET
@Path("/text")
@Retry(maxRetries = 4, retryOn = RuntimeException.class)
public String getHello () 
{
	 if (new Random().nextFloat() < 0.5f) {
        System.out.println("Error!!!");
        throw new RuntimeException("Resource failure.");
    }
	return "hello world!";
}

Circuit Breaker

Circuit Breaker is a key pattern for creating resilient microservices. It can be used to prevent repeated timeout exceptions by instantly rejecting requests. MicroProfile Fault Tolerance uses @CircuitBreaker to control the client calls.

The software circuit breaker works much like an electrical circuit breaker, therefore it can be in one of three states:

  • Closed state: A closed circuit represents a fully functional system, which is available to its clients
  • Half open circuit: When some failures are detected the state can change to half-open. In this state, it checks whether the failed component is restored. If so, it closes the circuit. otherwise, it moves to the Open state
  • Open state: An open state means the service is temporarily disabled. You can specify how long the circuit stays open before a check is made to verify whether it is safe to switch to the half-open state.

Here is an example:

@GET
@Path("/xml")
@Produces(MediaType.APPLICATION_XML)
@CircuitBreaker(successThreshold = 5, requestVolumeThreshold = 4, failureRatio=0.75, delay = 1000)
public SimpleProperty getPropertyXML () 
{
	SimpleProperty p = buildPOJO();      

	return p;
}

private SimpleProperty buildPOJO() {
 	if (new Random().nextFloat() < 0.5f) {
        System.out.println("Error!!!");
        throw new RuntimeException("Resource failure.");
    }
    SimpleProperty p = new SimpleProperty("key","value");
    return p;
}

The above code snippet means the method getPropertyXML applies the Circuit Breaker policy: within the last 4 invocations, if 75% failed, the circuit opens. The circuit will stay open for 1000 ms and then move to half-open. While the circuit is open, a CircuitBreakerOpenException will be thrown instead of actually invoking the method:

org.jboss.resteasy.spi.UnhandledException: org.eclipse.microprofile.faulttolerance.exceptions.CircuitBreakerOpenException: getPropertyXML
	at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:78)
	at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:222)

Here is a full code example:

package com.itbuzzpress.microprofile.service;

import java.util.Random;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import com.itbuzzpress.microprofile.model.SimpleProperty;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.CircuitBreaker;

@Path("/simple")
public class SimpleRESTService {
	@GET
	@Path("/text")
	@Retry(maxRetries = 4, retryOn = RuntimeException.class)
	public String getHello () 
	{
		 if (new Random().nextFloat() < 0.5f) {
            System.out.println("Error!!!");
            throw new RuntimeException("Resource failure.");
        }
		return "hello world!";
	}

	@GET
	@Path("/json")
	@Produces(MediaType.APPLICATION_JSON)
	@Timeout(250)
    @Fallback(fallbackMethod = "fallbackJSON")
	public SimpleProperty getPropertyJSON () 
	{
        SimpleProperty p = new SimpleProperty("key","value");
        randomSleep();
		return p;
	}


	public SimpleProperty fallbackJSON() 
	{
        SimpleProperty p = new SimpleProperty("key","fallback");
		return p;
	}

	@GET
	@Path("/xml")
	@Produces(MediaType.APPLICATION_XML)
	@CircuitBreaker(successThreshold = 5, requestVolumeThreshold = 4, failureRatio=0.75, delay = 1000)
	public SimpleProperty getPropertyXML () 
	{
		SimpleProperty p = buildPOJO();      
	
		return p;
	}

	 private void randomSleep() {
	 	try {  
          Thread.sleep(new Random().nextInt(500));
        }
        catch (Exception exc) {
          exc.printStackTrace();
        }	
  
    }

    private SimpleProperty buildPOJO() {
	 	if (new Random().nextFloat() < 0.5f) {
            System.out.println("Error!!!");
            throw new RuntimeException("Resource failure.");
        }
        SimpleProperty p = new SimpleProperty("key","value");
        return p;
    }

}

In order to compile and run the above example, the microprofile-fault-tolerance fraction needs to be included in your pom.xml, along with the other fractions needed, such as jaxrs and cdi:

<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>jaxrs</artifactId>
</dependency>
<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>cdi</artifactId>
</dependency>
<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>microprofile-fault-tolerance</artifactId>
</dependency>

Getting Started with MicroProfile applications

With the transition of Java EE to Jakarta EE, you can now adopt both the Jakarta EE standards and the MicroProfile stack for your projects. In this tutorial we will learn more about the MicroProfile API and the differences from other standards we have formerly used, such as Java EE and SOA.

The era of the monolithic application appears to be over, as many enterprises are looking for frameworks and APIs that enable them to move efficiently into the cloud and microservices world.

Therefore, in parallel to the evolution of Jakarta EE, which will remain the main reference for complex scenarios (such as XA transactions or communication with a Common Object Request Broker Architecture), a new architecture called MicroProfile emerged.

MicroProfile API components are built upon the model of Java EE, making the transition to microservices development natural. Therefore, you will be able to reuse the valuable Java EE knowledge you have accumulated over the years, while having the flexibility to use specifications from multiple vendors to define application requirements. At this point we should briefly define what microservices are.

The best fitting definition for MicroServices is:

“You can think of microservices as modular, decoupled applications that can be independently deployed, modified, scaled and integrated with other systems and services.

Microservices are agile components that can be easily updated therefore you won’t need to stop the entire platform for days, until changes are propagated.”

Microservices vs SOA

You can think of Microservices as an evolution of the Service Oriented Architecture (SOA), with some similarities and notable differences though.

Let’s start from the similarities: both SOA and microservices architectures, unlike monolithic apps, deliver components which are solely responsible for a particular task. It follows that each component can also be developed using a different technology stack.

The first difference is that when developing microservices, each team can develop and deploy applications independently of other services. On the other hand, when using SOA, each team developing a service needs to know about the common communication mechanism to be used to achieve the SOA.

The second main difference between SOA and microservices is that in a SOA the Service Bus, used as the communication layer between services, can potentially become a bottleneck or a single point of failure. This does not happen in a microservices architecture, where each service works independently from the others, so you can fine-tune its fault tolerance and resiliency.

In terms of storage, a SOA architecture normally uses a traditional shared data storage solution like an RDBMS. A microservices architecture, on the other hand, has a more flexible approach, often referred to as container-native storage. That is, you define your storage as a container itself, just like your applications. Every approach has pros and cons: using a shared data solution makes it easier to reuse data between applications. On the other hand, for applications that run independently from each other, an independent storage solution (like container storage) can be easier to bootstrap and to maintain.

Finally, the last difference, as you can imagine, is size. A microservice is a significantly smaller artifact and can be deployed independently from other services. On the other hand, a SOA can either be deployed as a monolith or be composed of multiple smaller services.

Developing with MicroProfile

Having made clear what a microservice is, how do we develop them using the MicroProfile architecture? The main standard that is emerging is Eclipse MicroProfile, which is a collection of community-driven open source specifications that define an enterprise Java microservices platform.

Here is an overview of the technology stack that is available with MicroProfile.

The technology stack mostly focuses on REST and CDI services for the interaction between microservices. JSON-P and JSON-B are also included out of the box as a common transport format (several of these APIs were recently updated in the MicroProfile 2.0 spec).

There are multiple implementations of MicroProfile, on top of which vendors can add their customizations. For example, if you want to add a persistence stack to your microservices, you can use Thorntail’s persistence implementation, which is the well-known JPA stack.

Thorntail is not the only available option to develop with MicroProfile. Here is a list of the top options:

  • Red Hat – Thorntail
  • Red Hat – Red Hat Application Runtimes
  • IBM – WebSphere Liberty
  • IBM – Open Liberty
  • Payara Foundation – Payara Micro
  • Payara Foundation – Payara Server
  • Tomitribe – TomEE
  • Oracle – Helidon
  • Fujitsu – Launcher
  • SmallRye
  • Hammock
  • KumuluzEE

Coding a MicroProfile application

In order to kickstart a MicroProfile application, you can either use the multi-technology starter available at: https://start.microprofile.io/

Or you can use the starter for your specific implementation of MicroProfile. For example, if you are going to use Thorntail, then you can begin your journey with: https://thorntail.io/generator/

We have discussed in this article how to bootstrap a project using Thorntail generator: Introduction to Thorntail

We will now have a look at how to kickstart a basic MicroProfile project that includes a mix of technologies. So head to https://start.microprofile.io/

As you can see from the Generator, the most important selection is related to the MicroProfile Server. The effect of your choice will be simply the Maven plugin that will be added to compile and bootstrap the microservice JAR.

For example, if you selected Payara Micro as Server, the following Maven plugin will be added to your pom.xml

 <build>
    <plugins>
      <plugin>
        <groupId>fish.payara.maven.plugins</groupId>
        <artifactId>payara-micro-maven-plugin</artifactId>
        <version>1.0.1</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>bundle</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <payaraVersion>5.191</payaraVersion>
        </configuration>
      </plugin>
    </plugins>
</build>

On the other hand, if you selected a Thorntail server, the Thorntail BOM and plugin will be added instead:

<build>
    <plugins>
      <plugin>
        <groupId>io.thorntail</groupId>
        <artifactId>thorntail-maven-plugin</artifactId>
        <version>${version.thorntail}</version>
        <executions>
          <execution>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
</build>
<dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>bom-all</artifactId>
        <version>${version.thorntail}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
</dependencyManagement>      

Let’s try generating a Thorntail-compatible application. The self-generated archive will contain the following artifacts:

├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   │   └── com
│   │   │       └── example
│   │   │           └── demo
│   │   │               ├── config
│   │   │               │   └── ConfigTestController.java
│   │   │               ├── DemoRestApplication.java
│   │   │               ├── health
│   │   │               │   └── ServiceHealthCheck.java
│   │   │               ├── HelloController.java
│   │   │               ├── metric
│   │   │               │   └── MetricController.java
│   │   │               ├── resilient
│   │   │               │   └── ResilienceController.java
│   │   │               └── secure
│   │   │                   └── ProtectedController.java
│   │   ├── resources
│   │   │   ├── jwt-roles.properties
│   │   │   ├── META-INF
│   │   │   │   ├── microprofile-config.properties
│   │   │   │   └── MP-JWT-SIGNER
│   │   │   └── project-defaults.yml
│   │   └── webapp
│   │       ├── index.html
│   │       └── WEB-INF
│   │           ├── beans.xml
│   │           └── web.xml
│   └── test
│       ├── java
│       │   └── com
│       │       └── example
│       │           └── demo
│       │               ├── JWTClient.java
│       │               └── MPJWTToken.java
│       └── resources
│           └── privateKey.pem

As you can see, a mini demo of MicroProfile Health, Config and Metrics has been generated. A Thorntail-specific configuration file has also been included at src/main/resources/project-defaults.yml

You can build and run the project with:

$ mvn thorntail:run

Once Thorntail has bootstrapped, you can reach the demo at: http://localhost:8080

The example application is organized as a simple REST application, exposing a set of MicroProfile functionalities.
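For instance, the generated HelloController is just a plain JAX-RS resource. The snippet below is a rough sketch of what it typically looks like; the exact content may vary between generator versions:

package com.example.demo;

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// Rough sketch of the generated endpoint; the actual class produced by
// start.microprofile.io may differ slightly between generator versions.
@Path("/hello")
@ApplicationScoped
public class HelloController {

    @GET
    public String sayHello() {
        return "Hello World";
    }
}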

If you want to learn more about the Metrics API, check this article: Microprofile Metrics using WildFly application Server

Microprofile Metrics using WildFly application Server

In this second tutorial about monitoring WildFly we will learn how to collect application metrics by applying the MicroProfile Metrics API, which is available in WildFly 15 and above.

In the first article, Monitoring WildFly with Prometheus, we learned how to install and configure Prometheus so that WildFly metrics can be captured. In order to capture application metrics we can apply the annotations contained in the org.eclipse.microprofile.metrics.annotation package. These annotations cover several crucial aspects, such as how many times a method has been executed, the rate at which an application event occurs, or the time it took to complete a business method.

Let’s start with the @Counted annotation, which counts how many times a request has been made.

The @Counted metric

package demomp;

import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

import org.eclipse.microprofile.metrics.MetricUnits;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.opentracing.Traced;

@Path("/")
@Traced
public class RESTEndpoints {

    @GET
    @Path("/hello")
    @Counted(description = "How many greetings", absolute = true, monotonic = true)
    public String hello(@DefaultValue("my friend!") @QueryParam("name") String name) {
        return "Hello "+name;

    }

}

Within the @Counted annotation, besides the description we have also set the absolute flag to true, which means the package and class name will not be prepended to the metric name. The monotonic flag, set to true, means the counter tracks the total number of invocations of the annotated method; if the flag is false (the default), it counts concurrent invocations instead.

When the above REST service is invoked, the following metric will show up on WildFly at http://localhost:9990/metrics:

# HELP application:hello How many greetings
# TYPE application:hello counter
application:hello 1.0

A further customization can be done through the org.eclipse.microprofile.config.inject.ConfigProperty annotation, which injects configuration key/value pairs into your service. For example:

@Inject
@ConfigProperty(name = "greeting", defaultValue = "my dear friend")
private String greeting;

In this case, the method can be rewritten to use the injected value:

@GET
@Path("/hello")
@Counted(description = "How many greetings", absolute = true, monotonic = true)
public String hello(@DefaultValue("my friend!") @QueryParam("name") String name) {
    return "Hello " + greeting;
}
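The value of the greeting key can be supplied in META-INF/microprofile-config.properties (or any other ConfigSource). If injection is not convenient, for instance in a class that is not a CDI bean, the same key can also be looked up programmatically through the MicroProfile Config API. The following is a minimal sketch; the GreetingLookup class name is just an example:

package demomp;

import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;

public class GreetingLookup {

    public static String greeting() {
        Config config = ConfigProvider.getConfig();
        // Falls back to a default when the "greeting" key is not defined in any
        // ConfigSource (system properties, environment, microprofile-config.properties).
        return config.getOptionalValue("greeting", String.class)
                     .orElse("my dear friend");
    }
}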

Let’s move to some other annotations.

The @Gauge metric

@Gauge is the most basic metric type: it simply exposes the value returned by the annotated method.

@GET
@Path("/time")
@Gauge(unit = "time", name = "currentime", absolute = true)
public Long getTime() {
    return  System.currentTimeMillis();
}

It will display the result in the metrics as follows:

# TYPE application:currentime_time gauge
application:currentime_time 1.550847591437E12
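Gauges are more commonly placed on a singleton or application-scoped bean, so that some piece of internal state can be sampled every time the metrics endpoint is scraped. Here is a minimal sketch; the OrderQueue class and its field are purely illustrative:

package demomp;

import java.util.concurrent.ConcurrentLinkedQueue;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.metrics.MetricUnits;
import org.eclipse.microprofile.metrics.annotation.Gauge;

@ApplicationScoped
public class OrderQueue {

    private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();

    public void submit(String order) {
        pending.add(order);
    }

    // The returned value is read each time the /metrics endpoint is scraped
    @Gauge(name = "pending-orders", unit = MetricUnits.NONE, absolute = true)
    public int pendingOrders() {
        return pending.size();
    }
}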

The @Metered annotation

The @Metered annotation measures the rate at which a set of events occur.

@Metered(name = "sell-share", unit = MetricUnits.MINUTES, description = "Metrics to monitor frequency of share sale.", absolute = true)
@GET
@Path("/sell")
public String sellStock() {

    // do business
    return "sold!";
}

The @Timed annotation

Finally, the @Timed annotation can keep track of the duration of an event, such as a business transaction:

@Timed(name = "tx-time",
        description = "Metrics to monitor time to complete ticket purchase.",
        unit = MetricUnits.MINUTES,
        absolute = true)
@GET
@Path("/buy")
public String buyTicket() {

    try {
        Thread.sleep((long) (Math.random() * 1000));
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return "transaction completed!";
}

Once applied, a comprehensive list of statistics will be provided for all @Timed events:

# TYPE application:tx_time_rate_per_second gauge
application:tx_time_rate_per_second 0.4317018836985329
# TYPE application:tx_time_one_min_rate_per_second gauge
application:tx_time_one_min_rate_per_second 1.374706127912254
# TYPE application:tx_time_five_min_rate_per_second gauge
application:tx_time_five_min_rate_per_second 7.780132092018116
# TYPE application:tx_time_fifteen_min_rate_per_second gauge
application:tx_time_fifteen_min_rate_per_second 10.386035934806122
# TYPE application:tx_time_min_seconds gauge
application:tx_time_min_seconds 1.290288978E11
# TYPE application:tx_time_max_seconds gauge
application:tx_time_max_seconds 1.290288978E11
# TYPE application:tx_time_mean_seconds gauge
application:tx_time_mean_seconds 1.290288978E11
# TYPE application:tx_time_stddev_seconds gauge
application:tx_time_stddev_seconds 0.0
# HELP application:tx_time_seconds Metrics to monitor time to complete ticket purchase.
# TYPE application:tx_time_seconds summary
application:tx_time_seconds_count 1.0
application:tx_time_seconds{quantile="0.5"} 1.290288978E11
application:tx_time_seconds{quantile="0.75"} 1.290288978E11
application:tx_time_seconds{quantile="0.95"} 1.290288978E11
application:tx_time_seconds{quantile="0.98"} 1.290288978E11
application:tx_time_seconds{quantile="0.99"} 1.290288978E11
application:tx_time_seconds{quantile="0.999"} 1.290288978E11
# HELP base:classloader_total_loaded_class_count Displays the t

Building the project

In order to build a project using MicroProfile Metrics, we need to add the following dependencies, which cover the MicroProfile OpenTracing, MicroProfile Config, and MicroProfile Metrics APIs:

<dependency>
    <groupId>org.eclipse.microprofile.opentracing</groupId>
    <artifactId>microprofile-opentracing-api</artifactId>
    <version>1.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.eclipse.microprofile.config</groupId>
    <artifactId>microprofile-config-api</artifactId>
    <version>1.3</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.eclipse.microprofile.metrics</groupId>
    <artifactId>microprofile-metrics-api</artifactId>
    <version>1.1</version>
    <scope>provided</scope>
</dependency>

Using OpenTracing API with WildFly application server

The MicroProfile OpenTracing specification defines a set of APIs for accessing an OpenTracing-compliant Tracer object within a JAX-RS application. It specifies how incoming and outgoing requests will have OpenTracing Spans automatically created, and it also defines how to explicitly enable or disable tracing for given endpoints.

WildFly 14 provides initial support for OpenTracing API and in this tutorial we will learn how to trace a simple JAX-RS application.

You can use the @org.eclipse.microprofile.opentracing.Traced annotation to define explicit span creation for specific classes and methods. If you place the annotation on a class, then it’s automatically applied to all methods within that class. If you place the annotation on a method, then it overrides the class annotation if one exists.
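For example, here is a hypothetical resource (class and method names are just placeholders) where tracing is enabled at class level and then explicitly disabled for a single method:

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.opentracing.Traced;

// Class-level @Traced: every resource method is traced...
@Traced
@Path("/orders")
public class OrderResource {

    @GET
    @Path("/list")
    public String listOrders() {
        // traced: inherits the class-level annotation
        return "orders";
    }

    @GET
    @Path("/ping")
    @Traced(false) // ...except this one, where the method-level annotation wins
    public String ping() {
        return "pong";
    }
}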

Here’s an example of an EJB annotated with @Traced:

import javax.ejb.Stateless;
import org.eclipse.microprofile.opentracing.Traced;

@Stateless
@Traced
public class TracedEJB {
    public void onNewOrder() {
        System.out.println("Action through EJB!");
    }
}

Typically we would include tracing for JAX-RS endpoints, which are the entry point for remote requests. Here’s an endpoint eligible to be traced:

import javax.ejb.Stateless;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

@Stateless
@Path("/restendpoint")
public class ExampleEndpoint {
    @POST
    @Path("/post")
    public String execute() {
        return "Action executed!";
    }
}

TECH TIP: JAX-RS endpoints are automatically treated as part of a CDI archive, which is enough to trigger the MicroProfile OpenTracing integration and trace requests without any extra annotation.
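Keep in mind that the deployment still needs a JAX-RS activator class for these endpoints to be exposed. A minimal sketch follows; the class name is an assumption, and with the application path set to "/" the resources are published directly under the deployment's context root (mp-opentracing in the curl examples below):

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Activates JAX-RS for the deployment; resource classes do not need to be
// listed explicitly, they are discovered automatically.
@ApplicationPath("/")
public class RestActivator extends Application {
}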

In order to collect traces we first need to start a distributed tracing server such as Jaeger. We can do it conveniently with Docker:

docker run \
  --rm \
  --name jaeger \
  -p6831:6831/udp \
  -p16686:16686 \
jaegertracing/all-in-one:1.6 

Now we can start WildFly. Before launching the application server, some environment variables need to be exported:

export JAEGER_REPORTER_LOG_SPANS=true 
export JAEGER_SAMPLER_TYPE=const 
export JAEGER_SAMPLER_PARAM=1
export JAEGER_SERVICE_NAME=opentracing-example
  • JAEGER_SAMPLER_PARAM is used to make the same decision for all traces. It either samples all traces (param=1) or none of them (param=0).
  • JAEGER_REPORTER_LOG_SPANS controls whether the reporter should also log the spans.
  • JAEGER_SAMPLER_TYPE is used to configure the sampler type.
  • JAEGER_SERVICE_NAME is the service name

WARNING: Support for OpenTracing is still work-in-progress for WildFly 14.0.0.Final. We recommend using a nightly build of it or version newer than 14.0.0 to run this example.

When WildFly starts, you can check from the logs that the JaegerTracer environment has been correctly initialized:

19:02:07,216 INFO [io.jaegertracing.Configuration] (ServerService Thread Pool -- 34) Initialized tracer=JaegerTracer(version=Java-0.30.6, serviceName=opentracing-example, reporter=CompositeReporter(reporters=[RemoteReporter(sender=UdpSender(), closeEnqueueTimeout=1000), LoggingReporter(logger=org.slf4j.impl.Slf4jLogger(io.jaegertracing.internal.reporters.LoggingReporter))]), sampler=ConstSampler(decision=true, tags={sampler.type=const, sampler.param=true}), tags={hostname=localhost, jaeger.version=Java-0.30.6, ip=127.0.0.1}, zipkinSharedRpcSpan=false, expandExceptionLogs=false)

Now you can fire some requests at your application, for example:

curl -X POST localhost:8080/mp-opentracing/restendpoint/post

curl -X GET localhost:8080/mp-opentracing/trace

You should be able to see on the application server console that some Spans have been reported:

19:02:43,814 INFO [io.jaegertracing.internal.reporters.LoggingReporter] (default task-17) Span reported: fee6c5acf732f13b:fee6c5acf732f13b:0:1 - POST:com.mastertheboss.opentracing.ExampleEndpoint.execute

19:02:49,140 INFO [io.jaegertracing.internal.reporters.LoggingReporter] (default task-17) Span reported: 2ba4a2d55b38dbba:2ba4a2d55b38dbba:0:1 - com.mastertheboss.opentracing.TracedEJB.onNewOrder

Now open the Jaeger UI to check that the Spans have been collected: http://localhost:16686/search

Click on Find Traces to see the traces you have collected so far.

WildFly Administration Guide

This article is an excerpt from “WildFly Administration Guide“, the only book which is always updated with the newest features of the application server!

The source code for this example is available at: https://github.com/fmarchioni/mastertheboss/tree/master/micro-services/mp-opentracing