How to deploy WildFly Bootable jar on OpenShift

The WildFly Bootable JAR Maven plugin allows you to package both the server and your application in a single bootable JAR. The bootable JAR can then be run from the command line on any machine with Java installed. In this tutorial we will learn how to deploy a bootable JAR application on an Enterprise Kubernetes (OpenShift) environment.

How to execute CLI scripts in WildFly Bootable Jar

WildFly Bootable JAR is a Maven plugin that lets you package your WildFly applications as bootable, microservice-like components. This tutorial covers how to execute CLI scripts at packaging time or at runtime, to customize your bootable JAR configuration.

There are two ways to run CLI management scripts against your executable bootable JAR:

  • Execute CLI scripts during the packaging of the bootable JAR
  • Execute CLI scripts at runtime, when launching the runnable JAR file

Before diving into the two operation modes, let’s create a sample JAX-RS application that can run as a Bootable JAR file:

package com.mastertheboss.jaxrs.service;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
public class SimpleRESTService {

	@GET
	@Path("property/{name}")
	@Produces(MediaType.TEXT_PLAIN)
	public String helloProperty(@PathParam("name") final String name) {
		return "Hello " + System.getProperty(name);
	}

}

This REST Endpoint returns the value of a System Property. We will use the CLI scripts to inject two System Properties.
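Outside the server, the same lookup mechanism behaves as in this self-contained sketch (the class name is invented for illustration):

```java
public class SystemPropertyDemo {
    public static void main(String[] args) {
        // The CLI script performs the equivalent of this call inside the server
        System.setProperty("name", "john");
        System.out.println("Hello " + System.getProperty("name"));  // prints "Hello john"
    }
}
```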

To activate the JAX-RS endpoint, an Activator is needed:

package com.mastertheboss.jaxrs.activator;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/rest")
public class JaxRsActivator extends Application {
   /* class body intentionally left blank */
}

With the application code done, let's add the pom.xml:

<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.mastertheboss.jaxrs</groupId>
  <artifactId>rest-demo</artifactId>
  <packaging>war</packaging>
  <version>1.0.0</version>
  <name>Demo REST Service</name>
  <url>http://www.mastertheboss.com</url>
  <properties>
    <version.bootable.jar>4.0.3.Final</version.bootable.jar>
    <version.wildfly>23.0.0.Final</version.wildfly>
    <plugin.fork.embedded>true</plugin.fork.embedded>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <version.server.bom>${version.wildfly}</version.server.bom>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>jakarta.platform</groupId>
        <artifactId>jakarta.jakartaee-api</artifactId>
        <version>8.0.0</version>
        <scope>provided</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>jakarta.platform</groupId>
      <artifactId>jakarta.jakartaee-api</artifactId>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>jakarta.xml.bind</groupId>
      <artifactId>jakarta.xml.bind-api</artifactId>
      <version>2.3.3</version>
    </dependency>
  </dependencies>

  <build>
    <finalName>${project.artifactId}</finalName>
    <plugins>
      <plugin>
        <groupId>org.wildfly.plugins</groupId>
        <artifactId>wildfly-jar-maven-plugin</artifactId>
        <version>${version.bootable.jar}</version>
        <configuration>
          <feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)#${version.server.bom}</feature-pack-location>
          <layers>
            <layer>jaxrs</layer>
            <layer>management</layer>
          </layers>
          <excluded-layers>
            <layer>deployment-scanner</layer>
          </excluded-layers>
          <cli-sessions>
            <cli-session>
              <script-files>
                <script>packagescript.cli</script>
              </script-files>
            </cli-session>
          </cli-sessions>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

</project>

As you can see, we have added the wildfly-jar-maven-plugin to the build. Within its configuration, the cli-sessions element specifies a CLI script to run during packaging.

Here is the packagescript.cli:

/system-property=name:add(value=john)

Let’s build our application:

$ mvn clean install

[INFO] Executing CLI, Server configuration
WARN: can't find jboss-cli.xml. Using default configuration values.
[INFO] CLI scripts execution done.
[INFO] Executing CLI, CLI Session, scripts=[packagescript.cli], resolve-expressions=true, properties-file=null
     . . . . .
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  14.391 s
[INFO] Finished at: 2021-05-24T08:51:43+02:00
[INFO] ------------------------------------------------------------------------

The build log confirms that the CLI script execution completed during packaging.

We will now add another CLI script named runtimescript.cli:

/system-property=surname:add(value=Smith)

Now let's run the application, passing the "--cli-script" argument so that runtimescript.cli is executed at boot:

java -jar target/rest-demo-bootable.jar --cli-script=runtimescript.cli

The application server will boot and the application (“rest-demo”) will be deployed in the ROOT context:

08:53:48,366 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0010: Deployed "rest-demo.war" (runtime-name : "ROOT.war")
08:53:48,382 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
08:53:48,383 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 23.0.0.Final (WildFly Core 15.0.0.Final) started in 1611ms - Started 160 of 166 services (33 services are lazy, passive or on-demand)
08:53:48,384 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
08:53:48,384 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0054: Admin console is not enabled

Let’s test the first property:

$ curl http://localhost:8080/rest/property/name
Hello john

And then the other:

$ curl http://localhost:8080/rest/property/surname
Hello Smith

As you can see, both System Properties have been injected by the CLI scripts.
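You can also confirm the properties through the management CLI while the bootable JAR is running. This is a sketch, assuming a WildFly CLI client is installed locally and the management interface listens on the default port:

```shell
$ ./jboss-cli.sh --connect --command="/system-property=surname:read-resource"
```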

Please note that runtime execution of CLI scripts via the "--cli-script" argument is available in WildFly 23 or newer.

Using Properties in CLI scripts

It is worth mentioning that the Bootable JAR plugin allows you to store CLI properties in a separate file, just like the jboss-cli.sh tool does.

To do that, add a properties-file block in the cli-session element:

  <cli-sessions>
    <cli-session>
      <properties-file>
        cli.properties
      </properties-file>
      <script-files>
        <script>packagescript.cli</script>
      </script-files>
    </cli-session>
  </cli-sessions>

Then, in your CLI script, you can use properties:

/system-property=name:add(value=${name})

The value of ${name} is resolved in the file cli.properties:

name=john

Source code for this demo: https://github.com/fmarchioni/mastertheboss/tree/master/bootable-jar/scripts

Building Reactive Applications with WildFly

In this tutorial we will learn how to design, configure and deploy a reactive application on WildFly 23, using smallrye-reactive-messaging version 3.0.0. We will use Apache Kafka as the distributed data streaming platform for our demo application.

Reactive Streams aims to provide a standard for exchanging data streams across an asynchronous boundary. At the same time, it guarantees that the receiving side is not forced to buffer arbitrary amounts of data.
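The demand-driven contract behind this guarantee can be seen in miniature with the JDK's own java.util.concurrent.Flow API. This sketch is independent of WildFly and Reactive Messaging, and the class name is invented for illustration: the subscriber requests one element at a time, so the publisher can never flood it.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // Consume items with explicit demand: request one element at a time.
    static List<String> consume(List<String> items) {
        List<String> received = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                 // initial demand: one item
                }
                @Override public void onNext(String item) {
                    received.add(item);
                    subscription.request(1);      // ask for the next item only when ready
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            items.forEach(publisher::submit);
        }                                         // close() triggers onComplete
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(consume(List.of("tick-1", "tick-2")));  // prints [tick-1, tick-2]
    }
}
```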

To become familiar with MicroProfile Reactive Messaging, we need to grasp a few key concepts. First of all, MicroProfile Reactive Messaging is a specification that uses CDI beans to drive the flow of messages toward specific channels.

A Message is the basic interface that contains a payload to be streamed. The Message interface is parameterized to describe the type of payload it contains. Additionally, a message can carry attributes and metadata which are specific to the broker used for message exchange (e.g. Kafka).
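Conceptually, a Message is just a thin wrapper around its payload. The following self-contained stand-in (not the real MicroProfile API, which lives in org.eclipse.microprofile.reactive.messaging and also carries broker metadata) illustrates the idea of a parameterized payload holder with an of() factory:

```java
// Minimal stand-in for the Message interface, for illustration only.
interface Message<T> {
    T getPayload();

    // Wraps a plain payload, mirroring the real Message.of factory
    static <T> Message<T> of(T payload) {
        return () -> payload;
    }
}

public class MessageDemo {
    public static void main(String[] args) {
        Message<String> m = Message.of("{\"company\":\"Acme\"}");
        System.out.println(m.getPayload());
    }
}
```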

A Channel, on the other hand, is a String indicating which source or destination of messages is used.

As MicroProfile Reactive Messaging is fully governed by the CDI model, two core annotations are used to indicate if a method is a producer or consumer of messages:

@Incoming: This annotation indicates that a method consumes messages from the specified channel. The name of the channel is set as the annotation's attribute. Here is an example:

@Incoming("channel")
public void consume(Message<String> s) {
    // Consume message here
}

When you place this annotation on a method, the method will be called each time a message is sent to that channel. However, you may decide to make it clear that the method consumes a specific kind of Message, such as a KafkaMessage (which inherits from Message). Here is an example:

@Incoming("channel")
public void consume(KafkaMessage<String> s) {
    // Consume message here
}

@Outgoing: This annotation indicates that a method publishes messages to a channel. Much the same way, the name of the channel is stated in the annotation’s attribute:

@Outgoing("channel")
public Message<String> produce() {
    // Produce and return a Message implementation
}

Within the method annotated with @Outgoing, we return a concrete implementation of the Message interface.

You can also annotate a method with both @Incoming and @Outgoing so that it behaves like a Message Processor, which transforms the content of the message data:

@Incoming("from")
@Outgoing("to")
public String translate(String text) {
    return MyTranslator.translate(text);
}

Streaming Messages with WildFly

Since version 23, WildFly supports the distributed streaming of messages. However, the required subsystems (microprofile-reactive-streams-operators-smallrye and microprofile-reactive-messaging-smallrye) are not included in the default configuration. To add them, first start WildFly:

$ ./standalone.sh -Djboss.as.reactive.messaging.experimental=true -c standalone-microprofile.xml

Please note that the jboss.as.reactive.messaging.experimental property is required to use some features (such as the @Channel annotation) that are available in version 3.0.0 of Reactive Messaging.

When WildFly is up and running, connect with the CLI and execute the following short script to add the required extensions and subsystems:

batch
/extension=org.wildfly.extension.microprofile.reactive-messaging-smallrye:add
/extension=org.wildfly.extension.microprofile.reactive-streams-operators-smallrye:add
/subsystem=microprofile-reactive-streams-operators-smallrye:add
/subsystem=microprofile-reactive-messaging-smallrye:add
run-batch

reload

Bootstrapping Kafka

Apache Kafka is a distributed data streaming platform that can be used to publish, subscribe, store, and process streams of data from multiple sources in real-time at amazing speeds.

Apache Kafka can be plugged into streaming data pipelines that distribute data between systems, and as well into the systems and applications that consume that data. Since Apache Kafka reduces the need for point-to-point integrations for data sharing, it is a perfect fit for a range of use cases where high throughput and scalability are vital.

To manage the Kafka environment you need ZooKeeper, which handles naming and configuration data to provide flexible and robust synchronization within distributed systems. ZooKeeper controls the status of the Kafka cluster nodes and also keeps track of Kafka topics, partitions, and all the Kafka services you need.

There are several ways to start a Kafka cluster. The simplest one is to use the docker-compose tool to orchestrate both the Kafka and ZooKeeper container images with a single file. The docker-compose.yaml file is located at the root of our example project. (You can take a look at it here: https://github.com/fmarchioni/mastertheboss/blob/master/kafka/microprofile-kafka-demo/docker-compose.yaml )

First, make sure that docker is up and running:

$ service docker start

Then, start Apache Kafka and Zookeeper with:

$ docker-compose up
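For reference, a minimal docker-compose.yaml for such a setup typically looks like the following sketch. The image names, versions, and listener settings are illustrative; the demo's actual file is linked above:

```yaml
version: '3'
services:
  zookeeper:
    image: zookeeper:3.6
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.12-2.5.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Clients on the host connect to localhost:9092, matching the
      # bootstrap.servers setting used later in microprofile-config.properties
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
```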

Creating a Reactive Application

In order to demonstrate the powerful combination of Apache Kafka and MicroProfile streaming on WildFly, we will design a simple application which simulates a stock trading ticker, updated in real time by purchases and sales.

We will create the following channels:

1. An Outgoing producer bound to the "stock-quote" channel, where messages containing stock orders are written to a topic named "stocks".

2. An Incoming consumer bound to the "stocks" channel, which reads the messages available in the "stocks" topic.

3. An Outgoing producer bound to the "in-memory-stream" channel, which internally broadcasts the new stock quote to all available subscribers.

4. An Incoming consumer bound to the "in-memory-stream" channel, which reads the new stock quote and sends it as a Server-Sent Event to clients.

(Figure: the basic stream of messages used in our example.)

The first class we will add is QuoteGenerator, an ApplicationScoped CDI bean that produces a random quote for a company every two seconds. Here is its code:

@ApplicationScoped
public class QuoteGenerator {

    @Inject
    private MockExternalAsyncResource externalAsyncResource;

    @Outgoing("stock-quote")
    public CompletionStage<String> generate() {
        return externalAsyncResource.getNextValue();
    }
    
}

This class uses an external resource to produce the messages that will be written to Kafka through the "stock-quote" channel.

Here is our MockExternalAsyncResource, which produces a JSON string with the stock quote at regular intervals using a ScheduledExecutorService:

@ApplicationScoped
public class MockExternalAsyncResource {
    private static final int TICK = 2000;
    private Random random = new Random();
    String company[] = new String[] {
            "Acme","Globex","Umbrella","Soylent","Initech" };

    private ScheduledExecutorService delayedExecutor = Executors.newSingleThreadScheduledExecutor(Executors.defaultThreadFactory());
    private final AtomicInteger count = new AtomicInteger(0);
    private long last = System.currentTimeMillis();

    @PreDestroy
    public void stop() {
        delayedExecutor.shutdown();
    }

    public CompletionStage<String> getNextValue() {
        synchronized (this) {
            CompletableFuture<String> cf = new CompletableFuture<>();
            long now = System.currentTimeMillis();
            long next = TICK + last;
            long delay = next - now;
            last = next;
            NextQuote nor = new NextQuote(cf);
            delayedExecutor.schedule(nor, delay , TimeUnit.MILLISECONDS);
            return cf;
        }
    }

    private class NextQuote implements Runnable {
        private final CompletableFuture<String> cf;

        public NextQuote(CompletableFuture<String> cf) {
            this.cf = cf;
        }

        @Override
        public void run() {
            String _company = company[random.nextInt(5)];
            int amount = random.nextInt(100);
            int op = random.nextInt(2);
            Jsonb jsonb = JsonbBuilder.create();
            Operation operation = new Operation(op, _company, amount);
            cf.complete(jsonb.toJson(operation));

        }
    }
}

Ultimately, the getNextValue method produces a message containing a JSON string like the following:

{"amount":32,"company":"Soylent","type":0}

Next, is the Operation Class, which is a wrapper to a random stock operation:

public class Operation   {

    public static final int SELL = 0;
    public static final int BUY = 1;
    private int amount;
    private String company;
    private int type;

    // Constructors / getters / setters omitted for brevity
}

Next, the following QuoteConverter Class will do the job of converting a Stock Order into a new quotation for the Company involved in the transaction:

@ApplicationScoped
public class QuoteConverter {
    HashMap<String,Double> quotes;

    private Random random = new Random();
    @PostConstruct
    public void init() {
        quotes = new HashMap<>();
        String[] company = new String[] {
                "Acme","Globex","Umbrella","Soylent","Initech" };

        for (String c : company) {
            quotes.put(c, (double) (random.nextInt(100) + 50));
        }
    }
 
    @Incoming("stocks")
    @Outgoing("in-memory-stream")
    @Broadcast
    public String newQuote(String quoteJson) {
        Jsonb jsonb = JsonbBuilder.create();

        Operation operation = jsonb.fromJson(quoteJson, Operation.class);

        double currentQuote = quotes.get(operation.getCompany());
        double newQuote;
        double change = (operation.getAmount() / 25);

        if (operation.getType() == Operation.BUY) {
              newQuote = currentQuote + change;
        }
        else  {
            newQuote = currentQuote - change;
        }
        if (newQuote < 0) newQuote = 0;

        quotes.replace(operation.getCompany(), newQuote);
        Quote quote = new Quote(operation.getCompany(), newQuote);
        return jsonb.toJson(quote);

    }

}

The init method of this class simply bootstraps the initial quotation of every company with random values.

The newQuote method is the real heart of our transaction system. By reading the Operation data contained in the JSON payload, a new quote is generated using a basic algorithm: for every 25 stocks transacted, there is one point's impact on the value of the stock. The returned JSON string wraps the Quote class and is broadcast to all matching subscribers of the "in-memory-stream" channel by means of the @Broadcast annotation on top of the method.
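The pricing rule can be distilled into a few lines of plain Java (class and method names are illustrative, not part of the project):

```java
public class QuoteRule {
    static final int SELL = 0;
    static final int BUY  = 1;

    // For every 25 shares traded, the quote moves by one point:
    // up for a BUY, down for a SELL, and never below zero.
    static double apply(double current, int amount, int type) {
        double change = amount / 25;   // integer division, as in the tutorial code
        double next = (type == BUY) ? current + change : current - change;
        return Math.max(next, 0);
    }

    public static void main(String[] args) {
        System.out.println(apply(100, 50, BUY));   // 102.0
        System.out.println(apply(100, 50, SELL));  // 98.0
        System.out.println(apply(2, 100, SELL));   // 0.0 (clamped)
    }
}
```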

For the sake of completeness, we also include the Quote Java Class, which will be sent as JSON to the Client:

public class Quote {
    String company;
    Double value;

    public Quote(String company, Double value) {
        this.company = company;
        this.value = value;
    }

    // Getters and setters omitted for brevity
}

Within our example, we have the following subscriber for the “in-memory-stream” channel, where the Quote is published:

@Path("/quotes")
public class QuoteEndpoint {

    @Inject
    @Channel("in-memory-stream")
    Publisher<String> quote;

    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType("text/plain")
    public Publisher<String> stream() {

        return quote;
    }

}

The QuoteEndpoint is our REST Endpoint. Within this, we are using the @Channel qualifier to inject the Channel “in-memory-stream” into the Bean.

All of the above components need a broker where the stock quotes are published and from where they can be read. Here is the META-INF/microprofile-config.properties file which ties all the pieces together:

mp.messaging.connector.smallrye-kafka.bootstrap.servers=localhost:9092
# Kafka sink (we write to it)
mp.messaging.outgoing.stock-quote.connector=smallrye-kafka
mp.messaging.outgoing.stock-quote.topic=stocks
mp.messaging.outgoing.stock-quote.value.serializer=org.apache.kafka.common.serialization.StringSerializer
# Configure the Kafka source (we read from it)
mp.messaging.incoming.stocks.connector=smallrye-kafka
mp.messaging.incoming.stocks.topic=stocks
mp.messaging.incoming.stocks.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

The first block configures the Kafka destination, also known as the sink, where we write the stock quotes produced by the QuoteGenerator.

In the second block, we configure the source topic and connector from which we read the stock quotes as a JSON-serialized stream.

What is left to do is to add a client application which captures the Server-Sent Events and displays their content in a nicely formatted data table. For the sake of brevity, we will show here just the core JavaScript code that receives the Server-Sent Events:

<script>
    var source = new EventSource("/reactive/rest/quotes/stream");
    source.onmessage = function (event) {
        var data = JSON.parse(event.data);
        var company = data['company'];
        var value = data['value'];
        document.getElementById(company).innerHTML = value;
    };
</script>

The above code is in the index.html page you will find in the source code of this example.

You can deploy the application on WildFly with:

$ mvn clean install wildfly:deploy

And that's our Stock Ticker demo running on WildFly.

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/kafka/microprofile-kafka-demo

Many thanks to Kabir Khan for his help in fixing a couple of issues I’ve hit during the example set up and for providing a first excellent overview of Reactive Messaging in WildFly in this post: https://www.wildfly.org/news/2021/03/11/WildFly-MicroProfile-Reactive-specifications-feature-pack-2.0/

How to live reload your WildFly applications

The next major release of WildFly Bootable JAR (3.0.0) is going to bring a super interesting feature. You can use a Maven goal to achieve live reload of your applications. Let’s check it out.

In order to use the live reload feature, you have to use version 3 of the WildFly Bootable JAR plugin:

 <build>
    <finalName>${project.artifactId}</finalName>
    <plugins>
      <plugin>
        <groupId>org.wildfly.plugins</groupId>
        <artifactId>wildfly-jar-maven-plugin</artifactId>
        <version>3.0.0.Beta1</version>
        <configuration>
          <feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)#${version.server.bom}</feature-pack-location>
          <layers>
            <layer>jaxrs</layer>
            <layer>management</layer>
          </layers>
          <excluded-layers>
            <layer>deployment-scanner</layer>
          </excluded-layers>
          <plugin-options>
            <jboss-fork-embedded>true</jboss-fork-embedded>
          </plugin-options>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

By updating to the new plugin version, you can use the new "dev-watch" goal, which builds and deploys your application and then watches for changes, redeploying it automatically.

Let’s try it with a simple REST Service:

@Path("/")
public class SimpleRESTService {

	@GET
	public String hello()
	{
		return "Hello from Bootable JAR!\n ";
	}

}

Now build and run your application with:

$ mvn wildfly-jar:dev-watch

Check the application endpoint:

$ curl http://localhost:8080/rest
Hello from Bootable JAR!

Now edit the endpoint so that it returns a different String:

	@GET
	public String hello()
	{
		return "Hello from Bootable JAR changed!\n ";
	}

As you save the file, you will see that the application is automatically re-deployed:

14:42:35,200 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 18) WFLYUT0022: Unregistered web context: '/' from server 'default-server'
14:42:35,206 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0028: Stopped deployment ROOT.war (runtime-name: ROOT.war) in 7ms
14:42:35,228 INFO  [org.jboss.as.server] (management-handler-thread - 4) WFLYSRV0009: Undeployed "ROOT.war" (runtime-name: "ROOT.war")
14:42:35,239 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0027: Starting deployment of "ROOT.war" (runtime-name: "ROOT.war")
14:42:35,345 INFO  [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 18) RESTEASY002225: Deploying javax.ws.rs.core.Application: class com.mastertheboss.jaxrs.activator.JaxRsActivator
14:42:35,347 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 18) WFLYUT0021: Registered web context: '/' for server 'default-server'
14:42:35,352 INFO  [org.jboss.as.server] (management-handler-thread - 4) WFLYSRV0010: Deployed "ROOT.war" (runtime-name : "ROOT.war")

Check the application endpoint again:

$ curl http://localhost:8080/rest
Hello from Bootable JAR changed!

As you can see, we managed to reload our application without any manual redeployment. Pretty neat, isn't it?

You can check the source code of this example here: https://github.com/fmarchioni/mastertheboss/tree/master/bootable-jar/dev-watch

Turn your WildFly applications into bootable JARs

WildFly bootable JARs have reached version 2.0.0.Final, so we have updated this tutorial accordingly. WildFly bootable JARs are a new way to package applications which specifically targets microservice deployments, allowing us to boot our Enterprise/MicroProfile application from a single JAR file!

Thanks to Galleon technology, it is possible to combine layers of the application server in order to produce a WildFly distribution tailored to your application needs. On top of that, a Maven plugin has been developed to combine your application and the server into a single bootable JAR.

How to create a bootable WildFly JAR

In order to create a bootable WildFly JAR, you need to include the wildfly-jar-maven-plugin, which transforms your application into a bootable WildFly application server with the application deployed on top of it:

<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-jar-maven-plugin</artifactId>
  <version>2.0.0.Final</version>
</plugin> 

In order to specify which WildFly version you will be using and which Galleon layers have to be included in your server structure, you will use the configuration element of the wildfly-jar-maven-plugin:

<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-jar-maven-plugin</artifactId>
  <version>2.0.0.Final</version>
  <configuration>
    <feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)#${version.server.bom}</feature-pack-location>
    <layers>
      <layer>jaxrs</layer>
      <layer>management</layer>
    </layers>
    <excluded-layers>
      <layer>deployment-scanner</layer>
    </excluded-layers>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin> 

A few notes about the configuration. First of all, the feature-pack-location specifies the WildFly version to be provisioned. For example, to provision WildFly 21:

<feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)#21.0.0.Final</feature-pack-location>

If you don’t specify the version, the latest WildFly server will be provisioned:

<feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)</feature-pack-location>

Within the layers and excluded-layers sections you can specify which layers are included or excluded when building the WildFly server. For example, to provision a server containing jaxrs and management support without the deployment scanner:

<layers>
    <layer>jaxrs</layer>
    <layer>management</layer>
</layers>

<excluded-layers>
    <layer>deployment-scanner</layer>
</excluded-layers>

A concrete application example

Let's transform a standard JAX-RS application into a bootable JAR file. The original application is taken from our RESTEasy tutorial.

The updated source code with the wildfly-jar-maven-plugin is available here: https://github.com/fmarchioni/mastertheboss/tree/master/bootable-jar/basic

You can build the application as usual with:

mvn install

You will notice that the JAR file rest-demo-bootable.jar has been created. You can run it with:

java -jar target/rest-demo-bootable.jar

You should expect the following output, which shows that our example application rest-demo.war was deployed in the root web context:

10:09:40,069 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-5) WFLYUT0006: Undertow HTTP listener default listening on 127.0.0.1:8080
10:09:40,076 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0027: Starting deployment of "rest-demo.war" (runtime-name: "ROOT.war")
10:09:41,070 INFO  [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 16) RESTEASY002225: Deploying javax.ws.rs.core.Application: class com.mastertheboss.jaxrs.activator.JaxRsActivator
10:09:41,117 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 16) WFLYUT0021: Registered web context: '/' for server 'default-server'
10:09:41,120 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0010: Deployed "rest-demo.war" (runtime-name : "ROOT.war")
10:09:41,142 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
10:09:41,145 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 21.0.0.Final (WildFly Core 13.0.1.Final) started in 2335ms - Started 154 of 158 services (32 services are lazy, passive or on-demand)
10:09:41,147 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management

You can test any of the REST services (which have been bound to the root web context) as follows:

curl -s http://localhost:8080/rest/itemListJson | jq
[
  {
    "description": "computer",
    "price": 2500
  },
  {
    "description": "chair",
    "price": 100
  },
  {
    "description": "table",
    "price": 200
  }
]

Creating a hollow JAR

By setting the hollow-jar option to true in the pom.xml, you can generate a hollow JAR of the application server. That is, you create just the runtime environment to start WildFly, in case you want to provide the application at runtime, for example through the management interface:

 <hollow-jar>true</hollow-jar>

You can boot the hollow-jar application server just the same way:

java -jar target/rest-demo-bootable.jar

You should expect that the application server starts up but no applications are deployed on top of it:

10:17:07,781 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-5) WFLYUT0006: Undertow HTTP listener default listening on 127.0.0.1:8080
10:17:07,843 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
10:17:07,846 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 21.0.0.Final (WildFly Core 13.0.1.Final) started in 1202ms - Started 110 of 114 services (29 services are lazy, passive or on-demand)
10:17:07,847 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
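You can then provide the deployment when launching the hollow JAR. One option, described in the Bootable JAR documentation, is the --deployment launch argument (the WAR path below is illustrative):

```shell
$ java -jar target/rest-demo-bootable.jar --deployment=target/rest-demo.war
```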

Checking the start-up time and memory usage of the bootable JAR

Running the application as a bootable JAR shows a remarkable improvement in server start-up time, which drops to roughly 1.2 seconds:

WFLYSRV0025: WildFly Full 20.0.0.Final (WildFly Core 12.0.1.Final) started in 1185ms - Started 83 of 87 services (23 services are lazy, passive or on-demand)

Also, if you check the resident memory size of the hollow JAR application, it takes about 213 MB, as you can see from the top output:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND 
17470 frances+  20   0   11.8g 213184  16984 S   0.3   0.7   1:15.11 java  

That’s less than half of the standard WildFly server distribution with our application deployed on top of it:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND 
16307 frances+  20   0 1936116 472296  20300 S   2.0   1.5   0:28.03 java  

Conclusion

The WildFly bootable JAR is a remarkable option for building self-contained applications using the technologies you are familiar with and the management options of the WildFly application server. In the next tutorial we will learn how to create a clustered application using the WildFly bootable JAR: Creating a clustered application with WildFly bootable JAR

Getting started with GraphQL using Java applications

This tutorial will introduce you to the MicroProfile GraphQL specification, covering the basics of this emerging technology and showing a full example application developed on the WildFly application server, which is a MicroProfile compatible runtime environment.

GraphQL is a specialized query language with a syntax that defines how clients request data from a server. One of GraphQL’s key aspects is its ability to aggregate data from various sources behind a single endpoint API.

GraphQL is already widely used in Microservices architectures and there are several Java-based GraphQL libraries available. Now that a specification for GraphQL is available in the MicroProfile project, we can expect it to help increase the popularity of this API, reaching both the user community and vendor support.

GraphQL in a nutshell

Let’s start with the basic GraphQL Types.

The Query type describes the data that you want to fetch from the GraphQL server.

GraphQL queries access not just the properties of one resource but also smoothly follow references between them. While typical REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request.

For example, consider a relation between users and their todo items. The query below will fetch all the users and their publicly visible todos:

 query {
   users {
     name
     todos {
       title
     }
   }
 }

The Mutation is the second kind of “root type”, which lets us write data to the DB. Think of a Mutation as analogous to the POST and PUT requests of REST. Let’s look at an example:

mutation createUser{
  addUser(name: "John", age: 34) {
    id
  }
}

What this means is: we are defining a mutation called “createUser” that adds a user with name “John” and age 34. The selection set in the mutation body determines the shape of the response: the server replies with a JSON document carrying the id of the newly created user.

data : {
  addUser : {
    id : "a21c2h"
  }
}

The third type of operation available in GraphQL is Subscription. A Subscription gives the client the ability to listen to real-time updates on data. Subscriptions use web sockets under the hood. Let’s take a look at an example:

subscription listenLikes {
  listenLikes {
    name
    likes
  }
}

The above subscription responds with the list of users with their first name and number of likes whenever the like count of the user changes.

Setting up a GraphQL Server with WildFly

In order to set up a GraphQL server that responds to queries, mutations and subscriptions we will create a custom WildFly distribution using the Galleon provisioning tool.

  1. Download the Galleon tool from here: https://github.com/wildfly/galleon/releases
  2. Then, download the provision.xml file which contains the feature pack definition for GraphQL: https://github.com/wildfly-extras/wildfly-graphql-feature-pack/blob/master/provision.xml

Now, create the custom WildFly distribution using the galleon.sh script, found in the bin folder of the Galleon tool:

$ ./galleon.sh provision ./provision.xml --dir=wildfly-graphql

A folder named “wildfly-graphql” will be created with the latest WildFly version. The XML configuration file includes the graphql extension in it:

<extension module="org.wildfly.extension.microprofile.graphql-smallrye"/>

Creating a GraphQL application

Our example consists of a CDI Bean annotated with @GraphQLApi. This annotation indicates that the CDI bean will be a GraphQL endpoint.

The Bean contains a couple of @Query methods and two @Mutation methods. The @Query methods are used to retrieve a graph from the Item objects. The @Mutation methods are used to insert a new Item or remove an existing one:

package com.mastertheboss.graphql.sample;

import com.mastertheboss.graphql.model.Item;
import org.eclipse.microprofile.graphql.*;

import javax.enterprise.context.ApplicationScoped;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

@GraphQLApi
@ApplicationScoped
public class ItemService {

    List<Item> items = new ArrayList<>();

    @Query("item")
    @Description("Get a single Item by id")
    public Item getItem(@Name("id") Integer id){
        return findItem(id);
    }

    @Query("allItems")
    @Description("Get all Items")
    public List<Item> getAllItems() {
        return items;
    }

    @Mutation
    public Item createItem(@Name("item") Item item) {
        items.add(item);
        return item;
    }
    @Mutation
    public Item deleteItem(Integer id) {
        Item item = findItem(id);
        items.removeIf(e -> e.getId().equals(id));
        return item;
    }

    public Item findItem(Integer id) {
        // Compare Integer values with equals rather than ==
        return items.stream().filter(a -> a.getId().equals(id)).findFirst().orElse(null);
    }
 }

The Item object is a simple Java Bean. For the sake of brevity, just fields are included here:

   public class Item implements Serializable {

	private Integer id;
	private String type;
	private String model;
	private int price;
           
    // Getters and Setters omitted 
   }
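For reference, here is a minimal complete sketch of the bean; the constructor and the toString format are assumptions (the toString matches the output printed by the Servlet client at the end of this tutorial):

```java
import java.io.Serializable;

public class Item implements Serializable {

    private Integer id;
    private String type;
    private String model;
    private int price;

    public Item() { }

    public Item(Integer id, String type, String model, int price) {
        this.id = id;
        this.type = type;
        this.model = model;
        this.price = price;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getType() { return type; }
    public void setType(String type) { this.type = type; }
    public String getModel() { return model; }
    public void setModel(String model) { this.model = model; }
    public int getPrice() { return price; }
    public void setPrice(int price) { this.price = price; }

    @Override
    public String toString() {
        // Human-readable form used when printing query results
        return "Item{id=" + id + ", type='" + type + "', model='" + model + "', price=" + price + "}";
    }
}
```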

Now, we will build and deploy the application on WildFly and show different ways to test it.

Building the GraphQL example

In order to build the GraphQL example application, you need to include the microprofile-graphql-api dependency (packaged as a pom). Then, include the smallrye-graphql-ui-graphiql artifact:

<dependency>
    <groupId>org.eclipse.microprofile.graphql</groupId>
    <artifactId>microprofile-graphql-api</artifactId>
    <version>1.0.3</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>io.smallrye</groupId>
    <artifactId>smallrye-graphql-ui-graphiql</artifactId>
    <scope>provided</scope>        
</dependency>

You can deploy the Web application as a regular WildFly application:

mvn clean install wildfly:deploy

Retrieving the GraphQL schema

Once the ItemService is deployed, a schema will be automatically generated from the @Query and @Mutation annotations we have included in our service. In order to retrieve the schema, we can execute a simple request. Assuming that the application is running under the “mp-graphql” Web context, the schema is available under the “graphql” path:

$ curl http://localhost:8080/mp-graphql/graphql/schema.graphql
type Item {
  id: Int
  model: String
  price: Int!
  type: String
}

"Mutation root"
type Mutation {
  createItem(item: ItemInput): Item
  deleteItem(id: Int): Item
}

"Query root"
type Query {
  "Get all Items"
  allItems: [Item]
  "Get a single Item by id"
  item(id: Int): Item
}

input ItemInput {
  id: Int
  model: String
  price: Int!
  type: String
}

Testing with the GraphQL User Interface

The simplest way to test our GraphQL Query and Mutation is via the UI which is available once we have included the following dependency in our application:

<dependency>
    <groupId>io.smallrye</groupId>
    <artifactId>smallrye-graphql-ui-graphiql</artifactId>
    <version>1.0.7</version>
</dependency>

The GraphQL UI will be available, upon deployment of your application, under the context http://localhost:8080/[app-context]/graphql-ui/

Let’s start by adding an Item object using the available Mutation:

mutation Add {
  createItem(item: {
      id: 1,
      type: "Laptop"
      model: "Lenovo"
      price: 500
  	}
  )
  {
    type
    model
    price
  }
}

The UI returns the created Item, including the type, model and price fields requested in the mutation.

Next, let’s try to execute a Query to fetch all available Item objects, returning a Graph which includes just the “type” and “model” fields:

query all {
  allItems {
    type
    model
  }
}

The above Query returns:

{
  "data": {
    "allItems": [
      {
        "type": "Laptop",
        "model": "Lenovo"
      }
    ]
  }
}

You can also execute the following Query, which fetches a single Item by id:

query item {
  item(id: 1) {
    type
    model
  }
}

Finally, let’s delete the existing Item with the following Mutation:

mutation Delete {
  deleteItem(id :1){
    price
    model
  }
}

That will return a Graph of the deleted Item:

{
  "data": {
    "deleteItem": {
      "price": 500,
      "model": "Lenovo"
    }
  }
}

Testing with cURL

Since curl is a command line tool available on any Linux machine, it’s the simplest way to run GraphQL Queries that are not particularly complex. Without further ado, here’s a curl command that executes a GraphQL query against our API, returning the type and model of all available Items:

curl -s  -X POST   -H "Content-Type: application/json"   --data '{ "query": "{ allItems { type model} }" }'   http://localhost:8080/mp-graphql/graphql | jq

The returned output is:

{
  "data": {
    "allItems": [
      {
        "type": "Laptop",
        "model": "Lenovo"
      }
    ]
  }
}

Testing with a Java Client

A GraphQL service can also be tested with a plain Java client, for example a Servlet, which executes the available GraphQL queries:

package com.mastertheboss.graphql.sample.client;

import com.mastertheboss.graphql.model.Item;
import io.smallrye.graphql.client.typesafe.api.GraphQlClientBuilder;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.List;


@WebServlet("/demo")
public class GraphQLServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        ItemApi api = GraphQlClientBuilder.newBuilder().endpoint("http://localhost:8080/"+ request.getContextPath()+"/graphql").build(ItemApi.class);

        List<Item> list = api.getAllItems();
        for (Item i:list) {
            response.getWriter().append(i.toString());
        }

    }

}

The GraphQlClientBuilder registers a new endpoint for ItemApi using the application’s Web context. ItemApi is merely an interface declaring the available Query and Mutation methods:

public interface ItemApi {
    public Item getItem(@Name("id") Integer id);
    public List<Item> getAllItems();
    public Item createItem(Item item);
    public Item deleteItem(Integer id);
}

In order to build an application which includes the GraphQL client API, you also have to include the following dependency:

<dependency>
    <groupId>io.smallrye</groupId>
    <artifactId>smallrye-graphql-client</artifactId>
    <version>1.0.7</version>
</dependency>

Now you can test your Servlet Client as follows:

$ curl http://localhost:8080/mp-graphql/demo
Item{id=1, type='Laptop', model='Lenovo', price=500}
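As an alternative that avoids the typesafe client dependency, you can post the raw GraphQL query with the JDK’s built-in java.net.http.HttpClient (Java 11+). This is a sketch assuming the same endpoint URL; the payload helper simply wraps the query into the JSON envelope the endpoint expects:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphQLQueryClient {

    // Wrap a GraphQL query into the JSON envelope expected by the endpoint
    static String payload(String query) {
        return "{ \"query\": \"" + query.replace("\"", "\\\"") + "\" }";
    }

    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/mp-graphql/graphql"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload("{ allItems { type model } }")))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

This issues the same POST request as the curl example above.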

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/micro-services/mp-graphql

References: https://www.wildfly.org/news/2020/08/13/Introducing-the-WildFly-GraphQL-feature-pack/

Creating a clustered application with WildFly bootable JAR

In this article we will learn how to create a simple clustered HTTP application using the WildFly bootable JAR option.

First of all, we recommend having a look at the first tutorial which gives some basic exposure to WildFly bootable jars: Turn your WildFly applications in bootable JARs

That being said, in order to provision a bootable WildFly instance with HTTP Clustering we need the web-server and web-clustering layer in our wildfly-jar-maven-plugin configuration:

<build>
        <finalName>${project.artifactId}</finalName>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-jar-maven-plugin</artifactId>
                <version>2.0.0.Final</version>
                <configuration>
                    <feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)#${version.wildfly}</feature-pack-location>
                    <layers>
                        <layer>web-server</layer>
                        <layer>web-clustering</layer>
                    </layers>
                    <excluded-layers>
                        <layer>deployment-scanner</layer>
                    </excluded-layers>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

Please notice that the web-clustering layer does not include mod_cluster but uses Infinispan and JGroups to form the cluster. Therefore, putting a mod_cluster front-end in the picture won’t enable discovery of servers.

Next, we will create a minimal Servlet application which simply stores a counter in the HTTP session, so that we can check that session replication is working:

@WebServlet(urlPatterns = {"/"})
public class DemoServlet extends HttpServlet {

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException  {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        Integer count = 1;
        HttpSession session = request.getSession(false);
        if (session == null) {
            session = request.getSession();
            out.println("Session created: " + session.getId());
            session.setAttribute("counter", count);
        } else {
            out.println("Welcome Back!");
            count = (Integer) session.getAttribute("counter");
            ++count;
            session.setAttribute("counter", count);
        }
        out.println("<br>Counter " + count);
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }


    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

}

Also, mark the application as distributable in web.xml:

<web-app  xmlns="http://java.sun.com/xml/ns/j2ee"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
                              http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
          version="2.4">
    <distributable/>
</web-app>

You can build your application with:

mvn package

Now open two shells and start two instances of the application server:

$ java -jar target/cluster-demo-bootable.jar -Djboss.node.name=node1 
$ java -jar target/cluster-demo-bootable.jar -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=100

You will see from the Console logs that the Cluster has been created:

09:57:39,478 INFO  [org.infinispan.CLUSTER] (thread-6,ejb,node1) ISPN000094: Received new cluster view for channel ejb: [node1|1] (2) [node1, node2]
09:57:39,492 INFO  [org.infinispan.CLUSTER] (thread-6,ejb,node1) ISPN100000: Node node2 joined the cluster
09:57:40,160 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t1) [Context=default-server] ISPN100002: Starting rebalance with members [node1, node2], phase READ_OLD_WRITE_ALL, topology id 2

Now you can try reaching the Servlet (deployed on the Root context) at http://localhost:8080

Now you can try crashing this server (Control+C for example). You will see from the other server’s logs that:

09:57:55,687 INFO  [org.infinispan.CLUSTER] (thread-7,ejb,node2) ISPN100001: Node node1 left the cluster

Now, reach the second server (on port 8180, because of the port offset of 100) and check that the counter is still increasing.
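You can also script this failover check with the JDK HTTP client (Java 11+). A sketch, assuming node2 is reachable on port 8180 thanks to the port offset of 100; the CookieManager carries the JSESSIONID across requests, so both nodes see the same HTTP session:

```java
import java.io.IOException;
import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SessionFailoverCheck {

    // Extract the counter value from the servlet's "Counter N" output
    static int parseCounter(String body) {
        int idx = body.lastIndexOf("Counter ");
        return Integer.parseInt(body.substring(idx + "Counter ".length()).trim());
    }

    static String get(HttpClient client, String url) throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // The shared CookieManager keeps the JSESSIONID between calls
        HttpClient client = HttpClient.newBuilder().cookieHandler(new CookieManager()).build();
        System.out.println(get(client, "http://localhost:8080/")); // node1: session is created
        System.out.println(get(client, "http://localhost:8180/")); // node2: counter keeps increasing
    }
}
```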

You can check the source code for this example at: https://github.com/fmarchioni/mastertheboss/tree/master/bootable-jar/cluster

A simple example of MicroProfile REST Client API

MicroProfile REST Client API provides a type-safe approach to invoke RESTful services over HTTP. It relies on JAX-RS APIs for consistency and easier reuse, therefore you won’t need a specific extension to be added in WildFly to use this API. Let’s see a sample application which is composed of a Server Endpoint and a REST Client Endpoint which acts as an interface for the Server Endpoint.

Requirements:

In order to run this tutorial you will need:

  • WildFly application server version 19 or newer
  • Maven
  • JDK (at least 1.8)

Creating the REST Server Endpoint

The Server Endpoint exposes our SimpleRESTService:

@Path("/")
public class SimpleRESTService {
	@GET
	@Path("/text")
	public String getHello () 
	{
		return "hello world!";
	} 
	@GET
	@Path("/json")
	@Produces(MediaType.APPLICATION_JSON)
	public SimpleProperty getPropertyJSON () 
	{
        SimpleProperty p = new SimpleProperty("key","value");
		return p;
	}
	@GET
	@Path("/xml")
	@Produces(MediaType.APPLICATION_XML)
	public SimpleProperty getPropertyXML () 
	{
        SimpleProperty p = new SimpleProperty("key","value");
		return p;
	}
}

This Endpoint is activated by the following JAX-RS activator:

@ApplicationPath("/api")
public class JaxRsActivator extends Application {
    
}

You can deploy the Endpoint as usual, as follows:

$ mvn clean package wildfly:deploy

Check that the application has been deployed successfully:

15:12:46,302 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0027: Starting deployment of "ee-microprofile-rest-server.war" (runtime-name: "ee-microprofile-rest-server.war")
15:12:46,374 INFO  [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 116) RESTEASY002225: Deploying javax.ws.rs.core.Application: class com.itbuzzpress.jaxrs.activator.JaxRsActivator
15:12:46,378 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 116) WFLYUT0021: Registered web context: '/ee-microprofile-rest-server' for server 'default-server'

Now let’s move to the REST Client application.

Creating the REST Client Endpoint

In order to create a MicroProfile REST Client we need to add an interface which uses the proper JAX-RS and MicroProfile annotations:

@RegisterRestClient(baseUri = "http://localhost:8080/ee-microprofile-rest-server/rest")
@Path("/api")
public interface SimpleRESTServiceItf {
	@GET
	@Path("/text")
	public String getHello();

	@GET
	@Path("/json")
	@Produces(MediaType.APPLICATION_JSON)
	public SimpleProperty getPropertyJSON();

	@GET
	@Path("/xml")
	@Produces(MediaType.APPLICATION_XML)
	public SimpleProperty getPropertyXML();

}

This interface registers a remote REST Endpoint, using the @RegisterRestClient annotation, that will connect to the Server Endpoint. For this purpose, we have declared, as interface methods, the same set of APIs which are available in the remote Endpoint.

Next, we need an actual JAX-RS Resource that will be available in the Client application as a proxy to the Remote Endpoint:

@Path("/proxy")
@ApplicationScoped
public class SimpleRESTEndpoint {

    @Inject
    @RestClient
    SimpleRESTServiceItf service;

    @GET
    @Path("/text")
    public String getHello() {
        return service.getHello();
    }

    @GET
    @Path("/json")
    @Produces(MediaType.APPLICATION_JSON)
    public SimpleProperty getPropertyJSON(){
        return service.getPropertyJSON();
    }

    @GET
    @Path("/xml")
    @Produces(MediaType.APPLICATION_XML)
    public SimpleProperty getPropertyXML() {
        return service.getPropertyXML();
    }

}

By injecting the SimpleRESTServiceItf as @RestClient, we will be able to proxy the remote REST Endpoint.

Now we are ready to build, deploy and test our application.

Build and deploy the application:

$ mvn clean package wildfly:deploy

Let’s check first the Server Endpoint’s text method:

$ curl http://localhost:8080/ee-microprofile-rest-server/api/text
hello world!

Now, try to reach the same method using the client REST API:

$ curl http://localhost:8080/ee-microprofile-rest-client/proxy/text
hello world!

Configuring the REST client base URL/URI dynamically

To configure the base URI of the REST client dynamically, the MicroProfile REST Client can use the MicroProfile Config specification.

The name of the property for the base URI of our REST client needs to follow a certain convention. Create a new file `src/main/resources/META-INF/microprofile-config.properties` with the following content:

com.itbuzzpress.jaxrs.service.SimpleRESTServiceItf/mp-rest/url=http://localhost:8080/ee-microprofile-rest-server
com.itbuzzpress.jaxrs.service.SimpleRESTServiceItf/mp-rest/scope=javax.inject.Singleton

This configuration means that:

  • All requests performed using com.itbuzzpress.jaxrs.service.SimpleRESTServiceItf will use http://localhost:8080/ee-microprofile-rest-server as a base URL.
  • The default scope of com.itbuzzpress.jaxrs.service.SimpleRESTServiceItf will be @Singleton. Supported scope values are @Singleton, @Dependent, @ApplicationScoped and @RequestScoped. The default scope is @Dependent.
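The property prefix is simply the fully qualified name of the client interface. A tiny sketch illustrating how the keys are derived (the helper method is hypothetical; only the naming convention comes from the MicroProfile REST Client specification, and java.lang.Runnable stands in for SimpleRESTServiceItf so the example compiles standalone):

```java
public class RestClientConfigKeys {

    // MicroProfile REST Client resolves configuration under
    // "<fully qualified interface name>/mp-rest/<property>"
    static String configKey(Class<?> clientInterface, String property) {
        return clientInterface.getName() + "/mp-rest/" + property;
    }

    public static void main(String[] args) {
        System.out.println(configKey(Runnable.class, "url"));   // java.lang.Runnable/mp-rest/url
        System.out.println(configKey(Runnable.class, "scope")); // java.lang.Runnable/mp-rest/scope
    }
}
```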

This tutorial is an excerpt from the book “Practical Enterprise Development” available on ItBuzzPress.

Source code available here: https://bit.ly/3busStm

Getting started with OpenAPI on WildFly

The Microprofile OpenAPI can be used to document your REST endpoint using annotations or a pre-generated JSON in a standard way. In this tutorial we will learn how to leverage this API on applications deployed on WildFly.

Documenting REST Services is extremely useful since, as the documentation follows a standard, it can be consumed by a range of tools such as those provided by the Swagger suite. These tools let you do all sorts of things such as design, edit and test a REST API documented by an OpenAPI document. Documents generated with OpenAPI typically contain:

  • A list of common objects
  • A list of servers the API is available from
  • A list of each path in the API as well as what parameters it accepts

Here is a sample application which documents the org.eclipse.microprofile.openapi.annotations.responses.APIResponse for a set of org.eclipse.microprofile.openapi.annotations.Operation:

package com.itbuzzpress.microprofile.service;

import java.io.InputStream;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;

import javax.inject.Inject;
import javax.ws.rs.*;
import javax.ws.rs.container.*;
import javax.ws.rs.core.*;
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.openapi.annotations.Operation;
import org.eclipse.microprofile.openapi.annotations.responses.APIResponse;
import org.eclipse.microprofile.openapi.annotations.responses.APIResponses;
import org.eclipse.microprofile.openapi.annotations.tags.Tag;
import com.itbuzzpress.microprofile.model.SimpleProperty;
@Tag(name = "OpenAPI Example", description = "Get a text in various formats")
@Path("/simple")
public class SimpleRESTService {
	@GET
	@Operation(description = "Getting Hello Text")
	@APIResponse(responseCode = "200", description = "Successful, Text")
	@Path("/text")
	public String getHello () 
	{
		return "hello world!";
	}

	@GET
	@Operation(description = "Getting Hello JSON")
	@APIResponse(responseCode = "500", description = "Error in generating JSON")
	@Path("/json")
	@Produces(MediaType.APPLICATION_JSON)
	public SimpleProperty getPropertyJSON () 
	{
        SimpleProperty p = new SimpleProperty("key","value");
		return p;
	}
	@GET
	@Path("/xml")
	@Operation(description = "Getting Hello XML")
	@APIResponse(responseCode = "200", description = "Successful, return XML")
	@Produces(MediaType.APPLICATION_XML)
	public SimpleProperty getPropertyXML () 
	{
        SimpleProperty p = new SimpleProperty("key","value");
		return p;
	}
}

This class also includes a org.eclipse.microprofile.openapi.annotations.tags.Tag annotation that stores some meta-information you can use to help organize your API endpoints.

In order to build this class, you will need the microprofile-openapi-api dependency in your pom.xml:

<dependency>
    <groupId>org.eclipse.microprofile.openapi</groupId>
    <artifactId>microprofile-openapi-api</artifactId>
</dependency>

Before deploying the example, check if the openapi extension is included in your configuration file:

[standalone@localhost:9990 /] /extension=org.wildfly.extension.microprofile.  [TAB]
org.wildfly.extension.microprofile.config-smallrye           org.wildfly.extension.microprofile.metrics-smallrye
org.wildfly.extension.microprofile.fault-tolerance-smallrye  org.wildfly.extension.microprofile.health-smallrye           
org.wildfly.extension.microprofile.opentracing-smallrye      org.wildfly.extension.microprofile.jwt-smallrye  

In the current release (WildFly 19) it is not included in the default profile, so you have two main options:

Option 1) Start WildFly using a Microprofile profile such as:

$ ./standalone.sh -c standalone-microprofile.xml

Option 2) Activate the openapi-smallrye extension and subsystem in your server profile as follows:

[standalone@localhost:9990 /]  /extension=org.wildfly.extension.microprofile.openapi-smallrye:add()
{"outcome" => "success"}

Next, add its subsystem:

[standalone@localhost:9990 /] /subsystem=microprofile-openapi-smallrye:add()
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-reload" => true,
        "process-state" => "reload-required"
    }
}

You can retrieve the generated document through the openapi context:

$ curl -v http://localhost:8080/openapi

And this is the description of the endpoint, generated in YAML format:

openapi: 3.0.1
info:
  title: Generated API
  version: "1.0"
tags:
- name: OpenAPI Example
  description: Get a text in various formats
paths:
  /rest/simple/json:
    get:
      tags:
      - OpenAPI Example
      description: Getting Hello JSON
      responses:
        500:
          description: Error in generating JSON
  /rest/simple/text:
    get:
      tags:
      - OpenAPI Example
      description: Getting Hello Text
      responses:
        200:
          description: Successful, Text
  /rest/simple/xml:
    get:
      tags:
      - OpenAPI Example
      description: Getting Hello XML
      responses:
        200:
          description: Successful, return XML

If you prefer, you can also have the endpoint description in JSON format by appending the format parameter:

$ curl -v http://localhost:8080/openapi?format=JSON  

Please note that on some browsers you might get a “406 – Not Acceptable” error if you try to access the openapi URI (http://localhost:8080/openapi). In this case, just request it using cURL as in the above example.

This tutorial is an excerpt from the book “Practical Enterprise Development” available on ItBuzzPress.

Source code for this tutorial available here: https://bit.ly/39o9nkL