Getting started with MongoDB and Quarkus

This tutorial covers all the steps required for creating a REST application with the MongoDB NoSQL database and Quarkus.

MongoDB is a document-oriented NoSQL database which became popular in the last decade. It can be used for high-volume data storage as a replacement for relational databases. Instead of using tables and rows, MongoDB makes use of collections and documents. A document consists of key-value pairs and is the basic unit of data in MongoDB. A collection, on the other hand, contains sets of documents and is the equivalent of a relational database table.
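For example, a record that a relational database would store as a table row could be stored as the following document inside a customers collection (the field names and values here are purely illustrative):

```json
{
  "id": 1,
  "name": "Frank",
  "vip": true,
  "orders": [ { "item": "book", "qty": 2 } ]
}
```

Note how a document can hold nested documents and arrays, which in a relational schema would typically require extra tables and joins.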

In order to get started with MongoDB, you can download the latest Community version from: https://www.mongodb.com/try/download/community

To simplify things, we will start MongoDB using Docker with just one line:

docker run -ti --rm -p 27017:27017 mongo:4.0

With MongoDB up and running, there are basically two approaches for developing MongoDB applications with Quarkus:

  • Using the MongoDB Java Client which focuses on the API provided by the MongoDB Client
  • Using Hibernate Panache, which focuses on a special kind of Entity class named PanacheMongoEntity

Using the MongoDB Java Client API

By using this approach, you manipulate plain Java Bean classes through the MongoDB Java Client. Let’s build a sample application, starting from a simple Model class:

public class Customer {
    private Long id;
    private String name;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Customer{" +
                "id=" + id +
                ", name='" + name + '\'' +
                '}';
    }
}

Then, we need a Service class to manage the standard CRUD operations on the Model using the MongoDB Client (note the static imports required for the eq and set builders):

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

@ApplicationScoped
public class CustomerService {

    @Inject MongoClient mongoClient;

    public List<Customer> list(){
        List<Customer> list = new ArrayList<>();
        MongoCursor<Document> cursor = getCollection().find().iterator();

        try {
            while (cursor.hasNext()) {
                Document document = cursor.next();
                Customer customer = new Customer();
                customer.setName(document.getString("name"));
                customer.setId(document.getLong("id"));
                list.add(customer);
            }
        } finally {
            cursor.close();
        }
        return list;
    }

    public void add(Customer customer){
        Document document = new Document()
                .append("name", customer.getName())
                .append("id", customer.getId());
        getCollection().insertOne(document);
    }

    public void update(Customer customer){
        Bson filter = eq("id", customer.getId());
        Bson updateOperation = set("name", customer.getName());
        getCollection().updateOne(filter, updateOperation);
    }

    public void delete(Customer customer){
        Bson filter = eq("id", customer.getId());
        getCollection().deleteOne(filter);
    }

    private MongoCollection<Document> getCollection(){
        return mongoClient.getDatabase("customer").getCollection("customer");
    }
}
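As a side note, the Document-to-bean mapping performed in list() can be exercised without a running MongoDB: org.bson.Document implements Map<String, Object>, so a plain Map works as a stand-in. Here is a minimal, stdlib-only sketch (the Customer class below is a local copy for illustration, not the tutorial's bean):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DocumentMappingSketch {

    // Local stand-in for the Customer bean used in the tutorial
    public static class Customer {
        public Long id;
        public String name;
    }

    // Mirrors the cursor loop in CustomerService.list(): each raw
    // key-value document becomes one Customer bean
    public static List<Customer> toCustomers(List<Map<String, Object>> documents) {
        List<Customer> list = new ArrayList<>();
        for (Map<String, Object> document : documents) {
            Customer customer = new Customer();
            customer.name = (String) document.get("name");
            customer.id = (Long) document.get("id");
            list.add(customer);
        }
        return list;
    }

    public static void main(String[] args) {
        List<Customer> result = toCustomers(List.of(Map.of("id", 1L, "name", "Frank")));
        System.out.println(result.get(0).id + ":" + result.get(0).name);
    }
}
```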

Finally, a REST Endpoint is added to expose the Service:

@Path("/customer")
@Produces("application/json")
@Consumes("application/json")
public class CustomerEndpoint {

    @Inject CustomerService service;

    @GET
    public List<Customer> list() {
        return service.list();
    }

    @POST
    public List<Customer> add(Customer customer) {
        service.add(customer);
        return list();
    }

    @PUT
    public List<Customer> put(Customer customer) {
        service.update(customer);
        return list();
    }

    @DELETE
    public List<Customer> delete(Customer customer) {
        service.delete(customer);
        return list();
    }
}

The last configuration tweak required is in application.properties where we will configure the MongoDB client for a single instance on localhost:

quarkus.mongodb.connection-string = mongodb://localhost:27017

To compile the application, besides the RESTEasy libraries and the testing APIs, we need to include the Quarkus MongoDB Client extension:

    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy-jsonb</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-mongodb-client</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-junit5</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.rest-assured</groupId>
      <artifactId>rest-assured</artifactId>
      <scope>test</scope>
    </dependency>

Run the application:

mvn install quarkus:dev

Then, you can test adding a new Customer:

curl -d '{"id":"1", "name":"Frank"}' -H "Content-Type: application/json" -X POST http://localhost:8080/customer

Checking the list of customers:

curl http://localhost:8080/customer
[{"id":1,"name":"Frank"}]

Updating a Customer:

curl -d '{"id":"1", "name":"John"}' -H "Content-Type: application/json" -X PUT http://localhost:8080/customer

And finally deleting it:

curl -d '{"id":"1"}' -H "Content-Type: application/json" -X DELETE http://localhost:8080/customer

Using MongoDB with Hibernate Panache

The Hibernate Panache API greatly simplifies the management of CRUD operations on entities by extending them with Panache entity classes such as PanacheMongoEntity.

The following is the Customer class, which extends PanacheMongoEntity and therefore inherits all the basic CRUD operations:

@MongoEntity(collection = "customers")
public class Customer extends PanacheMongoEntity {

    // the id field (an org.bson.types.ObjectId) is inherited from PanacheMongoEntity

    @BsonProperty("customer_name")
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Customer{" +
                "id=" + id +
                ", name='" + name + '\'' +
                '}';
    }

    public static Customer findByName(String name) {
        return find("name", name).firstResult();
    }

}

The @MongoEntity annotation can be used to define the MongoDB collection name to be used for the Entity class. Also, the @BsonProperty annotation is used to change the name used in MongoDB to persist the field. With the PanacheMongoEntity, we already have all the methods available to perform the standard CRUD operations. The REST Endpoint is therefore just a wrapper to access each CRUD operation:

@Path("/customer")
@Produces("application/json")
@Consumes("application/json")
public class CustomerEndpoint {

    @GET
    public List<Customer> list() {
        return Customer.listAll();
    }

    @POST
    public Response create(Customer customer) {
        customer.persist();
        return Response.status(201).build();
    }

    @PUT
    public void update(Customer customer) {
        customer.update();
    }

    @DELETE
    public void delete(Customer c) {
        Customer customer = Customer.findById(c.id);
        customer.delete();
    }
}

The last configuration tweak required is in application.properties where we will configure the MongoDB client for a single instance on localhost and the database name:

quarkus.mongodb.connection-string = mongodb://localhost:27017
quarkus.mongodb.database = customers

To build this example, you will need to use the following dependency in your pom.xml (instead of the MongoDB client API):

    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-mongodb-panache</artifactId>
    </dependency>

Using PanacheQuery to fetch your data

The PanacheQuery interface can be used to wrap an entity query, adding support for paging, retrieving the number of results, and operating on both the List and Java Stream APIs.

Here is an example:

    // Get first page of 20 entries
    @Path("/page1")
    @GET
    public List<Customer> pageList() {
        PanacheQuery<Customer> customers = Customer.findAll();
        customers.page(Page.ofSize(20));
        return customers.list();
    }

The above example shows how you can fetch just the first page of 20 Customer Entity objects. To fetch the next page:

List<Customer> nextPage = customers.nextPage().list();

Besides the findAll() method you can also restrict your Query using the find(key,value) method. For example:

    @Path("/page1")
    @GET
    public List<Customer> pageList() {
        PanacheQuery<Customer> customers = Customer.find("name","John");
        customers.page(Page.ofSize(20));
        return customers.list();
    }

Finally, PanacheQuery can also be used to fetch the total number of entities, regardless of paging:

long count = customers.count();
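The paging arithmetic behind page(), nextPage() and pageCount() is simple to state in plain Java. The following stdlib-only sketch illustrates the semantics (it is not Panache's actual implementation):

```java
public class PagingSketch {

    // Page index i of size s covers entries [i*s, i*s + s)
    public static int firstIndexOfPage(int pageIndex, int pageSize) {
        return pageIndex * pageSize;
    }

    // Total number of pages, i.e. ceil(total / pageSize),
    // which is what PanacheQuery.pageCount() reports
    public static int pageCount(long total, int pageSize) {
        return (int) ((total + pageSize - 1) / pageSize);
    }

    public static void main(String[] args) {
        // 45 customers split into pages of 20 -> pages of 20, 20 and 5 entries
        System.out.println(pageCount(45, 20));       // 3
        System.out.println(firstIndexOfPage(1, 20)); // nextPage() after page 0 starts at entry 20
    }
}
```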

If you want an example of Hibernate Panache with a Relational Database check the following article: Managing Data Persistence with Quarkus and Hibernate Panache

You can find the source code for both examples on GitHub at: https://github.com/fmarchioni/mastertheboss/tree/master/quarkus/mongodb

How to generate a JAX-RS CRUD application in Quarkus using Panache

In this tutorial we will learn how to automatically generate a JAX-RS CRUD application in Quarkus, starting from a Hibernate Panache entity.

Creating CRUD applications for simple REST endpoints is a tedious task which requires lots of boilerplate code. Thanks to the quarkus-hibernate-orm-rest-data-panache extension, the endpoints can be generated automatically starting from a plain Entity class, with or without a Repository class for persistence.

Please note that this feature is still experimental; it currently supports Hibernate ORM with Panache and can generate CRUD resources that work with application/json and application/hal+json content.

First, check this tutorial for some background on Hibernate Panache: Managing Data Persistence with Quarkus and Hibernate Panache

Setting up the Quarkus project

We will now re-create the same application using self-generation of Resource Endpoints. Start by creating a new Quarkus project:

mvn io.quarkus:quarkus-maven-plugin:1.6.1.Final:create \
     -DprojectGroupId=com.mastertheboss \
     -DprojectArtifactId=panache-demo \
     -DclassName="com.mastertheboss.MyService" \
     -Dpath="/tickets" \
     -Dextensions="quarkus-hibernate-orm-rest-data-panache,quarkus-hibernate-orm-panache,quarkus-jdbc-postgresql,quarkus-resteasy-jsonb"

As you can see, our pom.xml file includes an extra dependency to support JAX-RS self-generation with Panache:

<dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-hibernate-orm-rest-data-panache</artifactId>
</dependency>

We will be adding a simple Entity class, named Ticket, which is unchanged from our first Panache example:

import javax.persistence.Column;
import javax.persistence.Entity;

import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
public class Ticket extends PanacheEntity {

    @Column(length = 20, unique = true)
    public String name;

    @Column(length = 3, unique = true)
    public String seat;

    public Ticket() {
    }

    public Ticket(String name, String seat) {
        this.name = name;
        this.seat = seat;
    }
}

Then, you have two options. If you are updating the Entity directly from the JAX-RS Resource, just add an interface which extends PanacheEntityResource, typed with the Entity and the primary key used behind the scenes by Panache (in our example, we default to Long):

import io.quarkus.hibernate.orm.rest.data.panache.PanacheEntityResource;

public interface MyService  extends PanacheEntityResource<Ticket, Long> {
}

On the other hand, if your application uses a Repository class, then you will use a different set of type parameters, which also includes the TicketRepository class:

public interface MyService extends PanacheRepositoryResource<TicketRepository, Ticket, Long> {
}

That’s all! You just add one typed interface and you are done with the JAX-RS Resource! As a proof of concept, let’s add the OpenAPI extension to our project so that we can check all the endpoints which have been created:

    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-smallrye-openapi</artifactId>
    </dependency>

Since our application will run against PostgreSQL, our application.properties includes the JDBC settings for connecting to a local instance:

quarkus.datasource.url=jdbc:postgresql:quarkusdb
quarkus.datasource.driver=org.postgresql.Driver
quarkus.datasource.username=quarkus
quarkus.datasource.password=quarkus
quarkus.hibernate-orm.database.generation=drop-and-create
quarkus.hibernate-orm.log.sql=true

Now start PostgreSQL, for example using Docker:

docker run --ulimit memlock=-1:-1 -it --rm=true --memory-swappiness=0 --name quarkus_test -e POSTGRES_USER=quarkus -e POSTGRES_PASSWORD=quarkus -e POSTGRES_DB=quarkusdb -p 5432:5432 postgres:10.5

We also have included an import.sql file to create some sample records:

INSERT INTO ticket(id, name,seat) VALUES (nextval('hibernate_sequence'), 'Phantom of the Opera','11A');
INSERT INTO ticket(id, name,seat) VALUES (nextval('hibernate_sequence'), 'Chorus Line','5B');
INSERT INTO ticket(id, name,seat) VALUES (nextval('hibernate_sequence'), 'Mamma mia','21A');

And the following class will run a minimal Test Case:

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.containsString;

@QuarkusTest
public class MyServiceTest {

    @Test
    public void testListAllTickets() {

        given()
                .when().get("/my-service")
                .then()
                .statusCode(200)
                .body(
                        containsString("Phantom of the Opera"),
                        containsString("Chorus Line"),
                        containsString("Mamma mia")
                );

    }

}

Testing the application

Now if you run the application using:

mvn install quarkus:dev

You will see the following endpoints available at http://localhost:8080/openapi:

openapi: 3.0.1
info:
  title: Generated API
  version: "1.0"
paths:
  /my-service:
    get:
      responses:
        "200":
          description: OK
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Ticket'
      responses:
        "200":
          description: OK
  /my-service/{id}:
    get:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          format: int64
          type: integer
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Ticket'
    put:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          format: int64
          type: integer
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Ticket'
      responses:
        "200":
          description: OK
    delete:
      parameters:
      - name: id
        in: path
        required: true
        schema:
          format: int64
          type: integer
      responses:
        "204":
          description: No Content
components:
  schemas:
    Ticket:
      type: object
      properties:
        id:
          format: int64
          type: integer
        name:
          type: string
        seat:
          type: string

As you can see, our JAX-RS service has been bound to the hyphenated name of the backing resource: “my-service”. As a proof of concept, you can for example query the list of Ticket objects:

curl http://localhost:8080/my-service

That will return:

[
   {
      "id":2,
      "name":"Chorus Line",
      "seat":"5B"
   },
   {
      "id":3,
      "name":"Mamma mia",
      "seat":"21A"
   },
   {
      "id":1,
      "name":"Phantom of the Opera",
      "seat":"11A"
   }
]
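As noted above, the generated path segment is derived from the interface name by hyphenating its camel-case words (MyService → my-service). The exact rules are internal to the extension, but the idea can be sketched in plain Java:

```java
public class ResourceNameSketch {

    // Splits a camel-case name at each upper-case letter and joins the
    // lower-cased words with hyphens, e.g. "MyService" -> "my-service"
    public static String hyphenate(String camelCase) {
        StringBuilder sb = new StringBuilder();
        for (char c : camelCase.toCharArray()) {
            if (Character.isUpperCase(c)) {
                if (sb.length() > 0) {
                    sb.append('-');
                }
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(hyphenate("MyService"));      // my-service
        System.out.println(hyphenate("TicketResource")); // ticket-resource
    }
}
```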

That’s all. We have demonstrated how to scaffold a CRUD JAX-RS application from an Entity object using Quarkus and Hibernate ORM Panache. Stay tuned for more updates on this matter: https://quarkus.io/guides/rest-data-panache

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/quarkus/panache-rest-demo

Building Quarkus native applications with Mandrel

Mandrel is a downstream open-source distribution of GraalVM which can be used to create native builds of Quarkus applications. Quarkus applications require one essential tool of GraalVM – the native-image feature – which is what actually produces native executables. Mandrel lets us have this capability bundled on top of OpenJDK 11 in RHEL and other OpenJDK 11 distributions. Thus, Mandrel can best be described as a distribution of a regular OpenJDK with a specially packaged GraalVM native-image tool.

Mandrel releases are built from the upstream GraalVM code base, with just a few minor changes. Mandrel supports the same native image capability as GraalVM with no significant changes to functionality. One minor difference is that Mandrel does not include support for Polyglot programming languages via the Truffle interpreter and compiler framework. Therefore, it is not possible to extend Mandrel by downloading languages from the Truffle language catalogue.

Mandrel is also built using the standard OpenJDK project release of jdk11u; therefore it does not include a few small enhancements added by Oracle to the JVMCI module which allow the Graal compiler to run inside OpenJDK.

That being said, these differences should not cause the resulting images themselves to execute in a noticeably different manner.

Installing Mandrel

In order to install Mandrel, you first need to install the following packages, which are required by Mandrel’s native-image tool.

On Fedora/CentOS/RHEL machines:

sudo dnf install glibc-devel zlib-devel gcc libffi-devel

On Ubuntu systems:

sudo apt install gcc zlib1g-dev libffi-dev

Then, download the latest release of Mandrel from: https://github.com/graalvm/mandrel/releases

In our case, we will download mandrel-java11-linux-amd64-20.1.0.0.Alpha1.tar.gz.

Next, untar the distribution:

$ tar -xf mandrel-java11-linux-amd64-20.1.0.0.Alpha1.tar.gz

The folder mandrelJDK will be created as the top-level JDK directory:

$ ls mandrelJDK
bin  conf  demo  include  jmods  languages  legal  lib  LICENSE  man  README.md  release  SECURITY.md  THIRD_PARTY_LICENSE.txt

Now you need to set JAVA_HOME and GRAALVM_HOME to the mandrelJDK folder and add its bin directory to the PATH:

$ export JAVA_HOME="$( pwd )/mandrelJDK"
$ export GRAALVM_HOME="${JAVA_HOME}"
$ export PATH="${JAVA_HOME}/bin:${PATH}"

Check that your Java version is built on top of OpenJDK 11.0.8:

java -version
openjdk version "11.0.8-ea" 2020-07-14
OpenJDK Runtime Environment 18.9 (build 11.0.8-ea+7)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.8-ea+7, mixed mode)

Building a Quarkus native application with Mandrel

Now you can try to build a sample Quarkus application as native executable:

$ curl -O -J  https://code.quarkus.io/api/download
$ unzip code-with-quarkus.zip
$ cd code-with-quarkus/
$ ./mvnw package -Pnative
$ ./target/code-with-quarkus-1.0.0-SNAPSHOT-runner

The following output will be displayed:

__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2020-07-11 18:45:40,939 INFO  [io.quarkus] (main) code-with-quarkus 1.0.0-SNAPSHOT native (powered by Quarkus 1.6.0.Final) started in 0.012s. Listening on: http://0.0.0.0:8080
2020-07-11 18:45:40,939 INFO  [io.quarkus] (main) Profile prod activated. 
2020-07-11 18:45:40,939 INFO  [io.quarkus] (main) Installed features: [cdi, resteasy]

Check the “hello” endpoint:

$ curl http://localhost:8080/hello
hello

Great! You just managed to build a native executable application using Quarkus and the Mandrel distribution.

How to connect your Quarkus application to Infinispan

Infinispan is a distributed in-memory key/value data grid. An in-memory data grid is a form of middleware that stores sets of data, primarily in memory, for use in one or more applications. There are different clients available to connect to a remote or embedded Infinispan server. In this tutorial we will learn how to connect to Infinispan using the Quarkus extension.

Starting Infinispan

For the purpose of this tutorial, we will be running a local Infinispan 10 server with this cache definition in infinispan.xml:

  <cache-container default-cache="local">
      <transport cluster="${infinispan.cluster.name}" stack="${infinispan.cluster.stack:tcp}" node-name="${infinispan.node.name:}"/>
      <local-cache name="local"/>
      <invalidation-cache name="invalidation" mode="SYNC"/>
      <replicated-cache name="repl-sync" mode="SYNC"/>
      <distributed-cache name="dist-sync" mode="SYNC"/>
   </cache-container>

From the bin folder of Infinispan, run:

 ./server.sh

As an alternative, you can also run Infinispan with Docker as follows:

docker run -it -p 11222:11222 infinispan/server:latest

Creating the Quarkus Infinispan project

In order to create the Quarkus application, we will need the following set of dependencies:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jsonb</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-infinispan-client</artifactId>
</dependency>

RESTEasy and JSON-B are needed to set up a CRUD REST application. The quarkus-infinispan-client dependency is needed to connect to Infinispan from Quarkus.

There is an extra dependency needed in order to start and connect to an embedded Infinispan server; we will discuss this later.

Here is our main Application class:

@ApplicationScoped
public class InfinispanClientApp {

    private static final Logger LOGGER = LoggerFactory.getLogger("InfinispanClientApp");
    @Inject
    @Remote("local")
    RemoteCache<String, Customer> cache;

    void onStart(@Observes StartupEvent ev) {
        cache.addClientListener(new EventPrintListener());
        Customer c = new Customer("1","John","Smith");
        cache.put("1", c);
    }

    @ClientListener
    static class EventPrintListener {

        @ClientCacheEntryCreated
        public void handleCreatedEvent(ClientCacheEntryCreatedEvent e) {
            LOGGER.info("Someone has created an entry: " + e);
        }

        @ClientCacheEntryModified
        public void handleModifiedEvent(ClientCacheEntryModifiedEvent e) {
            LOGGER.info("Someone has modified an entry: " + e);
        }

        @ClientCacheEntryRemoved
        public void handleRemovedEvent(ClientCacheEntryRemovedEvent e) {
            LOGGER.info("Someone has removed an entry: " + e);
        }

    }
}

A couple of things to notice:

When the application is bootstrapped, we connect to the RemoteCache (“local”), set up a ClientListener on it, and also add one entry to it.

As we need to marshall/unmarshall the Java class Customer using Infinispan, we will use annotation-based serialization, setting the @ProtoField annotation on each field which needs to be serialized:

public class Customer {
    private String id;
    private String name;
    private String surname;
    
    @ProtoField(number = 1)
    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
    
    @ProtoFactory
    public Customer(String id, String name, String surname) {
        this.id = id;
        this.name = name;
        this.surname = surname;
    }

    @ProtoField(number = 2)
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
    @ProtoField(number = 3)
    public String getSurname() {
        return surname;
    }

    public void setSurname(String surname) {
        this.surname = surname;
    }

    @Override
    public String toString() {
        return "Customer{" +
                "id='" + id + '\'' +
                ", name='" + name + '\'' +
                ", surname='" + surname + '\'' +
                '}';
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Customer)) return false;
        Customer customer = (Customer) o;
        return Objects.equals(id, customer.id) &&
                Objects.equals(name, customer.name) &&
                Objects.equals(surname, customer.surname);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name, surname);
    }

    public Customer() {
    }
}

Then we also need a SerializationContextInitializer interface, annotated to specify the configuration settings:

@AutoProtoSchemaBuilder(includeClasses = { Customer.class}, schemaPackageName = "customer_list")
public interface CustomerContextInitializer extends SerializationContextInitializer {
}
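For reference, the annotations above describe roughly the following Protobuf schema (the field numbers come from @ProtoField(number = ...) and the package from schemaPackageName); this is shown for illustration only, and the actual generated schema may differ slightly:

```proto
package customer_list;

message Customer {
    optional string id = 1;
    optional string name = 2;
    optional string surname = 3;
}
```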

That’s it. We are done with the Infinispan setup. We will now add a REST Endpoint to allow CRUD operations on our RemoteCache:

@Path("/infinispan")
@ApplicationScoped
public class InfinispanEndpoint {

    @Inject
    @Remote("local")
    RemoteCache<String, Customer> cache;


    @GET
    @Path("{customerId}")
    @Produces("application/json")
    public Response get(@PathParam("customerId") String id) {
        Customer customer = cache.get(id);
        System.out.println("Got customer " +customer);
        return Response.ok(customer).status(200).build();
    }



    @POST
    @Produces("application/json")
    @Consumes("application/json")
    public Response create(Customer customer) {
        cache.put(customer.getId(), customer);
        System.out.println("Created customer " +customer);
        return Response.ok(customer).status(201).build();
    }

    @PUT
    @Produces("application/json")
    @Consumes("application/json")
    public Response update(Customer customer) {
        cache.put(customer.getId(), customer);
        System.out.println("Updated customer " +customer);
        return Response.ok(customer).status(202).build();
    }

    @DELETE
    @Path("{customerId}")
    public Response delete(@PathParam("customerId") String id) {
        cache.remove(id);
        System.out.println("Deleted customer "+id);
        return Response.ok().status(202).build();
    }
}

As you can see, once the RemoteCache is injected, we can map the basic get/put/remove operations onto the corresponding HTTP methods.
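The endpoint above is a thin layer over map-like semantics. As a stdlib-only sketch (a ConcurrentHashMap standing in for the RemoteCache, with values simplified to Strings), each HTTP verb maps onto a cache operation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CacheCrudSketch {

    public static void main(String[] args) {
        // Stand-in for RemoteCache<String, Customer>
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

        cache.put("1", "John Smith");     // POST   -> cache.put(id, customer)
        String customer = cache.get("1"); // GET    -> cache.get(id)
        cache.put("1", "Clark Kent");     // PUT    -> cache.put(id, customer) again
        cache.remove("1");                // DELETE -> cache.remove(id)

        System.out.println(customer + " / empty=" + cache.isEmpty());
    }
}
```

Unlike a local map, of course, the RemoteCache performs these operations over the Hot Rod protocol against the Infinispan server.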

Finally, in our application.properties, we set the address of the Infinispan server:

quarkus.infinispan-client.server-list=localhost:11222

Running the Quarkus application

Start the Quarkus application in dev mode:

mvn clean install quarkus:dev

We can test our application using the REST Endpoints. For example, to get the customer with id “1”:

curl http://localhost:8080/infinispan/1
{"id":"1","name":"John","surname":"Smith"}

To add a new Customer:

curl -d '{"id":"2", "name":"Clark","surname":"Kent"}' -H "Content-Type: application/json" -X POST http://localhost:8080/infinispan

To update an existing Customer:

curl -d '{"id":"2", "name":"Peter","surname":"Parker"}' -H "Content-Type: application/json" -X PUT http://localhost:8080/infinispan

And finally to delete one Customer:

curl -X DELETE http://localhost:8080/infinispan/2

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/quarkus/infinispan-demo

Embedding Infinispan in Quarkus applications

It is also possible to bootstrap an Infinispan cluster from within a Quarkus application using the embedded API. For this purpose you will need an extra dependency in your pom.xml:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-infinispan-embedded</artifactId>
</dependency>

With this dependency in place, you can inject the org.infinispan.manager.EmbeddedCacheManager in your code:

@Inject
EmbeddedCacheManager emc;

This will give us a default local (i.e. non-clustered) CacheManager. You can also build a cluster of cache managers from a configuration file, say dist.xml:

List<EmbeddedCacheManager> managers = new ArrayList<>(3);
String oldProperty = System.setProperty("jgroups.tcp.address", "127.0.0.1");
try {
    for (int i = 0; i < 3; i++) {
        EmbeddedCacheManager ecm = new DefaultCacheManager(
                Paths.get("src", "main", "resources", "dist.xml").toString());
        ecm.start();
        managers.add(ecm);
        // Start the default cache
        ecm.getCache();
    }
} finally {
    // Restore the previous jgroups.tcp.address setting
    if (oldProperty != null) {
        System.setProperty("jgroups.tcp.address", oldProperty);
    } else {
        System.clearProperty("jgroups.tcp.address");
    }
}

And here is a sample dist.xml (which uses TCPPING) for our cluster:

<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:10.1 http://www.infinispan.org/schemas/infinispan-config-10.1.xsd
                          urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.0.xsd"
      xmlns="urn:infinispan:config:10.1"
      xmlns:ispn="urn:infinispan:config:10.1">
   <jgroups>
       <stack name="tcpping" extends="tcp">
           <MPING ispn:stack.combine="REMOVE" xmlns="urn:org:jgroups"/>
           <TCPPING async_discovery="true"
                    initial_hosts="${initial_hosts:127.0.0.1[7800],127.0.0.1[7801]}"
                    port_range="0" ispn:stack.combine="INSERT_AFTER" ispn:stack.position="TCP" xmlns="urn:org:jgroups"/>
       </stack>
   </jgroups>

   <cache-container name="test" default-cache="dist">
       <transport cluster="test" stack="tcpping"/>
      <distributed-cache name="dist">
         <memory>
            <object size="21000"/>
         </memory>
      </distributed-cache>
   </cache-container>
</infinispan>

Quarkus vs Spring Boot – part two

In the first tutorial, Quarkus vs Spring Boot: What You Need to Know, we compared Spring Boot and Quarkus in relation to core framework capabilities, memory consumption, cloud readiness and ease of development. In this second part of the series, we will compare the actual APIs that you can use to build microservices-like applications with Spring Boot and Quarkus.

The foundations of QuarkusIO

In terms of application development, QuarkusIO aims to ease development by including a subset of the core Java Enterprise APIs which most developers are familiar with and, in addition, brings the innovation of the MicroProfile API, an initiative that aims to optimize Enterprise Java for microservices architectures.

Here is a bird’s-eye view of the QuarkusIO core architecture, although the list of available extensions is not exhaustive, for the sake of brevity.

One important advantage of adopting Quarkus is the early adoption of Microprofile API, which has quickly become a standard for covering all core aspects needed for Microservices architectures such as:

  • The Eclipse MicroProfile Configuration: Provides a unified way to configure your services, injecting the configuration data from a static file or environment variables.
  • The Eclipse MicroProfile OpenAPI : Provides a set of Java Interfaces to document your Services in a standard way.
  • The Eclipse MicroProfile Health Check: Provides the ability to probe the state of a service; for example whether it is running, whether it is running out of disk space, or whether there is an issue with the database connection.
  • The Eclipse MicroProfile Metrics: Provides a standard way for MicroProfile services to export monitoring data to external agents. Metrics also provide a common Java API for exposing their telemetry data.
  • The Eclipse MicroProfile Fault Tolerance: Allows you to define a strategy in case of failure of your services, for example configuring timeouts, retry policies, fallback methods and Circuit Breaker processing.
  • The Eclipse MicroProfile OpenTracing: Provides a set of instrumentation libraries for tracing components such as JAX-RS and CDI.
  • The Eclipse MicroProfile Rest Client which builds upon the JAX-RS API, provides a type-safe, unified approach for invoking RESTful services over HTTP.
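To give a taste of the MicroProfile programming model, here is a minimal sketch of how the Fault Tolerance annotations might look on a hypothetical CDI service (the class, method and values are invented for illustration, not taken from the article):

```java
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

import javax.enterprise.context.ApplicationScoped;

// Hypothetical service: the annotations declare the fault-tolerance policy,
// and the MicroProfile implementation applies it through interception.
@ApplicationScoped
public class QuoteService {

    @Retry(maxRetries = 3)                      // retry up to 3 times on failure
    @Timeout(500)                               // fail if the call exceeds 500 ms
    @Fallback(fallbackMethod = "cachedQuote")   // last resort when retries are exhausted
    public String latestQuote() {
        // imagine a remote call that may fail or be slow
        return remoteCall();
    }

    String cachedQuote() {
        return "N/A";   // served when the policy gives up
    }

    private String remoteCall() {
        throw new RuntimeException("remote service down");
    }
}
```

The fallback method must have the same signature as the guarded method, which is why both return String here.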

On top of QuarkusIO, you can reuse two core Java Enterprise building blocks:

  • JAX-RS: provides portable APIs for developing, exposing and accessing Web applications designed and implemented in compliance with the principles of the REST architectural style.
  • CDI: is the Java Enterprise standard for dependency injection (DI) and interception (AOP). Most of the existing CDI code should work just fine, however, it is not yet a full CDI implementation verified by the TCK.

In order to persist data, Quarkus can use the well-known Hibernate ORM and the JPA API. Besides that (not included in the picture for brevity), it is also possible to leverage the Hibernate Panache API to simplify and accelerate application development.
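To give an idea of how Panache trims boilerplate, here is a minimal sketch of a hypothetical entity (the class and field are invented for illustration): it extends PanacheEntity, inherits an id plus a set of static finder/persist helpers, and exposes public fields instead of getters and setters.

```java
import io.quarkus.hibernate.orm.panache.PanacheEntity;

import javax.persistence.Entity;

// Hypothetical Panache entity: PanacheEntity supplies the id field
// plus static helpers such as listAll(), findById() and persist().
@Entity
public class Person extends PanacheEntity {

    public String name;

    // custom finder built on the inherited static find() helper
    public static Person findByName(String name) {
        return find("name", name).firstResult();
    }
}
```

With this in place, persisting is just `person.persist()` and queries read like `Person.findByName("John")`.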

Quarkus is not just for HTTP microservices, but also for event-driven architectures. In particular, it simplifies the creation of Reactive applications using Vert.x and Netty at its core, plus a bunch of reactive frameworks and extensions on top to help developers. Its reactive nature makes it very efficient when dealing with messages (e.g., Apache Kafka or AMQP).

In particular, Quarkus is able to handle both imperative and reactive code at the same level.

For example, you may need fast non-blocking code that handles almost everything via the event-loop thread (IO thread). But, at the same time, you might need the traditional (imperative) programming model to create a standard REST application or a client-side application. Therefore, depending on the destination, Quarkus can invoke the code managing the request on a worker thread (e.g. JAX-RS) or use an IO thread to manage a reactive route.

In terms of connectors, Quarkus contains connectors for most common Streaming platforms such as:

  • Apache Kafka Connector to send and receive messages to an Apache Kafka cluster
  • ActiveMQ connector, using its Advanced Message Queuing Protocol (AMQP)

The foundations of Spring Boot

Spring Boot is an opinionated view of the Spring platform and third-party libraries so you can get started with Spring in a matter of minutes.

Here is a high-level view of the project:

Although Spring Boot greatly simplifies the creation of Spring applications, it is based on the same solid foundations as Spring.

At the core of the Spring framework, there’s the Spring IoC container which is responsible for creating the objects, wiring them together, configuring them, and managing their complete life cycle from creation to destruction. The Spring IoC container uses DI to manage the components (called Spring Beans) that make up an application.

The BeanFactory is the actual representation of the Spring IoC container that is responsible for containing and otherwise managing the Spring Beans. The BeanFactory interface is the central IoC container interface in Spring.

The container gets its instructions on what objects to instantiate, configure, and assemble by reading the configuration metadata provided. The configuration metadata can be represented either by XML, Java annotations, or Java code. Spring Boot uses Java annotations and sensible defaults to provide configuration metadata.
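For instance, a minimal annotation-based configuration might look like the following sketch (the class and bean names are invented for illustration):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Java-based configuration metadata: the container reads this class
// instead of an XML file and registers the returned objects as beans.
@Configuration
public class AppConfig {

    @Bean
    public GreetingService greetingService() {
        return new GreetingService();
    }
}

class GreetingService {
    public String greet() {
        return "Hello";
    }
}
```

Spring Boot layers its auto-configuration on top of exactly this mechanism, so most of the time you only declare the beans that differ from the sensible defaults.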

In terms of Web application development, Spring Boot can use different embedded Web containers (Jetty, Undertow, Tomcat) and relies on Spring MVC to develop applications using the well-known Model View Controller (MVC) design pattern. Also, there are several template engines which can be used on the top of Spring MVC, the most popular choice being Thymeleaf.

In terms of Microservices development, Spring Boot aims to enable developing production-ready applications quickly. There isn’t an external project like Eclipse MicroProfile to coordinate all available frameworks. On the other hand, most of the relevant frameworks are sub-projects of the main Spring project or third-party libraries. This way, you have more freedom of choice at the price of a more scattered set of libraries.

The Spring Boot Actuator module helps you monitor and manage your Spring Boot application by providing production-ready features like health checks, auditing, metrics gathering, HTTP tracing, etc. All of these features can be accessed over JMX or HTTP endpoints.

Spring REST Docs aims at helping you to produce documentation for your RESTful services that is accurate and readable.

In order to help you develop REST services, Spring Boot has built-in web module support for @RestController. You can integrate it with HATEOAS to ease the creation of REST services, and with FeignClient, which is the equivalent of the MicroProfile Rest Client API.
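As a quick illustration, a hypothetical @RestController could look like this sketch (class name and path are invented):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller: @RestController combines @Controller and
// @ResponseBody, so return values are serialized straight to the HTTP response.
@RestController
public class HelloController {

    @GetMapping("/hello/{name}")
    public String hello(@PathVariable String name) {
        return "Hello " + name;
    }
}
```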

In order to perform JSON binding/processing, Spring Boot uses Jackson directly to allow seamless conversion between JSON data and Java objects and vice versa.

In terms of configuration, Spring Boot uses its own configuration module that lets you configure your application, by default through a file named application.properties.

Then, in order to build a fault-tolerant system, you can rely on Hystrix, which provides latency and fault tolerance management.

In terms of data persistence, Spring uses Spring Data, which abstracts over the specific provider (such as Hibernate). The result is a repository abstraction which aims to significantly reduce the effort required to implement data access layers for various persistence stores.

Data persistence is related to Transactions, and Spring Boot supports distributed JTA transactions across multiple XA resources by using either an Atomikos or Bitronix embedded transaction manager. JTA transactions are also supported when deploying to a suitable Java EE Application Server such as WildFly.

Finally, in terms of reactive applications, Spring Boot features project Reactor which is a fully non-blocking foundation with back-pressure support included. The Reactive stack includes Spring WebFlux which is a non-blocking web framework built to take advantage of modern processors to handle a massive number of connections.

Conclusions

In this two-part tutorial, we have compared two modern frameworks: Quarkus and Spring Boot.

In the first part, we stressed the capabilities of Quarkus to create modern Java/native cloud-ready applications with a reduced memory footprint. On this side, no doubt Quarkus shows a clear advantage, even if new cloud/native features are being added to Spring Boot which may alter this balance in the future.

In terms of foundations, there’s no clear winner from my point of view as both frameworks shine with respect to a specific basis of comparison. In the following table, I’ll recap a summary of what we have discussed so far in the hope that it will help you to determine the optimal choice for the next application you will write.

Microservices
  • Quarkus: Embraces the MicroProfile API, which is driven by a highly active and responsive community. Based on the enhancements produced in the last two years, I see it as more innovative and purely developer-driven.
  • Spring Boot: Provides its own modules to develop modern microservices architectures with the same goals.

Front-end development
  • Quarkus: At the moment includes basic built-in front-end options (Servlet) and the Qute template engine.
  • Spring Boot: Based on the solid foundation of Spring MVC and includes mature template engines (e.g. Thymeleaf).

Maturity
  • Quarkus: A relatively new framework, although derived from production-ready Java Enterprise APIs.
  • Spring Boot: Mature, open-source, feature-rich framework with excellent documentation and a huge community.

Reactive apps
  • Quarkus: Features the most advanced combination of the traditional (imperative) and event-based programming models.
  • Spring Boot: Features project Reactor, a fully non-blocking foundation with back-pressure support included.

Dependency Injection
  • Quarkus: Uses CDI. At the moment, only a subset of the CDI features is implemented.
  • Spring Boot: Uses its own robust dependency injection container.

Data persistence
  • Quarkus: Uses frameworks familiar to developers (Hibernate ORM) which can be further accelerated with Panache, and includes a reactive API. Focus on innovation.
  • Spring Boot: Based on the Spring Data abstraction. Focus on maturity.

Speed
  • Quarkus: Best overall application performance.
  • Spring Boot: The abstraction built on top of Spring makes it generally slower than projects derived from Java Enterprise APIs.

How to manage the lifecycle of a Quarkus application

CDI Events allow beans to communicate so that one bean can define an event, another bean can fire the event, and yet another bean can handle the event. Let’s see how we can take advantage of this to manage the lifecycle of a Quarkus application.

Start from a basic Quarkus project:

mvn io.quarkus:quarkus-maven-plugin:1.3.2.Final:create \
    -DprojectGroupId=com.sample \
    -DprojectArtifactId=lifecycle-demo \
    -DclassName="com.sample.ExampleResource" \
    -Dpath="/hello"

Then, add the following CDI Bean to it:

import io.quarkus.runtime.ShutdownEvent;
import io.quarkus.runtime.StartupEvent;
import io.quarkus.runtime.configuration.ProfileManager;
import org.jboss.logging.Logger;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

@ApplicationScoped
class ApplicationLifeCycle {

    private static final Logger LOGGER = Logger.getLogger(ApplicationLifeCycle.class);

    void onStart(@Observes StartupEvent ev) {
        LOGGER.info("The application has started");
    }

    void onStop(@Observes ShutdownEvent ev) {
        LOGGER.info("The application is stopping...");
    }
}

This is an example of how to take advantage of the CDI concept of events, in which you produce and subscribe to events occurring in your application in a way that keeps producers and observers decoupled. The Quarkus runtime uses the javax.enterprise.event.Event class to create events, and you use CDI’s @Observes annotation to subscribe to them.
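The same mechanism works for your own events, not just the built-in Quarkus ones. Here is a hedged sketch with an invented payload class: one bean fires the event through an injected javax.enterprise.event.Event, and another bean observes it without either knowing about the other.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

// Hypothetical event payload (invented for illustration)
class TaskCompleted {
    final String taskName;
    TaskCompleted(String taskName) { this.taskName = taskName; }
}

@ApplicationScoped
class TaskRunner {

    @Inject
    Event<TaskCompleted> event;   // producer side: used to fire the event

    void runTask() {
        // ... do some work, then notify whoever is listening
        event.fire(new TaskCompleted("cleanup"));
    }
}

@ApplicationScoped
class TaskAuditor {

    // observer side: invoked by the container when the event is fired
    void onTaskCompleted(@Observes TaskCompleted ev) {
        System.out.println("Task finished: " + ev.taskName);
    }
}
```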

In our case, the bean observes the io.quarkus.runtime.StartupEvent and the io.quarkus.runtime.ShutdownEvent. Therefore:

  • On start with the StartupEvent you can execute code when the application is starting
  • On shutdown with the ShutdownEvent you can execute code when the application is shutting down.

Run the application with:

mvn quarkus:dev

You will see the following printed on the Console:

[com.sam.ApplicationLifeCycle] (main) The application has started

If you stop it, the following message will be displayed:

[com.sam.ApplicationLifeCycle] (Quarkus Shutdown Thread) The application is stopping...

Interestingly enough, you can perform a different set of actions in your LifeCycle class based on the Profile which has been used to start your application.

By default Quarkus has three profiles, although it is possible to use as many as you like. The default profiles are:

  • dev – Activated when in development mode (i.e. quarkus:dev)
  • test – Activated when running tests
  • prod – The default profile when not running in development or test mode
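Profiles also apply to configuration: prefixing a property with `%profile.` makes it active only for that profile. For example, in application.properties (the values below are invented for illustration):

```
# used only in development mode
%dev.quarkus.http.port=8181
# used only when running tests
%test.quarkus.http.port=8282
# default, used by the prod profile
quarkus.http.port=8080
```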

Let’s change our code as follows:

void onStart(@Observes StartupEvent ev) {
    LOGGER.infof("The application is starting with profile `%s`", ProfileManager.getActiveProfile());
}

Now, if you start the application in development mode, the following output will be displayed:

[com.sam.ApplicationLifeCycle] (main) The application is starting with profile `dev`

Deploying Quarkus applications on OpenShift

This tutorial explores how you can deploy Quarkus applications in containers and, more specifically, on the OpenShift PaaS cloud platform. There are different approaches to deploying Quarkus applications on OpenShift; in this tutorial we will discuss them in detail.

Start by checking out this example, a JAX-RS application which uses the Hibernate ORM API to persist data on a PostgreSQL database:

https://github.com/fmarchioni/mastertheboss/tree/master/quarkus/hibernate-advanced

If you have a look at the directory layout of the example, you will see that it follows the standard structure of all Quarkus applications, including a folder src/main/docker with four Dockerfiles. We will focus on Dockerfile.jvm to deliver Java applications and Dockerfile.native to deploy native applications:

src
├── main
│   ├── docker
│   │   ├── Dockerfile.jvm
│   │   ├── Dockerfile.legacy-jar
│   │   ├── Dockerfile.native
│   │   └── Dockerfile.native-distroless
│   ├── java
│   │   └── com
│   │       └── mastertheboss
│   │           ├── CustomerEndpoint.java
│   │           ├── CustomerException.java
│   │           ├── Customer.java
│   │           ├── CustomerRepository.java
│   │           ├── OrderEndpoint.java
│   │           ├── OrderRepository.java
│   │           └── Orders.java
│   └── resources
│       ├── application.properties
│       ├── import.sql
│       └── META-INF
│           └── resources
│               ├── index.html
│               ├── order.html
│               └── stylesheet.css
└── test
    └── java
        └── com
            └── mastertheboss
                ├── GreetingResourceTest.java
                └── NativeGreetingResourceIT.java

To have a quick run of this example on OpenShift, we recommend using Red Hat Code Ready Containers, which is introduced in this tutorial: Getting started with Code Ready Containers

Setting up OpenShift

Before you start CRC, you need to increase the amount of memory assigned to the CRC process to 16GB, otherwise you won’t be able to run the S2I builder process, which is quite memory- and CPU-intensive:

$ crc config set memory 16384

Now start CRC with:

$ crc start

When your OpenShift cluster is ready, check that you are able to connect, using the credentials which have been printed on the Console:

$ oc login -u kubeadmin -p kKdPx-pjmWe-b3kuu-jeZm3 https://api.crc.testing:6443

You should now be logged into the default project. Let’s create a new project for our application:

$ oc new-project quarkus

The first application we need to add to our project is the PostgreSQL database. We will create it using the ‘oc’ command line as follows, which allows setting the username/password/database attributes in one line:

$ oc new-app -e POSTGRESQL_USER=quarkus -e POSTGRESQL_PASSWORD=quarkus -e POSTGRESQL_DATABASE=quarkusdb postgresql

The postgresql image will be pulled from the default repository and in a minute or so you should be able to check:

oc get pods
NAME                      READY   STATUS                      
postgresql-1-xdlwt        1/1     Running              

Ok, so now we need to plug the Quarkus application into it. We can do this in several ways:

  1. Creating a Binary build of your application and uploading content in it from the local folder
  2. Using JKube Maven plugin to generate the Docker image, Kubernetes manifest and deploy to OpenShift.

Let’s check them in detail.

Deploying Quarkus applications on OpenShift

To deploy a Quarkus application on OpenShift, the simplest way to go is to install the OpenShift extension:

mvn quarkus:add-extension -Dextensions="openshift"

Then, let’s review our application configuration in application.properties:

%prod.quarkus.datasource.db-kind=postgresql
%prod.quarkus.datasource.username=quarkus
%prod.quarkus.datasource.password=quarkus
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://postgresql/quarkusdb
%prod.quarkus.datasource.jdbc.max-size=8
%prod.quarkus.datasource.jdbc.min-size=2

quarkus.hibernate-orm.database.generation=drop-and-create
quarkus.hibernate-orm.log.sql=true
quarkus.hibernate-orm.sql-load-script=import.sql

quarkus.openshift.expose=true

As you can see, our application has a datasource configuration that will be used for the “prod” profile (used on OpenShift). When running in “dev” mode, the Dev Services feature will automatically set up a database container for us, based on the JDBC extension we have included:

    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-jdbc-postgresql</artifactId>
    </dependency>

Now it’s time to deploy your application. To deploy a Java based application, run the following command:

$ mvn clean package -Dquarkus.kubernetes.deploy=true

On the other hand, to deploy a native container image, add the “native” profile to it:

$ mvn clean package -Pnative -Dquarkus.kubernetes.deploy=true

Note: make sure you have correctly set GRAALVM_HOME and JAVA_HOME in order to complete the above step. In a few minutes your application will be deployed:

$ oc get pods
NAME                          READY   STATUS      RESTARTS   AGE
hibernate-advanced-1-build    0/1     Completed   0          21m
hibernate-advanced-1-deploy   0/1     Completed   0          24m
hibernate-advanced-3-deploy   0/1     Completed   0          20m
hibernate-advanced-3-zd8k6    1/1     Running     0          20m
postgresql-7677796b66-86ps7   1/1     Running     0          27m

Check the application route:

$ oc get routes
NAME             HOST/PORT                                 PATH   SERVICES         PORT       TERMINATION   WILDCARD
hibernate-demo   hibernate-demo-quarkus.apps-crc.testing          hibernate-demo   8080-tcp                 None

You can reach that route from the browser and verify that the application is running.

Let’s see now how to use the JKube Maven plugin to deploy our application on OpenShift.

Deploying Quarkus on OpenShift using JKube Maven plugin

The JKube Maven plugin is the successor of the deprecated Fabric8 Maven plugin and can be used for building container images using Docker, JIB or S2I build strategies. Eclipse JKube also generates and deploys Kubernetes/OpenShift manifests at compile time.

Using this plugin is straightforward: we recommend adding a profile for deploying applications to OpenShift:

<profile>
    <id>openshift</id>
    <properties>
        <jkube.generator.quarkus.nativeImage>
            true
        </jkube.generator.quarkus.nativeImage>
    </properties>
    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>org.eclipse.jkube</groupId>
                    <artifactId>openshift-maven-plugin</artifactId>
                    <version>${jkube.version}</version>
                    <executions>
                        <execution>
                            <goals>
                                <goal>resource</goal>
                                <goal>build</goal>
                            </goals>
                        </execution>
                    </executions>
                    <configuration>
                        <enricher>
                            <config>
                                <jkube-service>
                                    <type>NodePort</type>
                                </jkube-service>
                            </config>
                        </enricher>
                    </configuration>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>
</profile>

This profile includes the openshift-maven-plugin, which can be enriched by adding extra services, such as a NodePort for remote access to the service.

To deploy the native application on OpenShift, make sure you have built the native image:

 mvn package -Pnative -Dnative-image.docker-build=true -DskipTests=true

Now you can build the Docker image, generate the Kubernetes manifests and apply them to the cluster as follows:

mvn oc:build oc:resource oc:apply

You will see from your Console logs that the application has been deployed:

[INFO] oc: OpenShift platform detected
[INFO] oc: Using project: quarkus
[INFO] oc: Creating a Service from openshift.yml namespace quarkus name hibernate-advanced
[INFO] oc: Created Service: target/jkube/applyJson/quarkus/service-hibernate-advanced.json
[INFO] oc: Creating a DeploymentConfig from openshift.yml namespace quarkus name hibernate-advanced
[INFO] oc: Created DeploymentConfig: target/jkube/applyJson/quarkus/deploymentconfig-hibernate-advanced.json
[INFO] oc: Creating Route quarkus:hibernate-advanced host: null
[INFO] oc: HINT: Use the command `oc get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

Check that the Pods are available:

$ oc get pods
NAME                             READY   STATUS      RESTARTS   AGE
hibernate-advanced-1-9bwjt       1/1     Running     0          15m
hibernate-advanced-1-deploy      0/1     Completed   0          15m
hibernate-advanced-s2i-1-build   0/1     Completed   0          16m
postgresql-1-4hs7z               1/1     Running     3          22m
postgresql-1-deploy              0/1     Completed   0          22m

From the Pod log you can verify that the native application is up and running:

oc logs hibernate-advanced-1-9bwjt 

QUARKUS_OPTS environment variable was not set, using default values of -Xmx24M -Xms16M -Xmn24M
__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2021-01-03 09:27:56,101 INFO  [io.agr.pool] (main) Datasource '<default>': Initial size smaller than min. Connections will be created when necessary
   . . . . .
Hibernate: 
    INSERT INTO customer (id, name, surname) VALUES ( nextval('customerId_seq'), 'John','Doe')
Hibernate: 
    INSERT INTO customer (id, name, surname) VALUES ( nextval('customerId_seq'), 'Fred','Smith')
2021-01-03 09:27:56,180 INFO  [io.quarkus] (main) hibernate-advanced 1.0-SNAPSHOT native (powered by Quarkus 1.10.5.Final) started in 0.097s. Listening on: http://0.0.0.0:8080
2021-01-03 09:27:56,181 INFO  [io.quarkus] (main) Profile prod activated. 
2021-01-03 09:27:56,181 INFO  [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-postgresql, mutiny, narayana-jta, resteasy, resteasy-jsonb, smallrye-context-propagation]

And finally, gather the Route so that you can access the application:

oc get routes
NAME             HOST/PORT                                      PATH   SERVICES         PORT   TERMINATION   WILDCARD
hibernate-demo   hibernate-demo-quarkus.apps-crc.testing          hibernate-demo   8080                 None

Messaging with Quarkus – part one: JMS Messaging

Quarkus includes several options for sending and consuming asynchronous messages. In this two-part tutorial we will learn how to send messages using the JMS API, which is well-known to Java Enterprise developers. In the next tutorial we will check out how to use Reactive Messaging to handle messages through the mediation of a bus.

JMS support for Quarkus has been added through the quarkus-artemis-jms extension:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-artemis-jms</artifactId>
</dependency>

By using this extension, you can migrate your existing JMS applications (or create new ones) and leverage transactions, which are available by definition in the JMS API.
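For example, a producer could opt into a locally transacted session, grouping several sends into one atomic unit. This is only a sketch, assuming the same kind of injected ConnectionFactory and "exampleQueue" destination used later in this tutorial:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;

public class TransactedSender {

    public void sendBatch(ConnectionFactory connectionFactory, String[] messages) {
        // SESSION_TRANSACTED: no message is delivered until commit() is called
        try (JMSContext context = connectionFactory.createContext(JMSContext.SESSION_TRANSACTED)) {
            for (String m : messages) {
                context.createProducer().send(context.createQueue("exampleQueue"), m);
            }
            context.commit();   // make the whole batch visible atomically
        }
    }
}
```

If an exception is thrown before commit(), closing the context discards the uncommitted sends, so consumers never see a partial batch.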

Spinning up ArtemisMQ

In order to get started, we will spin up an ArtemisMQ server and produce/consume messages with it.

If you want to set up Artemis locally on your machine, this tutorial provides an overview of ArtemisMQ: Introduction to ActiveMQ Artemis

On the other hand, if you want to get started even more quickly, you can launch ArtemisMQ with Docker as follows:

docker run -it --rm -p 8161:8161 -p 61616:61616 -e ARTEMIS_USERNAME=quarkus -e ARTEMIS_PASSWORD=quarkus vromero/activemq-artemis:2.9.0-alpine

Notice we are using some environment variables to configure a management user for Artemis, which will be needed in your Quarkus application.

Creating the Quarkus project

Next, create your Quarkus project including the artemis-jms dependency, plus the JSON-B API, which we will use to produce/consume and parse messages in JSON format:

mvn io.quarkus:quarkus-maven-plugin:1.2.0.Final:create \
    -DprojectGroupId=com.mastertheboss.quarkus \
    -DprojectArtifactId=jms-demo \
    -Dextensions="artemis-jms,resteasy-jsonb,resteasy"

Our application will just contain a JMS Producer, a JMS Consumer and an endpoint which triggers the message producer. Here is the REST endpoint:

package com.mastertheboss.quarkus.jms;

import javax.inject.Inject;
import javax.ws.rs.*;
import javax.ws.rs.core.Response;


@Path("/jms")
@Produces("application/json")
@Consumes("application/json")
public class JMSEndpoint {

    @Inject
    JMSProducer producer;

    @POST
    public Response sendMessage(String message) {
        producer.sendMessage(message);
        return Response.status(201).build();

    }

}

Then, the JMSProducer class, which sends a JMS message to the “exampleQueue” destination:

package com.mastertheboss.quarkus.jms;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSRuntimeException;
import javax.jms.Session;


@ApplicationScoped
public class JMSProducer {

    @Inject
    ConnectionFactory connectionFactory;

    public void sendMessage(String message) {
        try (JMSContext context = connectionFactory.createContext(Session.AUTO_ACKNOWLEDGE)){
            context.createProducer().send(context.createQueue("exampleQueue"), message);
        } catch (JMSRuntimeException ex) {
            // handle exception (details omitted)
        }
    }
}

Then, we have the JMSConsumer class. As we don’t have Message Driven Beans in Quarkus (since Quarkus is not an application server with EJB support), we can simulate message consumption using a CDI bean. The JMSConsumer bean schedules a task every 5 seconds that checks for incoming messages on our JMS queue:

package com.mastertheboss.quarkus.jms;

import io.quarkus.runtime.ShutdownEvent;
import io.quarkus.runtime.StartupEvent;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;
import javax.jms.*;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;
import java.io.StringReader;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;


@ApplicationScoped
public class JMSConsumer implements Runnable {

    @Inject
    ConnectionFactory connectionFactory;

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void onStart(@Observes StartupEvent ev) {
        scheduler.scheduleWithFixedDelay(this, 0L, 5L, TimeUnit.SECONDS);
    }

    void onStop(@Observes ShutdownEvent ev) {
        scheduler.shutdown();
    }

    @Override
    public void run() {
        try (JMSContext context = connectionFactory.createContext(Session.AUTO_ACKNOWLEDGE)) {
            javax.jms.JMSConsumer consumer = context.createConsumer(context.createQueue("exampleQueue"));
            while (true) {
                Message message = consumer.receive();
                if (message == null) {
                    return;
                }
                JsonReader jsonReader = Json.createReader(new StringReader(message.getBody(String.class)));
                JsonObject object = jsonReader.readObject();
                String msg = object.getString("message");
                System.out.println(msg);
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}

The last item is the application.properties file, which contains the URL of the Artemis Server and the login credentials:

quarkus.artemis.url=tcp://localhost:61616
quarkus.artemis.username=quarkus
quarkus.artemis.password=quarkus

Running the example

You can start the example with:

$ mvn install quarkus:dev

Once Quarkus has started, send a JMS message through an HTTP POST, as in this example:

curl -d '{"message":"Hello!"}' -H "Content-Type: application/json" -X POST http://localhost:8080/jms

You should be able to see the message in your Quarkus Console:

Hello!

You can also check the result through the ArtemisMQ console, which is available at http://localhost:8161. Log in with “quarkus/quarkus”:

Then, you can verify that the JMS destination has been created and that messages are piling up in it.

That’s all. Check out the next tutorial Messaging with Quarkus part two: Reactive Messaging in order to learn how to use Reactive Messaging to produce and consume messages with Quarkus.

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/quarkus/jms-demo