Monitoring Enterprise applications with OpenShift and Prometheus

This article covers how to monitor Java Enterprise applications using OpenShift Container Platform 4.6. For the purpose of this example, we will be using the JBoss Enterprise Application Platform expansion pack (JBoss EAP XP), which empowers JBoss EAP with the MicroProfile APIs, such as the Metrics API.

The monitoring solution we will use is Prometheus, which is available out of the box in OpenShift 4.6 and later. The advantage of using OpenShift’s embedded Prometheus is that you can use a single console (the OpenShift Developer Console) to view and manage your metrics.

Older versions of OpenShift can still use Prometheus; however, you will need to install it separately and access metrics through its own console.

For those who are new to it, Prometheus is an increasingly popular toolkit that provides monitoring and alerting across applications and servers.

The primary components of Prometheus include:

  • The Prometheus server, which scrapes and stores time-series data
  • A push gateway for supporting short-lived jobs
  • Special-purpose exporters for services like HAProxy, StatsD, Graphite, etc.
  • An Alertmanager to handle alerts
  • Client libraries for instrumenting application code

To run our demo, we will use Code Ready Containers (CRC), which provides a simple way to start a single-node OpenShift cluster. If you want to learn how to get started with CRC, see the Getting started with Code Ready Containers tutorial.

Enabling application monitoring on OpenShift

Before starting CRC, you need to set the property “enable-cluster-monitoring” to true, as monitoring, alerting, and telemetry are disabled by default on CRC.

$ crc config set enable-cluster-monitoring true

Next start CRC:

$ crc start
   . . . . .

Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: T3sJD-jjueE-2BnHe-ftNBw

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443

Now you can log in as the kubeadmin user:

$ oc login -u kubeadmin https://api.crc.testing:6443

Now, in order to enable monitoring of our own applications with the embedded Prometheus, we will edit the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

And set the techPreviewUserWorkload setting to true under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    techPreviewUserWorkload:
      enabled: true

Great. Within a minute, the Prometheus, Thanos, and Alertmanager Pods will start:

$ oc get pods -n openshift-monitoring
NAME                                           READY   STATUS    RESTARTS   AGE
alertmanager-main-0                            5/5     Running   0          58d
alertmanager-main-1                            5/5     Running   0          58d
alertmanager-main-2                            5/5     Running   0          58d
cluster-monitoring-operator-686555c948-7lq8z   2/2     Running   0          58d
grafana-6f4d96d7fd-5zz9p                       2/2     Running   0          58d
kube-state-metrics-749954d685-h9nfx            3/3     Running   0          58d
node-exporter-5tk5t                            2/2     Running   0          58d
openshift-state-metrics-587d97bb47-rqljs       3/3     Running   0          58d
prometheus-adapter-85488fffd6-8qpdz            1/1     Running   0          2d8h
prometheus-adapter-85488fffd6-nw9kz            1/1     Running   0          2d8h
prometheus-k8s-0                               7/7     Running   0          58d
prometheus-k8s-1                               7/7     Running   0          58d
prometheus-operator-658ccb589c-qbgwq           2/2     Running   0          58d
telemeter-client-66c98fb87-7qfv5               3/3     Running   0          58d
thanos-querier-7cddb86d6c-9vwmc                5/5     Running   0          58d
thanos-querier-7cddb86d6c-hlmb7                5/5     Running   0          58d

Deploying an Enterprise application which produces Metrics

The application we will monitor is a WildFly quickstart which contains MicroProfile Metrics in a REST Endpoint: https://github.com/wildfly/quickstart/tree/master/microprofile-metrics

The main method in the REST Endpoint calculates whether a number is prime, using the following annotations from the Metrics API:

@GET
@Path("/prime/{number}")
@Produces(MediaType.TEXT_PLAIN)
@Counted(name = "performedChecks", displayName="Performed Checks", description = "How many prime checks have been performed.")
@Timed(name = "checksTimer", absolute = true, description = "A measure of how long it takes to perform the primality test.", unit = MetricUnits.MILLISECONDS)
@Metered(name = "checkIfPrimeFrequency", absolute = true)
public String checkIfPrime(@PathParam("number") long number) {
   // code here
}
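The method body is elided above; a minimal trial-division implementation (a sketch of what it could look like, not necessarily the quickstart's exact code) would be:

```java
public class PrimeChecker {

    // Trial division up to sqrt(n): slow on large inputs, which is
    // exactly what makes the checksTimer metric interesting to watch.
    public static boolean isPrime(long number) {
        if (number < 2) {
            return false;
        }
        for (long i = 2; i * i <= number; i++) {
            if (number % i == 0) {
                return false;
            }
        }
        return true;
    }

    public static String checkIfPrime(long number) {
        return number + (isPrime(number) ? " is a prime number." : " is not a prime number.");
    }
}
```

Every call to this method increments the performedChecks counter and feeds a new sample into the checksTimer histogram.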

To get started, we will create a project named “xp-demo”:

$ oc new-project xp-demo

Then, in order to run the EAP XP application, we first need to deploy the required template files for JBoss EAP Expansion Pack:

oc replace --force -n openshift -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp2/jboss-eap-xp2-openjdk11-openshift.json

oc replace --force -n openshift -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp2/templates/eap-xp2-basic-s2i.json

Now we can deploy the actual application, passing the EAP image jboss-eap-xp2-openjdk11-openshift:2.0 as a template parameter:

 oc new-app --template=eap-xp2-basic-s2i -p APPLICATION_NAME=eap-demo -p EAP_IMAGE_NAME=jboss-eap-xp2-openjdk11-openshift:2.0 -p EAP_RUNTIME_IMAGE_NAME=jboss-eap-xp2-openjdk11-runtime-openshift:2.0 -p IMAGE_STREAM_NAMESPACE=openshift -p SOURCE_REPOSITORY_URL=https://github.com/wildfly/quickstart -p SOURCE_REPOSITORY_REF=23.0.0.Final -p CONTEXT_DIR="microprofile-metrics"

Watch for the EAP Pods to come up:

$ oc get pods
NAME                               READY   STATUS      RESTARTS   AGE
eap-demo-1-6dw28                   1/1     Running     0          22h
eap-demo-1-deploy                  0/1     Completed   0          22h
eap-demo-2-build                   0/1     Completed   0          22h
eap-demo-build-artifacts-1-build   0/1     Completed   0          22h

Since the EAP metrics are available on port 9990, we will create an additional Service named “eap-xp2-basic-app-admin” which targets port 9990 of the “eap-demo” application:

apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app: eap-xp2-basic-s2i-admin
    app.kubernetes.io/component: eap-xp2-basic-s2i-admin
    app.kubernetes.io/instance: eap-xp2-basic-s2i-admin
    application: eap-xp2-basic-app-admin
    template: eap-xp2-basic-s2i-admin
    xpaas: "1.0"
  name: eap-xp2-basic-app-admin
  namespace: xp-demo
spec:
  ports:
    - name: admin
      port: 9990
      protocol: TCP
      targetPort: 9990
  selector:
    deploymentConfig: eap-demo
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Save it in a file, for example service.yml and create it with:

$ oc create -f service.yml

Check that the list of services now also includes “eap-xp2-basic-app-admin”, bound to port 9990:

$ oc get services
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
eap-demo                  ClusterIP   10.217.4.139   <none>        8080/TCP   22h
eap-demo-ping             ClusterIP   None           <none>        8888/TCP   22h
eap-xp2-basic-app-admin   ClusterIP   10.217.5.217   <none>        9990/TCP   22h

As an optional step, we will also expose a Route for this service on the path “/metrics”, so that we can check metrics from outside the cluster as well:

$ oc expose svc/eap-xp2-basic-app-admin --path=/metrics

Here are the available Routes:

$ oc get routes
NAME          HOST/PORT                              PATH       SERVICES                  PORT    TERMINATION     WILDCARD
admin-route   admin-route-xp-demo.apps-crc.testing   /metrics   eap-xp2-basic-app-admin   admin                   None
eap-demo      eap-demo-xp-demo.apps-crc.testing                 eap-demo                  <all>   edge/Redirect   None

The last resource we need to create is a ServiceMonitor, which Prometheus will use to select the Service to monitor. For this purpose, we include a selector matching the “eap-xp2-basic-s2i-admin” label:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus-example-monitor
  name: prometheus-example-monitor
  namespace: xp-demo
spec:
  endpoints:
  - interval: 30s
    port: admin
    scheme: http
  selector:
    matchLabels:
      app: eap-xp2-basic-s2i-admin

Save it in a file, for example servicemonitor.yml and create it with:

$ oc create -f servicemonitor.yml

Since Prometheus by default scrapes metrics on the “/metrics” URI, nothing else besides the Service binding is required.

Now let’s turn to the Web admin console.

Accessing Metrics from the Enterprise application

Log into the OpenShift console and switch to the Developer perspective. You will see that a Monitoring tab is available:

Select the Monitoring tab. You will enter the Monitoring Dashboard, which contains a set of line charts (CPU, memory, bandwidth, packets transmitted) for the EAP Pods:

Now switch to the “Metrics” tab. From there you can either select one of the metrics shown in the Dashboards or opt for a custom metric, which is what we will do. Which one? To discover the available metrics, you can query admin-route-xp-demo.apps-crc.testing, which targets EAP on port 9990:

Here is an excerpt from it:

application_duplicatedCounter_total{type="original"} 0.0
application_duplicatedCounter_total{type="copy"} 0.0
# HELP application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_parallelAccess_current Number of parallel accesses
# TYPE application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_parallelAccess_current gauge
application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_parallelAccess_current 0.0
# TYPE application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_parallelAccess_max gauge
application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_parallelAccess_max 0.0
# TYPE application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_parallelAccess_min gauge
application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_parallelAccess_min 0.0
# HELP application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_highestPrimeNumberSoFar Highest prime number so far.
# TYPE application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_highestPrimeNumberSoFar gauge
application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_highestPrimeNumberSoFar 13.0
# HELP application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_performedChecks_total How many prime checks have been performed.
# TYPE application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_performedChecks_total counter
application_org_wildfly_quickstarts_microprofile_metrics_PrimeNumberChecker_performedChecks_total 7.0
# TYPE application_injectedCounter_total counter
application_injectedCounter_total 0.0
# TYPE application_checkIfPrimeFrequency_total counter
application_checkIfPrimeFrequency_total 7.0
# TYPE application_checkIfPrimeFrequency_rate_per_second gauge
application_checkIfPrimeFrequency_rate_per_second 0.009764570078023867
# TYPE application_checkIfPrimeFrequency_one_min_rate_per_second gauge
application_checkIfPrimeFrequency_one_min_rate_per_second 0.01514093009622217
# TYPE application_checkIfPrimeFrequency_five_min_rate_per_second gauge
application_checkIfPrimeFrequency_five_min_rate_per_second 0.015280392552156003
# TYPE application_checkIfPrimeFrequency_fifteen_min_rate_per_second gauge
application_checkIfPrimeFrequency_fifteen_min_rate_per_second 0.0067481340497595925
# TYPE application_checksTimer_rate_per_second gauge
application_checksTimer_rate_per_second 0.009764593427401155
# TYPE application_checksTimer_one_min_rate_per_second gauge
application_checksTimer_one_min_rate_per_second 0.01514093009622217
# TYPE application_checksTimer_five_min_rate_per_second gauge
application_checksTimer_five_min_rate_per_second 0.015280392552156003
# TYPE application_checksTimer_fifteen_min_rate_per_second gauge
application_checksTimer_fifteen_min_rate_per_second 0.0067481340497595925
# TYPE application_checksTimer_min_seconds gauge
application_checksTimer_min_seconds 1.301E-5
# TYPE application_checksTimer_max_seconds gauge
application_checksTimer_max_seconds 1.08862E-4
# TYPE application_checksTimer_mean_seconds gauge
application_checksTimer_mean_seconds 2.3666899949653865E-5
# TYPE application_checksTimer_stddev_seconds gauge
application_checksTimer_stddev_seconds 2.4731858133453515E-5
# HELP application_checksTimer_seconds A measure of how long it takes to perform the primality test.
# TYPE application_checksTimer_seconds summary
application_checksTimer_seconds_count 7.0
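Each uncommented line above follows Prometheus’s text exposition format: a metric name, optional {label="value"} pairs, and a sample value. A minimal parser sketch for such sample lines (the class name is ours, and label parsing is simplified: it does not handle commas inside label values):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExpositionLine {

    // metric_name{label="value",...} 1.23  — the label block is optional.
    private static final Pattern LINE = Pattern.compile(
            "^([a-zA-Z_:][a-zA-Z0-9_:]*)(?:\\{([^}]*)\\})?\\s+(\\S+)$");

    // Parses a single sample line (not # HELP / # TYPE comment lines).
    public static Map<String, String> parse(String line) {
        Matcher m = LINE.matcher(line.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("Not a sample line: " + line);
        }
        Map<String, String> result = new LinkedHashMap<>();
        result.put("name", m.group(1));
        result.put("value", m.group(3));
        if (m.group(2) != null) {
            for (String pair : m.group(2).split(",")) {
                String[] kv = pair.split("=", 2);
                result.put(kv[0].trim(), kv[1].trim().replace("\"", ""));
            }
        }
        return result;
    }
}
```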

For example, we will check a performance metric, “application_checksTimer_max_seconds”, which is the maximum time (in seconds) taken to check whether a number is prime:

Now let’s start requesting the “/prime” endpoint, passing a number to be checked, for example: https://eap-demo-xp-demo.apps-crc.testing/prime/13

You will see that the Custom Metric will track our metric:

 

Adding Alerts for relevant metrics

Alerting with Prometheus is separated into two parts. Alerting rules in Prometheus servers send alerts to an Alertmanager. The Alertmanager then manages those alerts, including silencing, inhibition, aggregation and sending out notifications via methods such as email, on-call notification systems, and chat platforms.

As an example, we will add a sample alert that captures the value of “application_checksTimer_max_seconds” when it goes over a certain threshold (3 ms, i.e. 0.003 seconds). Here is our sample alert file:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert
  namespace: xp-demo
spec:
  groups:
  - name: performance
    rules:
    - alert: CheckTimerAlert
      expr: application_checksTimer_max_seconds > 0.003

Save it in a file, for example alert.yml and create it with:

$ oc create -f alert.yml
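The rule above fires whenever a scraped sample of application_checksTimer_max_seconds exceeds the 0.003-second threshold. In plain Java terms (an illustration of the comparison only, not Prometheus code):

```java
public class CheckTimerRule {

    // Same threshold as in the PrometheusRule: 3 ms expressed in seconds.
    static final double THRESHOLD_SECONDS = 0.003;

    // Fires if any scraped sample exceeds the threshold,
    // mirroring the alert expression's "> 0.003" comparison.
    public static boolean fires(double[] samplesSeconds) {
        for (double s : samplesSeconds) {
            if (s > THRESHOLD_SECONDS) {
                return true;
            }
        }
        return false;
    }
}
```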

Let’s request some time-consuming prime numbers calculations such as: https://eap-demo-xp-demo.apps-crc.testing/prime/10001891

You will now see that, as the value of that metric goes above the defined threshold (0.003 seconds), an alert becomes available in the “Alerts” tab:

Clicking on it, you will see the condition which activated the Alert:

That’s all.

We have just covered how to enable application monitoring on OpenShift 4.6 (or newer). We enabled the embedded Prometheus service and deployed an Enterprise application which emits metrics that Prometheus can scrape.

Next, we showed how these metrics can be viewed in the OpenShift Developer Console and how to bind alerts to critical metrics.

Profiling Jakarta EE Applications with NetBeans Profiler

In this article we will learn how to profile Java/Jakarta EE applications with NetBeans’ built-in profiler and how to instrument the WildFly application server for this purpose.

There are several vendor tools for profiling Java applications, such as JProfiler or YourKit. Among open-source tools, a valid option is the NetBeans Profiler, which lets you develop and profile your applications without leaving your IDE. As you will see, the NetBeans profiling tool easily lets you monitor thread states, CPU performance, and memory usage of your application from within the IDE, and imposes relatively low overhead.

Using the Profiler for the First Time

The first time you use the Profiler, you must obtain calibration data for each Java platform that will be used for profiling. The calibration only needs to be performed once. You can run the JVM calibration at any time by performing the following steps:

From the upper Menu, choose: Tools | Options | Java | Profiler | General | Manage Calibration Data:

Click on the Manage button. Then, select the Java Platform. Click Calibrate. A dialog box will appear when the calibration operation is complete.

Next, we need to instrument WildFly so that it can be profiled. For this purpose, we need to add the JFluid Server API to the WildFly modules: it allows starting full instrumentation profiling, and calls to it are injected into the target application’s bytecode when classes are instrumented.

The required library, named jfluid-server.jar, is included in the NetBeans distribution under $NETBEANS_HOME/profiler/lib/.

A simple way to make it available globally as a module is to create a global directory and put the jar file in there:

$ mkdir $JBOSS_HOME/standalone/profiler

$ cp jfluid-server.jar $JBOSS_HOME/standalone/profiler

Then from WildFly CLI:

/subsystem=ee/global-directory=profiler:add(path="standalone/profiler", relative-to=jboss.home.dir)

Finally, we need to add the top-level package (“org.netbeans.lib”), under which the JFluid API is located, to the list of JBOSS_MODULES_SYSTEM_PKGS:

if [ "x$JBOSS_MODULES_SYSTEM_PKGS" = "x" ]; then
   JBOSS_MODULES_SYSTEM_PKGS="org.jboss.byteman,org.netbeans.lib"
fi

Profiling your Jakarta EE applications

There are two ways to profile Enterprise applications using NetBeans:

1) Start an application or a Server in Profile mode

This can be achieved by selecting the “Services” Tab and from there start the application server. For example, WildFly Application Server | Start in Profile Mode:

Please note that there’s currently an issue with NetBeans on OpenJDK 11 (https://issues.apache.org/jira/browse/NETBEANS-2452) which causes the unsupported option “-Xbootclasspath/p” to be added to your Java options, resulting in the error:

-Xbootclasspath/p is no longer a supported option.

2) Attach to an existing JVM Process

The most stable option is to start the application server externally and then attach to it.

We will therefore start WildFly from the command line:

$ ./standalone.sh

From the NetBeans Menu, select “Profile | Attach to External Process”.

Click on the Profile button, then select the PID of your WildFly server (or other JVM) and click on it to attach to the process:

NOTE: If your connection to the WildFly process hangs and you are sure the connection should work (i.e., you have double-checked all the standard things like firewalls, network configuration, etc.), the RMI system property java.rmi.server.hostname is something to try:

$ ./standalone.sh -Djava.rmi.server.hostname=localhost

Profiling applications

If you click on the arrow next to the “Attach” button, you will see that several options are available:

Telemetry:

Within this panel, you get real-time, high-level information about the properties of the target JVM.

Methods:

From within this panel, you can spot hotspots: the total time spent in each method and the total CPU time for that method. You can also apply filters to packages and classes to narrow your search.

Objects:

From this panel you can measure the number of bytes allocated and the number of objects created. This is a valuable hint for finding memory leaks in your applications.

Threads:

This is a performance view of the time spent running each Thread:

SQL Queries:

Within this panel you can track the performance of JDBC calls and the number of executions.
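These views are easiest to appreciate against a workload with an obvious hotspot. Here is a toy example (illustrative only, not part of any quickstart) you could run while attached:

```java
public class HotspotDemo {

    // Deliberately slow: repeated String concatenation copies the whole
    // string each iteration, so this shows up as the CPU/allocation hotspot
    // in the Methods and Objects views.
    public static String slowJoin(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i + ",";
        }
        return s;
    }

    // The fix a profiling session would suggest: StringBuilder
    // avoids the quadratic copying while producing the same output.
    public static String fastJoin(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }
}
```

Comparing the two methods in the Methods view makes the difference in total time immediately visible.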

 

Taking Snapshots

When running a profiling session, you can capture the results by taking a Snapshot. A snapshot captures the profiling data at the instant you take the snapshot. Snapshots differ from live profiling results in the following ways:

  • Snapshots can be checked when no profiling session is running.
  • Snapshots contain a more detailed record of profiling data than live results.
  • A Snapshot can be compared with another one, for example to compare memory usage

This concludes our short tour of profiling an application using the NetBeans IDE. In this tutorial we showed the basics of how to use an open-source IDE to profile a Jakarta EE application server and view the profiling results. The steps outlined above can also be applied to other profiler tools such as VisualVM. Note that the JFluid API in VisualVM is located under a different package, so you will have to change JBOSS_MODULES_SYSTEM_PKGS accordingly:

if [ "x$JBOSS_MODULES_SYSTEM_PKGS" = "x" ]; then
   JBOSS_MODULES_SYSTEM_PKGS="org.jboss.byteman,org.graalvm.visualvm"
fi

How to deploy SOAP Web Services in Jakarta EE applications

This tutorial covers how to build and deploy SOAP-based Web services in Jakarta EE applications, also discussing the changes in Java SE that removed the JAX-WS API from the default Java modules.

First of all, a bit of history. The JAX-WS API used to be bundled in the JDK until Java 8. JAX-WS was then deprecated in Java 9 and 10, and finally removed in Java 11, as you can read in the release notes:

java.xml.ws (JAX-WS, plus the related technologies SAAJ and Web Services Metadata) - REMOVED

This means that if you build an application which includes, for example, javax.xml.soap, with Java 11 (and nothing else on the classpath), the following error will be returned:

package javax.xml.soap does not exist

There are a couple of simple ways to get around this issue:

Check if the application server supports Jakarta SOAP Web Services

Since Jakarta EE 9, SOAP Web services are marked as an optional technology. As you can see from the following picture, technologies in gray are optional, which means that not all vendors might implement them.

In our case, if we have a look at the WildFly 22 application server’s modules, the javax.xml.soap classes are bundled in the server modules. For example:

$ grep -r javax.xml.soap.SOAPMessage
Binary file system/layers/base/javax/xml/soap/api/main/jboss-saaj-api_1.4_spec-1.0.2.Final.jar matches

On the other hand, if you check against a Jakarta EE 9 compatible application server (such as WildFly Preview), the classes are moved to the “jakarta.xml.soap” package:

$ grep -r jakarta.xml.soap.SOAPMessage
Binary file system/layers/base/jakarta/xml/soap/api/main/jboss-saaj-api_1.4_spec-1.0.2.Final-ee9.jar matches

Building a Jakarta EE application that uses SOAP Web services

That being said, in order to deploy a SOAP Web Services application to a Jakarta EE container like WildFly, which includes the SOAP libraries, it is sufficient to add the jakarta.jakartaee-api dependency:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency>

or for Jakarta EE 9:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>9.0.0</version>
    <scope>provided</scope>
</dependency>

On the other hand, if your application server does not include the SOAP Web services API, you need to add a dependency for jakarta.xml.ws-api and one compatible implementation for it.

For Jakarta EE 8 you can add:

<dependency>
  <groupId>jakarta.xml.ws</groupId>
  <artifactId>jakarta.xml.ws-api</artifactId>
  <version>2.3.3</version>
</dependency>
<dependency>
  <groupId>com.sun.xml.ws</groupId>
  <artifactId>jaxws-rt</artifactId>
  <version>2.3.3</version>
  <scope>runtime</scope>
</dependency>

For Jakarta EE 9, that would be:

<dependency>
  <groupId>jakarta.xml.ws</groupId>
  <artifactId>jakarta.xml.ws-api</artifactId>
  <version>3.0.0</version>
</dependency>
<dependency>
  <groupId>com.sun.xml.ws</groupId>
  <artifactId>jaxws-rt</artifactId>
  <version>3.0.0</version>
  <scope>runtime</scope>
</dependency>

That’s it. As a bonus tip, here is a Jakarta EE 9 SOAP Web service:

import jakarta.inject.Inject;
import jakarta.jws.WebParam;
import jakarta.jws.WebResult;
import jakarta.jws.WebService;
import jakarta.jws.soap.SOAPBinding;

@WebService
@SOAPBinding(style= SOAPBinding.Style.RPC)
public class AccountWS implements AccountWSItf{
	@Inject
	AccountManager ejb;

 
	public void newAccount(@WebParam(name = "name") String name) {
		ejb.createAccount(name);

	}

 
	public void withdraw(@WebParam(name = "name") String name,
			@WebParam(name = "amount") long amount) throws RuntimeException {
		ejb.withdraw(name, amount);
	}

 
	public void deposit(@WebParam(name = "name") String name,
			@WebParam(name = "amount") long amount) {
		ejb.deposit(name, amount);
	}

	@WebResult(name = "BankAccount")
	public Account findAccountByName(String name) {
		return ejb.findAccount(name);
	}
}
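Once deployed, an RPC-style service like this accepts SOAP envelopes over HTTP POST. A hedged example of what a newAccount request could look like (the ws namespace below is a guess based on the JAX-WS default of deriving it from the reversed package name; check the generated WSDL for the real one):

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ws="http://ws.mastertheboss.com/">
   <soapenv:Header/>
   <soapenv:Body>
      <ws:newAccount>
         <name>John Doe</name>
      </ws:newAccount>
   </soapenv:Body>
</soapenv:Envelope>
```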

The full source code for the Jakarta EE 9 Web Service is available here: https://github.com/fmarchioni/mastertheboss/tree/master/jax-ws/jakartaee-ws-basic

Building and deploying a Jakarta EE application on OpenShift

This is the second article about building and deploying a Jakarta EE service in the Cloud. In the first tutorial, we covered How to build and deploy a Jakarta EE application on Kubernetes.

Now we will show how to deploy the same application on OpenShift container application platform.

How to quickly install an OpenShift cluster

The simplest way to install and try OpenShift on your laptop is Red Hat Code Ready Containers (https://developers.redhat.com/products/codeready-containers/overview), which simplifies setup and testing and emulates the cloud development environment locally, with all of the tools needed to develop container-based applications.

A basic tutorial, which covers the Code Ready Containers (CRC) set up, is available here: Getting started with Code Ready Containers

Right now, we will assume that you have installed CRC so you can start it with:

$ crc start
   . . . . .
INFO Starting OpenShift cluster ... [waiting 3m]  
INFO                                              
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions 
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443' 
INFO To login as an admin, run 'oc login -u kubeadmin -p kKdPx-pjmWe-b3kuu-jeZm3 https://api.crc.testing:6443' 

We are done with the OpenShift setup.

Coding the Jakarta EE application

We will now deploy the same Jakarta EE application discussed in the tutorial (How to build and deploy a Jakarta EE application on Kubernetes)

The application is a basic REST service with access to an RDBMS, in this case PostgreSQL.

Here is our Model:

@Entity
@Table(name = "customer")
@NamedQuery(name = "findAllCustomer", query = "SELECT c FROM Customer c")
public class Customer implements Serializable {

    @Id
    @SequenceGenerator(
            name = "customerSequence",
            sequenceName = "customerId_seq",
            allocationSize = 1,
            initialValue = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customerSequence")
    private Long id;

    @Column
    private String name;

    @Column
    private String surname;

    // Getter/Setters omitted for brevity
}

Then, we include a REST endpoint with a method for adding a new Customer (via HTTP POST) and one for returning the list of Customers (via HTTP GET)

@Path("customers")
@Produces(MediaType.APPLICATION_JSON)
public class CustomerEndpoint {

    @Inject
    CustomerManager manager;

    @POST
    public void createCustomer(Customer customer) {
        manager.createCustomer(customer);
    }

    @GET
    public List<Customer> getAllCustomers() {
        return manager.getAllCustomers();
    }
}

The Manager class is the layer responsible for updating the Database:

@ApplicationScoped
public class CustomerManager {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void createCustomer(Customer customer) {
        em.persist(customer);
        System.out.println("Created Customer "+customer);

    }

    public List<Customer> getAllCustomers() {
        List<Customer> tasks = new ArrayList<>();
        try {

            tasks = em.createNamedQuery("findAllCustomer").getResultList();

        } catch (Exception e){
            e.printStackTrace();
        }
        return tasks;
    }
}

As we will be using PostgreSQL, our persistence.xml file will contain the following persistence unit definition:

<persistence version="2.1"
             xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="
        http://xmlns.jcp.org/xml/ns/persistence
        http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="primary">
        <jta-data-source>java:jboss/datasources/PostgreSQLDS</jta-data-source>
        <properties>
            <!-- Properties for Hibernate -->
            <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
            <property name="hibernate.show_sql" value="true"/>

        </properties>
    </persistence-unit>
</persistence>

You should have noticed that the Datasource is defined as a JTA Datasource. In order to provide a Datasource to our Jakarta EE service, we will provision a WildFly Bootable Jar, which includes it as an additional layer:

<build>
        <finalName>wildfly-jar-sample</finalName>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-jar-maven-plugin</artifactId>
                <version>${version.wildfly.jar}</version>
                <configuration>
                    <feature-packs>
                        <feature-pack>
                            <location>wildfly@maven(org.jboss.universe:community-universe)#${version.wildfly}</location>
                        </feature-pack>
                        <feature-pack>
                            <groupId>org.wildfly</groupId>
                            <artifactId>wildfly-datasources-galleon-pack</artifactId>
                            <version>1.1.0.Final</version>
                        </feature-pack>
                    </feature-packs>
                    <layers>
                        <layer>cloud-profile</layer>
                        <layer>postgresql-datasource</layer>
                    </layers>
                    <excluded-layers>
                        <layer>deployment-scanner</layer>
                    </excluded-layers>
                    <cloud/>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

As you can see from the above Maven plugin configuration, our WildFly Bootable Jar will be provisioned with:

  • cloud-profile: This is an aggregation of some basic layers (bean-validation, cdi, ee-security, jaxrs, jms-activemq, jpa, observability, resource-adapters, web-server) to address cloud use cases.
  • postgresql-datasource: This layer installs the PostgreSQL driver as a module inside the WildFly server. The driver is named postgresql. For more info, this layer is available at: https://github.com/wildfly-extras/wildfly-datasources-galleon-pack

We are done with the Jakarta EE service. Now we will add the last piece of the puzzle: the configuration to connect the two services (WildFly and PostgreSQL).

Connecting the Jakarta EE service with the PostgreSQL service

The last thing we need is some glue between the two services. When the WildFly image is built, it searches for some environment variables to find the settings to connect to the PostgreSQL database, so we have to provide this information via environment variables. That’s a simple task: we will add another YAML file (deployment.yaml) that contains this information and also the JVM settings we need for our Jakarta EE service:

spec:
  template:
    spec:
      containers:
      - env:
        - name: POSTGRESQL_USER
          value: user
        - name: POSTGRESQL_PASSWORD
          value: password
        - name: POSTGRESQL_DATABASE
          value: wildflydb
        - name: POSTGRESQL_SERVICE_HOST
          value: postgres
        - name: POSTGRESQL_SERVICE_PORT
          value: '5432'
        - name: JAVA_OPTIONS
          value: '-Xms128m -Xmx1024m'
        - name: GC_MAX_METASPACE_SIZE
          value: '256'
        - name: GC_METASPACE_SIZE
          value: '96'

Here is the full project tree:

src
└── main
    ├── java
    │   └── com
    │       └── mastertheboss
    │           ├── model
    │           │   └── Customer.java
    │           └── rest
    │               ├── CustomerEndpoint.java
    │               ├── CustomerManager.java
    │               └── JaxrsConfiguration.java
    ├── jkube
    │   └── deployment.yaml
    ├── resources
    │   ├── import.sql
    │   └── META-INF
    │       └── persistence.xml
    └── webapp
        ├── index.jsp
        └── WEB-INF
            └── beans.xml

Additionally, we have included an import.sql file with some initial data for our application:

INSERT INTO customer (id, name, surname) VALUES ( nextval('customerId_seq'), 'John','Doe');
INSERT INTO customer (id, name, surname) VALUES ( nextval('customerId_seq'), 'Fred','Smith');

Deploying the Jakarta EE application on OpenShift

Now it’s time to deploy our artifacts on OpenShift. First of all, let’s create a new project for our application:

oc new-project jakartaee-demo

Next, we will add the PostgreSQL database using the ‘oc new-app’ command, passing as arguments the same environment variables we have in deployment.yaml:

$ oc new-app -e POSTGRESQL_USER=user -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=wildflydb postgresql
--> Found image 40d2ad9 (11 months old) in image stream "openshift/postgresql" under tag "10" for "postgresql"

    PostgreSQL 10 
    ------------- 
    PostgreSQL is an advanced Object-Relational database management system (DBMS). The image contains the client and server programs that you'll need to create, run, maintain and access a PostgreSQL DBMS server.

    Tags: database, postgresql, postgresql10, rh-postgresql10

    * This image will be deployed in deployment config "postgresql"
    * Port 5432/tcp will be load balanced by service "postgresql"
      * Other containers can access this service through the hostname "postgresql"

--> Creating resources ...
    imagestreamtag.image.openshift.io "postgresql:10" created
    deploymentconfig.apps.openshift.io "postgresql" created
    service "postgresql" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/postgresql' 

Please note that it’s also possible to use a more agnostic approach for the creation of the database by adding the Service, Deployment and ConfigMap YAML files into the jkube/raw folder of the project, as shown in the Kubernetes tutorial. On the other hand, OpenShift greatly simplifies the orchestration of containers, so I prefer using the ‘oc new-app‘ command, which does it all in just one line.

In a short time, the PostgreSQL Pod will be available:

$ oc get pods
NAME                         READY   STATUS      RESTARTS   AGE
postgresql-1-deploy          0/1     Completed   0          2m10s
postgresql-1-pjwxb           1/1     Running     0          2m8s

Now let’s deploy our Jakarta EE application too. For this purpose, we will use the JKube Maven plugin, included in the “openshift” profile of the Maven project:

<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.eclipse.jkube</groupId>
                <artifactId>openshift-maven-plugin</artifactId>
                <version>${jkube.version}</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>resource</goal>
                            <goal>build</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <enricher>
                        <config>
                            <jkube-service>
                                <type>NodePort</type>
                            </jkube-service>
                        </config>
                    </enricher>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

To deploy our application, just execute the “deploy” goal against the “openshift” profile:

$ mvn oc:deploy -Popenshift

Then, check that the output of the command is successful:

[INFO] <<< openshift-maven-plugin:1.0.2:deploy (default-cli) < install @ jakartaee-demo <<<
[INFO] 
[INFO] 
[INFO] --- openshift-maven-plugin:1.0.2:deploy (default-cli) @ jakartaee-demo ---
[INFO] oc: Using OpenShift at https://api.crc.testing:6443/ in namespace default with manifest /home/francesco/git/mastertheboss/openshift/jakartaee/target/classes/META-INF/jkube/openshift.yml 
[INFO] oc: OpenShift platform detected
[INFO] oc: Using project: default
[INFO] oc: Creating a Service from openshift.yml namespace default name jakartaee-demo
[INFO] oc: Created Service: target/jkube/applyJson/default/service-jakartaee-demo.json
[INFO] oc: Creating a DeploymentConfig from openshift.yml namespace default name jakartaee-demo
[INFO] oc: Created DeploymentConfig: target/jkube/applyJson/default/deploymentconfig-jakartaee-demo.json
[INFO] oc: Creating Route default:jakartaee-demo host: null
[INFO] oc: HINT: Use the command `oc get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:49 min
[INFO] Finished at: 2020-12-29T12:46:09+01:00
[INFO] ------------------------------------------------------------------------

Great, the two Pods should be up and running now.

Check out the Route to reach the application:

$ oc get routes
NAME             HOST/PORT                                 PATH   SERVICES         PORT   TERMINATION   WILDCARD
jakartaee-demo   jakartaee-demo-default.apps-crc.testing          jakartaee-demo   8080                 None

And finally, let’s test the application:

$ curl jakartaee-demo-default.apps-crc.testing/rest/customers | jq

[
   {
      "id":1,
      "name":"John",
      "surname":"Doe"
   },
   {
      "id":2,
      "name":"Fred",
      "surname":"Smith"
   }
]
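
The endpoint also exposes an HTTP POST method, so you can create a new record from the command line as well (the name and surname here are just example values):

```
$ curl -s -X POST -H "Content-Type: application/json" \
       -d '{"name":"Jane","surname":"Roe"}' \
       jakartaee-demo-default.apps-crc.testing/rest/customers
```

A subsequent GET on /rest/customers should then return the new customer with the next generated id.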

That’s it! We have gone through the creation and deployment of a Jakarta EE application on the OpenShift Container Platform.

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/openshift/ocp-jakartaee

How to build and deploy a Jakarta EE application on Kubernetes

In this series of tutorials, we will show how to create and deploy a Jakarta EE service in a cloud environment. In this first article, we will learn how to deploy a WildFly application on Kubernetes using Minikube and the JKube Maven plugin. In the next article, we will target OpenShift as the cloud environment.

Let’s get started. The first step is obviously installing Kubernetes so that we can deploy applications on top of it.

Install a Kubernetes Cluster with Minikube

Kubernetes is an open source system for managing containerized applications across multiple hosts. It provides basic mechanisms for deployment, maintenance, and scaling of applications. The simplest way to get started with Kubernetes is to install Minikube.

Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing a single node. It ships with a CLI that provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.

The procedure for installing Minikube is detailed at: https://minikube.sigs.k8s.io/docs/start/

To install the binary distribution you can just download it and run the “install” command on it:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube

When done, launch the “minikube start” command, which will select the driver for your environment and download the virtual machine required to run the Kubernetes components:

$ minikube start

🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

When done, verify that the default and kube-system services are available:

$ minikube service list
|-------------|------------|--------------|-----|
|  NAMESPACE  |    NAME    | TARGET PORT  | URL |
|-------------|------------|--------------|-----|
| default     | kubernetes | No node port |
| kube-system | kube-dns   | No node port |
|-------------|------------|--------------|-----|

Great, your Kubernetes cluster is now up and running.

Then, in order to build the Docker image using Minikube’s Docker instance, execute:

$ eval $(minikube docker-env)

The command “minikube docker-env” returns a set of Bash environment variable exports that configure your local environment to re-use the Docker daemon inside the Minikube instance.

If you fail to configure your Minikube environment as above, you will see a “Connection reset by peer” error when deploying your resources.

Configuring the Jakarta EE Service

We will now set up a Jakarta EE REST service which depends on a Database service running as well on Kubernetes.

The REST service follows a standard layered pattern, starting with a model class named Customer:

@Entity
@Table(name = "customer")
@NamedQuery(name = "findAllCustomer", query = "SELECT c FROM Customer c")
public class Customer implements Serializable {

    @Id
    @SequenceGenerator(
            name = "customerSequence",
            sequenceName = "customerId_seq",
            allocationSize = 1,
            initialValue = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customerSequence")
    private Long id;

    @Column
    private String name;

    @Column
    private String surname;

    // Getter/Setters omitted for brevity
}

Then, we include a REST endpoint with a method for adding a new Customer (via HTTP POST) and one for returning the list of Customers (via HTTP GET):

@Path("customers")
@Produces(MediaType.APPLICATION_JSON)
public class CustomerEndpoint {

    @Inject
    CustomerManager manager;

    @POST
    public void createCustomer(Customer customer) {
        manager.createCustomer(customer);
    }
    @GET
    public List<Customer> getAllCustomers() {
        return manager.getAllCustomers();
    }
}

The Manager class is the layer responsible for updating the Database:

@ApplicationScoped
public class CustomerManager {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void createCustomer(Customer customer) {
        em.persist(customer);
        System.out.println("Created Customer "+customer);
    }
    public List<Customer> getAllCustomers() {
        List<Customer> tasks = new ArrayList<>();
        try {
            tasks = em.createNamedQuery("findAllCustomer").getResultList();
        } catch (Exception e){
            e.printStackTrace();
        }
        return tasks;
    }
}

As we will be using PostgreSQL, our persistence.xml file will contain the following persistence unit definition:

<persistence version="2.1"
             xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="
        http://xmlns.jcp.org/xml/ns/persistence
        http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="primary">
        <jta-data-source>java:jboss/datasources/PostgreSQLDS</jta-data-source>
        <properties>
            <!-- Properties for Hibernate -->
            <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
            <property name="hibernate.show_sql" value="true"/>
        </properties>
    </persistence-unit>
</persistence>

You may have noticed that the datasource is defined as a JTA datasource. In order to provide a datasource to our Jakarta EE service, we will provision a WildFly Bootable JAR, which includes the following layers:

<build>
        <finalName>wildfly-jar-sample</finalName>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-jar-maven-plugin</artifactId>
                <version>${version.wildfly.jar}</version>
                <configuration>
                    <feature-packs>
                        <feature-pack>
                            <location>wildfly@maven(org.jboss.universe:community-universe)#${version.wildfly}</location>
                        </feature-pack>
                        <feature-pack>
                            <groupId>org.wildfly</groupId>
                            <artifactId>wildfly-datasources-galleon-pack</artifactId>
                            <version>1.1.0.Final</version>
                        </feature-pack>
                    </feature-packs>
                    <layers>
                        <layer>cloud-profile</layer>
                        <layer>postgresql-datasource</layer>
                    </layers>
                    <excluded-layers>
                        <layer>deployment-scanner</layer>
                    </excluded-layers>
                    <cloud/>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

As you can see from the above Maven plugin configuration, our WildFly Bootable JAR will be provisioned with the following layers:

  • cloud-profile: an aggregation of some basic layers (bean-validation, cdi, ee-security, jaxrs, jms-activemq, jpa, observability, resource-adapters, web-server) that addresses cloud use cases.
  • postgresql-datasource: a layer that installs the PostgreSQL driver as a module inside the WildFly server. The driver is named postgresql. For more info, this layer is available at: https://github.com/wildfly-extras/wildfly-datasources-galleon-pack

Configuring PostgreSQL Service on Kubernetes

We will use PostgreSQL as the database backend. There are several strategies for deploying PostgreSQL on Kubernetes. Here we will show how to use the JKube Maven plugin to deploy all the resources PostgreSQL requires. For this purpose, we will add the following files under the src/main/jkube/raw folder of the Maven project:

│   └── raw
│       ├── postgres-configmap.yml
│       ├── postgres-deployment.yml
│       └── postgres-service.yml

The first file, postgres-configmap.yml, contains the Kubernetes ConfigMap with the credentials to access PostgreSQL:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRESQL_USER: user
  POSTGRESQL_PASSWORD: password
  POSTGRESQL_DATABASE: wildflydb

The second one, postgres-deployment.yml, contains information about the Deployment unit, including the base image used for the PostgreSQL service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: centos/postgresql-96-centos7:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config

Finally, the third one (postgres-service.yml), defines the PostgreSQL service:

apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
   - port: 5432
  selector:
   app: postgres

That’s all.

Connecting the Jakarta EE service with PostgreSQL service:

Now we have configured both services. The last thing we need is some glue between them. When the WildFly image is built, it looks for a set of environment variables to find the settings to connect to the PostgreSQL database, so we have to provide this information via environment variables. That’s a simple task: we will add another YAML file (deployment.yaml) that contains this information and also the JVM settings we need for our Jakarta EE service:

spec:
  template:
    spec:
      containers:
      - env:
        - name: POSTGRESQL_USER
          value: user
        - name: POSTGRESQL_PASSWORD
          value: password
        - name: POSTGRESQL_DATABASE
          value: wildflydb
        - name: POSTGRESQL_SERVICE_HOST
          value: postgres
        - name: POSTGRESQL_SERVICE_PORT
          value: '5432'
        - name: JAVA_OPTIONS
          value: '-Xms128m -Xmx1024m'
        - name: GC_MAX_METASPACE_SIZE
          value: '256'
        - name: GC_METASPACE_SIZE
          value: '96'

Here is the full project tree:

src
└── main
    ├── java
    │   └── com
    │       └── mastertheboss
    │           ├── model
    │           │   └── Customer.java
    │           └── rest
    │               ├── CustomerEndpoint.java
    │               ├── CustomerManager.java
    │               └── JaxrsConfiguration.java
    ├── jkube
    │   ├── deployment.yaml
    │   └── raw
    │       ├── postgres-configmap.yml
    │       ├── postgres-deployment.yml
    │       └── postgres-service.yml
    ├── resources
    │   ├── import.sql
    │   └── META-INF
    │       └── persistence.xml
    └── webapp
        ├── index.jsp
        └── WEB-INF
            └── beans.xml

The project also includes a SQL script (import.sql) that loads some entities at startup.

Deploying our services on Kubernetes

Now that everything is in place, we will deploy the application on Kubernetes. To do that, we will include, as a Maven profile, the kubernetes-maven-plugin, which is in charge of creating the Kubernetes descriptors and of building and deploying the service on Kubernetes:

<profile>
    <id>kubernetes</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.eclipse.jkube</groupId>
                <artifactId>kubernetes-maven-plugin</artifactId>
                <version>${jkube.version}</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>resource</goal>
                            <goal>build</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <enricher>
                        <config>
                            <jkube-service>
                                <type>NodePort</type>
                            </jkube-service>
                        </config>
                    </enricher>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

From the shell, execute the following steps:

1) Create your Kubernetes resource descriptors.

mvn clean k8s:resource -Pkubernetes

2) Then start the Docker build by executing the build goal.

mvn package k8s:build -Pkubernetes

3) Finally, deploy your application on the Kubernetes cluster:

mvn k8s:apply -Pkubernetes
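
The three goals can also be chained in a single Maven invocation, which produces the same result:

```
$ mvn clean package k8s:resource k8s:build k8s:apply -Pkubernetes
```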

Once deployed, you can see the pods running inside your Kubernetes cluster:

$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
jakartaee-demo-7d98897d57-jhpx5   1/1     Running   0          2m
postgresql-bfbb7ff8b-nb9bx        1/1     Running   0          2m

Let’s check the updated Service List:

minikube service list
|----------------------|---------------------------|--------------|----------------------------|
|      NAMESPACE       |           NAME            | TARGET PORT  |            URL             |
|----------------------|---------------------------|--------------|----------------------------|
| default              | jakartaee-demo            | http/8080    | http://192.168.39.35:31353 |
| default              | kubernetes                | No node port |
| default              | postgres                  |         5432 | http://192.168.39.35:30662 |
| kube-system          | kube-dns                  | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard      | No node port |
|----------------------|---------------------------|--------------|----------------------------|

Now we can test the service:
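
Using the NodePort URL from the service list above (the IP and port are specific to your Minikube VM, so substitute your own values), a request similar to the following should return the customers seeded by import.sql:

```
$ curl -s http://192.168.39.35:31353/rest/customers | jq
[
  {
    "id": 1,
    "name": "John",
    "surname": "Doe"
  },
  {
    "id": 2,
    "name": "Fred",
    "surname": "Smith"
  }
]
```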


You can now manage your application from Kubernetes using either the ‘kubectl’ command line or the Kubernetes Dashboard:

minikube dashboard

Troubleshooting notes

Please notice that there’s a known issue which affects some JDKs: https://bugs.openjdk.java.net/browse/JDK-8236039

When this occurs, the client throws an exception:

“javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request”

This happens because JDK 11 and later enable TLS 1.3, which can trigger the above error.

You can work around this issue by adding the property -Djdk.tls.client.protocols=TLSv1.2 to the JVM arguments, making the client use TLS 1.2 instead. As an alternative, update to the latest version of the JDK (the issue is fully solved in JDK 15).
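
For example, when the failing client is a Maven build, you can pass the flag through MAVEN_OPTS (a sketch; adapt it to however your client JVM is launched):

```shell
# Force the client JVM to negotiate TLS 1.2 instead of TLS 1.3
export MAVEN_OPTS="$MAVEN_OPTS -Djdk.tls.client.protocols=TLSv1.2"
echo "MAVEN_OPTS is now: $MAVEN_OPTS"
```

Every mvn invocation in that shell will then run with TLS 1.2 forced on the client side.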

Enjoy Jakarta EE on Kubernetes!

You can find the source code for this Maven project at: https://github.com/fmarchioni/mastertheboss/tree/master/openshift/kubernetes-jakartaee

Jakarta EE 9 Hello World example application

In the last post we had our first taste of Jakarta EE 9 with the preview version of WildFly 22: How to run Jakarta EE 9 on WildFly

Let’s see now how to build and deploy a sample application which uses the ‘jakarta‘ package namespace.

The jakarta.jakartaee-api version 9 is now available on the official Maven repository: https://search.maven.org/search?q=g:jakarta.platform

This means that you can now build your applications using ‘jakarta‘ packages as follows:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>9.0.0</version>
    <scope>provided</scope>
</dependency>

If you just need the Web Profile capabilities, then you can use instead:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-web-api</artifactId>
    <version>9.0.0</version>
    <scope>provided</scope>
</dependency>

Let’s rewrite a WildFly quickstart to use the new packaging. Here’s a Hello World servlet:

import java.io.IOException;
import java.io.PrintWriter;

import jakarta.inject.Inject;
import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

@WebServlet("/HelloWorld")
public class HelloWorldServlet extends HttpServlet {

    static String PAGE_HEADER = "<html><head><title>helloworld</title></head><body>";
    static String PAGE_FOOTER = "</body></html>";

    @Inject
    HelloService ejb;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException, ServletException {
        resp.setContentType("text/html");
        PrintWriter writer = resp.getWriter();
        writer.println(PAGE_HEADER);

        try {
            writer.println(ejb.createHelloMessage("Jakarta EE 9!"));
        } catch (Exception e) {
            e.printStackTrace();
        }

        writer.println(PAGE_FOOTER);
        writer.close();
    }

}

And here’s the HelloService ejb:

import jakarta.ejb.Stateless;

@Stateless
public class HelloService {

    public String createHelloMessage(String name) throws Exception {
        return "Hello " + name;
    }
}

Now grab WildFly 22 from https://www.wildfly.org/downloads/ and start it:

$ ./standalone.sh

Build and deploy the application on it:

$ mvn install wildfly:deploy

You will see from the logs that the application deployed on WildFly:

11:46:23,735 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0027: Starting deployment of "helloworld.war" (runtime-name: "helloworld.war")
11:46:23,798 INFO  [org.jboss.weld.deployer] (MSC service thread 1-8) WFLYWELD0003: Processing weld deployment helloworld.war
11:46:23,820 INFO  [org.jboss.as.ejb3.deployment] (MSC service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named 'HelloService' in deployment unit 'deployment "helloworld.war"' are as follows:

	java:global/helloworld/HelloService!org.jboss.as.quickstarts.helloworld.HelloService
	java:app/helloworld/HelloService!org.jboss.as.quickstarts.helloworld.HelloService
	java:module/HelloService!org.jboss.as.quickstarts.helloworld.HelloService
	java:global/helloworld/HelloService
	java:app/helloworld/HelloService
	java:module/HelloService
11:46:24,136 INFO  [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0010: Deployed "helloworld.war" (runtime-name : "helloworld.war")

Test it:
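
Assuming the default context root derived from the WAR name (helloworld), a request to the servlet should return the hello page:

```
$ curl http://localhost:8080/helloworld/HelloWorld
<html><head><title>helloworld</title></head><body>
Hello Jakarta EE 9!
</body></html>
```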

That was our first ride on Jakarta EE 9 with WildFly. As said in our first article, using the ‘jakarta‘ packaging is not mandatory at the moment: the transformer capabilities of the wildfly-preview layer are able to perform the required bytecode changes to the package names, so everything still works with the former ‘javax’ packaging. Enjoy Jakarta EE 9!

Source code for this tutorial: https://github.com/fmarchioni/mastertheboss/tree/master/jakartaee/jakartaee-9

How to run Jakarta EE 9 on WildFly

Welcome to Jakarta EE 9! We can finally have a taste of Jakarta EE 9 by downloading the Alpha version of WildFly 22. Let’s check it out.

As many of you know, Jakarta EE 9 is a tooling release that aims at updating specifications to the new Jakarta EE 9 namespace (from javax.* to jakarta.*) and removing specifications that are no longer relevant. Although it does not include exciting new features, Jakarta EE 9 is a key step on the road to further innovation using cloud-native technologies for Java. As we will see, the shape of application servers won’t be the same again after Jakarta EE 9.

So let’s start by downloading WildFly 22, which includes a preview of Jakarta EE 9: https://wildfly.org/downloads

Next, unzip it as usual. We will now have a quick look at the application server.

A closer look at a Jakarta EE 9 application server

First of all, let’s inspect one of the enterprise API JARs to see the new packaging:

$ jar tvf modules/system/layers/base/jakarta/servlet/jsp/api/main/jakarta.servlet.jsp-api-3.0.0-RC2.jar

  1169 Thu Nov 12 21:02:12 CET 2020 jakarta/servlet/jsp/tagext/BodyContent.class
   500 Thu Nov 12 21:02:12 CET 2020 jakarta/servlet/jsp/tagext/JspFragment.class
   358 Thu Nov 12 21:02:12 CET 2020 jakarta/servlet/jsp/tagext/PageData.class
   257 Thu Nov 12 21:02:12 CET 2020 jakarta/servlet/jsp/tagext/TryCatchFinally.class
   832 Thu Nov 12 21:02:12 CET 2020 jakarta/servlet/jsp/tagext/FunctionInfo.class
  2108 Thu Nov 12 21:02:12 CET 2020 jakarta/servlet/jsp/tagext/TagData.class
  . . . . 

So, as expected, packages have been renamed from “javax” to “jakarta“. Does that mean we have to update all our applications to change the packages from “javax” to “jakarta”? Actually, no. The rules of the game are defined by the Galleon tool, which now includes the ‘wildfly-preview‘ feature pack to handle the transition to Jakarta EE 9.

In a nutshell, when suitable Jakarta EE 9 API jars or ‘native’ EE 9 implementation libraries are found, those will be used instead of the EE 8 spec jars used in standard WildFly. On the other hand, if you deploy a Jakarta EE 8 application on top of WildFly 22, the wildfly-preview features will detect the old EE 8 API and trigger a bytecode transformation to change references from javax.* to jakarta.*.

If you are interested in learning more about the transformation project, check the Eclipse Transformer project (https://projects.eclipse.org/projects/technology.transformer), which contains part of the technology used in WildFly 22 to transform your references.

Let’s test this by starting WildFly 22 and deploying an EE 8 application on top of it:

$ ./standalone.sh

Now let’s deploy the WildFly 20 kitchensink quickstart on top of it:

[wildfly@fedora kitchensink]$ mvn clean install wildfly:deploy

As you can see, the WildFly 20 kitchensink quickstart deployed successfully on WildFly 22:

10:01:05,449 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 76) WFLYUT0021: Registered web context: '/kitchensink' for server 'default-server'
10:01:05,518 INFO  [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0010: Deployed "kitchensink.war" (runtime-name : "kitchensink.war")

In the long run, I reckon that development tools and IDEs will transform “javax” namespaces to “jakarta” out of the box, so the transformation feature might no longer be required, or could become optional. We will see. Now let’s look at some other differences in the new application server version.

No embedded messaging broker (by default)

The key change in WildFly 22 is in the “messaging-activemq” subsystem, which (by default) no longer includes an embedded Artemis MQ server:

<subsystem xmlns="urn:jboss:domain:messaging-activemq:11.0">
    <remote-connector name="artemis" socket-binding="messaging-activemq">
        <param name="use-nio" value="true"/>
        <param name="use-nio-global-worker-pool" value="true"/>
    </remote-connector>
    <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="artemis" transaction="xa" user="guest" password="guest"/>
</subsystem>

As you can see, the configuration in the “full” and “full-ha” profiles now includes only a remote connector to connect to an external Artemis MQ server. The logic behind this choice is that cloud applications are supposed to provide microservices, so it’s discouraged to start an application server that provides both business logic and an embedded broker. In OpenShift terms, the best practice is to spin up one or more Pods for the application server and one or more Pods for the Artemis MQ broker.

The features of the embedded broker are still there though. If you check the folder JBOSS_HOME/docs/examples/configs you will find the standalone-activemq-embedded.xml legacy configuration which includes the embedded broker.

Changes in WildFly security

The old legacy security subsystem has been removed from the configuration. Therefore, Elytron is the security layer used by WildFly from this release on. The PicketBox libraries are still there, though:

$ find . -name "picket*.jar"
 ./system/layers/base/org/picketbox/main/picketbox-5.0.3.Final-redhat-00007.jar
 ./system/layers/base/org/picketbox/main/picketbox-infinispan-5.0.3.Final-redhat-00007.jar
 ./system/layers/base/org/picketbox/main/picketbox-commons-1.0.0.final.jar

However, they are planned to be removed completely when JDK 14 is out. As you can check from the JDK 14 release notes (https://www.oracle.com/java/technologies/javase/14-relnote-issues.html), the PicketBox API is now obsolete for this JDK release.

For the same reason, the old vault tool is obsolete as well, and it has been removed from the ‘bin’ folder of the application server. You are recommended to migrate from the vault password mechanism to Elytron credential stores; see Using Elytron Credential Stores in WildFly.

The fall of the legacy subsystems

A number of legacy subsystems have been deprecated, such as ‘cmp’, ‘config-admin’, ‘jacorb’, and ‘jaxr’. Also, WildFly support for JSR-77 has been removed, so you will not find the “jsr-77” subsystem any more. As a replacement, it’s recommended to use the management layer to get this data. The management model has a native protocol with a Java API, and also an HTTP/JSON protocol that can be used from any language.

That’s all! The WildFly 22 process is still at an early stage, so not all the core features (mainly the transforming feature) are available in all contexts. For example, deployment overlays are not transformed in the alpha releases, and unmanaged deployments that use EE 8 APIs will not work either.

For more info, check the release notes and this announcement: https://www.wildfly.org/news/2020/11/12/Jakarta-EE-9-with-WildFly-Preview/

Getting ready for Jakarta EE 9

In this article we will give an overview of the status of the upcoming Jakarta EE 9 release and its impact on our applications. We will also check when we will be able to run our first application tests in a Jakarta EE 9 environment.

Jakarta EE 8 was substantially just a change in project name, which took place when Java EE was migrated from Oracle to the Eclipse Foundation and renamed Jakarta EE, under the Eclipse Enterprise for Java (EE4J) project.

What is the main purpose of Jakarta EE 9? Basically, Jakarta EE 9 aims to deliver a set of specifications functionally similar to Jakarta EE 8, but in the new jakarta.* namespace.

So a migration will take place: given the required move from the javax namespace to the jakarta namespace, applications will need to be migrated. Some application servers may provide additional facilities to make this migration smoother. Still, an application may be better positioned for the future by converting it to use the new jakarta namespace of the Jakarta EE platform.
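To make the rename concrete, here is a minimal, purely illustrative sketch of the source-level rewrite that migration tooling (such as the Eclipse Transformer, which works at the bytecode level) automates. The package list is a small assumed subset of the EE packages that moved; it is not the complete mapping.

```java
// NamespaceMigration.java
// Illustrative sketch only: rewrites a few EE package names from the javax
// namespace to the jakarta namespace in source text. Real migrations rely on
// dedicated tooling rather than plain string replacement.
public class NamespaceMigration {

    // A small assumed subset of EE package roots that moved to jakarta.*
    // (core JDK packages such as java.sql or javax.swing are untouched).
    private static final String[] EE_PACKAGES = {
        "javax.servlet", "javax.persistence", "javax.ws.rs",
        "javax.enterprise", "javax.ejb", "javax.json", "javax.inject"
    };

    public static String migrateSource(String source) {
        String result = source;
        for (String pkg : EE_PACKAGES) {
            // "javax.servlet" -> "jakarta.servlet", etc.
            String target = "jakarta" + pkg.substring("javax".length());
            result = result.replace(pkg, target);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(migrateSource("import javax.persistence.EntityManager;"));
        // -> import jakarta.persistence.EntityManager;
    }
}
```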

In addition, the Jakarta EE 9 release removes a small set of APIs from Jakarta EE 8 (mainly a few old ones) in order to reduce the surface area of the APIs, making it easier for new vendors to enter the Jakarta EE 9 environment and reducing the burden of the migration.

So, in a nutshell, Jakarta EE 9 is going to be a tooling release to support the new jakarta.* namespace and a foundation for innovation that Jakarta EE specification projects can use to drive new features for release in Jakarta EE 10 and beyond.

Backwards Compatibility

Jakarta EE 9 will not impose any backward compatibility hard requirements for compatible implementations to support the Jakarta EE 8 release. This is to facilitate new implementations to enter the Jakarta EE 9 ecosystem. On the other hand, it is expected that vendors will provide tooling and products to allow backwards compatibility and migration solutions for enabling EE 8 applications to run on Jakarta EE 9.

Java SE Version

As per Jakarta EE 9 specification, the API must be compiled at the Java SE 8 source level. However, compatible implementations of the Jakarta EE 9 Web Profile and Full Profile must certify compatibility on Java SE 11. Compatible Implementations may also additionally certify and support Java SE 8.

Full Jakarta™ EE Product Requirements

The full Jakarta EE platform provides a number of technologies in each of the containers defined by the specification. The Jakarta EE Technologies table indicates the technologies with their required versions, which containers include them, and whether each technology is required (REQ), proposed optional (POPT), or optional (OPT).

The following technologies are required:

  • Jakarta Enterprise Beans 4.0 (except for Jakarta Enterprise Beans entity beans and associated Jakarta Enterprise Beans QL, which have been made optional)
  • Jakarta Servlet 5.0
  • Jakarta Server Pages 3.0
  • Jakarta Expression Language 4.0
  • Jakarta Messaging 3.0
  • Jakarta Transactions 2.0
  • Jakarta Activation 2.0
  • Jakarta Mail 2.0
  • Jakarta Connectors 2.0
  • Jakarta RESTful Web Services 3.0
  • Jakarta WebSocket 2.0
  • Jakarta JSON Processing 2.0
  • Jakarta JSON Binding 2.0
  • Jakarta Concurrency 2.0
  • Jakarta Batch 2.0
  • Jakarta Authorization 2.0
  • Jakarta Authentication 2.0
  • Jakarta Security 2.0
  • Jakarta Debugging Support for Other Languages 2.0
  • Jakarta Standard Tag Library 2.0
  • Jakarta Server Faces 3.0
  • Jakarta Annotations 2.0
  • Jakarta Persistence 3.0
  • Jakarta Bean Validation 3.0
  • Jakarta Managed Beans 2.0
  • Jakarta Interceptors 2.0
  • Jakarta Contexts and Dependency Injection 3.0
  • Jakarta Dependency Injection 2.0

The following technologies are optional:

  • Jakarta Enterprise Beans 3.2 and earlier entity beans and associated Jakarta Enterprise Beans QL
  • Jakarta Enterprise Beans 2.x API group
  • Jakarta Enterprise Web Services 2.0
  • Jakarta SOAP with Attachments 2.0
  • Jakarta Web Services Metadata 3.0
  • Jakarta XML Web Services 3.0
  • Jakarta XML Binding 3.0

The following technologies are removed:

  • Distributed Interoperability in Jakarta Enterprise Beans 3.2
  • Jakarta XML RPC 1.1
  • Jakarta XML Registries 1.0
  • Jakarta Deployment 1.2
  • Jakarta Management 1.1

Jakarta EE 9 scheduling

Jakarta EE 9 will be delivered in a set of waves similar to those delivered in the Jakarta EE 8 release. These waves are somewhat related to the dependency tree of the specifications: the Jakarta EE team aims to deliver specifications with a low number of dependencies first, followed by the other specifications. As you can check at https://eclipse-ee4j.github.io/jakartaee-platform/jakartaee9/JakartaEE9#jakarta-ee-9-schedule, the final Jakarta EE 9 release (on Java SE 11) is scheduled for September 16, 2020.

A Preview of Jakarta EE 9

Eclipse GlassFish 6 contains a prerelease milestone that gives users, vendors, and all community members a glimpse into the changes forthcoming with Jakarta EE 9. This is not a stable release; it is intended to let users begin evaluating the proposed namespace and pruning changes included in the new EE 9 specification release.

You can get it at: https://eclipse-ee4j.github.io/glassfish/download

WildFly and Jakarta EE 9

We recommend reading the post from B. Stansberry about the adoption of Jakarta EE 9 in WildFly: https://wildfly.org/news/2020/06/23/WildFly-and-Jakarta-EE-9/

In a nutshell, the main idea is to keep the EE 8 APIs in the primary distribution of WildFly, to avoid hampering the new features and fixes currently being worked on by the WildFly team. On the other hand, it would be helpful to provide a playground for the transition from Jakarta EE 8 to Jakarta EE 9, where you can test and preview your applications in the latest Jakarta EE environment using the jakarta namespace.

So we might expect an alpha Jakarta EE 9 version of WildFly at the end of this summer, with future updates coming out along with the main WildFly (EE 8 based) releases. To make things work in the smoothest possible way, the WildFly Jakarta EE 9 server should be based on a Galleon feature pack separate from the one used for the main distribution; that can be done by transforming EE 8 resources as part of provisioning.

Expect to hear more news about this in the coming weeks, as WildFly developers begin implementing changes in the main code base to shape the Jakarta EE 9 variant. Stay tuned!

Getting started with Jakarta EE

As most of you probably know, Java EE was migrated from Oracle to the Eclipse Foundation, and it is now called Jakarta EE, under the Eclipse Enterprise for Java (EE4J) project. A number of application servers, such as WildFly 18, already offer a Jakarta EE 8 compatible implementation. In this tutorial we will learn how to bootstrap a Jakarta EE project.

In general terms, Jakarta EE 8 and Java EE 8 APIs are identical and so is their implementation. In practical terms, even if the source from which the server is built changes, it is not expected to introduce any runtime incompatibility. Therefore, WildFly 18 is still a Java EE 8 compatible application server.

Let’s start from the configuration. We will go through the Maven and Gradle configuration of a Jakarta EE project.

Configuring Jakarta EE with Maven

If you are using Maven to build your Jakarta EE project, then you have two main options:

1) Use the “Vanilla” Jakarta EE dependency system:

<dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>8.0.0</version>
            <scope>provided</scope>
        </dependency>
</dependencies>

This is the simplest choice if you stick to the standard Jakarta EE API or if you want to quickly test a Jakarta EE application.

2) Use the WildFly (or another compatible Jakarta EE server) dependency system:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.wildfly.bom</groupId>
            <artifactId>wildfly-jakartaee8-with-tools</artifactId>
            <version>${version.server.bom}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>jakarta.enterprise</groupId>
        <artifactId>jakarta.enterprise.cdi-api</artifactId>
        <scope>provided</scope>
    </dependency>
  <!-- Other Dependencies here -->
</dependencies>

This option pays off if you are using WildFly features that extend the Jakarta EE umbrella. For example, the wildfly-jakartaee8-with-tools BOM also includes the correct versions for Arquillian, so you don’t need to manage them yourself. Besides that, WildFly extends the Jakarta EE platform with a growing subset of the Eclipse MicroProfile API. Summing up, if you are using features that extend Jakarta EE, it is worthwhile to use the WildFly BOM and dependencies to build your projects.
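For instance, with the BOM imported, you can declare a test dependency without pinning its version. The artifact shown below (arquillian-junit-container) is one common choice and is meant as an illustration; check the BOM's managed dependencies for the exact artifacts it covers:

```xml
<!-- Version is managed by the wildfly-jakartaee8-with-tools BOM -->
<dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <scope>test</scope>
</dependency>
```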

Configuring Jakarta EE with Gradle

If you are using Gradle to bootstrap your projects, then the recommended build.gradle is the following one, which creates a sample jakartaee-demo.war application under the base package com.mastertheboss:

apply plugin: 'war'
 
group = 'com.mastertheboss'
version = '1.0-SNAPSHOT'
 
repositories {
    mavenCentral()
}
dependencies {
    providedCompile 'jakarta.platform:jakarta.jakartaee-api:8.0.0'
}
 
compileJava {
    targetCompatibility = '11'
    sourceCompatibility = '11'
}
 
war {
    archiveName 'jakartaee-demo.war'
}

A sample Jakarta EE 8 project

To show you an example, we have moved a Hello World REST/JPA example coded for Java EE 8 into the new Jakarta EE 8 style.

This project is made up of a basic REST Endpoint that exposes a set of methods to list and create resources:

@Path("/service")
public class RESTService {

    @Inject
    ServiceBean ejb;

    @GET
    @Path("/list/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public SimpleProperty getPropertyByPathParam(@PathParam("id") String id)
    {
        return ejb.findById(id);
    }

    @GET
    @Path("/list")
    @Produces(MediaType.APPLICATION_JSON)
    public List<SimpleProperty> getProperty()
    {
        return ejb.findAll();
    }
    @POST
    @Produces(MediaType.TEXT_PLAIN)
    public Response createProperty(@FormParam("key")String key,
                                   @FormParam("value")String value)
    {
        ejb.put(key,value);

        return Response.ok("Inserted! Go back and check the list.").build();

    }

}

This is the ServiceBean which does some queries on the default Database:

@Stateless
public class ServiceBean {

    @PersistenceContext
    private EntityManager em;

    public void put(String key, String value) {
        SimpleProperty p = new SimpleProperty();
        p.setKey(key);
        p.setValue(value);
        em.persist(p);
    }

    public void delete(SimpleProperty p) {
        // Use a named parameter instead of string concatenation,
        // which would be vulnerable to query injection
        Query query = em.createQuery("DELETE FROM SimpleProperty p WHERE p.key = :key");
        query.setParameter("key", p.getKey());
        query.executeUpdate();
    }

    public List<SimpleProperty> findAll() {
        return em.createQuery("SELECT p FROM SimpleProperty p", SimpleProperty.class)
                 .getResultList();
    }

    public SimpleProperty findById(String id) {
        return em.find(SimpleProperty.class, id);
    }

}

To build up this project, we have used the standard Jakarta EE 8 dependencies:

<dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>8.0.0</version>
            <scope>provided</scope>
        </dependency>
</dependencies>

You can find the full source code for the project here: https://github.com/fmarchioni/mastertheboss/tree/master/jakartaee/standard

As the project also includes the WildFly Maven plugin, you can simply run it as follows:

$ mvn install wildfly:deploy
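Once deployed, you can exercise the endpoint from any HTTP client. Below is a hedged sketch using the JDK 11 HttpClient; the context root (jakartaee-demo) and the "rest" application path are assumptions, so adjust them to match your deployment name and @ApplicationPath value.

```java
// RestClientSketch.java
// Hedged sketch: calling the deployed "list" resource with the JDK 11
// HttpClient. Base URL, context root, and application path are assumptions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientSketch {

    // Builds the endpoint URI for the "list" resource of the RESTService.
    static URI listUri(String baseUrl) {
        return URI.create(baseUrl + "/service/list");
    }

    public static void main(String[] args) throws Exception {
        // Assumed deployment URL: context root "jakartaee-demo", app path "rest"
        URI uri = listUri("http://localhost:8080/jakartaee-demo/rest");
        System.out.println(uri);

        // The actual call needs a running WildFly with the app deployed,
        // so it is only performed on demand.
        if (args.length > 0 && args[0].equals("--send")) {
            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }
}
```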

Here is your first Jakarta EE 8 project running on WildFly.

Have fun with Jakarta EE!

From Java EE to Jakarta EE with WildFly

WildFly 18 has been released and one of the most interesting news is the alignment of the project with Jakarta EE 8 API.

WildFly 17.0.1 was the first release of the application server certified as a Jakarta EE 8 compatible implementation. In terms of API, do we have anything to change in our configuration?

In general terms, Jakarta EE 8 and Java EE 8 APIs are identical and so is their implementation. In practical terms, even if the source from which the server is built changes, it is not expected to introduce any runtime incompatibility. Therefore, WildFly 18 is still a Java EE 8 compatible application server.

On the other hand, you will see that from WildFly 18 onwards, the dependencies and BOM files are collected from a different location, so the groupId and artifactId will be different.

This is the new BOM file for Jakarta EE projects:

 <dependencyManagement>
        <dependencies>      
            <dependency>
                <groupId>org.wildfly.bom</groupId>
                <artifactId>wildfly-jakartaee8-with-tools</artifactId>
                <version>${version.server.bom}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
</dependencyManagement>

Then, for projects that were consuming an API jar produced by a JBoss.org community project, a new GitHub repository was created, with the initial code derived from the Jakarta projects, and new releases were produced.

Here is the list of dependencies which now fall under the Jakarta umbrella:

<groupId>com.sun.activation</groupId>
<artifactId>jakarta.activation</artifactId>

<groupId>jakarta.enterprise</groupId>
<artifactId>jakarta.enterprise.cdi-api</artifactId>

<groupId>jakarta.inject</groupId>
<artifactId>jakarta.inject-api</artifactId>

<groupId>jakarta.json</groupId>
<artifactId>jakarta.json-api</artifactId>

<groupId>com.sun.mail</groupId>
<artifactId>jakarta.mail</artifactId>

<groupId>jakarta.persistence</groupId>
<artifactId>jakarta.persistence-api</artifactId>

<groupId>jakarta.security.enterprise</groupId>
<artifactId>jakarta.security.enterprise-api</artifactId>

<groupId>jakarta.validation</groupId>
<artifactId>jakarta.validation-api</artifactId>

A sample Jakarta EE 8 project

To show you an example, we have moved a Hello World REST/JPA example coded for Java EE 8 into the new Jakarta EE 8 style.

This project is made up of a basic REST Endpoint that exposes a set of methods to list and create resources:

@Path("/service")
public class RESTService {

    @Inject
    ServiceBean ejb;

    @GET
    @Path("/list/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public SimpleProperty getPropertyByPathParam(@PathParam("id") String id)
    {
        return ejb.findById(id);
    }

    @GET
    @Path("/list")
    @Produces(MediaType.APPLICATION_JSON)
    public List<SimpleProperty> getProperty()
    {
        return ejb.findAll();
    }
    @POST
    @Produces(MediaType.TEXT_PLAIN)
    public Response createProperty(@FormParam("key")String key,
                                   @FormParam("value")String value)
    {
        ejb.put(key,value);

        return Response.ok("Inserted! Go back and check the list.").build();

    }

}

This is the ServiceBean which does some queries on the default Database:

@Stateless
public class ServiceBean {

    @PersistenceContext
    private EntityManager em;

    public void put(String key, String value) {
        SimpleProperty p = new SimpleProperty();
        p.setKey(key);
        p.setValue(value);
        em.persist(p);
    }

    public void delete(SimpleProperty p) {
        // Use a named parameter instead of string concatenation,
        // which would be vulnerable to query injection
        Query query = em.createQuery("DELETE FROM SimpleProperty p WHERE p.key = :key");
        query.setParameter("key", p.getKey());
        query.executeUpdate();
    }

    public List<SimpleProperty> findAll() {
        return em.createQuery("SELECT p FROM SimpleProperty p", SimpleProperty.class)
                 .getResultList();
    }

    public SimpleProperty findById(String id) {
        return em.find(SimpleProperty.class, id);
    }

}

Most interesting for us, this is the list of Jakarta EE 8 dependencies for our project:

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.wildfly.bom</groupId>
                <artifactId>wildfly-jakartaee8-with-tools</artifactId>
                <version>${version.server.bom}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>jakarta.enterprise</groupId>
            <artifactId>jakarta.enterprise.cdi-api</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>jakarta.persistence</groupId>
            <artifactId>jakarta.persistence-api</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.spec.javax.ejb</groupId>
            <artifactId>jboss-ejb-api_3.2_spec</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.spec.javax.ws.rs</groupId>
            <artifactId>jboss-jaxrs-api_2.1_spec</artifactId>
            <scope>provided</scope>
        </dependency>
    </dependencies>

You can find the full source code for the project here: https://github.com/fmarchioni/mastertheboss/tree/master/jakartaee/wildfly-based

As the project also includes the WildFly Maven plugin, you can simply run it as follows:

$ mvn install wildfly:deploy

Here is your first, minimal Jakarta EE 8 project running on WildFly.

That’s all for the Jakarta EE news!