Health checks are an essential component of any microservices application. In this tutorial we will learn how to use the MicroProfile Health API to verify the liveness and readiness of a microservice, and see how OpenShift can use those health checks to determine whether an application needs to be restarted or taken out of service.

The main API for providing health check procedures at the application level is the HealthCheck interface:

@FunctionalInterface
public interface HealthCheck {

    HealthCheckResponse call();
}

When CDI is available, any bean that implements HealthCheck and is annotated with @Health is discovered automatically and is invoked by the framework or runtime whenever the outermost protocol entry point (the /health endpoint) receives an inbound request. See this example:

@Health
@ApplicationScoped
public class CheckDiskSpace implements HealthCheck {

    public HealthCheckResponse call() {
        return HealthCheckResponse.named("successful-check").up().build();
    }
}
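
The example above always reports UP. A slightly more realistic sketch of a disk-space check could look like this (the 100 MB threshold is an arbitrary assumption, not part of the tutorial's application):

import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

import javax.enterprise.context.ApplicationScoped;
import java.io.File;

@Health
@ApplicationScoped
public class CheckFreeDiskSpace implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        long freeBytes = new File("/").getUsableSpace();
        // Report UP while at least 100 MB are free (arbitrary threshold)
        return HealthCheckResponse.named("disk-space")
                .withData("free-bytes", freeBytes)
                .state(freeBytes > 100 * 1024 * 1024)
                .build();
    }
}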

According to the latest MicroProfile Health specification, however, the @Health annotation has been deprecated in favor of two more specific annotations:

  • Liveness check (@Liveness): for services that run for long periods of time and may eventually transition to a broken state from which they cannot recover except by being restarted.
  • Readiness check (@Readiness): for services that are temporarily unable to serve traffic, for example because they need to load large data or configuration files during startup. A minimal readiness check is sketched below.
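
Here is a minimal sketch of a readiness check (the StartupDataLoader bean it queries is a hypothetical example, not part of this tutorial's application):

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@Readiness
@ApplicationScoped
public class DataLoadedCheck implements HealthCheck {

    @Inject
    StartupDataLoader loader;   // hypothetical bean that loads data at startup

    @Override
    public HealthCheckResponse call() {
        // Report UP only once the startup data has been fully loaded
        return HealthCheckResponse.named("data-loaded")
                .state(loader.isDone())
                .build();
    }
}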

Let's see a practical example. We will create a simple JAX-RS application which can be deployed on WildFly, Thorntail or Quarkus. Here we will deploy it on Thorntail, so we need the following Maven configuration:

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>bom-all</artifactId>
        <version>${version.thorntail}</version>
        <scope>import</scope>
        <type>pom</type>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <build>
    <finalName>demo</finalName>
    <plugins>
      <plugin>
        <groupId>io.thorntail</groupId>
        <artifactId>thorntail-maven-plugin</artifactId>
        <version>${version.thorntail}</version>
        
        <executions>
          <execution>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <dependencies>
    
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>microprofile-health</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>jaxrs</artifactId>
    </dependency>
  </dependencies>

Now let's add a DatabaseHealthCheck class which performs a @Liveness check against a database:

package com.mastertheboss.healthdemo.rest;

import org.eclipse.microprofile.health.Liveness;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.HealthCheckResponseBuilder;


import javax.enterprise.context.ApplicationScoped;
import java.io.IOException;
import java.net.Socket;

@Liveness
@ApplicationScoped
public class DatabaseHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {

        HealthCheckResponseBuilder responseBuilder = HealthCheckResponse.named("Database connection health check");

        // Resolve the database coordinates from the standard container environment
        // variables, falling back to localhost:5432 for local runs
        String hostName = (System.getenv("POSTGRESQL_SERVICE_HOST") != null) ? System.getenv("POSTGRESQL_SERVICE_HOST") : "localhost";
        int port = (System.getenv("POSTGRESQL_SERVICE_PORT") != null) ? Integer.parseInt(System.getenv("POSTGRESQL_SERVICE_PORT")) : 5432;

        try {
            pingServer(hostName, port);
            responseBuilder.up();
        } catch (Exception e) {
            // Attach the failure reason to the health response
            responseBuilder.down().withData("error", e.getMessage());
        }

        return responseBuilder.build();
    }

    // Opens (and immediately closes) a TCP connection to verify the server is reachable
    private void pingServer(String dbhost, int port) throws IOException {
        try (Socket socket = new Socket(dbhost, port)) {
            // connection succeeded: nothing else to do
        }
    }
}

Within this health check class we try to reach a PostgreSQL database. The database host and port are resolved from the default container environment variables for PostgreSQL (POSTGRESQL_SERVICE_HOST and POSTGRESQL_SERVICE_PORT); if they are not set, the check falls back to localhost and 5432.
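
If your PostgreSQL instance runs on another host, you can export the same variables before starting the application locally (the hostname below is just a placeholder):

$ export POSTGRESQL_SERVICE_HOST=db.example.com
$ export POSTGRESQL_SERVICE_PORT=5432
$ mvn package thorntail:run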

Now start a PostgreSQL database instance, for example using Docker:

docker run --rm=true --name health_test -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=postgres -p 5432:5432 postgres:10.5

Next, start the application on Thorntail:

$ mvn package thorntail:run

Now if you reach the liveness REST endpoint (http://localhost:8080/health/live) you will see the result of the health check:

$ curl http://localhost:8080/health/live | jq
{
  "status": "UP",
  "checks": [
    {
      "name": "Database connection health check",
      "status": "UP"
    }
  ]
}

On the other hand, if you shut down the database, the health report will change to:

$ curl http://localhost:8080/health/live | jq
{
  "status": "DOWN",
  "checks": [
    {
      "name": "Database connection health check",
      "status": "DOWN",
      "data": {
        "error": "Connection refused (Connection refused)"
      }
    }
  ]
}
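
Note that MicroProfile Health also maps the overall outcome to the HTTP response code (200 when UP, 503 when DOWN), so a client or orchestrator can evaluate the check without parsing the JSON body:

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/health/live
503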

Great! The health check works. Let's see how it behaves in the cloud.

Taking the example to the Cloud

So now we have an application which contains a health check. Let's see how we can use container health checks on OpenShift to verify the status of the application. Start your OpenShift/OKD environment and create a new project:

$ oc new-project health-demo

We will add a PostgreSQL database using the "postgresql" template, providing a minimal configuration through environment variables:

$ oc new-app -e POSTGRESQL_USER=postgres -e POSTGRESQL_PASSWORD=postgres -e POSTGRESQL_DATABASE=postgres postgresql

Ok, now you should see the following Pod running:

$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
postgresql-1-lvl2w   1/1       Running   1          9s

The next step is to import the "java" image stream so that we can use it to build and run the Thorntail executable JAR file:

$ oc import-image java:8 --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift --confirm
$ oc new-app --name rest-demo 'java:8~https://github.com/fmarchioni/mastertheboss' --context-dir='openshift/healthdemo'
$ oc expose svc/rest-demo

Ok, now verify that the "rest-demo" application is also running in a Pod:

$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
postgresql-1-lvl2w   1/1       Running   1          1m
rest-demo-1-build    1/1       Running   0          26s

Great. The application should reply to "/health/live" requests exactly as in our local environment:

$ curl http://rest-demo-myproject.192.168.42.30.nip.io/health/live | jq
{
  "status": "UP",
  "checks": [
    {
      "name": "Database connection health check",
      "status": "UP"
    }
  ]
}

What we will do now is configure a liveness probe in the application's deployment configuration.

Just to recap, through the Kubernetes deployment files you can configure the following checks:

  • Liveness Probe: a liveness probe checks if the container in which it is configured is still running. If the liveness probe fails, the kubelet kills the container, which is then subjected to its restart policy. You set a liveness check by configuring the template.spec.containers.livenessProbe stanza of the pod configuration.
  • Readiness Probe: a readiness probe determines if a container is ready to service requests. If the readiness probe fails, the endpoints controller removes the container's IP address from the endpoints of all services. A readiness probe can thus signal to the endpoints controller that, even though a container is running, it should not receive traffic from a proxy. A sketch of both probes follows this list.
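
For reference, here is a minimal sketch of how both probes could look in the container spec of the deployment YAML (the port and timing values are illustrative assumptions, not taken from the tutorial's template):

spec:
  template:
    spec:
      containers:
      - name: rest-demo
        image: rest-demo:latest
        livenessProbe:
          httpGet:
            path: /health/live        # MicroProfile liveness endpoint
            port: 8080
          initialDelaySeconds: 30     # give the runtime time to boot (assumed value)
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health/ready       # MicroProfile readiness endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10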

To keep it simple, we will use the Web Console. Select the "rest-demo" deployment configuration and choose "Edit Health Checks".


Select "Add Liveness Probe" and enter the URI Path which emits the Health check:

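Alternatively, the same probe can be added from the command line with oc set probe (the initial delay value here is an assumption):

$ oc set probe dc/rest-demo --liveness --get-url=http://:8080/health/live --initial-delay-seconds=30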

Now let's see what happens if we scale our postgresql service down to 0 replicas, so that it is not available anymore:

$ oc scale --replicas=0 dc/postgresql


If you check the Pod's Events log, you will see that the liveness probe fails, so your application will not be available until the probe succeeds again.


If you scale the PostgreSQL application back up, rest-demo will be available again:

$ curl http://rest-demo-myproject.192.168.42.30.nip.io
Hello from Thorntail

You can find the source code of this application on GitHub at: https://github.com/fmarchioni/mastertheboss/openshift/healthdemo

