Create custom WildFly container images with S2I toolkit

Source-to-Image (S2I) is a toolkit for building container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version your images and control your build environments exactly like you use container images to version your runtime environments. In this tutorial we will learn how to create S2I images of WildFly from source code.

To run this tutorial you need to download the S2I toolkit from: https://github.com/openshift/source-to-image

Follow the instructions in the README of the GitHub repository to install S2I.

Using S2I with WildFly

In its simplest workflow, S2I can be used to build a container image of WildFly from source code. We’ll assume that you have a Maven project that builds a simple web application in the current directory:

├── pom.xml
├── README.md
└── src
    └── main
        └── webapp
            └── index.jsp

Next, pull the latest WildFly S2I builder image:

$ docker pull quay.io/wildfly/wildfly-centos7

We will now use the s2i tool to build the application image on top of the WildFly builder image:

$ s2i build . quay.io/wildfly/wildfly-centos7 wildfly-demo

When the build has completed, you can run the image named “wildfly-demo” as follows:

$ docker run --rm -p 8080:8080 --name wildfly wildfly-demo

You will see that the server boots in foreground mode and that the application has been deployed in the root web context:

16:53:16,366 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 80) WFLYUT0021: Registered web context: '/' for server 'default-server'
16:53:16,427 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 46) WFLYSRV0010: Deployed "ROOT.war" (runtime-name : "ROOT.war")
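
As a quick check, assuming the container from the previous step is still running with port 8080 published, you can request the page from another terminal and you should get the content of your index.jsp back:

$ curl http://localhost:8080/index.jsp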

S2I build workflow

The s2i build workflow includes a set of steps:

  • Creates a container based on the builder image and injects the application source into it
  • Sets the environment variables defined in .s2i/environment (optional)
  • Starts the container and runs its assemble script
  • When done, commits the container, sets the CMD of the output image to the run script, and tags the image with the name provided.
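
If you want to observe these steps while they are executed, you can raise the verbosity of the s2i binary. A minimal sketch, assuming the --loglevel flag available in current s2i releases:

$ s2i build . quay.io/wildfly/wildfly-centos7 wildfly-demo --loglevel 2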

Let’s see how to customize our WildFly image by adding a .s2i/bin/assemble file.

Within the assemble file, we need to call the original assemble script of the WildFly image and then (or before) add our customizations. As an example, we will add a sample web application to the deployments folder after the original assemble phase has completed:

#!/bin/sh
# Original assemble script
/usr/local/s2i/assemble

mkdir /opt/wildfly/standalone/deployments/hello.war
echo "Hello world" > /opt/wildfly/standalone/deployments/hello.war/index.jsp
touch /opt/wildfly/standalone/deployments/hello.war.dodeploy

If you re-execute the s2i build command and run the “wildfly-demo” image again, you will see that the “hello.war” application (created by the assemble script) has also been deployed:

16:53:16,366 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 80) WFLYUT0021: Registered web context: '/' for server 'default-server'
16:53:16,366 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 83) WFLYUT0021: Registered web context: '/hello' for server 'default-server'
16:53:16,427 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 46) WFLYSRV0010: Deployed "ROOT.war" (runtime-name : "ROOT.war")
16:53:16,428 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 46) WFLYSRV0010: Deployed "hello.war" (runtime-name : "hello.war")

Customize server artifacts

With the assemble script, you can execute scripts or create/replace files at the end of the build phase. If you want to perform advanced configuration changes, it is recommended to use CUSTOM_INSTALL_DIRECTORIES instead. CUSTOM_INSTALL_DIRECTORIES is a comma-separated list of directories used for the installation and configuration of artifacts during the S2I process; each directory is expected to contain a custom install.sh script. The value of CUSTOM_INSTALL_DIRECTORIES can be set in the environment file. Let’s put it into practice to show how to execute CLI commands during the s2i build.

Start by creating a folder named “extensions” in the root folder of your project:

$ mkdir extensions

Within the extensions folder, add a file named install.sh. This script sources install-common.sh to import helper functions, writes the list of CLI commands into a file named configuration.cli, and then executes that file with run_cli_script:

#!/usr/bin/env bash
injected_dir=$1
source /usr/local/s2i/install-common.sh

S2I_CLI_SCRIPT="${injected_dir}/configuration.cli"

echo "/system-property=property1:add(value=property1-value)" > "${S2I_CLI_SCRIPT}"

run_cli_script "${S2I_CLI_SCRIPT}"

We just need to define the value of CUSTOM_INSTALL_DIRECTORIES. This can be done in a file named .s2i/environment as follows:

CUSTOM_INSTALL_DIRECTORIES=extensions

Here is the final application tree:

├── extensions
│   └── install.sh
├── pom.xml
├── README.md
├── .s2i
│   ├── bin
│   │   └── assemble
│   └── environment
└── src
    └── main
        └── webapp
            └── index.jsp

Now, rebuild the image:

$ s2i build . quay.io/wildfly/wildfly-centos7 wildfly-demo

Next, run the image:

$ docker run --rm -p 8080:8080 --name wildfly wildfly-demo

If you open a shell in the running container and inspect the server configuration, you will see that it also includes a system property named “property1”:

    <system-properties>
        <property name="property1" value="property1-value"/>
    </system-properties>
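
You can also query the property with the jboss-cli client bundled in the image. This is a sketch that assumes the /opt/wildfly install path used earlier and that the management interface is reachable from inside the container at its default address:

$ docker exec -it wildfly /opt/wildfly/bin/jboss-cli.sh --connect --command="/system-property=property1:read-resource"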

That’s all. We have covered how to use the S2I tool to create container images of WildFly from source code.

The source code for this tutorial is available at: https://github.com/fmarchioni/mastertheboss/tree/master/openshift/s2i

Comparing OpenShift with Kubernetes

This article provides a comparison between OpenShift and the Kubernetes container management project, covering both management and development areas.

 

First of all, some definitions.

Red Hat OpenShift is an enterprise open source container orchestration platform. It’s a software product that includes components of the Kubernetes container management project but adds productivity and security features which are important to large-scale companies.
So, in a nutshell, OpenShift Container Platform focuses on an enterprise user experience. It’s designed to provide everything a full-scale company may need to orchestrate containers—adding enhanced security options and professional support—and to integrate directly into enterprises’ IT stacks.

Kubernetes, on the other hand, is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. Kubernetes features a large and rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

It is worth mentioning that at the heart of OpenShift there is Kubernetes, so in terms of API what you have in Kubernetes is also fully included in OpenShift. That being said, let’s compare OpenShift with Kubernetes.

Product vs Project

OpenShift is a product. Kubernetes is an open source project.

Kubernetes is an open source project, while OpenShift is a product, supported by Red Hat, which comes in several variants.
Broadly speaking, Red Hat OpenShift is available as a Hosted Service (in several variants such as OpenShift Dedicated, OpenShift clusters hosted on Microsoft Azure, and OpenShift on IBM’s public cloud) and as a Self-Managed Service (on your own infrastructure).
OpenShift derives from OKD, which is the community distribution of Kubernetes that powers Red Hat OpenShift. OKD is free to use and includes most of the features of the commercial product, but you cannot buy support for it, nor can you use the official Red Hat-based images.

Installation

OpenShift runs on a supported OS. Kubernetes can be installed on (almost) every Linux distro

Red Hat OpenShift Container Platform is fully supported running on Red Hat Enterprise Linux as well as Red Hat Enterprise Linux CoreOS. To be precise, on OCP 4.5 the supported operating systems are:

  • Red Hat Enterprise Linux CoreOS 4.5
  • Red Hat Enterprise Linux 7.7, 7.8

Kubernetes, being an open source project, can be installed almost on any Linux distribution such as Debian, Ubuntu, and many others.
It is worth noting that a Linux distro called k3OS has been built for the sole purpose of running Kubernetes clusters. In fact, it combines a minimal Linux distro and the k3s Kubernetes distribution in one. As soon as you boot up a k3OS node, you have Kubernetes up and running; when you boot up multiple k3OS nodes, they form a Kubernetes cluster. k3OS is perhaps the easiest way to stand up Kubernetes clusters on any server.

Image Management

OpenShift is more flexible thanks to Image Streams. Kubernetes' image management is more complex.

OpenShift uses Image Streams to provide a stable pointer to an image using various identifying qualities. This means that even if the source image changes, the Image Stream will still point to the right version of the image, ensuring that your application will not break unexpectedly.
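
For example, the following sketch (assuming you are logged into an OpenShift cluster; the name mywildfly is arbitrary) creates an image stream that points at an external image, which you can then reference from builds and deployments instead of the external tag:

$ oc import-image mywildfly --from=quay.io/wildfly/wildfly-centos7 --confirm
$ oc describe is mywildfly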

In Kubernetes, on the other hand, there is no resource responsible for building container images. You have a few choices when you want to build container images for Kubernetes, using external tools, scripts, or Kubernetes-native resources; in almost all cases, you will end up using the plain old docker build command.

Local Execution

OpenShift can use CodeReady Containers. Kubernetes relies on Minikube.

If you want to test or develop locally against your container orchestration platform, some tooling is required.
OpenShift users can use CodeReady Containers which brings a minimal, preconfigured OpenShift 4.1 or newer cluster to your local laptop or desktop computer for development and testing purposes. CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10. For OpenShift 3.x clusters, the recommended solution was to use Minishift.
Kubernetes users can use Minikube which is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

Command Line

OpenShift provides an advanced command-line tool, ‘oc’, and includes kubectl. Kubernetes relies on kubectl only.

Kubernetes provides a command-line interface named kubectl which is used to run commands against any Kubernetes cluster.
Since OpenShift Container Platform runs on top of a Kubernetes cluster, a copy of kubectl is also included with “oc”, OpenShift Container Platform’s default command-line interface (CLI).
The “oc” binary offers the same capabilities as the kubectl binary, but it is further extended to natively support OpenShift Container Platform features, such as full support for OpenShift resources, built-in authentication, and additional developer-oriented commands. For example, oc new-app creates all the required objects with a single command and lets you export them, modify them, or store them in your repository.
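
As a small sketch of this overlap (resource names are placeholders), the same core resources can be queried with either binary, while oc adds OpenShift-specific resources and commands:

# both CLIs understand core Kubernetes resources
$ kubectl get pods
$ oc get pods

# oc additionally understands OpenShift-only resources such as routes and projects
$ oc get routes
$ oc new-app https://github.com/fmarchioni/openshift-jee-sample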

Security

OpenShift provides a stricter security model than Kubernetes, including a built-in OAuth server and FIPS-compliant encryption.

Since OpenShift is a supported product, there are stricter security policies. As an example, OpenShift forbids running container images as root by default, so many images available on Docker Hub won’t run out of the box.
In terms of authentication, requests to the OpenShift Container Platform API are authenticated using OAuth access tokens or X.509 client certificates.
There are three types of OpenShift users:

  • Regular users, which are created automatically in the system upon the first login or can be created via the API.
  • System users, which are created when the infrastructure is defined, mainly to enable the infrastructure to interact with the API securely.
  • Service accounts, which are special system users associated with projects.

The OpenShift Container Platform master also includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API.
OpenShift 4.3 also delivers FIPS (Federal Information Processing Standard) compliant encryption and additional security enhancements to enterprises across industries. Combined, these new and extended features can help protect sensitive customer data with stronger encryption controls and improve the oversight of access control across applications and the platform itself.
It is worth mentioning that, in terms of authentication to external apps, OpenShift can provide single-account authentication to multiple applications (Jenkins, EFK, Prometheus) using OAuth reverse proxies running as sidecars. Such a proxy performs zero-configuration OAuth when run as a pod in OpenShift and can perform simple authorization checks against the OpenShift and Kubernetes RBAC policy engine to grant access.
On the other hand, Kubernetes features a more basic security approach.
All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users. Service accounts are tied to a set of credentials stored as Secrets, which are mounted into pods allowing in-cluster processes to talk to the Kubernetes API. In contrast, any user that presents a valid certificate signed by the cluster’s certificate authority (CA) is considered authenticated.
Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to authenticate API requests through authentication plugins.
Finally, both Kubernetes and OpenShift use Role-Based Access Control (RBAC) objects to determine whether a user is allowed to perform a given action within a project.

Networking

OpenShift provides its own native networking solution. Kubernetes deals with network traffic in an abstract way.

In terms of network configuration, Kubernetes abstractly ensures that Pods are able to network with each other, and allocates each Pod an IP address from an internal network. This ensures all containers within the Pod behave as if they were on the same host. Giving each Pod its own IP address means that Pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration.         

OpenShift, on the other hand, offers its own native networking solution. More in detail, OpenShift uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between Pods across the OpenShift Container Platform cluster. This Pod network is established and maintained by the OpenShift SDN, which configures an overlay network using Open vSwitch (OVS).
OpenShift SDN provides multiple SDN modes for configuring the Pod network: the network policy mode (the default) allows project administrators to configure their own isolation policies using NetworkPolicy objects. The multitenant mode provides project-level isolation for Pods and Services in the entire cluster. The subnet mode provides a flat Pod network where every Pod can communicate with every other Pod and Service. By default (with no NetworkPolicy objects defined), the network policy mode provides the same flat connectivity as the subnet mode.

Worth mentioning that OpenShift Container Platform has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port.             

Ingress vs Route

OpenShift provides a more mature solution called Routes, while Kubernetes relies on Ingress.

Although pods and services have their own IP addresses on Kubernetes, these IP addresses are only reachable within the Kubernetes cluster and not accessible to outside clients.
In order to make your pods and services accessible from outside, Kubernetes uses the Ingress object to signal the platform that a certain service needs to be exposed to the outside world; the Ingress contains the required configuration, such as an externally reachable URL, SSL settings, and more.
On the OpenShift side, it is possible to use Routes for this purpose. When a Route object is created on OpenShift, it gets picked up by the built-in HAProxy load balancer in order to expose the requested service and make it externally available with the given configuration. It’s worth mentioning that although OpenShift provides this HAProxy-based built-in load balancer, it has a pluggable architecture that allows admins to replace it with NGINX (and NGINX Plus) or external load balancers like F5 BIG-IP.
OpenShift Routes have more capabilities, as they cover additional scenarios such as TLS re-encryption, TLS passthrough, multiple weighted backends (traffic splitting), generated pattern-based hostnames, and wildcard domains.
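
As a minimal sketch (service name and hostname are placeholders), exposing an existing service on OpenShift is a one-liner that creates a Route, in this case with edge TLS termination:

$ oc create route edge myapp --service=myapp --hostname=myapp.example.com

On Kubernetes you would instead write an Ingress manifest for the same purpose and rely on an ingress controller being installed in the cluster.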

Web console

OpenShift provides a richer, developer-oriented web console to deploy applications and manage the cluster. Kubernetes has a UI dashboard to manage the core Kubernetes resources.

Kubernetes features a Dashboard which is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. Overall, the Kubernetes console is mostly focused on Kubernetes resources (Pods, Deployments, Jobs, DaemonSets, etc) and does not add much information compared with the command line.


OpenShift, on the other hand, features a graphical web console with both an Administrator and a Developer perspective, allowing developers to easily deploy applications to their namespaces from different sources (Git repositories, external registries, Dockerfiles, etc.) and providing a visual representation of the application components to show how they interact. Since OpenShift 4 is completely based around the concept of Operators, you can reach the Operator Hub directly from the OpenShift console and install Operators in your project.

Deployments

OpenShift uses DeploymentConfig. Kubernetes relies on Deployment objects.

In Kubernetes, there are Deployment objects, while OpenShift uses a DeploymentConfig object. The main difference is that a DeploymentConfig uses a ReplicationController while a Deployment uses a ReplicaSet.
ReplicaSets and ReplicationControllers do almost the same thing: both ensure that a specified number of pod replicas are running at any given time. The difference lies in the selectors used to match pods: ReplicaSets use set-based selectors while ReplicationControllers use equality-based selectors. In addition to that, a DeploymentConfig can use lifecycle hooks to react to an update in your environment (e.g. a change in the database schema). A Deployment cannot use hooks; however, it supports concurrent updates, so you can have several rollouts in flight and it will manage them properly.
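
In day-to-day use the difference shows up mostly in the resource names you pass to the CLI. A small sketch, with myapp as a placeholder:

# roll out and watch a DeploymentConfig on OpenShift
oc rollout latest dc/myapp
oc rollout status dc/myapp

# watch a plain Kubernetes Deployment
kubectl rollout status deployment/myapp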

CI/CD Pipeline

OpenShift includes native support for CI/CD Pipeline. Kubernetes needs to integrate with external CI/CD platforms.

In modern software projects, many teams utilize the concept of Continuous Integration (CI) and Continuous Delivery (CD). By setting up a toolchain that continuously builds, tests, and stages software releases, a team can ensure that their product can be reliably released at any time. OpenShift can be an enabler in the creation and management of this toolchain.
OpenShift uses OpenShift Pipelines as a solution to build cloud-native CI/CD pipelines on top of Kubernetes. OpenShift Pipelines is based on the Continuous Delivery Foundation project Tekton Pipelines and can be easily plugged into OpenShift using the OpenShift Pipelines Operator.
On top of that, you can install the Tekton CLI, tkn. You can work with Tekton Tasks, Pipelines, PipelineRuns, etc. using ‘kubectl’ or ‘oc’, but ‘tkn’ offers a more elegant, purpose-built CLI experience.

On the other hand, Kubernetes does not provide a built-in solution for CI/CD Pipelines so you can plug into any solution as long as they can be packaged in a container. To name a few, the following solutions are worth mentioning:

  • Jenkins: Jenkins is the most popular and most stable CI/CD platform. It has been (and still is) used by OpenShift developers due to its vast ecosystem and extensibility. If you plan to use it with Kubernetes, consider Jenkins X, a flavor of Jenkins designed specifically for the cloud-native world.
  • Spinnaker: Spinnaker is a CD platform for scalable multi-cloud deployments, with backing from Netflix. To install it, we can use the relevant Helm Chart.
  • Drone: This is a versatile, cloud-native CD platform with many features. It can be run in Kubernetes using the associated Runner.
  • GoCD: Another CI/CD platform from Thoughtworks that offers a variety of workflows and features suited for cloud-native deployments. It can be run in Kubernetes as a Helm Chart.
Additionally, there are cloud services that work closely with Kubernetes and provide CI/CD pipelines, such as CircleCI and Travis CI, which are equally helpful if you prefer a hosted CI/CD platform.

Conclusion 

Both Kubernetes and OpenShift are popular container management systems. While Kubernetes helps automate application deployment, scaling, and operations, OpenShift is the container platform that works with Kubernetes to help applications run more efficiently.
Being an open-source project, Kubernetes can be more flexible as a container orchestration platform (for example, you can choose the Linux distribution to use or pick among multiple CI/CD pipeline options). OpenShift, on the other hand, provides additional services to simplify application deployment, log management, registry, build automation, and CI/CD, and it has enterprise-grade support from Red Hat.

Deploying JBoss EAP applications on OpenShift

Red Hat provides a container image for JBoss Enterprise Application Platform (JBoss EAP) that is specifically designed for use with OpenShift. In this tutorial we will learn how to pull the image from the registry and use it to deploy an application on OpenShift using S2I.

There are several strategies you can use to create an EAP application: in this example we will show how to use OpenShift Templates. First off, let’s create an OpenShift project for our example:

oc new-project eap-demo

Next, in order to use Red Hat images on OpenShift you need access to one of Red Hat’s registries. The recommended registry is “registry.redhat.io”, which requires authentication with your Red Hat Portal login. To verify that you can authenticate, log in to the registry using either docker or podman (check the Getting started with Podman tutorial for an overview of Podman):

docker login registry.redhat.io
Username: username
Password: password

Login Succeeded!

Then, for environments where credentials will be shared, such as deployment systems, it is recommended to create a secret based on a Registry Service Account token. A Service Account can be created at the following link: https://access.redhat.com/terms-based-registry/

Click on “New Service Account” and fill in the Service Account name:

 

Next, download the generated OpenShift secret YAML and create it in your project:

oc create -f 6340056_eapdemo_account.yaml

The secret will be added to your project. Now, you should be able to import the eap73-openjdk11-openshift-rhel8 image:

oc import-image jboss-eap-7/eap73-openjdk11-openshift-rhel8 --from=registry.redhat.io/jboss-eap-7/eap73-openjdk11-openshift-rhel8 --confirm

The above command should complete with the following output:

imagestream.image.openshift.io/eap73-openjdk11-openshift-rhel8 imported
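
Depending on how the image will be pulled, you may also need to link the registry secret to the service accounts used for pulling and building. This is a hedged extra step: the secret name below is an assumption, check oc get secrets for the actual name created by the downloaded YAML:

oc secrets link default 6340056-eapdemo-pull-secret --for=pull
oc secrets link builder 6340056-eapdemo-pull-secret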

Let’s now import the EAP 7.3 basic Template and the eap73-openjdk11 ImageStream into our project, so that we can use them to instantiate our application:

oc replace --force   -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap73/templates/eap73-basic-s2i.json

oc replace --force   -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-7-openshift-image/7.3.x/templates/eap73-openjdk11-image-stream.json

Great, now we will add an example application to our project. The application is the well-known kitchensink app, available in the EAP Quickstarts. You can create it as follows:

oc new-app --template=eap73-basic-s2i \
 -p IMAGE_STREAM_NAMESPACE=eap-demo \
 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts \
 -p SOURCE_REPOSITORY_REF=7.3.x-openshift \
 -p CONTEXT_DIR=kitchensink
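
You can follow the source build triggered by the template while it runs. The build config name below matches the pod names shown in the next step, but it may vary with the template version; oc get bc lists the actual names:

oc logs -f bc/eap-app-build-artifacts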

Verify that the Pod comes up:

oc get pods
NAME                              READY   STATUS      RESTARTS   AGE
eap-app-1-deploy                  0/1     Completed   0          107m
eap-app-1-zjj2k                   1/1     Running     0          106m
eap-app-2-build                   0/1     Completed   0          110m
eap-app-build-artifacts-1-build   0/1     Completed   0          116m

The Route to access the application should be available as well:

oc get routes
NAME      HOST/PORT                           PATH   SERVICES   PORT    TERMINATION     WILDCARD
eap-app   eap-app-eap-demo.apps-crc.testing          eap-app    <all>   edge/Redirect   None

You can reach the application at: eap-app-eap-demo.apps-crc.testing
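
Since the route uses edge termination with a redirect policy, plain HTTP requests are redirected to HTTPS. A quick check from the command line (the -k flag is needed because the CRC router certificate is self-signed):

curl -k https://eap-app-eap-demo.apps-crc.testing/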

Great. We just managed to create an EAP 7.3 application on OpenShift, authenticating on registry.redhat.io through Service Account Tokens.

Getting started with Code Ready Containers

Red Hat CodeReady Containers (CRC) provides a minimal, preconfigured OpenShift 4.x single-node cluster on your laptop or desktop computer for development and testing purposes. In this tutorial, we will learn how to set up an OpenShift cluster using CRC to emulate the cloud development environment locally, and then we will deploy a WildFly application on top of it.

Installing Red Hat Code Ready Containers

CRC is available on Linux, macOS and Windows operating systems. In this section we will cover the Linux installation. You can refer to the quickstart guide for information about the other OS: https://code-ready.github.io/crc/

On Linux, CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer (including 8.x versions) and on the latest two stable Fedora releases (at the time of writing Fedora 30 and 31).

CodeReady Containers requires the libvirt and NetworkManager packages, which can be installed as follows in Fedora/RHEL distributions:

$ sudo dnf install qemu-kvm libvirt NetworkManager

Next, download the latest CRC release for your platform from: https://cloud.redhat.com/openshift/install/crc/installer-provisioned

TIP: You need to register with a Red Hat account to access and download this product.

Once downloaded, create a folder named `.crc` in your home directory:

$ mkdir $HOME/.crc

Then extract the CRC archive into that location and rename the resulting folder for your convenience:

$ tar -xf crc-linux-amd64.tar.xz -C $HOME/.crc
$ cd $HOME/.crc
$ mv crc-linux-1.6.0-amd64 crc-1.6.0

Next, add it to your system PATH:

$ export PATH=$HOME/.crc/crc-1.6.0:$HOME/.crc/bin:$PATH

Verify that the crc binary is now available:

$ crc version

crc version: 1.6.0+8ef676f
OpenShift version: 4.3.0 (embedded in binary)

Great, your environment is ready. It’s time to start it!

Starting the OpenShift cluster

The `crc setup` command performs operations to set up the environment of your host machine for the CodeReady Containers virtual machine.

This procedure will create the ~/.crc directory if it does not already exist.

Set up your host machine for CodeReady Containers:

$ crc setup
INFO Checking if oc binary is cached              
INFO Checking if CRC bundle is cached in '$HOME/.crc' 
INFO Checking if running as non-root              
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if libvirt is enabled               
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking for obsolete crc-driver-libvirt     
INFO Checking if libvirt 'crc' network is available 
INFO Checking if libvirt 'crc' network is active  
INFO Checking if NetworkManager is installed      
INFO Checking if NetworkManager service is running 
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists 
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists 
Setup is complete, you can now run 'crc start' to start the OpenShift cluster

When the set up is complete, start the CodeReady Containers virtual machine:

$ crc start

When prompted, supply your user’s **pull secret** which is available at: https://cloud.redhat.com/openshift/install/crc/installer-provisioned

The cluster will start:

INFO Checking if oc binary is cached              
INFO Checking if running as non-root              
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if libvirt is enabled               
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking if libvirt 'crc' network is available 
INFO Checking if libvirt 'crc' network is active  
INFO Checking if NetworkManager is installed      
INFO Checking if NetworkManager service is running 
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists 
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists 
INFO Starting CodeReady Containers VM for OpenShift 4.3.0... 
INFO Verifying validity of the cluster certificates ... 
INFO Check internal and public DNS query ...      
INFO Check DNS query from host ...                
INFO Starting OpenShift cluster ... [waiting 3m]  
INFO                                              
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions 
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443' 
INFO To login as an admin, run 'oc login -u kubeadmin -p 7z6T5-qmTth-oxaoD-p3xQF https://api.crc.testing:6443' 
INFO                                              
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console 
Started the OpenShift cluster

So, out of the box, two users have been created for you: an admin user (kubeadmin) and a developer user. Their credentials are displayed in the above log.
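
If you need to display these credentials again later, recent CRC releases can print them for you (a hedged example; check crc --help if the subcommand is not available in your version):

$ crc console --credentials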

Now reach the OpenShift web console with:

crc console

You will notice that the connection is insecure as no certificate is associated with that address. Choose to add an Exception in your browser and continue.

After you have entered the username and password, you will be redirected to the OpenShift dashboard, which features the default project.
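
To use the oc client from your terminal against this cluster, as suggested in the startup log, configure your shell with crc oc-env and then log in with the developer user:

$ eval $(crc oc-env)
$ oc login -u developer -p developer https://api.crc.testing:6443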

Troubleshooting CRC installation

Depending on your DNS/network settings, a few things can go wrong.

A common issue, signaled by the error _”Failed to query DNS from host”_, is normally caused by a DNS misconfiguration in the file **/etc/resolv.conf**.

Check that it contains the following entries:

search redhat.com
nameserver 127.0.0.1

This issue is discussed more in detail in the following thread: https://github.com/code-ready/crc/issues/976

Another common issue is signaled by the following error message: _”Failed to connect to the crc VM with SSH”_

This is often caused by a misconfiguration of your virtual network. It is usually fixed by releasing any resources currently in use and re-creating them through `crc setup`. Here is the script to perform these tasks:

crc stop
crc delete
sudo virsh undefine crc --remove-all-storage
sudo virsh net-destroy crc
sudo rm -f /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf /etc/NetworkManager/dnsmasq.d/crc.conf
crc setup
crc start

More details about this are available here: https://github.com/code-ready/crc/issues/711

In general terms, if you find an issue with your CRC cluster, it is recommended to start crc in debug mode to collect logs:

crc start --log-level debug 

Consider reporting the issue, attaching the collected logs (for example as a gist on https://gist.github.com/).

Deploying WildFly on CRC

As a next step, we will deploy a sample web application which uses an enterprise stack of components (JSF/JPA) to insert and remove records from a database. First, create a project for your application using the `oc new-project` command:

$ oc new-project wildfly-demo

The database we will be using in this example is PostgreSQL. An image for this database is available under the `postgresql` name in the registry used by CRC. Therefore, you can create a new PostgreSQL application as follows:

$ oc new-app -e POSTGRESQL_USER=wildfly -e POSTGRESQL_PASSWORD=wildfly -e POSTGRESQL_DATABASE=sampledb postgresql

Notice the `-e` parameters, which are used to set the database attributes through environment variables.

Now, check that the Pod for postgresql is running:

$ oc get pods
NAME                            READY   STATUS      RESTARTS   AGE
postgresql-1-2dp7m              1/1     Running     0          38s
postgresql-1-deploy             0/1     Completed   0          47s

Done with PostgreSQL, we will now add the WildFly application. For this purpose we need to load the `wildfly-centos7` image stream, which requires admin permissions, so log in as kubeadmin:

$ oc login -u kubeadmin -p 7z6T5-qmTth-oxaoD-p3xQF https://api.crc.testing:6443

Now you can load the `wildfly-centos7` image stream in our project:

$ oc create -f https://raw.githubusercontent.com/wildfly/wildfly-s2i/wf-18.0/imagestreams/wildfly-centos7.json

Done with the image stream, you can return to the developer user:

$ oc login
Authentication required for https://api.crc.testing:6443 (openshift)
Username: developer
Password: 
Login successful.
You have one project on this server: "wildfly-demo"

Using project "wildfly-demo".

Now everything is ready to launch our WildFly application. We can use an example available on GitHub at: https://github.com/fmarchioni/openshift-jee-sample . To launch our WildFly application, we will pass some environment variables to let WildFly create a PostgreSQL datasource with the correct settings:

$ oc new-app wildfly~https://github.com/fmarchioni/openshift-jee-sample --name=openshift-jee-sample -e DATASOURCE=java:jboss/datasources/PostgreSQLDS -e POSTGRESQL_DATABASE=sampledb -e POSTGRESQL_USER=wildfly -e POSTGRESQL_PASSWORD=wildfly

You might have noticed that we have also passed an environment variable named `DATASOURCE`. This variable is used specifically by our application. If you check the content of the file https://github.com/fmarchioni/openshift-jee-sample/blob/master/src/main/resources/META-INF/persistence.xml, it should be clear how it works:

<persistence version="2.0"
   xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="
        http://java.sun.com/xml/ns/persistence
        http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
   <persistence-unit name="primary">
     
      <jta-data-source>${env.DATASOURCE:java:jboss/datasources/ExampleDS}</jta-data-source>
  
      <properties>
     
         <property name="hibernate.hbm2ddl.auto" value="create-drop" />
         <property name="hibernate.show_sql" value="false" />
        
      </properties>
   </persistence-unit>
</persistence>

So, when the environment variable named `DATASOURCE` is passed, the application is bound to that datasource; otherwise the ExampleDS datasource is used as a fallback.
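
You can double-check which environment variables ended up on the resulting deployment configuration (its name matches the --name passed to oc new-app):

$ oc set env dc/openshift-jee-sample --list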

To get back to our example, the following log is displayed when you create the WildFly application:

--> Found image 38b29f9 (4 months old) in image stream "wildfly-demo/wildfly" under tag "latest" for "wildfly"

    WildFly 18.0.0.Final 
    -------------------- 
    Platform for building and running JEE applications on WildFly 18.0.0.Final

    Tags: builder, wildfly, wildfly18

    * A source build using source code from https://github.com/fmarchioni/openshift-jee-sample will be created
      * The resulting image will be pushed to image stream tag "openshift-jee-sample:latest"
      * Use 'oc start-build' to trigger a new build
    * This image will be deployed in deployment config "openshift-jee-sample"
    * Ports 8080/tcp, 8778/tcp will be load balanced by service "openshift-jee-sample"
      * Other containers can access this service through the hostname "openshift-jee-sample"

--> Creating resources ...
    imagestream.image.openshift.io "openshift-jee-sample" created
    buildconfig.build.openshift.io "openshift-jee-sample" created
    deploymentconfig.apps.openshift.io "openshift-jee-sample" created
    service "openshift-jee-sample" created
--> Success
    Build scheduled, use 'oc logs -f bc/openshift-jee-sample' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/openshift-jee-sample' 
    Run 'oc status' to view your app.

We need to expose our application, so that it can be accessed remotely:

oc expose svc/openshift-jee-sample
route.route.openshift.io/openshift-jee-sample exposed

In a few minutes, the application will be running as you can see from the list of Pods:

$ oc get pods
NAME                            READY   STATUS      RESTARTS   AGE
openshift-jee-sample-1-95q2g    1/1     Running     0          90s
openshift-jee-sample-1-build    0/1     Completed   0          3m17s
openshift-jee-sample-1-deploy   0/1     Completed   0          99s
postgresql-1-2dp7m              1/1     Running     0          3m38s
postgresql-1-deploy             0/1     Completed   0          3m47s

Let’s have a look at the logs of the running Pod of openshift-jee-sample:

$ oc logs openshift-jee-sample-1-95q2g

17:44:25,786 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0027: Starting deployment of "ROOT.war" (runtime-name: "ROOT.war")
17:44:25,793 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0018: Host default-host starting
17:44:25,858 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0006: Undertow HTTP listener default listening on 10.128.0.70:8080
17:44:25,907 INFO  [org.jboss.as.ejb3] (MSC service thread 1-2) WFLYEJB0493: EJB subsystem suspension complete
17:44:26,025 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0001: Bound data source [java:jboss/datasources/PostgreSQLDS]
17:44:26,026 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0001: Bound data source [java:jboss/datasources/ExampleDS]
 . . . . .

The interesting bit is that the `java:jboss/datasources/PostgreSQLDS` datasource has been successfully bound. Now reach the application, which is available at the following route address:

$ oc get routes
NAME                   HOST/PORT                                               SERVICES               PORT          
openshift-jee-sample   openshift-jee-sample-wildfly-demo.apps-crc.testing      openshift-jee-sample   8080-tcp                 

A simple web application will be displayed, which lets you add and remove records that are shown in a JSF table.
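
You can also reach it from the command line, using the hostname reported by the route above:

$ curl http://openshift-jee-sample-wildfly-demo.apps-crc.testing/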

You can check that your records have been actually committed to the database by logging into the postgresql Pod:

$ oc rsh postgresql-1-2dp7m

From there, we will use the `psql` command to list the available databases:

sh-4.2$ psql
psql (10.6)
Type "help" for help.

postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
 ----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 sampledb  | wildfly  | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(4 rows)

Next, let’s use the `sampledb` database:

postgres=# \c sampledb
You are now connected to database "sampledb" as user "postgres".

Query the list of tables available in this database:

sampledb=# \dt
             List of relations
 Schema |      Name      | Type  |  Owner  
 -------+----------------+-------+---------
 public | simpleproperty | table | wildfly
(1 row)

The `simpleproperty` table has been automatically created thanks to the hibernate.hbm2ddl.auto setting, which is set to create-drop. Here is the list of records it contains:

sampledb=# select * from simpleproperty;
 id  | value 
 ----+-------
 foo | bar
(1 row)

We have just demonstrated how to deploy a non-trivial enterprise application with a database backend by leveraging Red Hat CodeReady Containers.

Openshift cheatsheet

Here is a comprehensive Openshift Container Platform cheatsheet for Developers/Administrators.

Openshift Container Platform Login and Configuration

#login with a user
oc login https://192.168.99.100:8443 -u developer -p developer

#login as system admin
oc login -u system:admin

#User Information
oc whoami 

#View your configuration
oc config view

#Update the current context to have users login to the desired namespace:
oc config set-context `oc config current-context` --namespace=<project_name>

Openshift Container Platform Basic Commands

#Use specific template
oc new-app https://github.com/name/project --template=<template>

#New app from a different branch
oc new-app --name=html-dev nginx:1.10~https://github.com/joe-speedboat/openshift.html.devops.git#mybranch

#Create objects from a file:
oc create -f myobject.yaml -n myproject
#Delete objects contained in a file:
oc delete -f myobject.yaml -n myproject
#Create or merge objects from file 
oc apply -f myobject.yaml -n myproject 

#Update existing object 
oc patch svc mysvc --type merge --patch '{"spec":{"ports":[{"port": 8080, "targetPort": 5000 }]}}' 

#Monitor Pod status 
watch oc get pods 

#Show labels 
oc get pods --show-labels 

#Gather information on a project's pod deployment with node information 
oc get pods -o wide 

#Hide inactive Pods 
oc get pods --show-all=false 

#Display all resources 
oc get all,secret,configmap 

#Get the Openshift Console Address 
oc get -n openshift-console route console 

#Get the Pod name from the Selector and rsh in it 
POD=$(oc get pods -l app=myapp -o name)
oc rsh $POD

#Exec single command in pod 
oc exec $POD $COMMAND 

#Copy file from myrunning-pod-2 path in the current location 
oc rsync myrunning-pod-2:/tmp/LogginData_20180717220510.json . 

#Read resource schema documentation
oc explain dc

Openshift Container Platform Image Streams

#List available IS for openshift project
oc get is -n openshift

#Import an image from an external registry
oc import-image --from=registry.access.redhat.com/jboss-amq-6/amq62-openshift -n openshift jboss-amq-62:1.3 --confirm

#List available IS and templates
oc new-app --list

Openshift Container Platform Templates

# Deploy resources contained in a template
oc process -f template.yaml | oc create -f -

#List parameters available in a template
oc process --parameters -f template.yaml

Setting environment variables

# Update deployment 'registry' with a new environment variable
oc set env dc/registry STORAGE_DIR=/local
  
# List the environment variables defined on a build config 'sample-build'
oc set env bc/sample-build --list
  
# List the environment variables defined on all pods
oc set env pods --all --list
      
# Import environment from a secret
oc set env --from=secret/mysecret dc/myapp

WildFly application example on Openshift Container Platform

oc create -f https://raw.githubusercontent.com/wildfly/wildfly-s2i/wf-18.0/imagestreams/wildfly-centos7.json
oc new-app wildfly~https://github.com/fmarchioni/ocpdemos --context-dir=wildfly-basic --name=wildfly-basic
oc expose svc/wildfly-basic

Create app from a Project with Dockerfile

oc new-build --binary --name=mywildfly -l app=mywildfly

oc patch bc/mywildfly -p '{"spec":{"strategy":{"dockerStrategy":{"dockerfilePath":"Dockerfile"}}}}'
	
oc start-build mywildfly --from-dir=. --follow

oc new-app --image-stream=mywildfly
	
oc expose svc/mywildfly

Openshift Container Platform Nodes

#Get Nodes list
oc get nodes

#Check on which Node your Pods are running
oc get pods -o wide

#Schedule an application to run on another Node
oc patch dc  myapp -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname": "ip-10-0-0-74.acme.compute.internal"}}}}}'

#List all pods which are running on a Node
oc adm manage-node node1.local --list-pods

#Add a label to a Node
oc label node node1.local mylabel=myvalue

#Remove a label from a Node
oc label node node1.local mylabel-

Openshift Container Platform Storage

#Create a PersistentVolumeClaim and update the DeploymentConfig to attach the volume mount at the specified mount-path
 
oc set volume dc/file-uploader --add --name=my-shared-storage \
-t pvc --claim-mode=ReadWriteMany --claim-size=1Gi \
--claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs \
--mount-path=/opt/app-root/src/uploaded \
-n my-shared-storage

#List storage classes
oc -n openshift-storage get sc

Openshift Container Platform Build

#Manual build from source  
oc start-build ruby-ex

#Manual build from source and follow logs 
oc start-build ruby-ex -F

#Stop a build that is in progress 
oc cancel-build <build_name> 

#Changing the log level of a build: 
oc set env bc/my-build-name BUILD_LOGLEVEL=[1-5] 

Openshift Container Platform Deployment

#Manual deployment 
$ oc rollout latest ruby-ex

#Pause automatic deployment rollout
oc rollout pause dc $DEPLOYMENT

# Resume automatic deployment rollout
oc rollout resume dc $DEPLOYMENT 

#Define resource requests and limits in DeploymentConfig
oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi

#Define livenessProve and readinessProve in DeploymentConfig
oc set probe dc/nginx --readiness --get-url=http://:8080/healthz --initial-delay-seconds=10
oc set probe dc/nginx --liveness --get-url=http://:8080/healthz --initial-delay-seconds=10

#Scale the number of Pods to 2
oc scale dc/nginx --replicas=2

#Define Horizontal Pod Autoscaler (hpa)
oc autoscale dc $DC_NAME --max=4 --cpu-percent=10

Openshift Container Platform Routes

#Create route
$ oc expose service ruby-ex

#Read the Route Host attribute
oc get route my-route -o jsonpath --template="{.spec.host}"

Openshift Container Platform Services

#Make a service idle. When the service is next accessed will automatically boot up the pods again: 
$ oc idle ruby-ex

#Read a Service IP
oc get services rook-ceph-mon-a --template='{{.spec.clusterIP}}'

Clean up resources

#Delete all resources
oc delete all --all

#Delete resources for one specific app
$ oc delete services -l app=ruby-ex
$ oc delete all -l app=ruby-ex

#CleanUp old docker images on nodes
#Keeping up to three tag revisions 1, and keeping resources (images, image streams and pods) younger than sixty minutes:
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m

#Pruning every image that exceeds defined limits:
oc adm prune images --prune-over-size-limit

Openshift Container Platform Troubleshooting

#Check status of current project 	
oc status

#Get events for a project
oc get events --sort-by='{.lastTimestamp}'

# get the logs of the myrunning-pod-2-fdthn pod 
oc logs myrunning-pod-2-fdthn
# follow the logs of the myrunning-pod-2-fdthn pod 
oc logs -f myrunning-pod-2-fdthn
# tail the logs of the myrunning-pod-2-fdthn pod 
oc logs myrunning-pod-2-fdthn --tail=50

#Check the integrated Docker registry logs:
oc logs docker-registry-n-{xxxxx} -n default | less

#run cluster diagnostics
oc adm diagnostics

Openshift Container Platform Security

#Create a secret from the CLI and mount it as a volume to a deployment config:
oc create secret generic oia-secret --from-literal=username=myuser \
 --from-literal=password=mypassword
oc set volumes dc/myapp --add --name=secret-volume --mount-path=/opt/app-root/ \
 --secret-name=oia-secret

Openshift Container Platform Manage user roles

oc adm policy add-role-to-user admin oia -n python
oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:monitoring:default
oc adm policy add-scc-to-user anyuid -z default

Misc commands

#Manage node state
oc adm manage node <node> --schedulable=false

#List installed operators
oc get csv

#Export in a template the IS, BC, DC and SVC
oc export is,bc,dc,svc --as-template=app.yaml

#Show user in prompt
function ps1(){
   export PS1='[\u@\h($(oc whoami -c 2>/dev/null|cut -d/ -f3,1)) \W]\$ '
}

#backup openshift objects

oc get all --all-namespaces --no-headers=true | awk '{print $1","$2}' | while read obj
do
  NS=$(echo $obj | cut -d, -f1)
  OBJ=$(echo $obj | cut -d, -f2)
  FILE=$(echo $obj | sed 's/\//-/g;s/,/-/g')
  echo $NS $OBJ $FILE; oc export -n $NS $OBJ -o yaml > $FILE.yml
done

Running WildFly on Openshift

Let’s learn in this tutorial how to run the latest version of WildFly on OpenShift.

OpenShift uses Image Streams to reference a Docker image. An image stream comprises one or more Docker images identified by tags. It presents a single virtual view of related images, similar to a Docker image repository, and may contain images from any of the following:

  1. Its own image repository in OpenShift’s integrated Docker Registry

  2. Other image streams

  3. Docker image repositories from external registries

The evident advantage of using Image Streams vs a standard Docker image is that OpenShift components such as builds and deployments can watch an image stream to receive notifications when new images are added and react by performing a build or a deployment. In other words, the Image Stream can let you decouple your application from a specific Docker Image.

Once OpenShift has started, you should be able to list the available image streams with:

$ oc get is -n openshift
NAME         DOCKER REPO                            TAGS                         UPDATED
jenkins      172.30.1.1:5000/openshift/jenkins      latest,1,2                   6 minutes ago
mariadb      172.30.1.1:5000/openshift/mariadb      10.1,latest                  6 minutes ago
mongodb      172.30.1.1:5000/openshift/mongodb      latest,3.2,2.6 + 1 more...   6 minutes ago
mysql        172.30.1.1:5000/openshift/mysql        latest,5.6,5.5               6 minutes ago
nodejs       172.30.1.1:5000/openshift/nodejs       0.10,4,latest                6 minutes ago
perl         172.30.1.1:5000/openshift/perl         latest,5.20,5.16             6 minutes ago
php          172.30.1.1:5000/openshift/php          latest,5.6,5.5               6 minutes ago
postgresql   172.30.1.1:5000/openshift/postgresql   latest,9.5,9.4 + 1 more...   7 minutes ago
python       172.30.1.1:5000/openshift/python       3.4,3.3,2.7 + 2 more...      5 minutes ago
redis        172.30.1.1:5000/openshift/redis        latest,3.2                   3 minutes ago
ruby         172.30.1.1:5000/openshift/ruby         latest,2.3,2.2 + 1 more...   5 minutes ago
wildfly      172.30.1.1:5000/openshift/wildfly      10.1,10.0,9.0 + 2 more...    3 minutes ago
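
To see the individual tags of a stream and the image each tag currently points at, you can describe it, for example:

$ oc describe is wildfly -n openshift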

This is the default set of images available when you start OpenShift Origin. What happens if, for some reason, you cannot see all of the above image streams, for example because you have accidentally deleted one of them? That’s not a big issue; perform the following actions.

Login in as administrator:

$ oc login -u system:admin

Now reload the missing image stream, in this case the WildFly one, from the following link: https://raw.githubusercontent.com/wildfly/wildfly-s2i/wf-18.0/imagestreams/wildfly-centos7.json

$ oc create -f https://raw.githubusercontent.com/wildfly/wildfly-s2i/wf-18.0/imagestreams/wildfly-centos7.json

The image streams which are already loaded will be skipped. Now log in as developer so that your WildFly application will be available in that namespace:

$ oc login 
Authentication required for https://192.168.1.194:8443 (openshift)
Username: developer
Password:  developer

Now you can test it by loading a GitHub project which uses the WildFly image stream:

$ oc new-app wildfly~https://github.com/fmarchioni/ocpdemos --context-dir=wildfly-basic --name=wildfly-basic

Finally, expose the application wildfly-basic to the router so that it is available from outside the cluster:

$ oc expose service wildfly-basic

Next, check out the Route that has been created:

$ oc get route
NAME            HOST/PORT                                        PATH   SERVICES        PORT       
wildfly-basic   wildfly-basic-demo.apps.fmarchio-qe.rh-ocs.com          wildfly-basic   8080-tcp                  

Open the browser at the Route Host address, and here is your example application on Openshift:

Provisioning WildFly layers on Openshift with Galleon

If you don’t need the full-sized WildFly application server, you can provision an image that contains just the layers you need. For example, if you only need the REST server API (and its dependencies, such as the web server), you can create the above example as follows:

oc new-app wildfly~https://github.com/fmarchioni/ocpdemos --context-dir=wildfly-basic --name=wildfly-basic  --build-env GALLEON_PROVISION_LAYERS=jaxrs-server

 You can verify your custom WildFly configuration by logging into the Pod which runs WildFly:

$ oc get pods
NAME                     READY   STATUS      RESTARTS   AGE
wildfly-basic-1-build    0/1     Completed   0          5m22s
wildfly-basic-1-deploy   0/1     Completed   0          2m27s
wildfly-basic-1-gk6z9    1/1     Running     0          2m22s

 Now launch a remote shell (rsh) into the Running Pod:

$ oc rsh wildfly-basic-1-gk6z9

sh-4.2$

 And have a look at the extensions installed in your WildFly Server (for the sake of brevity, just the top of the configuration is shown):

sh-4.2$ cat /wildfly/standalone/configuration/standalone.xml 
<server xmlns="urn:jboss:domain:10.0">
    <extensions>
        <extension module="org.jboss.as.clustering.infinispan"/>
        <extension module="org.jboss.as.connector"/>
        <extension module="org.jboss.as.deployment-scanner"/>
        <extension module="org.jboss.as.ee"/>
        <extension module="org.jboss.as.jaxrs"/>
        <extension module="org.jboss.as.jmx"/>
        <extension module="org.jboss.as.jpa"/>
        <extension module="org.jboss.as.logging"/>
        <extension module="org.jboss.as.naming"/>
        <extension module="org.jboss.as.transactions"/>
        <extension module="org.jboss.as.weld"/>
        <extension module="org.wildfly.extension.bean-validation"/>
        <extension module="org.wildfly.extension.core-management"/>
        <extension module="org.wildfly.extension.elytron"/>
        <extension module="org.wildfly.extension.io"/>
        <extension module="org.wildfly.extension.request-controller"/>
        <extension module="org.wildfly.extension.security.manager"/>
        <extension module="org.wildfly.extension.undertow"/>
    </extensions>

As you can see, just the JAX-RS (RESTEasy) extension, its dependencies, and the core extensions have been provisioned.

Clustering WildFly on Openshift using WildFly Operator

Do you want to learn how to quickly start a WildFly cluster on OpenShift using the WildFly Operator? Then keep reading the rest of this article!

First of all, what is an Operator? In essence, an Operator is a standard method of packaging, deploying and managing a Kubernetes application. With OpenShift 4, everything is deployed as an Operator. And to keep things simple, Red Hat has integrated the Kubernetes Operator Hub into OpenShift 4. I encourage you to go have a look at the Operator Hub, which is available at: https://operatorhub.io/

As you can see, the Operator Hub contains installation shortcuts for all the Operators listed in the Hub:

Requirements to install Operators

In order to install the Operator, make sure the kubectl binary is installed on your machine. You can download the latest release with the command:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Then, make the kubectl binary executable.

$ chmod +x ./kubectl

Move the binary into your PATH.

$ sudo mv ./kubectl /usr/local/bin/kubectl

And finally test to ensure the version you installed is up-to-date:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Installing WildFly Operator

We will follow the instructions contained in https://operatorhub.io/operator/wildfly

First of all, install the Operator Lifecycle Manager (OLM), which is a tool that helps manage the Operators running on your cluster:

curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.10.0/install.sh | bash -s 0.10.0
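
Before moving on, you can check that OLM itself started correctly. The install script typically creates the olm and operators namespaces, so a quick sanity check looks like this:

$ kubectl get pods -n olm
$ kubectl get namespace operators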

Then, we can install the WildFly Operator itself by running kubectl against the WildFly Operator YAML file:

$ kubectl create -f https://operatorhub.io/install/wildfly.yaml
subscription.operators.coreos.com/my-wildfly created

After installing, verify that the Operator is listed in your resources with:

$ kubectl get csv -n operators

NAME                      DISPLAY   VERSION   REPLACES                  PHASE
wildfly-operator.v0.2.0   WildFly   0.2.0     wildfly-operator.v0.1.0   Succeeded
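
You can also list the Custom Resource Definitions that the Operator brought into the cluster; since the exact CRD name may vary between Operator versions, a grep keeps the check generic:

$ kubectl get crd | grep -i wildfly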

From now on, you have a Custom Resource Definition named WildFlyServer which can be used to deliver new instances of the WildFly application server. At a minimum, you provide an application image, built on top of WildFly, and the Operator will run it:

apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: quickstart
spec:
  applicationImage: 'quay.io/jmesnil/wildfly-operator-quickstart:16.0'
  size: 1

Notice the applicationImage parameter, which references a Docker image, and size, which is the number of Pods that will be started from that image.
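
To try it out, you could save the YAML above to a file (quickstart-cr.yaml is just a name chosen here for illustration), create the resource and watch the Pod come up. Note that kubectl get wildflyserver relies on the resource name registered by the CRD:

$ kubectl create -f quickstart-cr.yaml
$ kubectl get wildflyserver
$ kubectl get pods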

Let’s see now a more complex example which involves a custom WildFly configuration to be loaded by the Operator.

Clustering WildFly using the Operator

To install a WildFly cluster, we need to provide the HA XML configuration in a ConfigMap that is accessible to the Operator. Let’s see how to do it. First of all, some grants are required: add the view role to the default service account of your project with:

oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
role "view" added: "system:serviceaccount:myproject:default"

Then, we will use the example configuration from GitHub: https://github.com/wildfly/wildfly-operator/tree/master/examples/clustering

As said, the standalone XML file must be put in a ConfigMap that is available to the operator. The standaloneConfigMap element must provide the name of this ConfigMap as well as the key corresponding to the name of the standalone XML file.

Pick up the standalone-openshift.xml from the config folder and create a ConfigMap with:

$ kubectl create configmap clusterbench-config-map --from-file standalone-openshift.xml 
configmap/clusterbench-config-map created
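
Before wiring it to the Operator, you can double-check that the configuration file was stored under the expected key:

$ kubectl describe configmap clusterbench-config-map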

Now, we will create the WildFlyServer custom resource, using the clusterbench.yaml file available in the crds folder:

$ kubectl apply -f clusterbench.yaml
wildflyserver.wildfly.org/clusterbench created

Great! The cluster has been created. As you can see from its definition below, the clusterbench.yaml file will start 2 Pods in a cluster:

apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: clusterbench
spec:
  applicationImage: "quay.io/jmesnil/clusterbench-ee7:17.0"
  size: 2
  standaloneConfigMap:
    name: clusterbench-config-map
    key: standalone-openshift.xml

This is verified by:

$ oc get pods
NAME             READY     STATUS    RESTARTS   AGE
clusterbench-0   1/1       Running   0          3m
clusterbench-1   1/1       Running   0          2m

A service named clusterbench-loadbalancer has been created as well:

$ oc get services
NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP                   PORT(S)          AGE
clusterbench-loadbalancer   LoadBalancer   172.30.26.204   172.29.128.16,172.29.128.16   8080:31886/TCP   3d

In order to test the application, expose the service clusterbench-loadbalancer with a Route:

$ oc expose svc/clusterbench-loadbalancer
route.route.openshift.io/clusterbench-loadbalancer exposed

Now let’s try to access the application in the browser at:

http://clusterbench-loadbalancer-myproject.192.168.42.215.nip.io/clusterbench/session

You will see that by refreshing, the page counter keeps incrementing:
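
If you prefer the command line, you can exercise the same endpoint with curl. This sketch assumes the Route host shown above and uses a cookie jar so that every request belongs to the same HTTP session; repeating the command should show the counter increasing:

$ curl -c /tmp/cookies.txt -b /tmp/cookies.txt http://clusterbench-loadbalancer-myproject.192.168.42.215.nip.io/clusterbench/session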

Now let’s scale down the Pods by editing the WildFlyServer resource:

kubectl edit wildflyserver clusterbench

Set the size parameter to 1 and save the changes from the editor. Next, check that the Pods have scaled down:
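
You can confirm the scale-down with oc get pods. As an alternative to editing the resource interactively, you could also patch the size field directly (a sketch, not part of the original walkthrough):

$ kubectl patch wildflyserver clusterbench --type=merge -p '{"spec":{"size":1}}'
$ oc get pods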

Let’s try to access the application in the browser again at:

http://clusterbench-loadbalancer-myproject.192.168.42.215.nip.io/clusterbench/session

As you can see, the session counter keeps increasing, even though we scaled down the number of Pods.

Great! We just managed to set up a basic WildFly cluster on Openshift using the WildFly Operator. Replace the example image with your own application image to see your application running on a WildFly cluster.

2 Ways to run Thorntail applications on Openshift

Thorntail offers a paradigm for packaging and running Enterprise and MicroProfile applications by bundling them with just enough of the server runtime to run them with “java -jar”. In this tutorial we will show two strategies to deploy a Thorntail application on Openshift.

First make sure that your Openshift / Minishift environment is started. Then, create a new project for your application:

oc new-project thorntail

Now let’s see how to deploy a Thorntail application in it.

Option #1 : Use Java8 Container Image

The first option consists of using the Java 8 container image as a wrapper to execute the Thorntail Uber JAR file. First, we need to import the image from the Red Hat Registry:

oc import-image java:8 --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift --confirm

Next, create a new application pointing to a repository which contains a Thorntail application:

oc new-app --name thorntail-java8 'java:8~https://github.com/fmarchioni/mastertheboss' --context-dir='thorntail/thorntail-rest-basic'

Finish by exposing the Service with a Route:

oc expose svc/thorntail-java8
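
At this point you can retrieve the Route host and test the application with curl. The REST path depends on the sample project, so treat the path below as a placeholder:

$ THORNTAIL_HOST=$(oc get route thorntail-java8 -o jsonpath='{.spec.host}')
$ curl http://$THORNTAIL_HOST/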

Option #2 : Use Fabric8 Maven plugin

The second option can be applied directly from your Thorntail project, by including the Fabric8 Maven plugin in a profile. Here’s the pom.xml of our Thorntail application:

<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <properties>
    <version.thorntail>2.5.0.Final</version.thorntail>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>bom-all</artifactId>
        <version>${version.thorntail}</version>
        <scope>import</scope>
        <type>pom</type>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <groupId>com.mastertheboss.jaxrs</groupId>
  <artifactId>thorntail-rest-basic</artifactId>
  <packaging>war</packaging>
  <version>1.0.0</version>
  <name>Demo REST Service</name>
  <url>http://www.mastertheboss.com</url>

  <dependencies>

    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>jaxrs</artifactId>
    </dependency>
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>cdi</artifactId>
    </dependency>

  </dependencies>

  <build>
    <finalName>demo</finalName>
    <plugins>
      <plugin>
        <groupId>io.thorntail</groupId>
        <artifactId>thorntail-maven-plugin</artifactId>
        <version>${version.thorntail}</version>

        <executions>
          <execution>
            <goals>
              <goal>package</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <profiles>

    <profile>
      <id>openshift</id>
      <build>
        <plugins>
          <plugin>
            <groupId>io.fabric8</groupId>
            <artifactId>fabric8-maven-plugin</artifactId>
            <version>3.5.40</version>
            <executions>
              <execution>
                <goals>
                  <goal>resource</goal>
                  <goal>build</goal>
                </goals>
              </execution>
            </executions>
            <configuration>
              <generator>
                <includes>
                  <include>thorntail-v2</include>
                </includes>
                <excludes>
                  <exclude>webapp</exclude>
                </excludes>
              </generator>
              <enricher>
                <config>
                  <thorntail-v2-health-check>
                    <path>/</path>
                  </thorntail-v2-health-check>
                </config>
              </enricher>
              <resources>
                <env>
                  <AB_OFF>true</AB_OFF>
                  <JAVA_OPTIONS>-Djava.net.preferIPv4Stack=true</JAVA_OPTIONS>
                </env>
              </resources>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>

Deploy the application on Openshift as follows:

mvn fabric8:deploy -Popenshift -DskipTests=true
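
Once the build triggered by the plugin completes, you can verify that the Pods are running and check whether a Route was created for the generated Service; if not, expose the Service with oc expose as in Option #1:

$ oc get pods
$ oc get routes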

And here is our application deployed on Openshift using both strategies:

Enjoy Thorntail on Openshift!

Source code for this example: https://github.com/fmarchioni/mastertheboss/tree/master/thorntail/thorntail-rest-basic

How to deploy an application on Openshift using a Binary Build

In this short tutorial we will see how to deploy an Enterprise application on Openshift (the Kitchensink demo) using a Binary Build, therefore having as input just the local WAR file.

In most cases you create applications on Openshift using the S2I (Source to Image) process, with a remote GitHub project as input for your Templates. However, you can also build and deploy your application from a local folder. As an example, download the kitchensink project and save it locally, then build it so that you end up with a WAR file in the target directory.

$ cd kitchensink
$ mvn clean install

Now let’s make a folder called deployments and copy the kitchensink.war there:

$ mkdir deployments
$ cp target/kitchensink.war deployments/ROOT.war

Notice we have also renamed the WAR file to ROOT.war so that the application will be deployed on the root context. Next I’m going to log into OpenShift and set up a project to build and run my app in. We assume you are running on Minishift (check this article to learn how to install Minishift: Getting started with Openshift using OKD).

$ oc login https://192.168.42.253:8443 -u developer

In order to deploy our application we will need the ImageStream for WildFly. This should be available in the openshift namespace:

oc get is -n openshift | grep wildfly
wildfly      172.30.1.1:5000/openshift/wildfly      8.1,9.0,latest + 5 more...   46 hours ago

Great. Now I can take that ImageStream name and plug it into a new build. I’m going to give the build a name of kitchensink:

$ oc new-build wildfly --name=kitchensink --binary=true
--> Found image af69006 (8 days old) in image stream "openshift/wildfly" under tag "13.0" for "wildfly"

    WildFly 13.0.0.Final 
    -------------------- 
    Platform for building and running JEE applications on WildFly 13.0.0.Final

    Tags: builder, wildfly, wildfly13

    * A source build using binary input will be created
      * The resulting image will be pushed to image stream tag "kitchensink:latest"
      * A binary build was created, use 'start-build --from-dir' to trigger a new build

--> Creating resources with label build=kitchensink ...
    imagestream.image.openshift.io "kitchensink" created
    buildconfig.build.openshift.io "kitchensink" created
--> Success

Now OpenShift just created for me an ImageStream and a BuildConfig object, both named kitchensink. The ImageStream will track the new images that get built as part of this new build process, and the BuildConfig contains all the instructions that tell OpenShift how to build my app.
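
If you are curious about what OpenShift has generated, you can inspect both objects before starting the build:

$ oc describe is kitchensink
$ oc describe bc kitchensink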

Now I can kick off my build by pointing the oc client at my local project directory. The command to do that is oc start-build with the --from-dir option. In my case I will use kitchensink as the build name and the current directory (where deployments contains the WAR file) as input. Also, I will follow the status of the build: to do that, I’ll add --follow=true (follow the logs of the build) and --wait=true (wait until the build completes before returning an exit code).

$ oc start-build kitchensink --from-dir=. --follow=true --wait=true 

Uploading directory "." as binary input for the build ...
Pushed 0/13 layers, 8% complete Pushed 1/13 layers, 31% complete Pushed 2/13 layers, 31% complete Pushed 3/13 layers, 31% complete Pushed 4/13 layers, 31% complete Pushed 5/13 layers, 38% complete Pushed 6/13 layers, 46% complete Pushed 7/13 layers, 54% complete Push successful

Now that I have my application image built, I can deploy it. This is very simple. I just run the oc new-app command and specify my ImageStream, kitchensink.

$ oc new-app kitchensink
--> Found image 275a79a (About a minute old) in image stream "myproject/kitchensink" under tag "latest" for "kitchensink"

    temp.builder.openshift.io/myproject/kitchensink-1:f4dfeb6d 
    ---------------------------------------------------------- 
    Platform for building and running JEE applications on WildFly 13.0.0.Final

    Tags: builder, wildfly, wildfly13

    * This image will be deployed in deployment config "kitchensink"
    * Port 8080/tcp will be load balanced by service "kitchensink"
      * Other containers can access this service through the hostname "kitchensink"

--> Creating resources ...
    deploymentconfig.apps.openshift.io "kitchensink" created
    service "kitchensink" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/kitchensink' 
    Run 'oc status' to view your app.

The application has been created. Let’s check that Pods are available:

$ oc get pods
NAME                   READY     STATUS      RESTARTS   AGE
kitchensink-1-85sq2    1/1       Running     0          2s
kitchensink-1-build    0/1       Completed   0          1m
kitchensink-1-deploy   1/1       Running     0          4s

In order to test it, let’s expose the Service to create a new Route:

$ oc expose svc/kitchensink
route.route.openshift.io/kitchensink exposed

Our Kitchensink Application is now available:

You can check the list of Members with the following GET request:

$ curl kitchensink-myproject.192.168.42.253.nip.io/rest/members
[{"id":0,"name":"John Smith","email":"john.smith@mailinator.com","phoneNumber":"2125551212"}]

Cool! Why not add a new Member with a POST?

curl -d '{"name":"john", "email":"john@gmail.com", "phoneNumber":"1234567890"}' -H "Content-Type: application/json" -X POST kitchensink-myproject.192.168.42.253.nip.io/rest/members

And here’s the new list of Members:

$ curl kitchensink-myproject.192.168.42.253.nip.io/rest/members
[{"id":0,"name":"John Smith","email":"john.smith@mailinator.com","phoneNumber":"2125551212"},{"id":1,"name":"john","email":"john@gmail.com","phoneNumber":"1234567890"}]

Great! We just managed to deploy an application on a WildFly container image running on Openshift, having as input just the WAR file.