Do you want to learn how to quickly start a WildFly cluster running on OpenShift using the WildFly Operator? Then keep reading the rest of this article!
First of all, what is an Operator? In essence, an Operator is a standard method of packaging, deploying and managing a Kubernetes application. With OpenShift 4, everything is deployed as an Operator. And to keep things simple, Red Hat has integrated the Kubernetes Operator Hub into OpenShift 4. I encourage you to go have a look at the Operator Hub, which is available at: https://operatorhub.io/
As you can see, the Operator Hub provides installation shortcuts for every application listed in the Hub.
Requirements to install Operators
In order to install the Operator, you need to make sure the kubectl binary is installed on your machine. You can download the latest release with the command:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
Then, make the kubectl binary executable.
$ chmod +x ./kubectl
Move the binary into your PATH:
$ sudo mv ./kubectl /usr/local/bin/kubectl
And finally test to ensure the version you installed is up-to-date:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Installing WildFly Operator
We will follow the instructions available at https://operatorhub.io/operator/wildfly
First, install the Operator Lifecycle Manager (OLM), a tool that helps manage the Operators running on your cluster:
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.10.0/install.sh | bash -s 0.10.0
Then, we can install WildFly Operator itself by running kubectl against WildFly’s Operator YAML file:
$ kubectl create -f https://operatorhub.io/install/wildfly.yaml
subscription.operators.coreos.com/my-wildfly created
After installing, verify that the Operator is listed in your resources with:
$ kubectl get csv -n operators
NAME                      DISPLAY   VERSION   REPLACES                  PHASE
wildfly-operator.v0.2.0   WildFly   0.2.0     wildfly-operator.v0.1.0   Succeeded
From now on, you have a Custom Resource Definition named WildFlyServer which can be used to deploy new instances of WildFly Application Server. At a minimum, you provide an application image to it, and it will run on top of WildFly:
apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: quickstart
spec:
  applicationImage: 'quay.io/jmesnil/wildfly-operator-quickstart:16.0'
  size: 1
Notice the applicationImage parameter, which references a Docker image, and size, which is the number of Pods that will be started with that image.
Let's now look at a more complex example, which involves a custom WildFly configuration loaded by the Operator.
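To try the minimal resource shown above, you can save it to a file and create it with kubectl. The sketch below writes the example manifest to a quickstart.yaml file (the file name is just a suggestion); the kubectl step is left commented out since it requires a running cluster:

```shell
# Write the minimal WildFlyServer resource from above to a file.
cat > quickstart.yaml <<'EOF'
apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: quickstart
spec:
  applicationImage: 'quay.io/jmesnil/wildfly-operator-quickstart:16.0'
  size: 1
EOF

# Create it on the cluster (uncomment when connected to a cluster):
# kubectl create -f quickstart.yaml
```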
Clustering WildFly using the Operator
To install a WildFly cluster, we will need to provide the HA XML configuration as a ConfigMap that is accessible by the Operator. Let's see how to do it. First of all, some permissions are required: grant the view role to the default service account of your project with:
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
role "view" added: "system:serviceaccount:myproject:default"
Then, we will use the example configuration from GitHub: https://github.com/wildfly/wildfly-operator/tree/master/examples/clustering
As mentioned, the standalone XML file must be placed in a ConfigMap that is available to the Operator. The standaloneConfigMap element must provide the name of this ConfigMap as well as the key corresponding to the name of the standalone XML file.
Pick up the standalone-openshift.xml file from the config folder and create a ConfigMap with:
$ kubectl create configmap clusterbench-config-map --from-file standalone-openshift.xml
configmap/clusterbench-config-map created
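For completeness, the imperative command above is roughly equivalent to the following declarative manifest (a sketch: the real data value is the full contents of standalone-openshift.xml). Note how --from-file turns the file name into the data key, which must match the standaloneConfigMap key used later:

```shell
# Sketch of the ConfigMap that the command above creates; the XML
# placeholder stands in for the full HA standalone configuration.
cat > clusterbench-config-map.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: clusterbench-config-map
data:
  standalone-openshift.xml: |
    <!-- full contents of standalone-openshift.xml go here -->
EOF

# Apply it instead of using `kubectl create configmap`:
# kubectl apply -f clusterbench-config-map.yaml
```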
Now, we will create the Custom Resource, which is available in the crds folder:
$ kubectl apply -f clusterbench.yaml
wildflyserver.wildfly.org/clusterbench created
Great! The cluster has been created. As you can see from the resource definition, the clusterbench.yaml file will start 2 Pods in a cluster:
apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: clusterbench
spec:
  applicationImage: "quay.io/jmesnil/clusterbench-ee7:17.0"
  size: 2
  standaloneConfigMap:
    name: clusterbench-config-map
    key: standalone-openshift.xml
This is verified by:
$ oc get pods
NAME             READY   STATUS    RESTARTS   AGE
clusterbench-0   1/1     Running   0          3m
clusterbench-1   1/1     Running   0          2m
A service named clusterbench-loadbalancer has been created as well:
$ oc get services
NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP                   PORT(S)          AGE
clusterbench-loadbalancer   LoadBalancer   172.30.26.204   172.29.128.16,172.29.128.16   8080:31886/TCP   3d
In order to test the application, expose the service clusterbench-loadbalancer with a Route:
$ oc expose svc/clusterbench-loadbalancer
route.route.openshift.io/clusterbench-loadbalancer exposed
Now let's try to access the application in the browser at:
http://clusterbench-loadbalancer-myproject.192.168.42.215.nip.io/clusterbench/session
You will see that the page counter keeps incrementing each time you refresh the page.
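Refreshing by hand can also be scripted. The sketch below exercises the session counter with curl, reusing a cookie jar so every request hits the same HTTP session; the route host is the example value from above, so replace it with your own, and uncomment the loop when connected to the cluster:

```shell
# Route host from the example above; replace with your own route host.
ROUTE=clusterbench-loadbalancer-myproject.192.168.42.215.nip.io

# -c/-b share a cookie jar so each request reuses the same HTTP session,
# which makes the counter increment on every call:
# for i in 1 2 3; do
#   curl -s -c cookies.txt -b cookies.txt "http://$ROUTE/clusterbench/session"
# done
```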
Now let's scale down the Pods by editing the Operator configuration:
kubectl edit wildflyserver clusterbench
Set the size parameter to 1 and save the changes in the editor. Next, check that the Pods have scaled down.
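If you prefer a non-interactive alternative to kubectl edit (useful in scripts), the resource can also be scaled with kubectl patch. This is a sketch: the helper function below only prints the patch command so you can review it before running it against your cluster; the function name is our own.

```shell
# Print the kubectl patch command that scales a WildFlyServer resource
# to the given number of Pods, without opening an editor.
scale_wildflyserver() {
  local name=$1 size=$2
  echo kubectl patch wildflyserver "$name" --type=merge -p "{\"spec\":{\"size\":$size}}"
}

# Review the printed command, then run it (or pipe it to sh):
scale_wildflyserver clusterbench 1
```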
Let's access the application in the browser again at:
http://clusterbench-loadbalancer-myproject.192.168.42.215.nip.io/clusterbench/session
As you can see, the session counter keeps increasing, even though we scaled down the number of Pods.
Great! We just managed to set up a basic WildFly cluster on OpenShift using the WildFly Operator. Replace the example image with your own application image to see your application running on a WildFly cluster.