Running any Docker image on Openshift

In this tutorial we will learn how to run a Docker image, built from a Dockerfile, on Openshift.

So our starting point will be a simple Dockerfile definition which pulls the default “WildFly” image and adds some customizations to it. In our case, it simply adds a management user:

FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin Password1! --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement","0.0.0.0"]

Of course you can replace this Dockerfile with any valid Dockerfile definition. Now start your Openshift cluster.

The next step is to create a Binary Build that will hold our image. You can do that with the “oc new-build” command:

$ oc new-build --binary --name=mywildfly -l app=mywildfly
    * A Docker build using binary input will be created
      * The resulting image will be pushed to image stream tag "mywildfly:latest"
      * A binary build was created, use 'start-build --from-dir' to trigger a new build

--> Creating resources with label app=mywildfly ...
    imagestream.image.openshift.io "mywildfly" created
    buildconfig.build.openshift.io "mywildfly" created
--> Success

Great. Now verify that the Binary Build has been correctly added:

$ oc get bc
NAME                TYPE      FROM      LATEST
mywildfly           Docker    Binary    0

As it is, the Binary Build does not contain any reference to a Dockerfile. We can patch its configuration so that the dockerfilePath parameter points to the location of our Dockerfile (in our case, the current folder):

$ oc patch bc/mywildfly -p '{"spec":{"strategy":{"dockerStrategy":{"dockerfilePath":"Dockerfile"}}}}'
buildconfig.build.openshift.io/mywildfly patched

Now start the Build:

$ oc start-build mywildfly --from-dir=. --follow
Uploading directory "." as binary input for the build ...

Uploading finished
build.build.openshift.io/mywildfly-1 started
Receiving source from STDIN as archive ...
Pulling image jboss/wildfly ...
Pulled 1/5 layers, 21% complete
Pulled 2/5 layers, 62% complete
Pulled 3/5 layers, 71% complete
Pulled 4/5 layers, 88% complete
Pulled 5/5 layers, 100% complete
Extracting
Step 1/5 : FROM jboss/wildfly
 ---> 5de2811bb236
Step 2/5 : RUN /opt/jboss/wildfly/bin/add-user.sh admin Password1! --silent
 ---> Running in 4c17b5d59a26
 ---> 63651cb80af2
Removing intermediate container 4c17b5d59a26
Step 3/5 : CMD /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
 ---> Running in 3f2043e97715
 ---> 2edcf0d3758a
Removing intermediate container 3f2043e97715
Step 4/5 : ENV "OPENSHIFT_BUILD_NAME" "mywildfly-1" "OPENSHIFT_BUILD_NAMESPACE" "myproject"
 ---> Running in 2e1ec757227a
 ---> 24974e156028
Removing intermediate container 2e1ec757227a
Step 5/5 : LABEL "io.openshift.build.name" "mywildfly-1" "io.openshift.build.namespace" "myproject"
 ---> Running in c948609a8a5e
 ---> 57af1ff0bb97
Removing intermediate container c948609a8a5e
Successfully built 57af1ff0bb97
Pushing image 172.30.1.1:5000/myproject/mywildfly:latest ...
Pushed 2/6 layers, 33% complete
Pushed 3/6 layers, 76% complete
Pushed 4/6 layers, 96% complete
Pushed 5/6 layers, 100% complete
Pushed 6/6 layers, 100% complete
Push successful

Well done, the image has been pushed to the internal registry and is available through an ImageStream. Check it with:

$ oc get is
NAME                DOCKER REPO                                   TAGS      UPDATED
mywildfly           172.30.1.1:5000/myproject/mywildfly           latest    12 seconds ago

Now that the ImageStream is available, let’s create an application in the current project which uses this ImageStream:

$ oc new-app --image-stream=mywildfly
--> Found image 57af1ff (48 seconds old) in image stream "myproject/mywildfly" under tag "latest" for "mywildfly"

    * This image will be deployed in deployment config "mywildfly"
    * Port 8080/tcp will be load balanced by service "mywildfly"
      * Other containers can access this service through the hostname "mywildfly"

--> Creating resources ...
    deploymentconfig.apps.openshift.io "mywildfly" created
    service "mywildfly" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/mywildfly' 
    Run 'oc status' to view your app.

We are almost done. We only need to expose the Route to external clients:

$ oc expose svc/mywildfly
route.route.openshift.io/mywildfly exposed

And verify the Route URL with:

$ oc get route
NAME                HOST/PORT                                         PATH      SERVICES            PORT       TERMINATION   WILDCARD
mywildfly           mywildfly-myproject.192.168.42.5.nip.io                     mywildfly           8080-tcp                 None
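Before moving on, you can also verify from the command line that the HTTP port responds by requesting the Route host shown above (the hostname will differ in your environment); the command should return the WildFly welcome page:

$ curl http://mywildfly-myproject.192.168.42.5.nip.io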

As you can see from the Web Console, the application is available:

As a last step, we will verify that we can actually log in with the management user that has been included in the image. For this purpose we need to set up port forwarding to reach port 9990 in the Pod where the application is running. Let’s check the Pod list:

$ oc get pods
NAME                        READY     STATUS      RESTARTS   AGE
mywildfly-1-build           0/1       Completed   0          6m
mywildfly-1-mzkmf           1/1       Running     0          4m

Now we will forward port 9990 of the Pod to the same port on localhost:

$ oc port-forward mywildfly-1-mzkmf 9990:9990
Forwarding from 127.0.0.1:9990 -> 9990
Forwarding from [::1]:9990 -> 9990

Try connecting from a local installation of WildFly; you will be prompted to enter the username and password:

$ cd wildfly-16.0.0.Final/bin
$ ./jboss-cli.sh -c
Authenticating against security realm: ManagementRealm
Username: admin
Password: 
[standalone@localhost:9990 /] 

That’s all. In this tutorial we have learned how to run a Docker image, created from a Dockerfile, in your own OpenShift cluster.

Running MicroProfile applications on Openshift

In this tutorial we will learn how to deploy a Thorntail application on Openshift. This is the quickest path to leveraging MicroProfile applications in a PaaS.

Thorntail is the new name for WildFly Swarm, and contains everything you need to develop and run MicroProfile applications, by packaging the server runtime libraries with your application code and running it as an Uber JAR.

In order to deploy a Thorntail application we need a Java runtime that can run the JAR file produced by the Thorntail Maven plugin. For this purpose we will use the Red Hat Java S2I builder image.

Red Hat Java S2I for OpenShift is a Source-to-Image (S2I) builder image designed for use with OpenShift. It allows users to build and run plain Java applications (fat-jar and flat classpath) within a containerized image on OpenShift.

Let’s start by creating a new Openshift project named “microprofile-demo”:

oc new-project "microprofile-demo"

Now let’s import the Java version 8 image from Red Hat’s Repository:

oc import-image java:8 --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift --confirm

Now let’s use one of the Thorntail examples available on GitHub. For example, this is my fork of Thorntail’s examples:

oc new-app --name rest-demo 'java:8~https://github.com/fmarchioni/thorntail-examples' --context-dir='jaxrs/jaxrs-cdi'

As you can see from the output, the following resources will be created:

--> Found image b4b953c (2 weeks old) in image stream "microprofile-demo/java" under tag "8" for "java:8"

    Java Applications 
    ----------------- 
    Platform for building and running plain Java applications (fat-jar and flat classpath)

    Tags: builder, java

    * A source build using source code from https://github.com/fmarchioni/thorntail-examples will be created
      * The resulting image will be pushed to image stream tag "rest-demo:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "rest-demo"
    * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service "rest-demo"
      * Other containers can access this service through the hostname "rest-demo"

--> Creating resources ...
    imagestream.image.openshift.io "rest-demo" created
    buildconfig.build.openshift.io "rest-demo" created
    deploymentconfig.apps.openshift.io "rest-demo" created
    service "rest-demo" created
--> Success
    Build scheduled, use 'oc logs -f bc/rest-demo' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/rest-demo' 
    Run 'oc status' to view your app.

Great! Now we only need to expose the Service through a Route so that we can test it:

$  oc expose svc/rest-demo

Check that the Pod has started:
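For example, you can list the Pods with the oc client and wait until the application Pod reports a Running status (the build Pod will show Completed once the S2I build is done; Pod names will differ in your environment):

$ oc get pods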

Now we can test the application, as discussed in the README.md of the project:

curl http://rest-demo-microprofile-demo.192.168.42.103.nip.io/employees |jq
 
[
  {
    "id": 1,
    "name": "emp01"
  },
  {
    "id": 2,
    "name": "emp02"
  }
]

Basically, this application showed how to create a REST Service in a MicroProfile application. As you can see, running MicroProfile applications on Openshift is pretty simple, and you can combine their features with the standard Enterprise APIs that are available as Thorntail fractions.

Java EE example application on Openshift

In this tutorial we will learn how to deploy a Java EE application on WildFly Container image running on the top of OpenShift Container Platform using OKD.

First of all, you need an OpenShift Container Platform cluster available. We suggest having a look at the following article to learn how to install the Community version of OpenShift Container Platform:

Getting started with Openshift using OKD

Once your cluster is up and running, log in to the OpenShift Container Platform Web console:

You will be redirected to the Browse Catalog screen:

Create a Project by clicking on the button on the right of the screen and enter a Project Name:

From there, select the “WildFly” Container Image. Click “Next” to move from the Information screen to the Configuration screen:

As you can see, in the above screen, we have set the following options:

  • Add to Project: The project in which the WildFly SourceToImage Container will be created
  • Version: The Version of the application server (Select the latest available version)
  • Application Name: This is the name of the OpenShift Container Platform application. The application name must be unique within each Project
  • Git Repository: This is the Git repository containing the Maven project that will be used to build the application to be deployed on WildFly. For the purpose of our example we will use this sample application: https://github.com/fmarchioni/openshift-jee-sample

Click on Create.
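If you prefer working from the command line, a roughly equivalent application can be created with the oc client (a sketch, assuming the wildfly ImageStream is available in your cluster and using the sample repository above):

$ oc new-app wildfly~https://github.com/fmarchioni/openshift-jee-sample --name=jee-demo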

In a few seconds, the build process will start and the application will be deployed on a Pod. A Route will be created so that the Service can be accessed from outside:

Now, click on the Route Link (in our example http://jee-demo-demo-wildfly.192.168.42.173.nip.io/index.xhtml) to verify that the application is running:

Customizing the database

The example application we have deployed on OpenShift Container Platform uses, by default, the H2 Database that is included in the WildFly distribution. We can verify this by looking at the persistence.xml file (https://github.com/fmarchioni/openshift-jee-sample/blob/master/src/main/resources/META-INF/persistence.xml):

<persistence version="2.0"
   xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="
        http://java.sun.com/xml/ns/persistence
        http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
   <persistence-unit name="primary">
     
      <jta-data-source>${env.DATASOURCE:java:jboss/datasources/ExampleDS}</jta-data-source>
  
      <properties>
         <!-- Properties for Hibernate -->
         <property name="hibernate.hbm2ddl.auto" value="create-drop" />
         <property name="hibernate.show_sql" value="false" />
        
      </properties>
   </persistence-unit>
</persistence>

The jta-data-source element contains an environment variable expression with a default value (the ExampleDS Datasource) that is used in case the DATASOURCE environment variable is not defined.

We will now add a PostgreSQL Database and configure the application to use it. The first step is enabling the PostgreSQL Datasource in WildFly. Let’s check the configuration of our server for a moment by starting a Terminal session on the “jee-demo” Pod.

As an alternative, you can reach the “jee-demo” Pod directly from the shell, using the ‘oc rsh’ command, as shown in this transcript:

$ oc project demo-wildfly
Now using project "demo-wildfly" on server "https://192.168.42.173:8443".

$ oc get pods
NAME               READY     STATUS      RESTARTS   AGE
jee-demo-1-build   0/1       Completed   0          15m
jee-demo-2-build   0/1       Completed   0          6m
jee-demo-2-fmj6n   1/1       Running     0          6m

$ oc rsh jee-demo-2-fmj6n

sh-4.2$ vi /wildfly/standalone/configuration/standalone.xml

As you can see, the WildFly template already contains 3 datasource definitions. The ExampleDS is, however, the only one which is enabled:

<datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url>
    <driver>h2</driver>
    <security>
        <user-name>sa</user-name>
        <password>sa</password>
    </security>
</datasource>
<datasource jndi-name="java:jboss/datasources/${env.OPENSHIFT_MYSQL_DATASOURCE}" enabled="false" use-java-context="true" pool-name="${env.OPENSHIFT_MYSQL_DATASOURCE}" use-ccm="true">
    <connection-url>jdbc:mysql://${env.OPENSHIFT_MYSQL_DB_HOST}:${env.OPENSHIFT_MYSQL_DB_PORT}/${env.OPENSHIFT_MYSQL_DB_NAME}</connection-url>
    <driver>mysql</driver>
    <security>
      <user-name>${env.OPENSHIFT_MYSQL_DB_USERNAME}</user-name>
      <password>${env.OPENSHIFT_MYSQL_DB_PASSWORD}</password>
    </security>
    <validation>
        <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
        <background-validation>true</background-validation>
        <background-validation-millis>60000</background-validation-millis>
        <!--<validate-on-match>true</validate-on-match>-->
    </validation>
    <pool>
        <flush-strategy>IdleConnections</flush-strategy>
    </pool>
</datasource>
<datasource jndi-name="java:jboss/datasources/${env.OPENSHIFT_POSTGRESQL_DATASOURCE}" enabled="false" use-java-context="true" pool-name="${env.OPENSHIFT_POSTGRESQL_DATASOURCE}" use-ccm="true">
    <connection-url>jdbc:postgresql://${env.OPENSHIFT_POSTGRESQL_DB_HOST}:${env.OPENSHIFT_POSTGRESQL_DB_PORT}/${env.OPENSHIFT_POSTGRESQL_DB_NAME}</connection-url>
    <driver>postgresql</driver>
    <security>
      <user-name>${env.OPENSHIFT_POSTGRESQL_DB_USERNAME}</user-name>
      <password>${env.OPENSHIFT_POSTGRESQL_DB_PASSWORD}</password>
    </security>
    <validation>
        <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
        <background-validation>true</background-validation>
        <background-validation-millis>60000</background-validation-millis>
        <!--<validate-on-match>true</validate-on-match>-->
    </validation>
    <pool>
        <flush-strategy>IdleConnections</flush-strategy>
    </pool>
</datasource>

In order to enable one of the available Datasources we need to perform some simple steps:

1) Add to our OpenShift Container Platform Project the Database (PostgreSQL or MySQL)

2) Configure some environment variables so that the WildFly application can use the Database to configure the Datasource

Let’s start by adding PostgreSQL Container image to our Project. Click on “Add to Project” -> “Browse Catalog” and select PostgreSQL:

In the PostgreSQL configuration screen, enter “postgres” as the Username, Password, and Database Name. Click on “Create”.
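As an alternative to the catalog, the same database can be created from the command line, for example by instantiating the postgresql image with the same credentials (a minimal sketch; it assumes a postgresql ImageStream or image is resolvable by oc new-app):

$ oc new-app postgresql -e POSTGRESQL_USER=postgres -e POSTGRESQL_PASSWORD=postgres -e POSTGRESQL_DATABASE=postgres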

In a few seconds the Pod will be started:

We will now activate the PostgreSQL Datasource in WildFly by setting the following environment variables in the “jee-demo” Deployment. Click on “Deployments” > “jee-demo” and select the Environment tab. Add the following variables:

- name: POSTGRESQL_USER
  value: postgres
- name: POSTGRESQL_PASSWORD
  value: postgres
- name: POSTGRESQL_DATABASE
  value: postgres 

Here is your Deployment Config:

Click on “Save”. The Deployment will be regenerated. Now, if you log in to the jee-demo Pod again, you will see that the PostgreSQL datasource has been enabled:

<datasource jndi-name="java:jboss/datasources/PostgreSQLDS" enabled="true" use-java-context="true" pool-name="PostgreSQLDS" use-ccm="true">
    <connection-url>jdbc:postgresql://172.30.159.224:5432/sampledb</connection-url>
    <driver>postgresql</driver>
    <security>
      <user-name>postgres</user-name>
      <password>postgres</password>
    </security>
    <validation>
        <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
        <background-validation>true</background-validation>
        <background-validation-millis>60000</background-validation-millis>
        <!--<validate-on-match>true</validate-on-match>-->
    </validation>
    <pool>
        <flush-strategy>IdleConnections</flush-strategy>
    </pool>
</datasource>

Now, the only thing left to do is to tell our application to use the “java:jboss/datasources/PostgreSQLDS” datasource in the persistence.xml file. Since an environment variable is already available to customize the Datasource (in persistence.xml), all we have to do is set the DATASOURCE env variable of the jee-demo Deployment to “java:jboss/datasources/PostgreSQLDS”.
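If you prefer the CLI over the web console, the same variable can be set with oc set env (assuming the Deployment Config is named jee-demo, as in this example):

$ oc set env dc/jee-demo DATASOURCE=java:jboss/datasources/PostgreSQLDS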

Click on Save. The application will be regenerated. Now, if you look at the jee-demo Pod logs, you will see the following:

15:55:46,779 INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 67) WFLYJPA0010: Starting Persistence Unit (phase 2 of 2) Service 'ROOT.war#primary'

Now reach the application and insert some sample data. By logging into the postgresql Pod, you will see that the data has been stored in the database:

sh-4.2$ psql
psql (9.6.10)
Type "help" for help.

postgres=# \dt
             List of relations
 Schema |      Name      | Type  |  Owner
--------+----------------+-------+----------
 public | simpleproperty | table | postgres
(1 row)

postgres=# select * from simpleproperty;
  id  | value
------+--------
 key1 | value1
(1 row)

Conclusion

In this tutorial we have covered a basic Java EE application and its installation on the WildFly template running on top of OpenShift Container Platform. We have also learned how to enable and configure a Datasource to connect to another Service running in your Project.

How to customize WildFly applications on Openshift

In the tutorial Java EE example application on Openshift we deployed a sample Java EE application using a database on Openshift. We will now explore other possibilities, such as running CLI commands as part of the build process and including a set of custom modules, and finally we will learn how to add artifacts directly into the deployments folder of the application server.

Injecting CLI commands when Building WildFly Applications

If you need fine-grained control over the WildFly configuration on OpenShift, we recommend using the S2I WildFly server customization hooks (https://github.com/wildfly/wildfly-s2i), which include:

  • Wildfly configuration files from the `<application source>/<cfg|configuration>` are copied into the wildfly configuration directory.
  • Pre-built war files from the `<application source>/deployments` are moved into the wildfly deployment directory.
  • Wildfly modules from the `<application source>/modules` are copied into the wildfly modules directory.
  • WildFly CLI scripts can be executed by using the `install.sh` script.

In the first example, we will show how to execute a CLI script as part of the build process. This example (available here https://github.com/fmarchioni/mastertheboss/tree/master/openshift/config) uses the following project structure:

$ tree -a

├── extensions
│   ├── configuration.cli
│   └── install.sh
├── pom.xml
├── README.md
├── .s2i
│   └── environment
└── src
    └── main
        └── webapp
            └── index.jsp

  • The `extensions/configuration.cli` file contains the CLI commands to be executed.
  • The `extensions/install.sh` script launches the CLI using the utility function `run_cli_script` available in the `/usr/local/s2i/install-common.sh` script of the WildFly S2I builder image:
#!/usr/bin/env bash
injected_dir=$1
source /usr/local/s2i/install-common.sh

S2I_CLI_SCRIPT="${injected_dir}/configuration.cli"

run_cli_script "${S2I_CLI_SCRIPT}"
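For reference, the configuration.cli script can contain plain CLI commands; for example, a single command that defines a system property (the property name below is hypothetical, only the value matches the sample page output shown later):

/system-property=sample.property:add(value=HelloWorld)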

Finally, in the `.s2i/environment` file we can set environment variables, such as the location of the extensions directory.
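A minimal sketch of that file, assuming your builder image honours the CUSTOM_INSTALL_DIRECTORIES variable (this is the variable used by the JBoss EAP S2I images; check the documentation of the WildFly S2I image you are using):

CUSTOM_INSTALL_DIRECTORIES=extensions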

You can build the application and expose the service as follows:

$ oc new-app --as-deployment-config wildfly:26.0~https://github.com/fmarchioni/mastertheboss.git --context-dir=openshift/config --name wildfly-config

$ oc expose service/wildfly-config

By requesting the Route Address, you will see the System Property which has been injected with the CLI:

$ curl wildfly-config-wildfly.apps-crc.testing

 <html>
  <body bgcolor=white>

  <table border="0" cellpadding="10">
    <tr>
      <td>
         <h1>WildFly - sample OpenShift configuration</h1>
      </td>
    </tr>
  </table>
  <br />
  <p>Prints System Property from configuration: HelloWorld</p>
  </body>
</html> 

Adding custom modules and deployments to WildFly on Openshift

Our second application, available on github at https://github.com/fmarchioni/mastertheboss/tree/master/openshift/module, will show how to add external modules and deployments during the Source to Image process.

As you can see from the project tree below, you can use the following folders:

  • modules: Place modules here just like you would in a bare-metal installation of WildFly. They will be automatically copied into the server's modules directory.
  • deployments: Place artifacts here that are to be deployed on the application server.

├── modules
│   └── com
│       └── itext
│           └── main
│               ├── itext-5.0.5.jar
│               └── module.xml
├── pom.xml
├── README.md
└── src
    └── main
        ├── java
        │   └── com
        │       └── mastertheboss
        │           └── CreatePDFExample.java
        └── webapp
            ├── index.jsp
            └── WEB-INF
                ├── jboss-deployment-structure.xml
                └── web.xml

The module.xml declares the itext JAR file as a resource of the com.itext module, which will be installed into the application server:

<module xmlns="urn:jboss:module:1.1" name="com.itext">

    <properties>
        <property name="jboss.api" value="private"/>
    </properties>

      <resources>
        <resource-root path="itext-5.0.5.jar"/> 
    </resources>

</module>

In order to link the module to the application, add a jboss-deployment-structure.xml file:

<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <module name="com.itext" />
    </dependencies>
  </deployment>
</jboss-deployment-structure>

To build the application on Openshift, specify the root folder as the Git Repository URL: https://github.com/fmarchioni/mastertheboss

Next, specify the subfolder “openshift/module” as the context directory:
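If you prefer the CLI over the web console, an equivalent build can be started with oc new-app, reusing the wildfly builder image shown earlier (the application name below is arbitrary):

$ oc new-app wildfly~https://github.com/fmarchioni/mastertheboss --context-dir=openshift/module --name=demo-module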

Here is the application in action:

That’s all. You can check the rest of the source code on Github: https://github.com/fmarchioni/mastertheboss/tree/master/openshift/module

Openshift Interview questions

Do you want to test your Openshift / Kubernetes knowledge? Try our “Openshift interview questions”!

1) What are the three main Docker components? 

a) Docker Hub, Docker Image, Docker Registry
b) Docker Runtime, Docker Image, Docker Hub
c) Docker Container, Docker Image, Docker Hub

2) Choose two valid Openshift registry types:

a) Personal Registry
b) Private Registry
c) Public Registry
d) Security Registry

3) Docker by default ships with a Persistent Storage:
a) True
b) False

4) What Linux framework can be used to control resource limitations (e.g. memory, CPUs or disk I/O throughput) for a Docker container?

a) Namespaces
b) SELinux
c) Cgroups
d) chroot

5) What is the main downside of performing a Docker commit to your image?

a) You cannot save the image to an upstream repository
b) If you don’t keep track of changes in a Dockerfile, you might lose changes
c) The image will grow excessively in size
d) It makes it difficult to distribute updates after a change

6) What are the two core object types used by Kubernetes?

a) Box
b) Node
c) Group
d) Master

7) On which platforms can Kubernetes run?

a) A laptop
b) A Cloud provider
c) Bare metal servers
d) All of them

8) Which tool can you use to run Kubernetes locally as a single-node cluster?

a) Minikube
b) Minishift
c) A Deployment Unit
d) A cluster replica

9) What are the main two Kubernetes services running on a Kubernetes Node? (choose two)

a) etcd
b) kubelet
c) kube-proxy
d) kube-node
e) kube-apiserver

10) What file formats can be used for creating Kubernetes resources with the kubectl create -f command? (choose two)

a) JSON
b) Javascript
c) CSV
d) YANG
e) YAML

11) Which Openshift component is the equivalent of Kubernetes Ingress:

a) Node
b) Pod
c) Router
d) Proxy

12) In Kubernetes you can use  ImageStreams for managing container images.

a) True
b) False

13) OpenShift has stricter security policies than default Kubernetes

a) True
b) False

14) What is CRI-O ?

a) An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
b) An abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster
c) A Component on the master that exposes the Kubernetes API.
d) It is a lightweight alternative to using Docker as the runtime for Kubernetes.

15) What container technologies are supported by CRI-O? (choose 2):

a) Docker
b) Minishift
c) Rkt
d) Minikube

16) Which two statements are true about CRI-O? choose 2:
a) CRI-O talks directly to Linux Kernel
b) CRI-O talks directly to Container Runtime
c) CRI-O is OCI-compliant
d) CRI-O is the only Container Runtime Interface available for Kubernetes

17) Which type of cloud platform is OpenShift?
a) IaaS
b) PaaS
c) MaaS
d) SaaS

18) You have to run a pre-built local Openshift cluster using its Community version. You will need:
a) Minikube for Kubernetes
b) OpenShift Enterprise
c) Minishift for OKD  
d) CDK for OpenShift Container Platform

19) You have to run a pre-built local openshift cluster with the latest version of OpenShift Container Platform. You will need:
a) Minikube for Kubernetes
b) OpenShift Dedicated
c) Minishift for OKD  
d) CDK for OpenShift Container Platform

20) You are running Amazon Web Services and you want to orchestrate them using high-availability Kubernetes clusters. You will need:
a) Openshift dedicated  
b) Openshift Online
c) Openshift Container Engine
d) Minishift for OKD  

21) Kubernetes Ingress and Openshift Routes are quite similar. However, you have to use Openshift Routes if you need:

a) External access to services
b) Load-balancing strategies (e.g. round robin)
c) Persistent (sticky) sessions
d) TLS passthrough for improved security

22) Which of the following is NOT a new feature added by OpenShift in comparison to Kubernetes? choose one:

a) SCM integration
b) GUI and web console
c) Multi-tenancy
d) Persistent storage

23) Which components are unique to OpenShift in comparison to Kubernetes? (choose two)

a) Router as an ingress traffic control
b) Internal Registry
c) Master
d) Node

24) Both Openshift Container Engine and Openshift Container Platform are built on the same enterprise Kubernetes core platform and contain crucial Linux, container runtime, networking, management and security capabilities.

a) true
b) false

25) Which of the following features are unique to Openshift Container Platform, compared with Openshift Container Engine? (choose two):
 
a) Automated Container builds, built-in CI/CD pipeline and application console
b) Enterprise support
c) Can run monitoring solutions like Prometheus
d) Can run advanced networking like Multi-tenant SDN

26) What is the main prerequisite for the oc cluster up solution? (Choose one):
a) Docker
b) Minishift
c) Virtualbox
d) Hyper-V

27) Which tool can be used to diagnose issues in your Kubernetes/CRI-O daemons?

a) Podman
b) Buildah
c) Docker CLI
d) Crictl

28) Which tool can be used to manage pods and containers without requiring a container daemon?

a) Podman
b) Buildah
c) Docker CLI
d) Crictl

29) You are upgrading from Openshift 3.5 to Openshift 3.11. Which of the following components will need to be updated?

a) Maria DB 10.1
b) Redis 3.2
c) etcd 3.0
d) Docker 1.12

30) Which of the following OpenShift storage plugins supports the ReadWriteMany access mode? choose two:

a) GlusterFS
b) NFS
c) Openstack Cinder
d) Azure Disk

31) A Persistent Volume (PV) object can be claimed by which project(s)?

a) default
b) openshift
c) any project
d) openshift-infra

32) Which network plugin is used to provide connectivity for pods across the entire cluster with no limitations?

a) ovs-subnet
b) ovs-multitenant
c) ovs-networkpolicy
d) ovs-net

33) You have to design an OpenShift multi-DC HA architecture for 3 Data Centers. Your options are (A) one OpenShift cluster per Data Center or (B) a single Openshift cluster spanning all Data Centers. Which option provides better scalability and DC redundancy?

a) A
b) B
 
34) You need a monitoring solution for your Openshift cluster, considering that you are primarily doing metrics and need a powerful query language, alerting, and notification functionality, plus higher availability and uptime for graphing and alerting. What is your first choice?

a) InfluxDB
b) OpenTSDB
c) Nagios
d) Prometheus  

35) You need a monitoring solution for your Openshift cluster, considering that you need long-term persistence of data and an eventually consistent view of data between replicas. What is your first choice?

a) InfluxDB
b) OpenTSDB
c) Nagios
d) Prometheus

 

Answers:

1) What are the three main Docker components?
a) Docker Hub, Docker Image, Docker Registry

2) Choose two valid Openshift registry types:
b) Private Registry
c) Public Registry

3) Docker by default ships with a Persistent Storage:

b) False

4) What Linux framework can be used to control resource limitations (e.g. memory, CPUs or disk I/O throughput) for a Docker container?

c) Cgroups

5) What is the main downside of performing a Docker commit to your image?

b) If you don’t keep track of changes in a Dockerfile, you might lose changes

6) What are the two core object types used by Kubernetes?

b) Node
d) Master

7) On which platforms can Kubernetes run?

d) All of them

8) Which tool can you use to run Kubernetes locally as a single-node cluster?

a) Minikube

9) What are the main two Kubernetes services running on a Kubernetes Node? (choose two)

b) kubelet
c) kube-proxy

10) What file formats can be used for creating Kubernetes resources with the kubectl create -f command? (choose two):

a) JSON
e) YAML

11) Which Openshift component is the equivalent of Kubernetes Ingress:

c) Router

12) In Kubernetes you can use  ImageStreams for managing container images.

b) False

13) OpenShift has stricter security policies than default Kubernetes

a) True

14) What is CRI-O ?

d) It is a lightweight alternative to using Docker as the runtime for Kubernetes.

15) What container technologies are supported by CRI-O? (choose 2):

a) Docker
c) Rkt

16) Which two statements are true about CRI-O? choose 2:
b) CRI-O talks directly to Container Runtime
c) CRI-O is OCI-compliant

17) Which type of cloud platform is OpenShift?
b) PaaS

18) You have to run a pre-built local Openshift cluster using its Community version. You will need:
c) Minishift for OKD  

19) You have to run a pre-built local openshift cluster with the latest version of OpenShift Container Platform. You will need:
d) CDK for OpenShift Container Platform

20) You are running Amazon Web Services and you want to orchestrate them using high-availability Kubernetes clusters. You will need:
a) Openshift dedicated  

21) Kubernetes Ingress and Openshift Routes are quite similar. However, you have to use Openshift Routes if you need:

d) TLS passthrough for improved security

22) Which of the following is NOT a new feature added by OpenShift in comparison to Kubernetes? choose one:

d) Persistent storage

23) Which components are unique to OpenShift in comparison to Kubernetes? 

a) Router as an ingress traffic control
b) Internal Registry

24) Both Openshift Container Engine and Openshift Container Platform are built on the same enterprise Kubernetes core platform and contain crucial Linux, container runtime, networking, management and security capabilities.

a) true

25) Which of the following features are unique to Openshift Container Platform, compared with Openshift Container Engine? (choose two):
 
a) Automated Container builds, built-in CI/CD pipeline and application console
d) Can run advanced networking like Multi-tenant SDN

26) What is the main prerequisite for the oc cluster up solution? (Choose one):
a) Docker

27) Which tool can be used to diagnose issues in your Kubernetes/CRI-O daemons?

d) Crictl

28) Which tool can be used to manage pods and containers without requiring a container daemon?

a) Podman

29) You are upgrading from Openshift 3.5 to Openshift 3.11. Which of the following components will need to be updated?

c) etcd 3.0
d) Docker 1.12

30) Which of the following OpenShift storage plugins supports the ReadWriteMany access mode? choose two:

a) GlusterFS
b) NFS

31) A Persistent Volume (PV) object can be claimed by which project(s)?

c) any project

32) Which network plugin is used to provide connectivity for pods across the entire cluster with no limitations?

a) ovs-subnet
 
33) You have to design an OpenShift multi-DC HA architecture for 3 Data Centers. Your options are (A) one OpenShift cluster per Data Center or (B) a single Openshift cluster spanning all Data Centers. Which option provides better scalability and DC redundancy?

a) A

34) You need a monitoring solution for your Openshift cluster, considering that you are primarily doing metrics and need a powerful query language, alerting, and notification functionality, plus higher availability and uptime for graphing and alerting. What is your first choice?

d) Prometheus  

35) You need a monitoring solution for your Openshift cluster, considering that you need long-term persistence of data and an eventually consistent view of data between replicas. What is your first choice?
a) InfluxDB

Configuring Persistent Storage on Openshift

This tutorial will introduce you to configuring storage on Openshift and using it to build stateful applications.

By default, OpenShift/Kubernetes containers don’t store data persistently. In practice, when you start a container from an immutable Docker image, Openshift uses ephemeral storage, that is to say all data created by the container is lost when you stop or restart the container. This approach works mostly for stateless scenarios; however, applications like a database or a messaging system need persistent storage that is able to survive a container crash or restart.

In order to do that, you need an object called PersistentVolume (PV), which is a storage resource in an OpenShift cluster that is made available to developers via Persistent Volume Claims (PVC).

A Persistent Volume is shared across the OpenShift cluster, since any of them can potentially be used by any project. A Persistent Volume Claim, on the other hand, is a kind of resource which is specific to a project (namespace).

Under the hood, when you create a PVC, OpenShift tries to find a matching PV based on the size requirements and access mode (RWO, ROX, RWX). If one is found, the PV is bound to the PVC and can no longer be bound to other claims.

Persistent Volumes and Persistent Volume Claims

So, to summarize: a PersistentVolume is an OpenShift API object which describes an existing storage infrastructure (NFS share, GlusterFS volume, iSCSI target, etc.). A Persistent Volume Claim represents a request made by an end user which consumes PV resources.

Persistent Volume Types

OpenShift supports a wide range of PersistentVolume types. Some of them, like NFS, you probably already know; all of them have pros and cons:

NFS

  • Static provisioner: manually and statically pre-provisioned, inefficient space allocation
  • Well known by system administrators, easy to set up, good for tests
  • Supports the ReadWriteOnce and ReadWriteMany policies

Ceph RBD

  • Dynamic provisioner: Ceph block devices are automatically created, presented to the host, formatted and mounted into the container
  • Excellent when running Kubernetes on top of OpenStack

Ceph FS

  • Same as RBD, but already a (shared) filesystem
  • Supports ReadWriteMany
  • Excellent when running Kubernetes on top of OpenStack with Ceph

Gluster FS

  • Dynamic provisioner
  • Supports ReadWriteOnce
  • Available on-premise and in public cloud, with a lower TCO than the Filesystem-as-a-Service offerings of public cloud providers
  • Supports Container Native Storage

GCE Persistent Disk / AWS EBS / AzureDisk

  • Dynamic provisioner: block devices are requested via the provider API, then automatically presented to the instance running Kubernetes/OpenShift and the container, formatted, etc.
  • Does not support ReadWriteMany
  • Performance may be problematic on small capacities (<100GB, typical for PVCs)

AWS EFS / AzureFile

  • Dynamic provisioner: filesystems are requested via the provider API, mounted on the container host and then bind-mounted to the app container
  • Supports ReadWriteMany
  • Usually quite expensive

NetApp

  • Dynamic provisioner, called Trident
  • Supports ReadWriteOnce (block or file-based), ReadWriteMany (file-based), ReadOnlyMany (file-based)
  • Requires NetApp Data ONTAP or SolidFire storage

Two kinds of Storage

There are essentially two types of storage for containers:

✓ Container‐ready storage: This is essentially a setup where storage is exposed to a container or a group of containers from an external mount point over the network. Most storage solutions, including SDS, storage area networks (SANs), or network‐attached storage (NAS), can be set up this way using standard interfaces. However, this may not offer any additional value to a container environment from a storage perspective. For example, few traditional storage platforms have external application programming interfaces (APIs) which can be leveraged by Kubernetes for dynamic provisioning.

✓ Storage in containers: Storage deployed inside containers, alongside applications running in containers, is an important innovation that benefits both developers and administrators. By containerizing storage services and managing them under a single management plane such as Kubernetes, administrators have fewer housekeeping tasks to deal with, allowing them to focus on more value‐added tasks. In addition, they can run their applications and their storage platform on the same set of infrastructure, which reduces infrastructure expenditure. Developers benefit by being able to provision application storage that’s both highly elastic and developer‐friendly. Openshift takes storage in containers to a new level by integrating Red Hat Gluster Storage into Red Hat OpenShift Container Platform, a solution known as Container‐Native Storage.

In this tutorial we will use a Container-ready storage example based on an NFS mount point (in our case, /mnt/exportfs).

Configuring Persistent Volumes (PV)

To start configuring a Persistent Volume you have to switch to the admin account:

$ oc login -u system:admin

We assume that you have available an NFS storage at the following path: /mnt/exportfs

The following mypv.yaml provides a Persistent Volume Definition and a related Persistent Volume Claim:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/exportfs"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

 You can create both resources using:

oc create -f mypv.yaml

Let’s check the list of Persistent Volumes:

$ oc get pv

NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
mysql-pv-volume   10Gi      RWO            Recycle          Available    

As you can see, the “mysql-pv-volume” Persistent Volume is included in the list. You will also find a list of pre-built Persistent Volumes.
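You can also verify the claim from the command line; once it is bound to the volume, the output will look similar to the following (indicative):

$ oc get pvc
NAME             STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound     mysql-pv-volume   10Gi       RWO            manual         1m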

Using Persistent Volumes in pods

After creating the Persistent Volume, we can request storage through a PVC and later use that PVC to attach it as a volume to containers in pods. For this purpose, let’s create a new project to manage this persistent storage:

oc new-project persistent-storage

Now let’s create a MySQL app (mysql-deployment.yaml), which contains a reference to our Persistent Volume Claim:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

Create the app using the ‘oc’ command:

oc create -f mysql-deployment.yaml

This will automatically deploy the MySQL container in a Pod and place the database files on the persistent storage.

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
mysql-1-kpgjb    1/1       Running   0          13m
mysql-2-deploy   1/1       Running   0          2m
mysql-2-z6rhf    0/1       Pending   0          2m

Let’s check that the Persistent Volume Claim has been bound correctly:

$ oc describe pvc mysql-pv-claim

Name:         mysql-pv-claim
Namespace:    default
StorageClass:
Status:       Bound
Volume:       mysql-pv-volume
Labels:       <none>
Annotations:    pv.kubernetes.io/bind-completed=yes
                pv.kubernetes.io/bound-by-controller=yes
Capacity:     10Gi
Access Modes: RWO
Events:       <none>

Done! Now let’s connect to the MySQL Pod using the mysql tool and add a new database:

$ oc rsh mysql-1-kpgjb
sh-4.2$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.

Let’s add a new Database named “sample”:

MySQL [(none)]> create database sample;
Query OK, 1 row affected (0.00 sec)

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| openshift          |
| performance_schema |
| sample             |
| test               |
+--------------------+
6 rows in set (0.00 sec)

MySQL [(none)]> exit

As proof of concept, we will kill the Pod where MySQL is running so that it will be automatically restarted:

$ oc delete pod mysql-1-kpgjb
pod "mysql-1-kpgjb" deleted

In a few seconds the Pod will restart:

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
mysql-1-5jmc5    1/1       Running   0          27s
mysql-2-deploy   1/1       Running   0          3m
mysql-2-z6rhf    0/1       Pending   0          3m

Let’s connect again to the Database and check that the “sample” DB is still there:

$ oc rsh mysql-1-5jmc5
sh-4.2$ mysql -u root -p
Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| openshift          |
| performance_schema |
| sample             |
+--------------------+

Great! As you can see, the Persistent Volume Claim used by the app made the changes to the database persistent!

Getting started with Openshift using OKD

OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD is the upstream Kubernetes distribution embedded in Red Hat OpenShift and can be used to add developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams.

Let’s see how we can install OKD on your system. You have two options:

1) Run OKD in a Container

This option allows you to get started quickly by running OKD in a container by itself with as little as “oc cluster up”. This option is discussed in the tutorial: How to start an openshift local cluster using oc utility.

2) Run the All-In-One VM with Minishift

Minishift is a tool that helps you run OKD locally by launching a single-node OKD cluster inside a virtual machine. This option is discussed in this tutorial.

Prepare your environment for installing Minishift

As an example, we will show how to install minishift on a Fedora 29 distribution.

Minishift requires a hypervisor to start the virtual machine on which the OpenShift cluster is provisioned. Make sure KVM is installed and enabled on your system before you start Minishift on Fedora.

Start by installing libvirt and qemu-kvm on your system:

$ sudo dnf install libvirt qemu-kvm

Then, add yourself to the libvirt group to avoid sudo:

$ sudo usermod -a -G libvirt <username>

Next, update your current session for the group change to take effect:

$ newgrp libvirt

Next, start and enable the libvirtd and virtlogd services:

$ systemctl start virtlogd 
$ systemctl enable virtlogd 
$ systemctl start libvirtd 
$ systemctl enable libvirtd

Finally, install the docker-machine-kvm driver binary to provision a VM and make it executable. The instructions below use version 0.8.2.

sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.8.2/docker-machine-driver-kvm -o /usr/local/bin/docker-machine-driver-kvm

sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Installing Minishift

Download the archive for your operating system from the releases page and unpack it.

wget https://github.com/minishift/minishift/releases/download/v1.27.0/minishift-1.27.0-linux-amd64.tgz
tar -xvf minishift-1.27.0-linux-amd64.tgz

Copy the contents of the directory to your preferred location.

cp minishift ~/bin/minishift

If your personal ~/bin folder is not in your PATH environment variable already (use echo $PATH to check), add it:

export PATH=~/bin:$PATH

Starting Minishift

Finally, provision the OpenShift single-node cluster on your workstation by running the following command. The output will look similar to the one below:

$ minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.

The server is accessible via web console at:
    https://192.168.42.215:8443/console

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

If you browse to that address you can log in to the Web console:

In the next screen you can access the main panel:

You can create a new application easily by clicking on any of the available templates. For example, select WildFly template:

Then select a GitHub project that will be used to build the image with the Source-to-Image process.

At the end of the build process, the template will also have created a Route for you, so that the application is accessible externally as well.

Using the oc binary to build a project

In the above example we have used the Web console to build an example application. Openshift, however, can also be reached using the oc client binary. If you want oc to be added automatically to your path, use “minishift oc-env” to display the command that adds the oc binary to your PATH. For example:

$ ./minishift oc-env
export PATH="/home/user1/.minishift/cache/oc/v3.11.0/linux:$PATH"
# Run this command to configure your shell:
# eval $(minishift oc-env)

Now you can also try building projects with the ‘oc’ client tool:

oc login
Authentication required for https://192.168.1.194:8443 (openshift)
Username: developer
Password:  developer

Next, you can test the ‘oc’ client tool by loading a GitHub project which uses the WildFly Image Stream:

$ oc new-app wildfly~https://github.com/fmarchioni/ocpdemos --context-dir=wildfly-basic --name=wildfly-basic

Finally, expose the wildfly-basic application through the router so that it’s available from outside:

$ oc expose service wildfly-basic

In this tutorial we have learned how to install Minishift in your environment and we have deployed an example application on top of it. Minishift also ships with the ‘oc’ client tool, which just needs to be added to your path to be used.

How to start an openshift local cluster using oc utility

The OpenShift client utility named oc can start a local OpenShift cluster, which includes all of the required services, such as an internal registry, a router, templates, and so on. This is one of the easiest ways to start a development environment. oc cluster up creates a default user and project, and once it is complete, it will allow you to use any commands to work with the OpenShift environment, such as oc new-app.

This method provides a containerized OpenShift environment that can easily be run on a number of platforms.

System requirements and prerequisites

The oc cluster up method supports Linux, macOS, and Windows-based workstations. By default, the method requires an environment with a Docker machine installed. However, the command can create a Docker machine by itself. The following table shows the available deployment scenarios:

Operating system    Docker implementation
Linux               Default docker daemon for OS
macOS               Docker for macOS
macOS               Docker Toolbox
Windows             Docker for Windows
Windows             Docker Toolbox

Starting an Openshift Cluster on Linux

The deployment process involves several steps:

1. Install Docker

2. Configure an insecure registry

3. Allow ports on the firewall

4. Download the OpenShift client utility

5. Start a cluster

Note that this method can also be used on Fedora or RHEL-based hosts.

Here are the steps in detail:

1. Docker installation: This doesn’t involve anything special, and was described in the previous chapters. The following commands must be run under the root account:

$ sudo -i
# sudo yum -y install docker
# systemctl enable docker

2. Configuring an insecure registry: This is required to be able to use an internal Docker registry, which comes with the OpenShift installation. If this is not configured, oc cluster up will fail.

To allow for an insecure OpenShift registry, run the following commands under the root user:

# cat << EOF >/etc/docker/daemon.json
{
   "insecure-registries": [
     "172.30.0.0/16"
   ]
}
EOF

# systemctl start docker

Note that this requires restarting the Docker daemon so as to apply the new configuration.
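You can verify that the setting has been picked up by inspecting the Docker daemon information (the exact output depends on your Docker version):

# docker info | grep -A 2 "Insecure Registries"
Insecure Registries:
 172.30.0.0/16
 127.0.0.0/8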

3. Configuring the firewall: The default firewall configuration doesn’t enable all of the ports required for an OpenShift cluster. You need to adjust the settings using firewall-cmd:

# firewall-cmd --permanent --new-zone dockerc
# firewall-cmd --permanent --zone dockerc --add-source 172.17.0.0/16
# firewall-cmd --permanent --zone dockerc --add-port 8443/tcp
# firewall-cmd --permanent --zone dockerc --add-port 53/udp
# firewall-cmd --permanent --zone dockerc --add-port 8053/udp
# firewall-cmd --reload

4. Downloading the oc utility: The OpenShift client utility named oc is available in standard repositories; however, you can download the utility from https://github.com/openshift/origin/releases. It is recommended that you use the standard CentOS repositories:

# yum -y install centos-release-openshift-origin39
# yum -y install origin-clients

5. Starting an OpenShift cluster: Once all the prerequisites are met, you will be able to start the cluster by running oc cluster up. The command will download all the required Docker images from public repositories, and then run all of the required containers:

# oc cluster up --version=v3.9.0

Starting OpenShift using openshift/origin:v3.9.0 ...
  • ..
  • ..
The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User: developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

In the above example, the version of the OpenShift cluster has been statically bound to v3.9.0. In most cases, you don’t have to specify a version. So, you just need oc cluster up without any arguments.

As you can see, oc cluster up deployed a ready-to-use, one-node OpenShift environment.

By default, this OpenShift environment was configured to listen on the loopback interface (127.0.0.1). This means that you may connect to the cluster using https://127.0.0.1:8443. This behavior can be changed by adding special parameters, such as --public-hostname=. A full list of available options can be shown by using the following command:

# oc cluster up --help
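For example, to make the web console reachable from other hosts, you could start the cluster with a public hostname (the hostname below is just a placeholder for your own DNS name):

# oc cluster up --public-hostname=openshift.example.com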

6. Verification: Once the cluster has been deployed, you can verify that it is ready to use. The default OpenShift configuration points you to an unprivileged user, named developer. You may raise your permissions by using the following commands:

# oc login -u system:admin
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * myproject
    openshift
    openshift-infra
    openshift-node

Using project "myproject".

Once you have admin access rights, you can verify the node configuration with oc get node:

# oc get node
NAME      STATUS  AGE     VERSION
localhost Ready   9m       v1.7.6+a08f5eeb62

7. Shutting down: Once an oc cluster up environment has been deployed, it can be shut down with oc cluster down.

Starting an Openshift Cluster on macOS

The installation and configuration process for macOS assumes that Docker for macOS is being used. The deployment process involves the following:

1. Docker for macOS installation and configuration

2. Installation of openshift-cli and required packages

3. Starting a cluster

The oc cluster up command requires Docker to be installed on your system because, essentially, it creates a Docker container and runs OpenShift inside that Docker container. It is a very elegant and clean solution. The Docker for macOS installation process is described at the official portal, https://docs.docker.com/docker-for-mac.

Once the Docker service is running, you need to configure the insecure registry (172.30.0.0/16). From the Docker menu in the toolbar, you need to select the Preferences menu and click on the Daemon icon. In the Basic tab of the configuration dialog, click on the + icon under Insecure registries and add the following new entry: 172.30.0.0/16.

When finished, click on Apply & Restart. 

Once the Docker service is configured, you need to install all the required software and start the cluster using the following steps:

1. OpenShift client installation: Install the socat and openshift-cli packages on your system as follows:

$ brew install openshift-cli socat --force

2. Starting and stopping the OpenShift cluster: The cluster can be started like this:

$ oc cluster up
Starting OpenShift using registry.access.redhat.com/openshift3/ose:v3.7.23 ...
OpenShift server started.

The server is accessible via web console at:
https://127.0.0.1:8443

You are logged in as:
User: developer
Password: <any value>

To login as administrator:
oc login -u system:admin

An installation verification can be performed by the OpenShift admin user, as follows:

$ oc login -u system:admin
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * myproject
    openshift
    openshift-infra
    openshift-node

Using project "myproject".

The Openshift cluster is up and ready for work. We may check the status of the cluster using the following command:

$ oc get nodes
NAME      STATUS AGE VERSION
localhost Ready 20h  v1.7.6+a08f5eeb62

The cluster can be stopped as follows:

$ oc cluster down

Starting an Openshift Cluster on Windows

The OpenShift environment can be deployed on Windows on a machine that supports Docker for Windows. The Docker for Windows installation process is described at https://docs.docker.com/docker-for-windows.

Once Docker is running, you will need to configure the insecure registry settings, as follows:

1. Right-click on the Docker icon in the notification area, and select Settings.

2. Click on Docker Daemon in the Settings dialog. 

3. Edit the Docker daemon configuration by adding 172.30.0.0/16 to the “insecure-registries” setting:

{
  "registry-mirrors": [],
  "insecure-registries": [ "172.30.0.0/16" ]
}

4. Click on Apply, and Docker will restart.

Once the Docker service is configured, the OpenShift client oc can be installed as shown below. The example also shows how to start the cluster:

OpenShift client installation: Download the Windows oc.exe binary from https://github.com/openshift/origin/releases/download/v3.7.1/openshift-origin-client-tools-v3.7.1-ab0f056-windows.zip and place it in C:\Windows\system32 or another folder on your PATH.

You can also download the latest client tools release from https://github.com/openshift/origin/releases under Assets.
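Once oc.exe is in a folder on your PATH, you can check that the client is reachable from a new command prompt (the output will vary with the release you downloaded):

C:\> oc version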

Starting/stopping a cluster: The Windows version of the OpenShift client can also start and stop the cluster, as follows:

C:\> oc cluster up

The Openshift cluster is up. You may want to check the status of the cluster using the following:

C:\> oc get node
NAME      STATUS    AGE
origin    Ready     1d

Accessing OpenShift through a web browser

Whether you use oc cluster up or any other solution, when OpenShift is up and running, you can access it via a web browser. OpenShift is available on port 8443 by default. In the case of oc cluster up, you can reach the OpenShift login page at https://localhost:8443/.

Use the developer login to log in to OpenShift. Once you log in, you will be presented with the service catalog, which lets you choose from the available language runtimes.

Projects in OpenShift extend the concept of namespaces from Kubernetes and serve as a means of separating teams and individual users working with the same OpenShift cluster. Another term often used for projects is tenant (for example, in OpenStack). You can create projects from the web console by clicking on the Create Project button and specifying its name.
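If you prefer the command line, the same can be done with the oc client; a minimal sketch, using a hypothetical project name, is:

$ oc new-project demo-project --display-name="Demo Project"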

After the project is created, you can click on its name on the right side of the screen. You will be redirected to the project’s overview page, from where you can create applications and other resources.

To give you a basic understanding of how to navigate the OpenShift web console, here is a short guide:

  • The Applications menu is used to access resources directly responsible for your application, such as Deployments, Pods, Services, Stateful Sets, and Routes.
  • The Builds menu lets you manage the configuration of Builds and build strategies, such as Pipelines, as well as the Images used to build your application from source code.
  • The Resources menu gives you access to other secondary resources that can be used by your application in advanced use cases, such as Quota, Config Maps, Secrets, and Other Resources. You can also use this menu to view and manage the Membership for your project.
  • The Storage menu is used to request persistent storage by creating persistent volume claims.
  • The Monitoring menu provides access to various metrics collected by OpenShift on CPU, RAM, and network bandwidth utilization (if you have metrics enabled), as well as Events going on in real time.
  • Finally, the Catalog menu is a shortcut to the service catalog from the project you are currently in, without having to go back to the first page. It was introduced in OpenShift Origin 3.9.

The oc utility is an elegant way to start a development environment: it makes the entire process seamless and hassle-free, and getting the cluster up requires very little effort.

If you found this article helpful and want to learn more about container management with OpenShift, you can explore Learn OpenShift, an end-to-end OpenShift guide for sysadmins, DevOps engineers, and solution architects.

Configuring JVM settings on Openshift

In this tutorial we will learn how to configure JVM settings for a WildFly container running on Openshift. More specifically, we will see how to apply compute resource constraints to shape the JVM memory settings.

The recommended way to set JVM memory settings for WildFly / JBoss EAP is to use resource constraints, which can define both the amount of memory and the amount of CPU assigned to the JVM process. There are several ways to define compute resource constraints:

  • Via Pod
  • Via Deployment Configurations
  • Via Templates
  • Via Project limit ranges

Here is a sample set of compute resource constraints we can apply to our DeploymentConfig:

  resources:
    limits:
      memory: 2G
    requests:
      memory: 2G

As you can see, there are two types of memory resource constraints:

1) Memory Requests

By default, a container is able to consume as much memory on the node as possible. In order to improve placement of pods in the cluster, specify the amount of memory required for a container to run. The scheduler will then take available node memory capacity into account prior to binding your pod to a node. A container is still able to consume as much memory on the node as possible even when specifying a request.

2) Memory Limits

If you specify a memory limit, you can constrain the amount of memory the container can use. For example, if you specify a limit of 200Mi, a container will be limited to using that amount of memory on the node. If the container exceeds the specified memory limit, it will be terminated and potentially restarted dependent upon the container restart policy.
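If you prefer the command line over editing the Deployment Config manually, a sketch of setting the same requests and limits with oc set resources (using <DC_NAME> as a placeholder for your DeploymentConfig name) would be:

oc set resources dc/<DC_NAME> --limits=memory=2G --requests=memory=2G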

When you apply the above resource limits to the deployment, you will see that the following -Xms and -Xmx values are used:

  JAVA_OPTS: -javaagent:"/opt/wildfly/jboss-modules.jar"  -server -Xms238m -Xmx954m -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=jdk.nashorn.api,com.sun.crypto.provider -Djava.awt.headless=true -XX:+UseParallelOldGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom  -Djboss.modules.settings.xml.url=file:///opt/jboss/container/wildfly/s2i/galleon/settings.xml  --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED

The heap is only a portion of the total memory available to the container: with the default maximum heap ratio of approximately 50%, a limit of 2G translates into a maximum heap of roughly 1GB, which is why -Xmx954m is used above.

In order to increase the initial heap size (-Xms), you can set a higher JAVA_INITIAL_MEM_RATIO as a DeploymentConfig environment variable:

oc set env dc/<DC_NAME> JAVA_INITIAL_MEM_RATIO=40

You can also increase the maximum heap size, but keep in mind that the container needs extra memory for metadata, threads, the code cache, and so on. Therefore, be cautious when adjusting JAVA_MAX_MEM_RATIO:

oc set env dc/<DC_NAME> JAVA_MAX_MEM_RATIO=65

And here are the new memory settings after adjusting JAVA_INITIAL_MEM_RATIO and JAVA_MAX_MEM_RATIO:

  JAVA_OPTS: -javaagent:"/opt/wildfly/jboss-modules.jar"  -server -Xms496m -Xmx1240m -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=jdk.nashorn.api,com.sun.crypto.provider -Djava.awt.headless=true -XX:+UseParallelOldGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom  -Djboss.modules.settings.xml.url=file:///opt/jboss/container/wildfly/s2i/galleon/settings.xml  --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED

Setting other JVM arguments

As far as other JVM System Properties are concerned, you can set them through the JAVA_OPTS_APPEND environment variable as follows:

oc set env dc/<DC_NAME> JAVA_OPTS_APPEND=-Dmy.property=1
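To double-check which environment variables are currently set on the DeploymentConfig, you can list them with:

oc set env dc/<DC_NAME> --list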

How to autoscale your applications with Openshift

This tutorial will demonstrate how to use the Autoscaling feature to let Openshift automatically scale the number of Pods based on the amount of resources required by the Pod, up to a certain limit.

When you create a new application with Openshift, a Pod is automatically deployed with that application and the Replication Controller ensures that the configured number of Pods remains available. Out of the box, however, the number of Pods is a static value which can only be changed manually with:

oc scale [object_type] --replicas=[number of replicas]
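For example, assuming a DeploymentConfig named wildfly-basic (the application we will create shortly), a manual scale to three replicas would look like this:

oc scale dc/wildfly-basic --replicas=3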

One key aspect of the Openshift PaaS is its ability to scale Pods automatically. A few steps are however necessary:

Enable metrics

Start Openshift using the --metrics option as described in detail in this tutorial: Configuring metrics on Openshift

$ oc cluster up --metrics

Create an application

In order to test autoscaling, we will create an application based on the WildFly image and request it in a loop, so that the autoscaler is able to scale up the number of Pods:

$ oc new-app wildfly~https://github.com/fmarchioni/ocpdemos --context-dir=wildfly-basic --name=wildfly-basic

And expose the service through the router with:

$ oc expose service wildfly-basic

Now wait for the application pod to be available:

$ oc get pods
NAME                    READY     STATUS      RESTARTS   AGE
wildfly-basic-1-build   0/1       Completed   0          1h
wildfly-basic-2-0cxrt   1/1       Running     0          36m

Now, in order to configure autoscaling, we need to define a limit range, which is a mechanism for specifying default CPU and memory limits and requests at the project level.

See as an example the following limits.json file:

{
    "kind": "LimitRange",
    "apiVersion": "v1",
    "metadata": {
        "name": "mylimits",
        "creationTimestamp": null
    },
    "spec": {
        "limits": [
            {
                "type": "Pod",
                "max": {
                    "cpu": "0.2",
                    "memory": "1Gi"
                },
                "min": {
                    "cpu": "30m",
                    "memory": "5Mi"
                }
            },
            {
                "type": "Container",
                "max": {
                    "cpu": "1",
                    "memory": "1Gi"
                },
                "min": {
                    "cpu": "50m",
                    "memory": "5Mi"
                },
                "default": {
                    "cpu": "50m",
                    "memory": "200Mi"
                }
            }
        ]
    }
}

The above JSON file defines min and max ranges for both the Pod and the Container. You can create the limits in your namespace with:

$ oc create -f limits.json -n myproject

You can check that the limits are in place by describing the mylimits object:

$ oc describe limits mylimits
Name:		mylimits
Namespace:	myproject
Type		Resource	Min	Max	Default Request	Default Limit	Max Limit/Request Ratio
--------------------------------------------------------------------------------------------------------
Pod		memory		5Mi	1Gi	-		-		-
Pod		cpu		30m	200m	-		-		-
Container	cpu		50m	1	50m		50m		-
Container	memory		5Mi	1Gi	200Mi		200Mi		-

Note that we have deliberately set tight CPU/memory limits on the Pod so that autoscaling can be triggered easily.

Create the Autoscaler

The last step is creating the Autoscaler object, which describes the range of Pods the Autoscaler is allowed to create, based on a CPU threshold:

$ oc autoscale dc wildfly-basic --min 1 --max 3 --cpu-percent=25

In our case, we allow a minimum of 1 Pod and a maximum of 3 Pods for the DeploymentConfig “wildfly-basic”. The --cpu-percent parameter is the percentage of the requested CPU that each Pod should be using. If the amount of CPU consumed is higher, the Autoscaler will attempt to create additional Pods.
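Behind the scenes, oc autoscale creates a HorizontalPodAutoscaler object; you should be able to inspect it with:

$ oc get hpa wildfly-basic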

Load Testing

Now we can finally load test our application. The default route for the application is the following one:

$ oc get routes
NAME            HOST/PORT                                       PATH      SERVICES        PORT       TERMINATION
wildfly-basic   wildfly-basic-myproject.10.201.210.194.xip.io             wildfly-basic   8080-tcp   

Before starting the test, note that only one application Pod is running.

Now we can trigger a number of HTTP requests with:

for i in {1..500}; do curl http://wildfly-basic-myproject.10.201.210.194.xip.io ; done;
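While the loop is running, you can watch the new Pods being created from a second terminal (press Ctrl+C to stop watching):

$ oc get pods -w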

After the load test, you will see that the Autoscaler has created two additional Pods to handle the requests.

If you keep the application idle for a while, you will see that the Autoscaler will downscale your Pods back to 1.