This tutorial shows how to use A/B deployments on the OpenShift PaaS to balance and weigh the load between multiple applications with just a few tweaks to your configuration.
A/B deployments are a simple and effective strategy to split traffic between different applications. One common use case is splitting the load between two variants of the same application, for example using a different template or database, and measuring the impact of sending a different percentage of traffic to each. You can also use it to switch off one version of an application entirely by setting its load to 0%.
As a concrete example, we will use a simple demo application running on WildFly 10 that comes in two versions: the master branch and a new branch named version2.
From any available project, execute the following command to create the first app, named v1:
$ oc new-app wildfly~https://github.com/fmarchioni/ocpdemos#master --context-dir=wildfly-basic --name='v1' -l name='v1' -e SELECTOR=v1
Now let's create the second app named v2, based on a different branch of the application:
$ oc new-app wildfly~https://github.com/fmarchioni/ocpdemos#version2 --context-dir=wildfly-basic --name='v2' -l name='v2' -e SELECTOR=v2
Let's check that our Pods are up and running:

$ oc get pods
NAME         READY     STATUS      RESTARTS   AGE
v1-1-4bvg3   1/1       Running     0          1h
v1-1-build   0/1       Completed   0          1h
v2-1-1dzkf   1/1       Running     0          1h
v2-1-build   0/1       Completed   0          1h
Now we will expose both services through the router, and also create a third route named "ab" which will be used to split the load between the two services:
$ oc expose service v1 --name v1 -l name='v1'
$ oc expose service v2 --name v2 -l name='v2'
$ oc expose service v1 --name='ab' -l name='ab'
Now let's set the policy for the ab route to use the roundrobin strategy to split the load between the applications:
$ oc annotate route/ab haproxy.router.openshift.io/balance=roundrobin
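To confirm that the annotation was applied, you can read it back from the route. A quick sketch, assuming the route name "ab" created above:

```shell
# Read the balance annotation back from the "ab" route.
# Dots inside the annotation key must be escaped in the jsonpath expression.
oc get route ab -o jsonpath='{.metadata.annotations.haproxy\.router\.openshift\.io/balance}'
# Should print: roundrobin
```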
Finally, let's split the percentage of traffic equally between the two applications:
$ oc set route-backends ab v1=50 v2=50
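Weights do not have to be set in absolute terms: the oc client also supports relative changes through the --adjust flag. A small sketch, assuming the same route and services:

```shell
# Shift 10% of the traffic toward v2, relative to the current weights
oc set route-backends ab --adjust v2=+10%
```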
This is confirmed by your routes list:
$ oc get routes
NAME      HOST/PORT                          PATH      SERVICES          PORT       TERMINATION
ab        ab-myproject.192.168.1.66.xip.io             v1(50%),v2(50%)   8080-tcp
v1        v1-myproject.192.168.1.66.xip.io             v1                8080-tcp
v2        v2-myproject.192.168.1.66.xip.io             v2                8080-tcp
And by the Web administration console as well.
Now let's try to invoke our application which is running in the "demo" Web context:
$ curl http://ab-myproject.192.168.1.66.xip.io/demo
<html>
<body>
<h1>Hello World from WildFly on Openshift</h1>
</body>
</html>
So the index.jsp returned the landing page of the application from the master branch. Now let's invoke the application again to see whether we receive the alternate version from the version2 branch:
$ curl http://ab-myproject.192.168.1.66.xip.io/demo
<html>
<body>
<h1>Hello World from WildFly on Openshift</h1>
<h2>Version 2 Running!!!</h2>
</body>
</html>
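Since the responses of the two versions differ, a quick shell loop gives a rough picture of the actual split. The host name below is the one exposed in this demo; replace it with your own route:

```shell
# Send 20 requests and count how many were served by version2,
# identified by the "Version 2" marker in its response body.
HOST=http://ab-myproject.192.168.1.66.xip.io/demo
count=0
for i in $(seq 1 20); do
  if curl -s "$HOST" | grep -q "Version 2"; then
    count=$((count + 1))
  fi
done
echo "version2 answered $count of 20 requests"
```

With a 50/50 split and the roundrobin policy, roughly half of the requests should carry the version2 marker.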
Great. As you can see, we managed to split the load between two different services by assigning a weight to each route backend. Of course, you can at any time direct all traffic to a single service, as in the following example:
$ oc set route-backends ab v1=100 v2=0
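If you instead want to stop serving the route entirely, for example during maintenance, the weight of every backend can be cleared in one command. A sketch using the same "ab" route:

```shell
# Set the weight of all backends of the "ab" route to zero
oc set route-backends ab --zero
```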