OpenShift Interview Questions (2024)

Here is a list of OpenShift interview questions to help you prepare for a cloud interview that covers Red Hat OpenShift.

OpenShift Architecture

  • What is OpenShift? Answer: The Red Hat OpenShift platform is a cloud-based solution that utilizes Kubernetes to assist developers in creating applications. It provides automated installation, upgrades, and life cycle management for the entire container stack, encompassing the operating system, Kubernetes, cluster services, and applications, regardless of the cloud environment. OpenShift empowers organizations to accelerate application development, deployment, and scalability, whether on-premises or in the cloud. Additionally, it ensures enterprise-grade security to safeguard your development infrastructure on a large scale.
  • What is the Container Engine in OpenShift 4? Answer: With OpenShift 4, the default container engine changed from Docker to CRI-O, a lean, stable container runtime that moves in lockstep with Kubernetes. CRI-O provides a minimal runtime focused on running application containers securely and reliably, integrating tightly with Kubernetes to ensure seamless container orchestration.
  • What is the difference between OpenShift and Kubernetes? Answer:
    • OpenShift is a commercial product by Red Hat, while Kubernetes is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF).
    • Kubernetes focuses primarily on container orchestration, while OpenShift extends Kubernetes with additional features such as developer tools, integrated CI/CD pipelines, monitoring, and security enhancements.
    • OpenShift provides a more user-friendly experience and abstracts away some of the complexities of Kubernetes, making it easier to deploy and manage applications.
    • OpenShift offers enterprise-grade support, long-term stability, and security patches, while Kubernetes relies on community support and third-party vendors for enterprise-level assistance.
  • What is the role of the OpenShift master components in the architecture? Answer: The OpenShift master components are responsible for managing the cluster and coordinating operations. They include the API server, controller manager, and scheduler. The API server exposes the OpenShift API and handles requests from clients. The controller manager ensures the desired state of the cluster by monitoring and reconciling resources. The scheduler assigns pods to worker nodes based on resource availability and scheduling policies.
  • How does OpenShift handle container orchestration and management? Answer: OpenShift leverages Kubernetes for container orchestration and management. Kubernetes provides features like container scheduling, scaling, load balancing, and automated deployment and management of containerized applications. OpenShift builds upon Kubernetes, adding additional features and tools tailored for enterprise-grade container deployments.
  • Explain the concept of a pod in OpenShift and its significance in application deployment. Answer: In OpenShift, a pod is the smallest deployable unit and represents one or more containers that are scheduled and deployed together on the same worker node. Containers within a pod share the same network namespace and can communicate with each other using localhost. Pods enable co-located and tightly coupled containers, supporting multi-container applications, sidecar patterns, and shared storage. They are the atomic unit of deployment and scaling in OpenShift.
  • What is the role of an OpenShift router in the architecture? Answer: An OpenShift router is responsible for exposing services deployed within the OpenShift cluster to the outside world. It acts as a traffic ingress point and directs incoming requests to the appropriate services based on routing rules. The router integrates with the OpenShift networking layer to provide external access to applications running inside the cluster.
  • How does OpenShift ensure high availability of applications and fault tolerance in its architecture? Answer: OpenShift achieves high availability and fault tolerance through several mechanisms. It utilizes Kubernetes’ built-in features such as replica sets and pod auto-scaling to ensure application availability and responsiveness. OpenShift also supports clustering and multi-master configurations, providing redundancy and failover capabilities. Additionally, it offers automated rolling updates and deployments to minimize downtime during application upgrades.
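
For illustration, the pod described above can be sketched as a minimal manifest; all names and images here are hypothetical placeholders:

```yaml
# Hypothetical two-container pod; both containers share one network namespace
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: registry.access.redhat.com/ubi9/httpd-24    # example image
    ports:
    - containerPort: 8080
  - name: log-sidecar
    image: registry.access.redhat.com/ubi9/ubi-minimal # example image
    command: ["sleep", "infinity"]                     # placeholder workload
```

Because the two containers share the pod's network namespace, the sidecar could reach the web container directly on localhost:8080.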

OpenShift Installation

  • What are the different installation options available for deploying OpenShift? Answer: OpenShift provides several installation options, including:
    • OpenShift Cloud Services Edition: Red Hat OpenShift is available as a turnkey application platform from major cloud providers such as AWS or Azure.
    • Self-managed Edition: A self-managed deployment option, Red Hat OpenShift Platform Plus can be installed on premises, in the cloud, on managed cloud, or at the edge, providing a consistent user experience, management, and security across hybrid infrastructure.
    • OpenShift Local: This is a single-node installation of an OpenShift cluster that you can use for learning purposes. You cannot use this installation mode for a production cluster.
  • What are the prerequisites for installing OpenShift Container Platform? Answer: The prerequisites for installing OpenShift Container Platform include:
    • A supported infrastructure platform (bare metal, virtual machines, or public cloud).
    • Sufficient resources (compute, memory, and storage) based on the expected workload.
    • Network connectivity between nodes, including DNS resolution and firewall rules.
    • A container runtime on all nodes; in OpenShift 4 this is CRI-O, which ships with Red Hat Enterprise Linux CoreOS, so a separate runtime such as Docker does not need to be installed.
    • Valid Red Hat subscriptions or entitlements for the OpenShift Container Platform.
  • Explain the installation process of OpenShift Container Platform. Answer: The installation process of OpenShift Container Platform typically involves the following steps:
    • Preparing the infrastructure, including provisioning nodes, configuring network connectivity, and meeting storage requirements.
    • Setting up the OpenShift prerequisites, such as installing and configuring the container runtime, enabling necessary services, and preparing DNS.
    • Creating an OpenShift installation configuration file (Installer-provisioned Infrastructure or User-provisioned Infrastructure) with appropriate settings.
    • Running the installation process using the OpenShift installer, which deploys the required components, sets up the control plane, and joins worker nodes to the cluster.
    • Verifying the successful installation and accessing the OpenShift Web Console for further configuration and management.
  • How can you perform upgrades or updates to an existing OpenShift installation? Answer: By using the web console or the OpenShift CLI (oc), you can update an OpenShift Container Platform cluster in a single operation. Whenever an update becomes available for a cluster, platform administrators are promptly notified. The OpenShift Update Service (OSUS) constructs a graph of potential updates based on release images stored in the registry. This graph is built from recommended and thoroughly tested update paths from a specific version. OpenShift Container Platform clusters connect to the Red Hat Hybrid Cloud servers and transmit information about the clusters in use, including their version details. In response, OSUS provides information about known update targets.
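
As a sketch of the installation configuration file mentioned in the steps above, an installer-provisioned install-config.yaml for AWS might look like this; the domain, cluster name, region, and credentials are placeholders:

```yaml
# Minimal install-config.yaml sketch for an installer-provisioned AWS cluster
apiVersion: v1
baseDomain: example.com          # placeholder domain
metadata:
  name: demo-cluster             # placeholder cluster name
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  aws:
    region: us-east-1            # example region
pullSecret: '<your-pull-secret>'
sshKey: '<your-public-ssh-key>'
```

The OpenShift installer consumes this file and provisions the control plane and worker nodes accordingly.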

OpenShift Configuration

What is an OpenShift project and how does it relate to application configuration?

Answer: An OpenShift project is a logical unit that enables isolation, collaboration, and resource allocation within an OpenShift cluster. It acts as a boundary for organizing and managing applications and their associated resources, such as services, routes, and deployments. Application configuration, including environment variables, service bindings, and resource limits, is typically defined within an OpenShift project.

How can you configure environment variables for an OpenShift application?

Answer: Environment variables can be configured for an OpenShift application in multiple ways:

  • Using the OpenShift Web Console: Environment variables can be defined within the application deployment configuration or by editing the deployment YAML directly.
  • Using the OpenShift CLI (oc): The oc set env command allows adding, updating, or removing environment variables for a deployed application.
  • Using environment variable configuration files: Environment variables can be specified in a configuration file and applied during the deployment process.
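
As an illustrative sketch of the first approach, environment variables can be declared directly in a Deployment manifest; the application name and image here are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                   # example name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: image-registry.openshift-image-registry.svc:5000/my-project/my-app:latest
        env:
        - name: LOG_LEVEL        # example variable
          value: "info"
```

The same variable could later be changed with the CLI, for example `oc set env deployment/my-app LOG_LEVEL=debug`.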

What is a ConfigMap in OpenShift and how can it be used for configuration management?

Answer: A ConfigMap in OpenShift is an object that stores non-sensitive configuration data in key-value pairs. It allows decoupling of configuration from application code, simplifying management and promoting application portability. ConfigMaps can be used to inject configuration data into containers at runtime, making it easier to update or modify configuration settings without rebuilding the application image.
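
A minimal sketch of a ConfigMap and a pod consuming it as environment variables; all names and values are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # example name
data:
  DATABASE_HOST: db.example.svc  # example values
  FEATURE_FLAG: "true"
---
# Pod consuming the ConfigMap keys as environment variables
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
    command: ["sleep", "infinity"]
    envFrom:
    - configMapRef:
        name: app-config
```

ConfigMaps can also be mounted as files into a volume when the application expects configuration files rather than environment variables.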

What is a Secret in OpenShift and how can it be utilized for sensitive configuration data?

Answer: A Secret in OpenShift is an object used to store sensitive information, such as passwords, API keys, or TLS certificates, securely. Secrets are base64 encoded and can be mounted as files or passed as environment variables to containers. They provide a secure mechanism for managing sensitive configuration data within an OpenShift cluster.
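
A minimal Secret sketch; the names and values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials    # example name
type: Opaque
stringData:               # stringData accepts plain text; it is stored base64-encoded
  username: appuser
  password: changeme
```

A container could then reference a key with `env` → `valueFrom.secretKeyRef`, or the Secret could be mounted as a volume so each key appears as a file.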

How can you scale an application in OpenShift to handle increased load?

Answer: Scaling an application in OpenShift can be achieved using the following methods:

  • Manual scaling: By modifying the deployment configuration, you can update the replica count to increase or decrease the number of application instances.
  • Automatic scaling: OpenShift supports horizontal pod autoscaling (HPA), which automatically adjusts the number of replicas based on CPU utilization or custom metrics.
  • Cluster-wide scaling: OpenShift also allows cluster-wide scaling using the Cluster Autoscaler, which adds or removes nodes dynamically based on resource demand.
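
The automatic-scaling option above can be sketched as a HorizontalPodAutoscaler manifest; the target Deployment name and thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app             # example target
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75   # scale out above 75% average CPU
```

Manual scaling, by contrast, is a one-liner such as `oc scale deployment/my-app --replicas=5`.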

What is a service account in OpenShift and how is it used for access control?

Answer: A service account in OpenShift is an identity associated with an application or service running within the cluster. It provides a way to authenticate and authorize access to resources. Service accounts can be assigned roles and permissions using role-based access control (RBAC), allowing fine-grained control over what an application or service can do within the cluster.
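
For illustration, a service account can be granted read-only access with the built-in `view` ClusterRole via RBAC; the account and binding names are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa            # example name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-sa-view
subjects:
- kind: ServiceAccount
  name: app-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view              # built-in read-only role
```

A pod then assumes this identity by setting `serviceAccountName: app-sa` in its spec.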

OpenShift Operators

  • What are OpenShift Operators and how do they enhance application management? Answer: OpenShift Operators are Kubernetes-native applications that extend the functionality of Kubernetes by automating the management and operation of complex applications and services. Operators encapsulate application-specific knowledge and best practices, enabling automated provisioning, configuration, scaling, and lifecycle management of applications. They help streamline application management tasks and ensure consistent deployments across clusters.
  • Explain the concept of Custom Resource Definitions (CRDs) in the context of OpenShift Operators. Answer: Custom Resource Definitions (CRDs) in OpenShift Operators define new object types and their schemas that extend the Kubernetes API. They allow Operators to introduce custom resources specific to the application or service they manage. CRDs define the desired state and behavior of these resources, enabling Operators to manage and interact with them using Kubernetes tools and APIs.
  • What are the main components of an OpenShift Operator? Answer: The main components of an OpenShift Operator typically include:
    • Custom Resource Definition (CRD): Defines the custom resources managed by the Operator.
    • Controller: Implements the reconciling logic that brings the actual state of resources in line with their desired state.
    • Operator Lifecycle Manager (OLM): Manages the installation, upgrades, and lifecycle of Operators within an OpenShift cluster.
    • Operator Hub: A centralized marketplace for discovering and sharing Operators, providing a catalog of Operators that can be easily installed and managed.
  • How can an OpenShift Operator help with day-2 operations, such as scaling or upgrading an application? Answer: An OpenShift Operator automates day-2 operations by providing the intelligence and automation necessary to manage an application throughout its lifecycle. Operators can handle tasks like scaling an application by adjusting the replica count, upgrading an application by applying rolling updates, or managing configuration changes without manual intervention. They ensure consistent and reliable operations, reducing the burden on administrators and enabling efficient management of applications in an OpenShift cluster.
  • How can you create your own custom OpenShift Operator? Answer: To create a custom OpenShift Operator, you can follow these general steps:
    • Define the desired behavior and capabilities of your Operator.
    • Create a Custom Resource Definition (CRD) to define the custom resources specific to your application.
    • Develop the Operator’s controller logic using a programming language such as Go or Ansible, implementing the reconcile loop to manage the resources’ actual state.
    • Package and distribute your Operator using Operator Lifecycle Manager (OLM) and the catalog.
    • Test and validate the Operator’s functionality in an OpenShift cluster, ensuring it aligns with the desired application management goals.
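
As a sketch of the CRD step above, a minimal definition for a hypothetical Backup resource might look like this; the group, kind, and schema fields are invented for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com               # hypothetical API group
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:            # example field the Operator would reconcile
                type: string
```

Once applied, users can create `Backup` objects and the Operator's controller reconciles them toward the desired state.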

OpenShift Networking

  • What are the main networking components in an OpenShift cluster? Answer: The main networking components in an OpenShift cluster include:
    • Cluster Network: This is the network overlay used for communication between pods and services within the cluster.
    • Node Network: This refers to the physical or virtual network interfaces of the worker nodes that handle incoming and outgoing traffic.
    • Ingress Network: It provides external access to services and routes incoming traffic to the appropriate pods.
    • Egress Network: This handles outgoing traffic from pods to external destinations.
  • Explain the difference between a pod network and a service network in OpenShift. Answer: In OpenShift, a pod network is an overlay network that allows communication between pods within the cluster. It assigns each pod a unique IP address and enables seamless communication between pods on different nodes. On the other hand, a service network in OpenShift provides a stable and abstracted endpoint for accessing a set of pods. Services act as load balancers, routing incoming traffic to the appropriate pods based on selectors. Service networks enable service discovery and load balancing within the cluster.
  • How does OpenShift handle container-to-container communication within a pod? Answer: OpenShift allows containers within a pod to communicate with each other using the localhost network interface. Containers in the same pod share the same network namespace, enabling direct communication via the localhost IP address (127.0.0.1). This local communication capability facilitates co-located and tightly coupled container deployments within a pod.
  • What is the role of the OpenShift SDN (Software-Defined Networking) in cluster networking? Answer: OpenShift SDN is a built-in network overlay technology that provides networking capabilities within an OpenShift cluster. It enables pod-to-pod communication, allocates IP addresses to pods, and handles routing and network isolation. OpenShift SDN automatically configures and manages the necessary networking components, allowing seamless communication between pods and services in the cluster.
  • Explain the concept of network policies in OpenShift and how they are used for network access control. Answer: Network policies in OpenShift allow fine-grained control over network traffic within the cluster. They enable administrators to define rules that specify how pods can communicate with each other based on various criteria, such as source and destination IP addresses, port numbers, protocols, and labels.
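
A minimal NetworkPolicy illustrating the idea; the labels and port are hypothetical:

```yaml
# Allow only frontend pods to reach backend pods on TCP 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend           # policy applies to backend pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only traffic from frontend pods is allowed
    ports:
    - protocol: TCP
      port: 8080
```

Once a pod is selected by any policy, all traffic not explicitly allowed is denied, which is what enables namespace-level segmentation.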

CI/CD and Service Mesh

Which tool does OpenShift use to perform GitOps operations on its cluster?

Answer: Red Hat OpenShift GitOps uses Argo CD to maintain cluster resources. Argo CD is an open-source, declarative GitOps tool for the continuous delivery of applications. Red Hat OpenShift GitOps implements Argo CD as a controller so that it continuously monitors application definitions and configurations defined in a Git repository. Argo CD then compares the specified state of these configurations with their live state on the cluster.
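
As a sketch, an Argo CD Application resource ties a Git repository to a target namespace; the repository URL, paths, and namespaces below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: openshift-gitops          # namespace used by the GitOps operator
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git  # placeholder repo
    targetRevision: main
    path: overlays/prod                # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                      # delete resources removed from Git
      selfHeal: true                   # revert manual drift on the cluster
```

With automated sync enabled, any commit to the repository is reconciled into the cluster without manual intervention.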

Which CI/CD tool can you use on OpenShift?

Answer: OpenShift uses Red Hat OpenShift Pipelines which is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
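
As a hedged sketch of the Tekton building blocks mentioned above, a trivial Task might look like this; the name and image are illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello            # example task name
spec:
  steps:
  - name: echo
    image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
    script: |
      #!/bin/sh
      echo "Hello from Tekton"
```

Tasks like this are composed into Pipelines, and a PipelineRun executes the pipeline with concrete parameters and workspaces.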

What is Service Mesh?

Answer: Based on Istio, OpenShift Service Mesh enhances application observability, traffic management, and security within microservices architectures. It provides features like traffic routing, load balancing, circuit breaking, fault injection, and secure communication between services, improving reliability and resilience of distributed applications.
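
The traffic-management capability can be illustrated with an Istio VirtualService that splits traffic between two service versions; the host and subset names are hypothetical:

```yaml
# Weighted canary routing: 90% of traffic to v1, 10% to v2
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews              # example service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1           # subsets are defined in a DestinationRule
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Shifting the weights gradually toward v2 is a common way to roll out a new version while limiting blast radius.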

What is the difference between Service Mesh and API Gateway?

Answer: Service mesh is well-suited for managing communication between microservices, ensuring reliability, resilience, and security in distributed systems. It is particularly useful in complex, dynamic environments with a large number of interconnected services.

An API gateway is commonly used for exposing APIs to external clients, managing API traffic, enforcing security policies, and providing a unified interface for clients to access backend services. It is suitable for scenarios where external clients need to consume APIs in a controlled and secure manner.

OpenShift Security

  • What are some key security features and capabilities provided by OpenShift? Answer: OpenShift provides several security features and capabilities, including:
    • Role-Based Access Control (RBAC): It enables fine-grained access control by defining roles and permissions for users and service accounts.
    • Image Security Scanning: OpenShift can scan container images for known vulnerabilities and enforce security policies during the build and deployment process.
    • Pod Security Policies: These policies define security constraints for pods, such as privileged access, host namespace usage, and container capabilities.
    • Network Policies: They allow administrators to define rules for network traffic between pods, providing network segmentation and access control.
    • Security Context Constraints (SCCs): SCCs enforce security settings and restrictions for pods, controlling aspects like user permissions, container runtime privileges, and volume access.
  • How does OpenShift handle authentication and identity management? Answer: OpenShift supports various authentication methods, including:
    • Integration with external authentication providers, such as LDAP, Active Directory, or OAuth.
    • Built-in authentication providers, including the HTPasswd and the OpenShift OAuth server.
    • Integration with identity providers, such as Keycloak or Red Hat Single Sign-On (SSO), for enhanced authentication and authorization capabilities.
    OpenShift also supports user and group management through Role-Based Access Control (RBAC) and integrates with external identity and access management systems for centralized user management.
  • Explain the concept of Pod Security Policies (PSPs) in OpenShift and how they enhance security. Answer: Pod Security Policies (PSPs) define a set of security constraints that must be met for pods to be admitted to the cluster. PSPs enforce security measures, such as limiting privileged access, controlling host namespace usage, and restricting container capabilities. Note that upstream Kubernetes has deprecated and removed PSPs; in OpenShift, Security Context Constraints (SCCs) fill this role. By defining and applying these constraints, administrators can ensure that pods adhere to security best practices, mitigating potential security risks and enforcing a secure runtime environment.
  • What are the best practices for securing container images in OpenShift? Answer: To secure container images in OpenShift, consider the following best practices:
    • Use trusted base images from reputable sources.
    • Regularly update and patch base images and application dependencies.
    • Scan container images for vulnerabilities using OpenShift’s image security scanning feature or third-party tools.
    • Implement image signing and verification to ensure the integrity and authenticity of images.
    • Follow container security best practices, such as running containers as non-root users, using read-only file systems, and implementing least privilege principles.
  • How does OpenShift handle secure communication between components within the cluster? Answer: OpenShift ensures secure communication between cluster components through several mechanisms:
    • TLS (Transport Layer Security) encryption: OpenShift uses TLS to encrypt communication between components, including API requests, internal cluster communication, and inter-node communication.
    • Certificate management: OpenShift manages X.509 certificates for secure communication and authentication between components and users.
    • Service mesh integration: OpenShift can integrate with service mesh technologies like Istio to enhance secure communication and observability between microservices within the cluster.
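
The pod-level restrictions that SCCs enforce can be illustrated with a hardened pod spec; the name and image are placeholders:

```yaml
# Pod spec compatible with a restricted security profile
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
spec:
  securityContext:
    runAsNonRoot: true                   # refuse to start root containers
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
    command: ["sleep", "infinity"]       # placeholder workload
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                    # drop all Linux capabilities
```

Settings like these align with the least-privilege practices listed above and with what the default restricted SCC expects.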


This article walked through some of the most common interview questions for Red Hat OpenShift. If you want a comprehensive OpenShift cheat sheet, check this article: Openshift Cheatsheet for DevOps