OpenShift

OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform, an on-premises platform as a service built around Docker containers that are orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products deliver this platform in different environments: OKD serves as the community-driven upstream (much as Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and OpenShift Dedicated is the platform offered as a managed service.

The OpenShift Console offers developer- and administrator-oriented views. Administrator views let you monitor container resources and container health, manage users, work with Operators, and so on. Developer views are organized around working with application resources within a namespace. OpenShift also provides a CLI that supports a superset of the actions the Kubernetes CLI provides, with a platform-agnostic focus on developer experience and application security. OpenShift helps you develop and deploy applications to one or more hosts. These can be public-facing web applications or backend applications, including microservices and databases.

How OpenShift Works

OpenShift is a layered system in which each layer is tightly bound to the others through a Kubernetes and Docker cluster. Its architecture is designed to support and manage Docker containers, which are hosted on top of all the layers using Kubernetes. Unlike the earlier OpenShift V2, OpenShift V3 supports containerized infrastructure: Docker handles the creation of lightweight Linux-based containers, and Kubernetes orchestrates and manages those containers across multiple hosts.

Components of OpenShift

A key part of the OpenShift architecture is managing containerized infrastructure with Kubernetes, which is responsible for the deployment and management of that infrastructure. A Kubernetes cluster can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.

Kubernetes Master Machine Components

Etcd − It stores configuration information that can be used by each of the nodes in the cluster. It is a highly available, distributed key-value store that can be spread across multiple nodes. Because it may hold sensitive information, it should be accessible only to the Kubernetes API server.
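The two behaviors the cluster relies on from etcd are storing keyed configuration and notifying watchers when a key changes. The sketch below illustrates that key-value-with-watch pattern in plain Python; it is an illustration only, not the real etcd API, and the `TinyStore` class and `/registry/...` key are invented for the example.

```python
from collections import defaultdict

class TinyStore:
    """Toy key-value store with etcd-style watch notifications."""

    def __init__(self):
        self._data = {}
        self._watchers = defaultdict(list)  # key -> list of callbacks

    def put(self, key, value):
        """Store a value and notify anyone watching the key."""
        self._data[key] = value
        for callback in self._watchers[key]:
            callback(key, value)

    def get(self, key):
        return self._data.get(key)

    def watch(self, key, callback):
        """Register a callback fired on every write to the key."""
        self._watchers[key].append(callback)

store = TinyStore()
events = []
store.watch("/registry/pods/web", lambda k, v: events.append((k, v)))
store.put("/registry/pods/web", {"phase": "Running"})
```

The watch mechanism is what lets other control-plane components react to state changes instead of polling.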

API Server − The API server exposes the Kubernetes API, through which all operations on the cluster are performed. Because it implements a standard RESTful interface, different tools and libraries can readily communicate with it. A kubeconfig file supplies the connection and credential details that client-side tools use to reach it.
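Because the API is plain REST over HTTPS, any HTTP client can address cluster resources once it knows the resource path. The sketch below only assembles the well-known path for listing a namespaced core-group resource; the server hostname and namespace are made up, and no authentication or real request is performed.

```python
def build_list_url(server, namespace, resource):
    """Return the REST URL for listing a namespaced core-group
    resource (e.g. pods, services) on a Kubernetes API server."""
    return f"{server}/api/v1/namespaces/{namespace}/{resource}"

# Hypothetical API server endpoint, for illustration only.
url = build_list_url("https://api.cluster.example:6443", "demo", "pods")
```

In practice tools like `oc` and `kubectl` build exactly these kinds of paths for you, adding the credentials from your kubeconfig.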

Controller Manager − This component runs most of the controllers that regulate the state of the cluster. It can be thought of as a daemon running in a non-terminating loop that collects information and sends it to the API server. It watches the shared state of the cluster and makes changes to bring the current state toward the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller; the controller manager runs these different kinds of controllers to handle nodes, endpoints, and so on.
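The heart of every controller is the reconcile step: compare desired state with observed state and compute the actions that close the gap. Here is a minimal sketch of that pattern for a replication-style controller; the pod names and the list-of-actions representation are simplifications invented for this example.

```python
def reconcile(desired_replicas, running_pods):
    """One reconcile pass: return the actions needed to move the
    observed state (running_pods) toward the desired replica count."""
    actions = []
    if len(running_pods) < desired_replicas:
        # Too few pods: plan creations to make up the difference.
        for i in range(desired_replicas - len(running_pods)):
            actions.append(("create", f"pod-{len(running_pods) + i}"))
    elif len(running_pods) > desired_replicas:
        # Too many pods: plan deletions of the surplus.
        for pod in running_pods[desired_replicas:]:
            actions.append(("delete", pod))
    return actions

# One pass of the non-terminating loop: 3 replicas desired, 1 running.
plan = reconcile(3, ["pod-0"])
```

The real controller manager runs passes like this continuously, so the cluster converges on the desired state even after failures.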

Scheduler − A key component of the Kubernetes master, the scheduler is the service responsible for distributing the workload. It tracks resource utilization on the cluster nodes and places each workload on a node that has the resources available to accept it. In other words, it is the mechanism that allocates pods to available nodes.
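Placement can be pictured as a filter-and-score problem: discard the nodes without room, then pick the best of the rest. The sketch below uses free CPU as the only score; the real kube-scheduler applies many more predicates and priorities, and the node names and capacities here are invented.

```python
def schedule(pod_cpu, nodes):
    """Pick a node for a pod requesting pod_cpu millicores.
    nodes maps node name -> free CPU millicores; returns the
    chosen node name, or None if nothing fits."""
    # Filter: keep only nodes with enough free CPU.
    candidates = {n: free for n, free in nodes.items() if free >= pod_cpu}
    if not candidates:
        return None
    # Score: prefer the node with the most free CPU remaining.
    return max(candidates, key=candidates.get)

choice = schedule(500, {"node-a": 300, "node-b": 1200, "node-c": 800})
```

Here node-a is filtered out (only 300m free), and node-b wins the scoring step.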

Kubernetes Node Components

The following are the key components of the node server, which are necessary for communicating with the Kubernetes master.

Docker − The first requirement of each node is Docker, which runs the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service − This is a small service on each node that relays information to and from the control plane. It interacts with the etcd store to read configuration details and write values, and it communicates with the master components to receive commands and work. The kubelet process then takes responsibility for maintaining the state of the workloads on its node, managing pods, volumes, secrets, the creation of new containers, and container health checks.
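The kubelet's core duty can be reduced to a sync step: given the pods the control plane has assigned to this node and the containers actually running, decide what to start and what to stop. This is a deliberately simplified sketch with invented pod names, not the kubelet's real interface.

```python
def sync_node(assigned_pods, running):
    """One kubelet-style sync pass: compare pods assigned to this
    node with what is running, and return (to_start, to_stop)."""
    start = sorted(set(assigned_pods) - set(running))  # assigned but absent
    stop = sorted(set(running) - set(assigned_pods))   # running but unassigned
    return start, stop

# The control plane assigned "web" and "db"; "old-job" is stale.
start, stop = sync_node(["web", "db"], ["db", "old-job"])
```

As with the controllers on the master, this pass repeats continuously so the node's actual state tracks its assigned state.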

Kubernetes Proxy Service − This is a proxy service that runs on each node and helps make services available to external hosts by forwarding requests to the correct containers. It is capable of carrying out primitive load balancing, and it ensures that the networking environment is predictable and accessible while remaining isolated. It also manages network rules, port forwarding, and so on.
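The primitive load balancing mentioned above amounts to rotating requests across a service's pod endpoints. The sketch below shows that round-robin idea; the endpoint addresses are invented, and the real kube-proxy works at the level of iptables/IPVS rules rather than an in-process object like this.

```python
from itertools import cycle

class TinyProxy:
    """Toy stand-in for kube-proxy's round-robin forwarding."""

    def __init__(self, endpoints):
        self._rr = cycle(endpoints)  # endless rotation over pod addresses

    def route(self):
        """Pick the backend pod for the next incoming request."""
        return next(self._rr)

# Two hypothetical pod endpoints backing one service.
proxy = TinyProxy(["10.1.0.4:8080", "10.1.0.7:8080"])
picks = [proxy.route() for _ in range(3)]
```

Because the rotation wraps around, the third request lands back on the first pod.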

Why Use OpenShift?

OpenShift provides a common platform for enterprise teams to host their applications in the cloud without worrying about the underlying operating system, which makes it very easy to develop, deploy, and use applications in the cloud. One of its key features is providing managed hardware and network resources for all kinds of development and testing, and PaaS developers have the freedom to design the environment they need to their own specifications.

OpenShift also offers an on-premises version, originally known as OpenShift Enterprise (now OpenShift Container Platform). In OpenShift, developers can design both scalable and non-scalable applications, and these designs are implemented using HAProxy servers.

Features

There are multiple features supported by OpenShift. A few of them are −

  • Multiple Language Support
  • Multiple Database Support
  • Extensible Cartridge System
  • Source Code Version Management
  • One-Click Deployment
  • Multi Environment Support
  • Standardized Developers’ workflow
  • Dependency and Build Management
  • Automatic Application Scaling
  • Responsive Web Console
  • Rich Command-line Toolset
  • Remote SSH Login to Applications
  • REST API Support
  • Self-service On Demand Application Stack
  • Built-in Database Services
  • Continuous Integration and Release Management
  • IDE Integration
  • Remote Debugging of Applications

Products

OpenShift Container Platform

  • OpenShift Container Platform (formerly known as OpenShift Enterprise) is Red Hat's on-premises private platform as a service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS (RHCOS).

OKD

  • OKD, known until August 2018 as OpenShift Origin (the name now stands for the Origin Community Distribution of Kubernetes), is the upstream community project used in OpenShift Online, OpenShift Dedicated, and OpenShift Container Platform. Built around a core of Docker container packaging and Kubernetes container cluster management, OKD is augmented with application lifecycle management functionality and DevOps tooling, providing an open-source application container platform.

Red Hat OpenShift Online

  • Red Hat OpenShift Online (RHOO) is Red Hat's public cloud application development and hosting service which runs on AWS and IBM Cloud.

OpenShift 3 is built around Kubernetes. It can run any Docker-based container, but OpenShift Online is limited to running containers that do not require root.

OpenShift Dedicated

  • OpenShift Dedicated is Red Hat's managed private cluster offering, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux. It has been available on the Amazon Web Services (AWS), IBM Cloud, Google Cloud Platform (GCP), and Microsoft Azure marketplaces since December 2016.

Case Study: Cisco

Cisco is a leading IT company best known for its networking products. Headquartered in California, Cisco develops, manufactures, and sells networking hardware, telecommunications equipment, and other IT services and products.

Challenge

Cisco’s success depends on its ability to quickly deliver innovative IT products and solutions to customers; delays can cost the company business. To speed time to market and improve satisfaction, Cisco needed to keep its 1,000+ developers fully engaged in designing and building applications, guarding against high employee turnover, low productivity, and slow response times.

Solution

To meet these demanding requirements, Cisco partnered with Red Hat to build its Lightweight Application Environment (LAE). Running on OpenShift Container Platform, the LAE supports 100+ applications that power a variety of business functions and gives developers on-demand access to the infrastructure, operating system, middleware, and system functions they need to develop applications without any manual provisioning.

Results

The new LAE platform has become a catalyst for innovation and progress, enabling developers at Cisco to get what they need, when they need it. It has reduced time to market, streamlined infrastructure using containers, and increased operational efficiency.

Developers no longer have to wait months for a project to be provisioned: they push a button, and the service is provisioned within a matter of minutes. More productivity means customers get innovative products and services faster. The solution also reduces demands on limited IT resources and gives developers more time to focus on creative projects, increasing employee satisfaction.

Thank you!
