OpenShift vs Kubernetes
Should you go with OpenShift or Kubernetes? Before you get stuck in an indecisive limbo, read this post to find out.
When I initially explored OpenShift (circa version 3.6), I had a fair idea that it had many components in addition to Kubernetes, but wasn’t sure what it added on top of it. In this post, we shall explore the differences between the two and what factors you would want to consider to choose one over the other.
Time for a history lesson
OpenShift Origin is now called OKD (OpenShift Kubernetes Distribution), and from their website:
OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams.
Surprisingly, not many folks know that Kubernetes is built into OpenShift. It wasn’t always that way. The previous generation of OpenShift was based on the concept of gears. Gears provided the isolation needed for multi-tenant application setups, and were similar in many ways to Docker containers. The equivalent of Docker images was an abstraction called cartridges. When containers became mainstream, all these primitive technologies were replaced by Docker.
Sometime in 2014, Google open sourced Kubernetes, a container orchestration platform built on the lessons of its internal system Borg. Around the same time, Red Hat was shopping for solutions to orchestrate and manage containers in the new generation of OpenShift. They decided to collaborate on Kubernetes. Thus, Kubernetes became the de facto container orchestration component of OpenShift from there on. Now, Red Hat is one of the main contributors to the project.
The definition of Kubernetes from their website:
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
Now, clearly, there is an overlap between both tools. But like OKD’s website says, it brings a lot more to the mix. Before we dive into what that “lot more” is, I’d want to get one thing out of the way. Plain Kubernetes solely focuses on providing container orchestration, nothing more. You can build other stuff on top of it by configuring Kubernetes, but that’s mostly all there is to it. Why do you need container orchestration? If you have applications grouped into one or many containers of different flavors, you soon realize that they span across nodes in your cluster. You have to logically group them by application, figure out where to boot the next container, optimize your infrastructure usage, watch for downtime and take action on it, etc. Kubernetes does all this for you. Another solution which does container orchestration is Docker Swarm.
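To make the orchestration idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The app name, image and port are hypothetical; the point is that you declare the desired state and Kubernetes handles placement, scaling and recovery:

```yaml
# A minimal (hypothetical) Deployment: Kubernetes keeps three replicas of
# this container running, spreads them across nodes in the cluster, and
# replaces any replica that goes down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app          # logically groups the containers by application
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
          ports:
            - containerPort: 8080
```

You never tell Kubernetes *which* node to run these on; the scheduler decides based on available resources.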
If you use Linux today, you hardly build your own Linux kernel and use the OS. You download some existing distribution out there like Debian or Red Hat. In a similar vein, plain vanilla Kubernetes is hardly used in the real world. People use some distribution of Kubernetes. These distributions vary in terms of the environment they run in (local, cloud, bare metal), the level of support (community, commercial support, licensing) and add-ons. Red Hat OpenShift (with OKD as its community version) is one such distribution of Kubernetes with both community and commercial support, optimized to run on Red Hat flavored infrastructure (RHEL, CentOS, etc.) on mostly any cloud (AWS, Azure, OpenStack) and with tons of add-ons on top of plain Kubernetes.
What OpenShift adds to Kubernetes
The most prominent addition you notice is a web-based UI. This is called the web console and runs as part of the OKD cluster as one of its apps.
The web console is a Node-based application written in AngularJS (that’s right, version 1.x) which mostly complements the command-line utility called oc.
Different build strategies
Kubernetes primarily deals with prebuilt Docker images when it comes to deploying containers. You furnish information like the container registry (accompanied by its credentials if necessary) and a prebuilt Docker image with a namespace and a version/tag, which Kubernetes consumes to deploy your app. In OpenShift, this comes in many variations in addition to the Docker image-based deployment. You can opt for the source-to-image (S2I) strategy, which will inject the source code of your app into a prebuilt base Docker image and deploy it. There’s also the pipeline strategy, which tightly integrates with Jenkins. If you want more simplicity, you can just specify a Dockerfile which OpenShift will consume to create an image and deploy your application.
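A sketch of what the S2I strategy looks like as an OpenShift BuildConfig. The repository URL, app name and builder image here are assumptions for illustration:

```yaml
# Hypothetical BuildConfig using the source-to-image (S2I) strategy:
# OpenShift pulls the source, injects it into the builder image, and
# pushes the resulting image to the internal registry.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest        # prebuilt base image the source is injected into
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest          # where the built image lands
```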
Built-in container registry
OpenShift ships with a built-in container registry. This registry is used internally by OpenShift to store the images required for the various build strategies discussed above. If you want to use different container registries, you can configure that in OpenShift.
Version control integration
OpenShift tightly integrates with your source code management tool, whatever it is (GitLab, GitHub, Bitbucket, etc.). In fact, you can run GitLab inside of OpenShift. Tight integration means support for deployment webhooks, CI and a better developer experience similar to platforms like Heroku. You can add similar solutions on top of a Kubernetes cluster.
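The webhook integration is declared on a build config. A sketch of what the triggers section can look like, with a placeholder secret:

```yaml
# Hypothetical trigger block on an OpenShift BuildConfig: a push to the
# GitHub repository calls a webhook URL exposed by OpenShift, which kicks
# off a new build. The secret value is a placeholder.
triggers:
  - type: GitHub
    github:
      secret: my-webhook-secret
  - type: ConfigChange      # also rebuild when the build config itself changes
```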
Security
Kubernetes has RBAC (Role Based Access Control) features. But honestly, it is hard to tweak the dials and get the right security setup for your cluster. Most teams start with a non-production setup, stick to it and continue using it. OpenShift has security and fine-grained access control baked in from the start, with security policies already defined and done for you. It also provides add-ons to manage users, like identity providers, which are not present in Kubernetes. Also, some build strategies like S2I follow security best practices, like running containers as a non-root user. This is one of the reasons I like OpenShift: I need not worry about security if I’m operating in an OpenShift cluster, as most of the best practices are already factored in. Also, OpenShift has an OAuth server built in.
Resources and API
Both OpenShift and Kubernetes deal with the concept of resources. Let’s first define what resources are and then see the differences between what each solution offers. A pod is an example of a resource, with a bunch of configuration as to what image(s) it runs, what ports it exposes, etc. How you deploy an app is a resource. A resource is essentially a set of specifications on how you deal with things. All resources have 2 things in common: their name and the type of resource. Kubernetes and OpenShift manage a cluster by performing CRUD operations on these resources according to the privileges of the user doing the operation. Though this might sound oversimplified, that’s what these tools do. Because OpenShift operates on top of Kubernetes, it consumes and works on the same set of resources and adds more types of resources.
Some resource types shared between OpenShift and Kubernetes:
- Namespaces. These are called projects in OpenShift, the only difference being that projects add some security annotations on top of the Kubernetes namespace resource.
- Deployments (OpenShift also ships its own variant, the deployment config)
- Persistent volumes and volume claims
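As an example of a shared resource type, a persistent volume claim looks identical in both systems. The name, size and access mode below are illustrative:

```yaml
# A PersistentVolumeClaim, one of the resource types shared by Kubernetes
# and OpenShift: it requests storage without caring which backing volume
# ultimately satisfies it. Values here are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce       # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 5Gi
```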
Some resource types endemic to OpenShift:
- Image streams. Similar to images, but associated with trigger actions, like kicking off a build when the underlying image changes.
- Templates. A blueprint for an app, similar to a Helm chart.
- Build config (how an app or service should be built, with source control information factored in)
- Routes. This is similar to a Kubernetes ingress, but was designed before ingresses became part of Kubernetes.
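A sketch of an OpenShift-specific resource, the route. The hostname and service name are assumptions:

```yaml
# Hypothetical OpenShift Route: exposes the service "my-app" at a public
# hostname via the cluster's router, with TLS terminated at the edge.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.example.com
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge
```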
As I write this, new resource types and abstractions continue to be added to both technologies, but that’s the gist of it. All these resources can be operated on using an API. OpenShift has an API server which works alongside the Kubernetes API server to take in these requests.
Logging and monitoring
OpenShift ships with a built-in EFK (Elasticsearch, Fluentd, Kibana) stack. You can choose whether or not to install it, and you can configure it to consume application logs as well. For Kubernetes, you can technically assemble the same, but it is more of a DIY effort.
Routing and load balancing
This component is cloud-specific in Kubernetes. For instance, you wire up an AWS load balancer if you use EKS. OpenShift has an HAProxy-based router as part of its architecture which takes care of the routing and load balancing aspects.
Installation
There are many ways to install Kubernetes on your infrastructure. You can even choose one of the hosted solutions. Also, because there are different variants of Kubernetes, this sometimes leads to analysis paralysis on which installation to go for. For OpenShift, the one true recommended way is through Ansible. They maintain an excellent set of Ansible playbooks which strikes a fine balance between automation and configurability. One caveat I have to warn about is that OpenShift installation is a complex exercise, even when installing via Ansible. This is partly because of the complexity of the domain and the fact that there are tons of variables to tweak. At the time of writing this, the Ansible configuration file for a minimal recommended production setup is ~1 KLOC.
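To give a flavor of that configuration file, here is a tiny excerpt of the kind of inventory the openshift-ansible playbooks consume (OpenShift 3.x era). The hostnames are placeholders, and a real production inventory runs to hundreds of lines:

```ini
; Hypothetical minimal openshift-ansible inventory excerpt.
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin   ; the community (OKD) deployment type

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com openshift_node_group_name='node-config-compute'
```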
Support
Both OpenShift and Kubernetes have DIY flavors, or community editions, which you can choose to evaluate and stick to. Kubernetes support comes in different forms: from DevOps agencies which tailor-make your Kubernetes cluster according to your needs and provide ongoing support, to hosted Kubernetes versions which provide support at the cluster level but not further up the stack (like connecting with your SCM system, on-demand environment creation, etc.). I’d like to mention here that GitLab integrates nicely with almost any Kubernetes cluster to provide most of what you need.
OpenShift support comes in 4 different forms:
- The community version, OKD, where you are on your own. This was known as OpenShift Origin until version 3.10.
- The OpenShift container platform, which provides Red Hat enterprise-grade support.
- OpenShift Online. A multi-tenant online version of OpenShift with the infrastructure managed by Red Hat, having easy-to-use app templates provided by Red Hat. This is a good choice if you want to quickly try out OpenShift with minimal investment, or want to use OpenShift without managing infrastructure.
- OpenShift Dedicated. The OpenShift container platform running on AWS. Honestly, not sure how different this is from the OpenShift container platform in terms of offerings 🙂
Like most commercial open source projects, the community version is bleeding edge and the enterprise versions are the battle-tested previous stable versions.
Long story short, both projects have very stable and strong support ecosystems, and the underlying technology is backed by the CNCF.
So, which one?
Having used both, I think OpenShift builds a lot of value and abstractions over Kubernetes. But that’s not to say that everyone should use OpenShift. One size does not fit all. If you are testing the waters and totally new to both, I’d recommend starting with Kubernetes. That’s a good initial investment. Because of the weight it carries on its shoulders, OpenShift has a steep learning curve (which is made easier by the excellent documentation available). Once you see yourself spending more time and resources on infrastructure and getting bogged down by decisions, then it’s time to move to OpenShift. You can definitely tailor your DevOps journey with Kubernetes better than with OpenShift. But out of the box, OpenShift has a lot to offer. I wouldn’t want to worry about security or make decisions with the already limited time and energy I have. OpenShift already does all that for me. I can get it running out of the box in 30 minutes or so and start shipping.
Another important criterion is the DevOps literacy of your team. How comfortable are they with building Docker images? When using OpenShift, it is possible to shield people from things like building Docker images for every push, exposing ports, figuring out where the logs pipe to, etc. You can bake all that into your CI/CD process in Kubernetes as well, but again, I want to emphasize the done-for-you aspect of OpenShift here. At the end of the day, OpenShift is a distribution of Kubernetes packaged with a lot of tried and tested goodies.