Kubernetes vs Openshift vs Tectonic: Comparing Enterprise Options (Part II)

In Part 1, we covered a high-level overview of the differences and use cases for Openshift, Tectonic and vanilla Kubernetes. In this post we will take a deep dive and evaluate the following aspects in greater detail:

  • Supported Environments
  • Storage
  • Networking
  • Ease of Operations
  • Application Ecosystem

Openshift vs Tectonic vs vanilla Kubernetes

Supported Environments

Vanilla Kubernetes has a lot of installation options for various environments.

  • Minikube is a single-node cluster for local testing and development.
  • Google Container Engine provides a hosted Kubernetes solution where GCP takes care of maintaining the master.
  • Kops is a solution for installing Kubernetes on AWS.
  • Kubeadm is another tool which makes it easy to install Kubernetes on Linux VMs running Ubuntu 16.04+ or CentOS 7.
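
As a quick illustration, standing up a local cluster with Minikube or bootstrapping a small cluster with kubeadm takes only a handful of commands (a minimal sketch; exact flags and versions vary):

```bash
# Single-node local cluster for development (Minikube)
minikube start

# kubeadm: initialize the control plane on the master node
# (the pod CIDR shown here assumes the flannel network plugin will be installed)
kubeadm init --pod-network-cidr=10.244.0.0/16

# ...then, on each worker node, join the cluster using the token printed by `kubeadm init`
kubeadm join --token <token> <master-ip>:6443
```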

Tectonic has GUI installers for AWS and bare-metal platforms. There are also Terraform installers available for AWS, Bare-Metal, Azure (alpha), VMWare (pre-alpha) and Openstack (pre-alpha). Other cloud support is not specified.

Openshift can be installed in two ways: via RPM packages or via containerized components. Ansible playbooks are also provided, which allow automated installation and can be tuned as required.

Openshift Origin has documented support for AWS, OpenStack, GCE and Azure. Minishift is a small Openshift installation on a single VM which allows a quick setup and is useful for local testing.
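
For example, a local Minishift cluster and an Ansible-driven install look roughly like this (the inventory and playbook paths are illustrative and depend on the openshift-ansible release):

```bash
# Local single-VM Openshift cluster for development
minishift start

# Automated (advanced) install using the openshift-ansible playbooks
git clone https://github.com/openshift/openshift-ansible
ansible-playbook -i /path/to/inventory openshift-ansible/playbooks/byo/config.yml
```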

  • Vanilla Kubernetes: Bare Metal, all clouds; dev workloads on Minikube. No clear steps for a production-ready configuration.
  • Tectonic: Bare Metal, AWS, Azure (alpha), VMWare (pre-alpha), Openstack (pre-alpha). Production-ready clusters out of the box.
  • Openshift: Bare Metal, AWS, GCE, Openstack, Azure. Clear set of guidelines for production best practices.


Storage

Vanilla Kubernetes provides persistent volume support for storage backends such as NFS, iSCSI, Fibre Channel, GlusterFS, AzureDiskVolume, AWS EBS, GCE Persistent Disks, Openstack Cinder, Ceph RBD, vSphere volumes, OpenEBS, Quobyte, Portworx and ScaleIO. As vanilla Kubernetes can be installed across many hardware types, the range of supported storage is correspondingly wide. Vanilla Kubernetes also provides an abstraction in the form of Storage Classes, which hides the storage complexity from the user.
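
As a rough sketch of that abstraction, an administrator defines a StorageClass for a backend and users request storage through a PersistentVolumeClaim that only references the class by name (the AWS EBS provisioner and the names below are illustrative):

```bash
# Admin: define a StorageClass backed by AWS EBS gp2 volumes
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOF

# User: claim storage by class name, without caring about the backend
cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  storageClassName: fast
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
```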

Since Tectonic provides additional features on top of vanilla Kubernetes, all of the above storage backends are supported. Openshift has documented support for most of the above, but support for third-party storage solutions such as vSphere volumes, Quobyte, Portworx and ScaleIO is not mentioned.

Some of the above options are bound to their respective clouds, whereas others are open source and cloud agnostic (such as NFS, GlusterFS, etc.). Cloud-based storage is a good fit when all deployments of a product will run on the same cloud, since there is less maintenance overhead. For a product that may have to be deployed on any cloud or on-premises, a cloud-agnostic solution can be a better fit.

  • Vanilla Kubernetes: NFS, GlusterFS, AzureDiskVolume, AWS EBS, GCE Persistent Disks, Cinder, Ceph, vSphere, OpenEBS, Quobyte, Portworx, ScaleIO
  • Tectonic: NFS, GlusterFS, AzureDiskVolume, AWS EBS, GCE Persistent Disks, Cinder, Ceph, vSphere, OpenEBS, Quobyte, Portworx, ScaleIO
  • Openshift: NFS, GlusterFS, AzureDiskVolume, AWS EBS, GCE Persistent Disks, Cinder, Ceph

Networking

Again, vanilla Kubernetes is the most flexible, as it supports networking plugins such as Cilium, Contiv, Contrail, Flannel, GCE, direct L2 networking (experimental), Nuage VCS, Open vSwitch, Open Virtual Routing, Calico, Romana, Weave and CNI-Genie, and users can choose based on their requirements.

Openshift supports the following plugins: Openshift SDN, Nuage SDN, Flannel, Contiv and F5-BIG-IP.

Tectonic, predictably, ships with flannel, which is CoreOS's networking plugin.

An interesting project in the networking area is the Container Network Interface (CNI). Managed by the CNCF, CNI is designed to be a minimal spec concerned only with configuring network interfaces within Linux containers and removing the networking resources when the container itself is removed. This creates a uniform standard against which various networking plugins can be written.

(image credit: Lee Calcote, Sr. Director, Technology Strategy at SolarWinds)

Kubernetes, Tectonic and Openshift are listed as container runtime adopters of CNI, which makes it easy to swap between networking plugins that follow the CNI specification.
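
As a rough illustration of how small that spec is, a CNI network configuration is just a JSON file that the kubelet picks up from /etc/cni/net.d when run with the CNI network plugin (the reference bridge plugin and the values below are illustrative):

```bash
# Minimal CNI configuration for the reference "bridge" plugin
# (network name, bridge name and subnet are purely illustrative)
cat <<EOF > /etc/cni/net.d/10-mynet.conf
{
  "cniVersion": "0.3.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
```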

  • Vanilla Kubernetes: Cilium, Contiv, Contrail, Flannel, GCE, direct L2 networking (experimental), Nuage VCS, Open vSwitch, Open Virtual Routing, Calico, Romana, Weave, CNI-Genie
  • Tectonic: flannel
  • Openshift: Openshift SDN, Nuage SDN, Flannel, Contiv, F5 BIG-IP

Ease of Operations

In this section, we shall look at two major operational activities: upgrades and RBAC implementation.

Upgrading vanilla Kubernetes depends on the method of installation (direct, kubeadm, hyperkube or juju charms, among others), and hence there are multiple ways to upgrade a cluster, which can get confusing. In addition, there is no well-documented, generic upgrade procedure that is quickly available. Most Kubernetes upgrade guides only cover upgrades within a major version number.
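
That said, for clusters bootstrapped with kubeadm, later kubeadm releases added an in-place upgrade workflow; a hedged sketch (not available in the 1.6-era tooling discussed here, and the target version is illustrative):

```bash
# On the control plane node: check available versions, then upgrade the control plane
kubeadm upgrade plan
kubeadm upgrade apply v1.8.0

# Node binaries (kubelet, kubectl) are upgraded separately via the OS package manager
apt-get update && apt-get install -y kubelet kubectl
```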

Tectonic provides experimental automated upgrades of the Kubernetes components within its cluster, which can be fully automatic or approval based. While upgrades within a release line can be done seamlessly, upgrades between release lines (such as 1.5.x to 1.6.x) are still a work in progress. Also, a clear procedure for manual upgrades is missing.

Openshift provides the ability to automate upgrades via Ansible playbooks, in addition to a well-defined manual upgrade process, although the upgrades need to be performed in sequence.
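
The automated path boils down to a single playbook run against the cluster inventory (the inventory and playbook paths below are illustrative and change with each target release):

```bash
# Automated in-place upgrade using the openshift-ansible playbooks
ansible-playbook -i /etc/ansible/hosts \
  openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml
```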

Vanilla Kubernetes 1.6 has a new RBAC feature which can be enabled by passing the --authorization-mode=RBAC flag to kube-apiserver. There are two types of roles: regular Roles are scoped to a namespace, while ClusterRoles are scoped to the entire cluster. A RoleBinding resource binds roles to subjects, which can be users, groups or service accounts.
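
A minimal sketch of a namespaced Role and its RoleBinding (the user name is illustrative; rbac.authorization.k8s.io/v1beta1 is the API version shipped with Kubernetes 1.6):

```bash
# Grant a single user read-only access to pods in the "default" namespace
cat <<EOF | kubectl create -f -
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane            # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```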

Tectonic has an RBAC implementation similar to vanilla Kubernetes, but it also adds audit logging capability so that audit logs can be streamed to log aggregation backends, which is a requirement in certain industries.

Openshift's RBAC implementation is similar to Kubernetes, with some differences such as policy levels and security context constraints (SCCs).
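
For instance, SCCs are managed through oc adm policy; the sketch below relaxes the default constraint for one service account (project and service account names are illustrative):

```bash
# Allow pods running under the "default" service account in the current project
# to use the more permissive "anyuid" security context constraint
oc adm policy add-scc-to-user anyuid -z default
```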

Application Ecosystem

Helm is a package manager for Kubernetes which allows deployment of pre-configured Kubernetes resources (called charts). The Helm charts repository contains many applications packaged as charts which can be easily deployed on a Kubernetes cluster. This makes it easier to manage and update applications than handling individual resources. In addition to the charts available in the Helm repository, users can create charts for their own applications and use them as a way of distributing their applications. The Helm client can install charts from chart repositories, packaged chart archives, as well as local directories. Both vanilla Kubernetes and Tectonic allow applications to be deployed via Helm charts.
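
A hedged sketch of the typical Helm 2 workflow (the chart and release names are illustrative):

```bash
# Install Helm's in-cluster component (Tiller) and refresh the chart repositories
helm init
helm repo update

# Deploy a packaged application from the public charts repository
helm install stable/mysql --name my-db

# Scaffold, package and share a chart for your own application
helm create mychart
helm package mychart
```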

Openshift has a similar concept called templates, which are used to deploy a list of parameterized objects on an Openshift cluster. Openshift also maintains a library of curated templates, similar to Helm charts. Users can also write their own templates and upload them to their cluster for later deployment.
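
A hedged sketch of working with templates via the oc client (the template file, template name and parameter are illustrative):

```bash
# Upload a template into the current project and instantiate it with a parameter
oc create -f my-template.yaml
oc new-app --template=my-template -p APP_NAME=demo

# Alternatively, render the template locally and pipe the resulting objects to the cluster
oc process -f my-template.yaml -p APP_NAME=demo | oc create -f -
```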

Conclusion

Comparing Openshift, Tectonic and vanilla Kubernetes, we see that in terms of handling storage they are almost at par, with each supporting a wide range of storage backends. In terms of networking, vanilla Kubernetes provides the widest variety of plugins, whereas Tectonic and Openshift support relatively fewer.

Looking at upgrades of Kubernetes itself, Tectonic provides an automated way to upgrade between minor versions, while Openshift provides scripts for performing upgrades from one version to the next and requires upgrades to be handled sequentially. Vanilla Kubernetes still needs a lot of clarity on the upgrade procedure.

Helm charts are a good way of packaging applications for vanilla Kubernetes and Tectonic, and they are agnostic of the underlying layers. Openshift, on the other hand, has its own template mechanism, which may not be as portable.
