Kubernetes, Kontainers and Kubernetes-native in 2018

  • Spotlight on Operational Challenges of Running Kubernetes-native Applications
  • Kubernetes on Public Cloud is The Winner. Kubernetes on Private Cloud Not Yet
  • Early Majority Asks – What Applications Do We Run on K8s?
  • A Debate Heats Up – To Pod or To Lambda
  • Containers Will Be Spelt “Kontainers” And Cloud-native Will Have a Reincarnation Called Kubernetes-native

Image Credit: Kubecon 2017 Keynote: A Community of Builders: CloudNativeCon Opening Keynote – Dan Kohn

Recap

Kubernetes has emerged as the dominant container orchestration platform. In 2017 it pretty much slayed all of its competition, including Docker Swarm, DC/OS, and AWS ECS. Containers are a convenient pattern that simplifies packaging, shipping, and running software applications, particularly “microservices” applications. But containers come with the headaches of managing underlying infrastructure abstractions such as network namespaces, routing tables, persistent storage volume mounts, noisy/bad container neighbors, etc. So, if you have lots of containers, you will have lots of problems. This is where Kubernetes comes in.

From a business perspective, Kubernetes, Kubernetes-native applications, Kontainers, etc. all essentially promise that software development teams will be able to deliver features faster and address business needs more quickly. However, if your software velocity is slow mostly because of organizational and process-related issues, these technologies may not result in any benefits.

With that in mind, let’s take a closer look at the Kubernetes 2018 outlook.

Realizing Operational Challenges of Running Kubernetes-native Applications

Common wisdom in the valley is that building software is only 10% of the challenge; the other 90% lies in running, monitoring, securing, and maintaining (i.e. operating) it. The microservices architectures of Kubernetes-native applications are no exception. 2018 will be the year when the operational challenges of Kubernetes-native applications take center stage. The progress made in 2017 points to the following operational challenges:

  1. Service Mesh: In Kubernetes-native applications almost every service depends heavily on a large number of other services, and the dominant mode of dependency is the network (i.e. API calls). Hence the problems of service discovery, routing service calls, fault-tolerant circuit breaking, handling timeouts and retries, etc. are omnipresent in Kubernetes-native applications. And indeed, one of the areas of focus at KubeCon (Dec 2017) was the service mesh, i.e. a network of proxies that handles these inter-service communication concerns without requiring every service author to implement them; a minimal sketch of that retry and circuit-breaking behavior appears right after this list. (Read: Matt Klein’s blog on load balancers and proxies for modern applications)
  2. Observability: Kubernetes-native applications can be thought of as a graph or map where each node represents a service and edges represent communication/dependencies. If such a visualization is absent, every operational aspect becomes daunting. If a service is having issues, the entire dependency chain is impacted, yet it is very hard to identify all the services in that chain. This challenge arises every time there is a bad deployment or an incident. Additionally, now that a majority of the business logic relies on inter-service API calls, it is crucial to monitor the golden signals of these interactions: latency, throughput, and error rates. (Read: Netsil’s blog on observability)
  3. Security: Kubernetes has always put security first. Kudos to the developers and thought leaders for baking in TLS between the CLI, kubelet, and master API endpoints since the very early days. In 2016 and 2017 we saw RBAC and network policies arrive as major steps toward security. However, there is still a significant way to go on the security front. At the core, the namespace and cgroup underpinnings of containers are very “thin boundaries” from a security perspective (as opposed to the VM or good ol’ bare-metal OS). That problem still needs to be addressed; yes, signed and trusted Docker images were the obvious first step in that direction, but developers pull in open source packages all the time, and beneath a signed image something dangerous could be lurking that crosses container boundaries without much difficulty. (Read: Rkt examples of security challenges)
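
To make the service mesh item above concrete, here is a minimal sketch, in Python, of the timeout, retry, and circuit-breaking behavior that a mesh proxy such as Envoy provides out of process so individual service authors do not have to re-implement it. The thresholds and the service URL are hypothetical; a real mesh applies this transparently to every call.

```python
import time
import urllib.request

class CircuitBreaker:
    """Retry with a timeout, then fail fast after repeated failures."""

    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit was opened

    def call(self, url, timeout=1.0, retries=2):
        # While the circuit is open and the cool-down has not elapsed, fail fast
        # instead of piling more load onto an unhealthy upstream service.
        if self.opened_at is not None and time.time() - self.opened_at < self.reset_after:
            raise RuntimeError("circuit open: failing fast")
        last_error = None
        for _ in range(retries + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    self.failures, self.opened_at = 0, None  # success closes the circuit
                    return resp.read()
            except Exception as err:  # timeouts, connection errors, 5xx, ...
                last_error = err
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.time()  # open the circuit
        raise last_error

# Usage (hypothetical service URL):
# breaker = CircuitBreaker()
# items = breaker.call("http://inventory-svc:8080/api/items")
```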

On the security of inter-service communications, the DMZ concept has already proven obsolete, so the state of the art is micro-segmentation, or security groups. Today, the billion-dollar revenue claims of VMware NSX and the fact that security groups are fundamental building blocks in every public cloud are a testament that micro-segmentation is crucial. However, IP-level security does not work for containers, which change their ports and IP addresses with every incarnation. Moreover, modern attacks already piggyback on existing, sanctioned communication paths. In a nutshell, deeper, application-aware security for inter-service communication will be needed. (Read: Project Cilium for HTTP-aware security).
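One answer to the churn of container IPs is to express policy in terms of labels rather than addresses. As a minimal sketch (the namespace, labels, and port below are hypothetical), a label-based Kubernetes NetworkPolicy created with the official Python client allows only frontend pods to reach checkout pods, no matter which IPs the pods land on. Note this is still L3/L4 segmentation; the HTTP-aware policies mentioned above (e.g. Cilium) go a layer deeper.

```python
# Minimal sketch: micro-segmentation by label instead of IP, using the official
# Kubernetes Python client. Namespace, labels, and port are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-checkout"),
    spec=client.V1NetworkPolicySpec(
        # Select the pods this policy protects by label, not by IP address.
        pod_selector=client.V1LabelSelector(match_labels={"app": "checkout"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # Only pods labeled app=frontend may connect, and only on port 8080.
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                )],
                ports=[client.V1NetworkPolicyPort(port=8080, protocol="TCP")],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="shop", body=policy)
```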

Kubernetes on Public Cloud is The Winner. Kubernetes on Private Cloud Not Yet

In 2017, we saw every major public cloud embracing Kubernetes, including AWS and Azure, as well as private cloud vendors such as VMware and Pivotal. The fastest and most reliable path to getting Kubernetes-native applications into production will be leveraging the K8s-as-a-service offerings of the public cloud vendors. GKE naturally has a huge lead there, with polished integrations for handling networking, volumes, and ingress controllers. AWS is the incumbent cloud leader and one of the fastest-moving companies in terms of meeting customer needs, and Azure is making huge strides with its Deis acquisition and with Brendan Burns, one of the founding engineers of Kubernetes, now in the Azure camp.

Kubernetes in the private cloud will suffer, though, from the simple fact that the private cloud itself still largely struggles in reality. Let’s first establish that a virtualized environment is not a cloud. The breadth of services, high SLAs, and comprehensive APIs and ecosystems offered by public clouds give them an advantage over any virtualized private datacenter, however close to a cloud it tries to be. Even if good APIs, automation, RBAC, authentication, etc. are addressed in a private cloud, there are still big gaps such as object storage (S3) or a reliable Database-as-a-Service (RDS). Where does the “state” get stored, given that the Kubernetes-native paradigm encourages building stateless apps that push state out to services such as S3, Spanner, or Cloud SQL? Then there is the networking challenge. This talk by Kelsey Hightower on “Container Networking” illustrates the inadequacies of the private cloud with regard to Kubernetes-native applications.
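To illustrate what “pushing the state out” means in practice, here is a minimal, hypothetical sketch (the bucket and key layout are made up) of a stateless service persisting its data to object storage instead of the pod’s filesystem, so any replica can serve any request. In a private cloud, the missing piece is often a reliable, S3-compatible endpoint to point this at.

```python
# Minimal sketch: a "stateless" service keeps no data on the pod's filesystem;
# it pushes state to object storage (S3 via boto3 here) so any replica can
# serve any request. Bucket name and key layout are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-state"  # hypothetical bucket

def save_cart(user_id: str, cart: dict) -> None:
    """Persist a user's cart outside the pod."""
    s3.put_object(
        Bucket=BUCKET,
        Key=f"carts/{user_id}.json",
        Body=json.dumps(cart).encode("utf-8"),
    )

def load_cart(user_id: str) -> dict:
    """Read the cart back from object storage, from any replica."""
    obj = s3.get_object(Bucket=BUCKET, Key=f"carts/{user_id}.json")
    return json.loads(obj["Body"].read())
```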

Early Majority Asks – What applications to run on K8s?

When all is said and done, what do you build on K8s? One challenge is that much of your workforce is baby-sitting what was built in past decades, and those older applications don’t work well with the modern “stateless”, “ephemeral”, “CI/CD” paradigms of the Kubernetes world. So it will likely be greenfield applications and services, built from scratch, that land on K8s first, though mechanisms to discover and interact with the old world will still be needed. As K8s enters the early majority, the early adopters will continue to present their use cases and help pave the way in these conversations. As an example, here is a brilliant talk from Kubecon 2017 describing the challenges of porting existing legacy applications to the Kubernetes-native landscape.

Image Credit: Josef Adersberger, QAware, “The Good, the Bad and the Ugly of Migrating Hundreds of Legacy Applications to Kubernetes”

To Pod or To Lambda

While you were reading the sections above, a new paradigm was already brewing hot in the market: serverless computing. While technically not server-less, this paradigm essentially takes your “function code” and schedules it to run on servers. There is no need to baby-sit schedulers or worry about routing calls, load balancing, etc.; the “FaaS” platform takes care of it. If this walks like a PaaS and quacks like a PaaS then perhaps it is one, but with fewer constraints than the PaaS of yesteryear and at the larger scale of the cloud. Serverless computing is still in its nascent stages. The debate will be whether Kubernetes is needed at all, or whether energy should instead be focused on the application logic, describing it to a Lambda service that takes care of operationalizing it. For an excellent article on this topic, you can read Karl Stoney from Thoughtworks.
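To show the appeal of the Lambda side of the debate, here is a minimal, hypothetical handler in the AWS Lambda Python style. The event shape and response are made up; the point is simply that the platform, not your team, handles scheduling, scaling, and routing.

```python
# Minimal sketch of the FaaS model: you supply only the function body and the
# platform (AWS Lambda in this example) provisions servers, routes invocations,
# and scales out. The event shape below is hypothetical.
import json

def handler(event, context):
    """Entry point invoked by the platform for each request."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```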

Image Credit: Jeff Mangan

Conclusion

Change is the only constant in the technology industry, and the pace of change has been accelerating, particularly with the democratization of computing via public clouds. Kubernetes is the promising layer of democratization across clouds, and it will certainly have a significantly disruptive impact on the way applications are designed and run in the coming years. Of course, it runs the risk of getting disrupted itself by the likes of Lambda services, or by AI programs that write and run software on their own!

Best wishes from the Netsil family; we look forward to engaging with you in your tech endeavors in 2018 and beyond.

 
