Deploying Cloud-Native Applications with Kubernetes


Because the hypervisors and virtual machine managers running on-premises differ from those running in the cloud, building hybrid, cloud-native infrastructure has never been easy. Thankfully, two major trends have changed the face of the hybrid cloud: containers and Kubernetes.

The container runtime became the lowest common denominator for running workloads across physical machines, private clouds, and public clouds, and container images have become the preferred deployment unit for software. In our experience, customers increasingly rely on containers for developing and testing new applications.

In many ways, Docker and other container runtimes have become an alternative to hypervisors. A containerized application developed on macOS can be deployed to Amazon EC2, Google Compute Engine, or Azure VMs with no changes to its code or configuration.
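As a minimal sketch of that portability (the application, base image, and port here are hypothetical), a service can be described once in a Dockerfile and the resulting image runs unchanged on any of those platforms:

```dockerfile
# Hypothetical example: a small Python web service packaged as a container image.
# An image built from this file on a macOS laptop runs unchanged on EC2,
# Google Compute Engine, or an Azure VM.
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
RUN pip install --no-cache-dir flask
EXPOSE 8080
CMD ["python", "app.py"]
```

Build it with `docker build -t myapp .` and run it anywhere a container runtime is available with `docker run -p 8080:8080 myapp`.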

If Docker is the new hypervisor, Kubernetes has become the replacement for proprietary virtual machine managers. With containers as the deployment unit and Kubernetes as the orchestrator, the industry has finally agreed on a standard compute infrastructure layer.
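To make the orchestration role concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the application name, labels, and image registry are illustrative, not from any real deployment). The operator declares the desired state, and Kubernetes continuously reconciles the cluster toward it, replacing the hand-placement of VMs that a virtual machine manager required:

```yaml
# Illustrative Deployment: declare three replicas of a container image;
# the Kubernetes scheduler decides which machines actually run them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` works the same way whether the cluster runs on-premises or in any public cloud.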

The basic idea behind hyper-converged infrastructure (HCI) is that the resources in each server in the data center (memory, storage, processing power, networking bandwidth) can be pooled together rather than operating as isolated components. VMware has dominated this server virtualization market for the last 15 years, and public cloud providers are now adopting VMware's ESXi and the vSphere ecosystem as an on-ramp to the public cloud: AWS, Google, and Azure all run vSphere on their cloud infrastructure.

However, modern containerization, popularized by Docker and industrialized by Kubernetes, now offers a better alternative. Containers are lightweight, easy to deploy, and significantly simpler to maintain and operate. The premise of Kubernetes is that applications and individual services can be distributed throughout a data center, utilizing infrastructure more fluidly. Kubernetes has also taken off because organizations want to avoid vendor lock-in, so it has become essential to multi-cloud architectures, bridging the gap between the infrastructure layer and the containerized applications running on it.

Kubernetes is the de facto infrastructure layer for deploying cloud-native applications. This common layer of abstraction scales from a single container up to multiple clusters in hybrid cloud environments, enabling seamless mobility of applications and data between cloud providers, at any time, with minimal impact. Kubernetes is a cloud-native solution for the next decade of computing.
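That mobility can be sketched in two commands (the context names and manifest file are hypothetical, and assume `kubectl` is already configured with credentials for clusters in two different clouds):

```shell
# Hypothetical kubectl contexts pointing at clusters in two different clouds.
# The same declarative manifest deploys to either one with no changes.
kubectl --context aws-cluster apply -f myapp.yaml
kubectl --context gcp-cluster apply -f myapp.yaml
```

Because the manifest describes the application rather than the underlying machines, moving a workload between providers is a matter of pointing at a different cluster context.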

Thanks to standardization efforts and the conformance program, a developer testing containerized software on their desktop can confidently deploy it to a production environment running Kubernetes. With the container runtime and Kubernetes becoming the gold standard of modern infrastructure, the original promise of the hybrid cloud, compatibility across different environments, can finally be kept. The Cloud Native Computing Foundation (CNCF), the governance body that hosts Kubernetes, has played a key role in ensuring that commercial implementations conform to a standard.

About the Author

Mark Teter, Corporate Technologist

In his role, Mark is responsible for the strategic direction of ASG’s emerging technology offerings and advancing the deployment of present-day hybrid cloud solutions for our customers. Mark has served as Faculty Staff Member at Colorado State University and has written over 50 white papers on subjects including Data Center Ethernet, Linux and Open Source, Storage Area Networks and Computer Virtualization. He published Paradigm Shift in 2006, a book on emerging technologies. He is a Google Certified Professional Cloud Architect.