For many years, the traditional IT model was to build out separate silos of technology covering servers/virtualization, storage, and networking. Then, in 2009, Cisco, EMC, and NetApp entered the server solution market with what was then called converged infrastructure (CI). Cisco and EMC manufactured Vblocks, while Cisco and NetApp released FlexPods. These were compute, network, and storage building blocks that were simpler to install, operate, manage, and support than the traditional separate rack systems of the 1990s. CI systems were tailored with application-specific configurations, such as for VDI or Oracle databases. These solutions were sort of like Ragu pasta sauce: everything was included. Oracle Exadata is another good example of a converged application infrastructure.
Then, around 2010, a group of vendors came to market with an even newer idea: hyper-converged infrastructure (HCI). Nutanix, SimpliVity (now HPE), Scale Computing, and others realized that data center systems could be built from even smaller IT building blocks by combining servers, networking, machine virtualization software, and direct-attached disk into a single 2U or 4U rack enclosure, with no external shared storage array. These hyper-converged systems scale out by adding more pre-configured nodes: add a node and, voilà, the cluster automatically has more storage, network, and compute capacity. If compute or storage resources run short, just add another node.
The trade-off is that compute and storage scale together in a fixed ratio, so compute-intensive workloads may get far more storage than they need, while storage-intensive workloads may get far more computing power. Worse, unused compute and storage in an HCI system really could not be used elsewhere in the data center. The real beauty of the HCI design, however, is that it let us move applications (virtual machines or containers) along with their data to and from the public cloud. Enter cloud native computing.
A cloud native infrastructure is one that allows organizations to quickly add, remove, and change business services. It must be reliable, software-driven, and easy to integrate into the cloud. A cloud native infrastructure also means flexible hardware and software resource pools, which simplify management and increase agility. We recommend a bare metal, container-driven infrastructure optimized for the networking and storage demands of Kubernetes. Network and storage traffic should be offloaded to dedicated PCIe processors, and the platform should automatically adjust network and storage resource allocations across the cluster to meet Quality of Service levels and satisfy each container's I/O requirements.
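In Kubernetes terms, each container's resource requirements are declared directly in the pod spec, and the requests and limits you set determine the pod's Quality of Service class. A minimal sketch (the pod name, image, and values are illustrative, not from the original text):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25     # illustrative container image
    resources:
      requests:           # guaranteed minimum the scheduler reserves
        cpu: "500m"
        memory: 256Mi
      limits:             # hard ceiling enforced at runtime
        cpu: "1"
        memory: 512Mi
```

Because requests and limits are set but not equal, Kubernetes assigns this pod the Burstable QoS class; setting requests equal to limits would make it Guaranteed, while omitting both would make it BestEffort.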
Bare-metal Kubernetes, in practical terms, means running Kubernetes without a hypervisor or virtual machine layer underneath it. By adopting a bare-metal, Kubernetes-optimized infrastructure, Kubernetes can run containers, VMs, and non-virtualized applications alike across the hybrid cloud data center.
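As one concrete illustration of VMs running under Kubernetes, the open-source KubeVirt project extends the API with a VirtualMachine resource so VMs are scheduled alongside containers. A minimal sketch, assuming KubeVirt is installed on the cluster (the VM name and disk image are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                    # hypothetical VM name
spec:
  running: true                    # start the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        containerDisk:             # ephemeral root disk pulled as a container image
          image: quay.io/containerdisks/fedora:latest
```

The VM is then managed like any other Kubernetes object, with `kubectl get virtualmachines` and the usual label, scheduling, and networking machinery.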
In the past, IT organizations managed and provisioned VMs and their storage resources, but in the future we believe end-user application owners will drive IT application resource provisioning and management through Kubernetes. Kubernetes will orchestrate everything an application needs, enable cloud native applications to run anywhere from on-premises to multiple cloud environments, and give IT the freedom to run applications in the best location for performance, compliance, or cost.
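For example, instead of filing a ticket with IT for storage, an application owner can declare a PersistentVolumeClaim and let Kubernetes dynamically provision the volume. A minimal sketch (the claim name, storage class, and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data        # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce             # mountable read-write by a single node
  storageClassName: fast-ssd  # assumed cluster-defined storage class
  resources:
    requests:
      storage: 20Gi
```

The storage class maps the request onto whatever backend the cluster provides, so the same claim works on-premises or in a public cloud, which is exactly the portability argument above.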
Thanks to Kubernetes, these hybrid cloud platforms not only enable workload portability but also make it possible to scale workloads across disparate environments. Going forward, Kubernetes is the universal control plane that can manage containers, virtual machines, legacy workloads, and modern applications alike.
For more information, download my new white paper – The Rise of Being Cloud Native.