Because containerized applications lend themselves to distributed environments and use compute, storage, and network resources differently than do legacy applications, they require a layer of infrastructure dedicated to orchestrating containerized workloads.
Kubernetes has emerged as the dominant container orchestrator and is often characterized as the operating system of the cloud. It is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.
A container orchestrator such as Kubernetes:
- assigns containers to machines (scheduling)
- boots the specified containers through the container runtime
- manages upgrades, rollbacks, and the constantly changing state of the system
- responds to failures (e.g., container crashes)
- creates cluster resources such as service discovery, inter-VM networking, and cluster ingress/egress
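The functions above all follow the same declarative pattern: the user states a desired state, and the orchestrator continuously compares it against observed state and acts on the difference. The following is a minimal sketch of that reconciliation loop; the names (`reconcile`, `desired`, `observed`) are illustrative and are not real Kubernetes API objects.

```python
# Sketch of the declarative reconciliation loop at the heart of an
# orchestrator: diff desired state against observed state, emit actions.
# All names here are illustrative, not actual Kubernetes APIs.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("start", name, spec))    # schedule and boot
        elif observed[name] != spec:
            actions.append(("restart", name, spec))  # upgrade or rollback
    for name in observed:
        if name not in desired:
            actions.append(("stop", name))           # remove stale workloads
    return actions

desired = {"web": {"image": "nginx:1.25"}, "db": {"image": "postgres:16"}}
observed = {"web": {"image": "nginx:1.24"}, "old-job": {"image": "batch:1"}}
print(reconcile(desired, observed))
```

In a real cluster this loop runs continuously in controllers, which is what lets the system respond to failures and drift without operator intervention.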
Kubernetes is designed for scalability, availability, security, and portability. It optimizes infrastructure costs by distributing workloads across available resources, and each component of a Kubernetes cluster can be configured for high availability.
Kubernetes functionality evolves rapidly as a result of continuous contributions from its active global community, and a variety of Kubernetes distributions is now available to users. While users are best served by CNCF-certified distributions (conformance enables interoperability), intelligent automation around Kubernetes lifecycle management, coupled with easy integration of storage, networking, security, and monitoring capabilities, is critical for production-grade cloud native environments.
Challenges with going ‘cloud native’
Cloud native technologies are the new currency of business agility and innovation, but configuring, deploying, and managing them is challenging for any enterprise. First, Kubernetes and its ecosystem of cloud native technologies are deep and evolve fast. Furthermore, legacy infrastructure isn’t architected for the way Kubernetes and containers use IT resources. This is a significant impediment to software developers, who often require resources on demand, along with easy-to-use services for their cloud native applications. Finally, most organizations end up operating a mix of on-prem and public cloud-based Kubernetes environments, but benefitting from multicloud flexibility depends on being able to effectively manage and monitor all deployments.
Cloud native best practices
Building an enterprise Kubernetes stack in the datacenter is a major undertaking for IT operations teams. It is essential to run Kubernetes and containerized applications on infrastructure that is resilient and can scale responsively in support of a dynamic, distributed system.
Nutanix hyperconverged infrastructure (HCI) is an ideal infrastructure foundation for cloud native workloads running on Kubernetes at scale. Compared with bare metal infrastructure, Nutanix offers better resilience for both Kubernetes platform components and application data, along with superior scalability. Nutanix also simplifies infrastructure lifecycle management and persistent storage for stateful containers, eliminating some of the most difficult challenges organizations face in deploying and managing cloud native infrastructure.