Nutanix Glossary

What is Kubernetes?

April 7, 2023

Also known as K8s or kube, Kubernetes is an open-source container orchestration platform that allows users to schedule and automate processes for deploying, scaling, and managing containerized applications. It groups all the various containers that form an application into logical units, which makes them easier to manage.

Developed for in-house use by Google engineers, it was released outside the company as an open source system in 2014. Since then, Kubernetes has grown in popularity and usage to become a critical element of the cloud-native story; in fact, Kubernetes and containers in general are widely considered foundational components of today’s modern cloud applications and infrastructure.

Kubernetes runs on a wide range of infrastructure – including public clouds, private clouds, hybrid cloud environments, virtual machines, and bare-metal servers – giving IT teams excellent flexibility.

How does Kubernetes work?

Several main components make up the Kubernetes architecture. They are:

Clusters and nodes

As the building blocks of Kubernetes, clusters are made up of physical or virtual compute machines called nodes. A single master node operates as the cluster’s control plane and manages, for example, which applications are running at any one time and which container images are used. It does this by running a scheduler service that automates container deployment based on developer-defined requirements and other factors.

Multiple worker nodes are responsible for running, deploying, and managing workloads and containerized applications. The worker nodes include the container management tools the organisation has chosen, such as Docker, as well as a Kubelet, which is a software agent that receives orders from the master node and executes them.

Clusters can include nodes that span an organisation’s entire architecture, from on-premises to public and private clouds to hybrid cloud environments. This is part of the reason Kubernetes can be such an integral component in cloud-native architectures. The system is ideal for hosting cloud-native apps that need to scale rapidly.


Pods

Pods are the smallest deployable units in Kubernetes. A pod is a group of one or more containers that share the same network and compute resources. Grouping containers this way is beneficial because if a specific pod is receiving too much traffic, Kubernetes can automatically create replicas of that pod on other nodes in the cluster to spread out the workload.
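As an illustrative sketch, a pod is declared in a YAML manifest like the one below; the name `web-pod` and the `nginx` image are placeholders, not taken from the text above:

```yaml
# A minimal pod running a single container.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod           # placeholder name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image works here
      ports:
        - containerPort: 80
```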

How it all works together

The Kubernetes platform runs on top of the system’s OS (typically Linux) and communicates with pods operating on the nodes. Using a command-line interface called kubectl, an admin or DevOps user enters the desired state of a cluster, which can include which apps should be running, with which images and resources, and other details.

The cluster’s master node receives these commands and relays them to the worker nodes. The platform automatically determines which node in the cluster is best suited to carry out the command, then assigns resources and the specific pods on that node to complete the requested operation.

Kubernetes doesn’t change the basic processes of managing containers; it simply automates them and takes over part of the work so admin and DevOps teams can achieve a high level of control without having to manage every node or container separately. Human teams simply configure the Kubernetes system and define the elements within it; Kubernetes takes on the actual container orchestration work.
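The desired-state workflow described above can be sketched with a Deployment manifest: the admin declares how many replicas should run, and the control plane works to keep the cluster in that state. The names below (`web-deployment`, the `nginx` image) are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3             # desired state: three pod replicas at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Applied with `kubectl apply -f deployment.yaml`, the manifest tells the control plane to schedule three pods onto worker nodes and to replace any that fail, without further human intervention.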

Features and capabilities of Kubernetes

Kubernetes has a good range of features and capabilities that simplify container orchestration across multiple nodes, enable automation of cluster management, and optimize resource utilization. These include:

  • Automatic scaling – scale containers and their resources up or down as needed based on usage
  • Lifecycle management – allows admins to pause and resume deployments as well as roll back to previous versions
  • Desired state declaration – admins define what they need and Kubernetes makes it happen
  • Self-healing and resiliency – includes automatic restarts, placements, replication, and scaling
  • Scalable storage – admins can dynamically add storage as needed
  • Load balancing – the system uses a number of tools to balance loads internally and externally
  • Support for DevSecOps – helps simplify container operations security across the container lifecycle and clouds and allows teams to get secure apps to market faster
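The automatic scaling capability listed above can be sketched with a HorizontalPodAutoscaler, which grows or shrinks a Deployment based on observed CPU usage; the target name and thresholds here are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment       # placeholder: the workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```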


Benefits of Kubernetes

Kubernetes offers a wide range of benefits, especially to those organisations that are focusing on cloud-native applications. The following benefits are just part of the reason Kubernetes is far and away the most popular container management system available today: 

  • Move workloads wherever they run best – the platform’s ability to run on-premises and in the cloud makes this simple.
  • Simplify monitoring, managing, deploying, and configuring containerized apps of any size or scale.
  • Integrate Kubernetes easily into existing architecture with its high extensibility.
  • Keep IT spending under control through Kubernetes’ built-in resource optimization, ability to run workloads anywhere, and automatic scalability based on demand.
  • Free up IT and DevOps teams to focus on more critical tasks instead of managing and orchestrating containerized apps.
  • Optimize hardware resource usage, including network bandwidth, memory, and storage I/O, with the ability to define usage limits.
  • Increase application efficiency and uptime with Kubernetes’ self-healing features.
  • Schedule software updates without causing downtime.
  • Future-proof your infrastructure with Kubernetes’ ability to run on decoupled architectures and handle quick and massive growth. 
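The ability to define usage limits mentioned above maps to per-container resource requests and limits in a pod spec. A minimal sketch, with placeholder values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:           # guaranteed minimum, used by the scheduler
          cpu: "250m"
          memory: "128Mi"
        limits:             # hard ceiling enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```

Requests drive scheduling decisions, while limits cap what a container can consume, which is how Kubernetes keeps one workload from starving its neighbours.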

What is Kubernetes used for?

Kubernetes helps organisations better manage their most complex applications and make the most of existing resources. It also helps ensure application availability and greatly reduce downtime. Through container orchestration, the platform automates many tasks, including application deployment, rollouts, service discovery, storage provisioning, load balancing, auto-scaling, and self-healing. This takes a lot of the management burden off the shoulders of IT or DevOps teams.

Here’s an example: Say a container fails. To keep downtime to a minimum (or eliminate it altogether), Kubernetes can detect the container failure and automatically execute a changeover by restarting, replacing, and/or deleting failed containers. The system also oversees all clusters and determines where to best run containers depending on where and how resources are already being consumed. All of this work happens automatically and within milliseconds – no human team can match that.

Kubernetes security best practices

Security is a top priority for every organisation today, regardless of where they are running their workloads and applications. Here are some recommended best practices for securing your Kubernetes system and the applications and data within it:

Turn on Kubernetes Role-Based Access Control (RBAC)

This built-in security feature allows you to authorize specific users to access the Kubernetes API and define what they can do with it. The feature keeps clusters protected when a user’s credentials are lost or stolen. Because an attacker who gets into the system with a user’s credentials will have the same permissions and roles as that user, keep permissions as granular and specific as possible and avoid granting over-privileged roles. Experts recommend using namespace-specific permissions rather than cluster-wide permissions – and withholding cluster admin privileges even when debugging.
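A namespace-scoped, read-only permission of the kind recommended above might look like the following sketch; the namespace `web`, role `pod-reader`, and user `jane` are all placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web            # namespace-scoped, not cluster-wide
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only access to pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: web
subjects:
  - kind: User
    name: jane              # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```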

Implement third-party authentication on the Kubernetes API server

Using an integrated third-party security system on Kubernetes can provide extra security features like multifactor authentication. It can also ensure that the API server isn’t changed or compromised when you add or remove users.

Ensure that etcd is well-protected and encrypted

Kubernetes stores all cluster data in etcd, a distributed, reliable key-value store, so it’s imperative that etcd is strongly protected and that access is restricted to only those who need it. Protect it with a firewall that allows only other Kubernetes components through. Encrypting etcd data at rest is also recommended – it is not encrypted by default.
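Encryption at rest is enabled by passing the API server an encryption configuration file. A minimal sketch encrypting Secrets with AES-CBC (the key material shown is a placeholder you must generate yourself):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets             # encrypt Secret objects stored in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder: supply your own
      - identity: {}        # fallback so existing unencrypted data stays readable
```

The file is referenced via the kube-apiserver flag `--encryption-provider-config`.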

Keep Kubernetes nodes hardened and isolated

Your nodes should reside on a network that is not accessible through public networks. That will require isolating the system’s control and data traffic. Using an ingress controller, you can make sure that the node network only allows connections from the master node. Also, harden your nodes by keeping them up-to-date with patches, kernel revisions, and so on.
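One way to restrict traffic at the pod level, complementing the node-network isolation above, is a default-deny NetworkPolicy; the namespace name here is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: web            # placeholder namespace
spec:
  podSelector: {}           # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress               # no ingress rules listed, so all inbound traffic is denied
```

With this in place, traffic is only admitted by additional policies that explicitly allow it.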

Monitor network traffic regularly

Compare active network traffic to the traffic governed by Kubernetes policies to gain visibility into and an understanding of how applications interact and also to detect suspicious activity or communications. Comparing the different types of traffic also allows you to recognize the network policies that your cluster workloads don’t use. With this information, you can eliminate any unnecessary connections to reduce vulnerabilities.

Enable audit logging

Audit logging allows you to monitor authentication failures and other suspicious API calls. Any failures will show a message of “Forbidden” and could signal an attacker’s attempts to get into the system with someone’s credentials. You can define which events should be logged in the Kubernetes system and set up alerts to be sent upon an authentication failure.
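The events to log are defined in an audit policy file. A brief sketch that records metadata for Secret access and full request/response detail for pod changes:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata          # log who, what, and when for Secret access
    resources:
      - group: ""
        resources: ["secrets"]
  - level: RequestResponse   # full detail for pod create/update/delete
    resources:
      - group: ""
        resources: ["pods"]
    verbs: ["create", "update", "delete"]
```

The policy is supplied to the API server via the `--audit-policy-file` flag.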

Use process whitelists

Process whitelisting allows you to identify when unexpected processes are running. This requires first observing which processes normally run during certain operations, and then creating a whitelist of those processes.

Always use the latest version of Kubernetes

Keeping Kubernetes up-to-date and current will help ensure you are protecting your system from known vulnerabilities and threats. Upgrading Kubernetes is a complex process, but it’s worth it. Many providers offer automatic system updates.

Keep Kubelet locked down

Kubelet is the software agent inside a worker node that receives and executes commands from the master node. It includes an API that allows you to perform a number of operations such as starting or stopping pods. You can protect it in a few ways, which include: disabling anonymous access, setting an authorization mode, setting a NodeRestriction in the Kubernetes API server, and disabling services that no longer function, such as cAdvisor.
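The lockdown steps above translate into a few KubeletConfiguration settings, sketched here:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false          # disable anonymous access to the Kubelet API
  webhook:
    enabled: true           # authenticate requests via the API server
authorization:
  mode: Webhook             # delegate authorization instead of AlwaysAllow
readOnlyPort: 0             # close the unauthenticated read-only port
```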

Get familiar with CIS benchmarks

The Kubernetes community has worked with the Center for Internet Security (CIS) to develop a security best practices benchmark for deploying Kubernetes. Making sure your system meets that benchmark is highly recommended.



Kubernetes use cases

Organisations are using Kubernetes today for an extremely wide range of use cases. These include:

  • Large-scale application deployment
  • Microservices management
  • Development of continuous integration/continuous deployment (CI/CD) software
  • Serverless computing enablement
  • Hybrid and multicloud deployments
  • Big data analytics
  • Large or complex computational projects
  • Machine learning projects
  • Migration of data from on-prem servers to the cloud

Kubernetes vs Docker

Organisations new to Kubernetes might have seen or heard about Docker, another system almost synonymous with containers, and wonder which one is better. But that’s actually the wrong question. Here’s why.

What is Docker?

Like Kubernetes, Docker is an open source solution that allows users to automate application deployment. Unlike Kubernetes, it also defines a container image format, which has become the de facto standard for Linux containers. Using the Docker Engine, you can build and run containers in a development environment. A container registry such as Docker Hub lets you store and share container images. The Docker suite of tools excels at helping you deploy and run individual containers.

Docker does offer an orchestration solution, called Docker Swarm, but it hasn’t surpassed Kubernetes in popularity or use. It provides some similar capabilities, but it is tied to the Docker API and Docker containers, and doesn’t offer nearly the range of customizability and extensions that Kubernetes does. In a head-to-head comparison, most experts agree that Docker Swarm’s ease of installation and lightweight management make it a good choice for organisations just getting started with containers or running simple applications that don’t need frequent deployments. Kubernetes is the solution of choice for organisations that need to support and manage large-scale, complex workloads and applications, or that need the complete range of advanced features and customisation options it offers.

Kubernetes and Docker play well together

While Docker is primarily focused on individual containers, Kubernetes can orchestrate groups of containers at scale. K8s has an API that manages where and how those container clusters will run. The two solutions can work very well together to help you build, deliver, and scale all of your containerized applications. Docker really shines in the development phase, with a focus on packaging and distributing containerized applications. Kubernetes is focused on operations and running those Docker (and other) containers in complex environments.


Kubernetes offers a range of benefits, from the way it simplifies and automates container orchestration and management to its robust open source community and dynamic scalability. It’s a critical component of cloud-native strategies and supports hybrid and multicloud computing models, which makes it a smart choice for organisations interested in increasing the rate of development, deploying applications anywhere, and running apps and services more efficiently.

Nutanix helps simplify Kubernetes operations and management even further with Nutanix Kubernetes Engine (NKE). With NKE, you can:

  • Deploy and configure production-ready Kubernetes clusters in minutes, not days or weeks
  • Easily integrate K8s storage, monitoring, logging, and alerting for a full cloud-native stack
  • Deliver a native Kubernetes user experience with open APIs


Related products and solutions

Hybrid Cloud Kubernetes

Through partnerships with Red Hat, Google Cloud, and Microsoft Azure, Nutanix offers a fast, reliable path to hybrid cloud Kubernetes.

Nutanix Kubernetes Engine

Fast-track your way to production-ready Kubernetes and simplify lifecycle management with Nutanix Kubernetes Engine, an enterprise Kubernetes management solution.

Kubernetes Storage

Nutanix data services and the Nutanix CSI driver make it simple to configure and manage persistent storage in Kubernetes.