What is Virtualization?

Virtualization is a core IT technology that uses a software layer called a hypervisor to create virtual versions of physical computing resources—such as servers, storage, memory, and networks. By separating software from hardware, the hypervisor pools and allocates these physical resources to multiple independent environments, each isolated from the others. The most common result of this process is a virtual machine (VM)—a fully functional, software-based computer that runs its own operating system and applications, just like a physical server.

By decoupling operating systems from hardware, virtualization enables one physical system to run many VMs simultaneously, improving resource utilization, scalability, and operational efficiency across data centers and cloud environments. It’s a foundational technology behind modern cloud computing platforms such as AWS, Microsoft Azure, Google Cloud, and IBM Cloud.

Virtualization in IT infrastructure

Virtualization in IT infrastructure refers to the abstraction of physical computing resources, such as servers, storage systems, and networks, into flexible, software-defined environments. Instead of running a single application on a dedicated server, IT teams can create multiple virtual machines on a single physical system, each capable of running its own OS and workloads. This enables more efficient use of hardware, easier scalability, simplified IT management, and a smoother path to cloud adoption. Virtualization is a foundational element for modern datacenters and is essential for supporting hybrid multicloud strategies, disaster recovery, and agile application development.

The history of virtualization technology

Virtualization has transformed computing over the past six decades, evolving from mainframe resource sharing to modern cloud-native architectures. It began in the 1960s when IBM developed CP-40 and CP-67, pioneering the concept of running multiple virtual environments on a single mainframe. In the 1970s, IBM introduced VM/370, formalizing virtualization on System/370, while Unix added the chroot system call (1979), enabling process isolation and influencing containerization decades later. The 1980s expanded virtualization beyond mainframes with tools like AT&T’s Simultask (1985) and Merge/386 (1987), which allowed DOS applications to run on Unix systems using Intel 80286/80386 features. The 1990s brought PC-based virtualization, with Bochs introducing x86 emulation, Connectix Virtual PC launching in 1997, and VMware revolutionizing enterprise computing in 1999 by introducing efficient x86 virtualization on commodity hardware.

The 2000s saw OS-level virtualization emerge through FreeBSD Jails in 2000, Xen in 2003, KVM in 2006, and Linux Containers (LXC) in 2008, setting the stage for lightweight virtualization. In the 2010s, Docker transformed application deployment by making containers portable and developer-friendly, while Kubernetes, introduced in 2014, became the standard for container orchestration and scalable automation. By 2020, Kubernetes dominated container environments and had deprecated Docker’s dockershim runtime integration in favor of containerd and CRI-O, while serverless container services such as AWS Fargate gained traction and enterprises widely adopted Kubernetes for production workloads.

From IBM’s early experiments to Kubernetes dominating cloud-native infrastructure, virtualization has evolved into a cornerstone of modern computing. Today, it powers data centers, hybrid clouds, and containerized applications worldwide—continuing to drive innovation in scalability, security, and automation.

Types of virtualization

There are several types of virtualization technology, each suited to different tasks.

Data virtualization

Data virtualization is a type of data management that integrates data from multiple applications and physical locations for use without replicating or moving it. It creates a single virtual abstraction layer that connects to different databases and presents virtual views of the data.

Server virtualization

Server virtualization partitions one physical server into multiple virtual instances, each a self-contained virtual environment running its own operating system. This allows one physical machine to do the work of many, reducing server sprawl and saving on operating expenses.
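As a concrete illustration, the sketch below uses the libvirt Python bindings to connect to a KVM/QEMU host and list the isolated guests it runs. This is a minimal sketch, assuming libvirt and its Python bindings are installed; the connection URI is the common local default, not something prescribed by this article.

```python
# Minimal sketch: enumerate the isolated guests a single physical
# server is running (assumes a local KVM/QEMU host managed by libvirt).
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only hypervisor connection
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        # Each domain is a self-contained guest with its own OS on shared hardware.
        print(f"{dom.name()}: {status}, {dom.maxMemory() // 1024} MiB RAM")
finally:
    conn.close()
```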

Operating system virtualization

Operating system virtualization is similar to server virtualization. The host operating system is configured to run multiple isolated operating systems, such as Linux and Windows, on one machine, allowing multiple users to work in different applications at the same time. This is also known as operating system-level virtualization.

Desktop virtualization

Desktop virtualization separates the desktop environment from the devices that access it. This saves time and IT resources, since a single desktop environment can be deployed to many machines at once. It also makes it easier to push updates, fix systems, and apply security protocols across all virtual desktops at the same time.

Network virtualization

Network virtualization combines network hardware and software functionality into a single, software-based entity. Often combined with resource virtualization, it pools multiple network resources, splits them into separate segments, and assigns those segments to the devices or servers that need them. This type of virtualization improves network speed, scalability, and reliability.
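For example, below is a hedged sketch of carving out a software-defined network segment with libvirt; the network name and address range are illustrative placeholders, and a KVM/QEMU host with the libvirt Python bindings is assumed.

```python
# Sketch: define and start a NAT-ed virtual network segment with libvirt.
import libvirt

NETWORK_XML = """
<network>
  <name>demo-net</name>
  <forward mode='nat'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.150.10' end='192.168.150.100'/></dhcp>
  </ip>
</network>
"""

conn = libvirt.open("qemu:///system")
try:
    net = conn.networkCreateXML(NETWORK_XML)  # bring the virtual segment up
    print(f"Virtual network {net.name()} active: {bool(net.isActive())}")
finally:
    conn.close()
```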

Storage virtualization

Storage virtualization pools storage resources from multiple smaller devices into one large virtual storage device. Administrators can then provision that storage to virtual machines and physical servers as needed through a single, central console. To do this, the virtualization software intercepts storage requests and routes each one to a device with the capacity to serve it.
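The sketch below shows this idea in one concrete form: allocating a virtual disk out of an aggregated libvirt storage pool. It is illustrative only; the pool name "default" and the volume definition are assumptions, not details from this article.

```python
# Sketch: carve a virtual disk out of a pooled storage resource via libvirt.
import libvirt

VOLUME_XML = """
<volume>
  <name>app-data.qcow2</name>
  <capacity unit='GiB'>20</capacity>
  <target><format type='qcow2'/></target>
</volume>
"""

conn = libvirt.open("qemu:///system")
try:
    pool = conn.storagePoolLookupByName("default")  # the aggregated pool
    vol = pool.createXML(VOLUME_XML, 0)             # allocate from available capacity
    print(f"Allocated {vol.name()} from pool {pool.name()}")
finally:
    conn.close()
```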

Application virtualization

Application virtualization (or app virtualization) is a technology that allows users to access and use applications from a remote server without installing them on their local devices. It isolates applications from the underlying operating system, enabling them to be run in a controlled, virtualized environment.

The benefits of using virtualization

Put simply, virtualization streamlines your enterprise datacenter. It abstracts away the complexity of deploying and administering infrastructure while providing the flexibility the modern datacenter needs.

Virtualization can also help create a “greener” IT environment by reducing power, cooling, and hardware costs. But cost savings aren’t the only advantage of opting for virtualized solutions. Here are more benefits of going virtual:

  • Minimize Servers - Virtualization minimizes the number of servers an organization needs, cutting down the heat buildup associated with a server-heavy datacenter. The less physical “clutter” your datacenter has, the less money and effort you need to funnel into heat dissipation.

  • Reduce Hardware - When it comes to saving money, minimizing hardware is key. With virtualization, organizations can reduce their hardware usage and, most importantly, reduce maintenance, downtime, and electricity costs over time.

  • Quick Redeployments - Virtualization makes redeploying a new server simple and quick. Should a server die, virtual machine snapshots can come to the rescue within minutes.

  • Simpler Backups - Backups are far simpler with virtualization. You can back up your virtual machines and take snapshots throughout the day, so you always have the most current data available. Plus, you can move your VMs between servers and redeploy them more quickly.

  • Reduce Costs & Carbon Footprint - As you virtualize more of your datacenter, you’re inevitably reducing your datacenter footprint—and your carbon footprint as a whole. On top of supporting the planet, reducing your datacenter footprint also cuts down dramatically on hardware, power, and cooling costs.

  • Better Testing - You’re better equipped to test and re-test in a virtualized environment than a hardware-driven one. Because VMs keep snapshots, you can revert to a previous one should you make an error during testing.

  • Run Any Machine on Any Hardware - Virtualization provides an abstraction layer between software and hardware. In other words, VMs are hardware-agnostic, so you can run any machine on any hardware and avoid vendor lock-in.

  • Effective Disaster Recovery - When your datacenter relies on virtual instances, disaster recovery is far less painful, with shorter and less frequent downtime. You can use recent snapshots to get your VMs up and running again, or move those machines elsewhere.

  • Cloudify Your Datacenter - Virtualization can help you “cloudify” your datacenter. A fully or mostly virtualized environment mimics that of the cloud, getting you set up for the switch to cloud. In addition, you can choose to deploy your VMs in the cloud.

How does virtualization work?

The most common business use of virtualization is server virtualization, which uses a hypervisor to “simulate” the underlying hardware for shared access. In a non-virtualized environment, the guest operating system (OS) works directly with dedicated hardware. When virtualized, the OS still runs as if it were on dedicated hardware, letting companies enjoy much of the same performance they expect with less hardware. Although virtualized performance doesn’t always match bare-metal performance, the trade-off is usually worthwhile because most guest operating systems don’t need full use of the hardware.

As a result, businesses can enjoy better flexibility and control and eliminate any dependency on a single piece of hardware. Because of its success with server virtualization, virtualization has spread to other areas of the datacenter, including applications, networks, data, and desktops.
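Modern hypervisors lean on hardware assists (Intel VT-x and AMD-V) to do this simulation efficiently. The short, self-contained check below, a sketch for Linux hosts only, looks for the corresponding vmx/svm CPU flags.

```python
# Sketch (Linux-only): check for the CPU features hypervisors rely on,
# Intel VT-x ("vmx") or AMD-V ("svm"), by reading /proc/cpuinfo.
def hardware_virtualization_supported(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("Hardware virtualization supported:", hardware_virtualization_supported())
```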

Virtual Machines

Virtual machines (VMs) are a key component of virtualization technology. Virtualization refers to the creation of a virtual version of a computing resource, such as a server, storage device, operating system, or network, rather than relying on a physical resource; a virtual machine is an emulation of a complete computer system, with a hypervisor sharing the underlying hardware so multiple operating systems can run side by side. While VMs have been around for more than 50 years, they have become more popular with the rise of the remote workforce and end-user computing. Some popular virtualization stacks and hypervisors include VMware vSphere with ESXi, Microsoft Windows Server 2025 with Hyper-V, Nutanix Acropolis with AHV, Citrix XenServer, and Oracle Linux KVM.

Hypervisor

Hypervisors play a crucial role in virtualization: they are the software layer that enables the creation and management of virtual machines (VMs) on physical hardware. A hypervisor abstracts and partitions hardware into virtual machines, each with its own memory, storage, CPU power, and network bandwidth. It also isolates the VMs from one another and handles communication between them.
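To make this concrete, here is a hedged sketch of asking one hypervisor stack (libvirt/KVM, used purely as an example) to allocate memory and vCPUs for a new guest. The domain definition is deliberately minimal and illustrative; a real guest would also need a disk, networking, and a bootable image.

```python
# Sketch: register a minimal guest with the hypervisor and start it.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-guest</name>
  <memory unit='MiB'>1024</memory>  <!-- RAM carved out by the hypervisor -->
  <vcpu>2</vcpu>                    <!-- virtual CPUs scheduled by the hypervisor -->
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(DOMAIN_XML)  # register the guest definition
    dom.create()                      # boot it; the hypervisor now isolates and schedules it
    print(f"Started {dom.name()}")
finally:
    conn.close()
```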

The benefits of virtualization for organizations

Organizations hoping to pursue a more cloud-like IT environment will need to set their sights on virtualization first. Virtualizing your datacenter helps you use your server resources far more efficiently. In the past, businesses would dedicate a single server to one application, such as email. They would either accumulate multiple servers to host their various applications, or face a different issue altogether: the resources of an entire server sitting underused.

Either way, this method is costly, space-consuming, and inefficient. Thanks to virtualization, IT teams can run multiple applications, workloads, and operating systems on a single physical machine, adding and removing resources as needed. Virtualization scales easily with the business: as demands rise and fall, it helps organizations stay on top of resource utilization and respond faster to change.

Virtualization security: benefits and risks

Depending on how a system is configured, virtualization is no less secure than physical components and systems. Virtualization solves some of the security challenges and vulnerabilities posed by physical systems, but it also creates new ones. That’s why it’s important to understand both the security benefits and the risks of virtualization, so you can configure systems with the right protections and implement solutions that mitigate those risks.

Common security issues that come with virtualization:

  • Attacks on host servers – While virtual machines (VMs) are isolated from each other even when they reside on the same server (more on that below), if attackers get into your host server, they could theoretically access all the VMs that host controls. They could even create an admin account that gives them the rights to delete or exfiltrate important corporate information.
  • VM snapshots – Snapshots are images of VMs that represent a specific moment in time. They’re used most often for data protection, as a backup for disaster recovery. Because the state of a VM is always changing, snapshots are meant to be temporary, short-lived records. Keeping them stored over a long period makes them attractive to attackers, who could gather a lot of proprietary data from them (see the snapshot-age sketch after this list).
  • File sharing between host server and VMs – Typically, default settings prevent file sharing between the host server and VMs, as well as copy-paste between VMs and remote management dashboards. If those defaults are overridden, an attacker that hacks into the management system could copy confidential data from the VMs or even use management capabilities to infect the VMs with malware.
  • VM sprawl – Because it’s so quick and easy to spin up VMs whenever they’re needed — to use in testing during development, for instance — they are also easy to forget about once testing is over. Over time, an organization can accumulate many unused VMs sitting unnoticed on the system, which means they never get patched or updated. An attacker can easily gain access to a forgotten VM and use it as an entry point into the rest of the system.
  • Ransomware, viruses, and other malware – Just like physical systems, VMs are vulnerable to viruses, ransomware, and malware. Regular backups are a must in today’s ever-evolving threat landscape.
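As one example of the snapshot hygiene mentioned above, the sketch below uses the libvirt Python bindings to flag long-lived snapshots. The domain name "web01" and the seven-day threshold are hypothetical, and the simple string parsing assumes well-formed snapshot XML.

```python
# Sketch: find stale VM snapshots that have outlived their purpose.
import time
import libvirt

MAX_AGE_DAYS = 7  # arbitrary example threshold

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("web01")  # hypothetical domain name
    for snap in dom.listAllSnapshots():
        # Snapshot XML includes <creationTime> in seconds since the epoch.
        xml = snap.getXMLDesc()
        created = int(xml.split("<creationTime>")[1].split("</creationTime>")[0])
        age_days = (time.time() - created) / 86400
        if age_days > MAX_AGE_DAYS:
            print(f"Stale snapshot {snap.getName()} ({age_days:.0f} days old)")
finally:
    conn.close()
```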

When considering what type of security solutions and capabilities you need for your virtualized systems, here are a few factors to keep in mind.

Hypervisor security and best practices

Hypervisors are increasingly targeted in cyberattacks due to their control over multiple virtual machines (VMs). In 2024, a critical VMware ESXi vulnerability (CVE-2024-37085) allowed attackers to gain full admin access, putting all hosted VMs at risk. In 2025, additional zero-day exploits (CVE-2025-22224 to -22226) enabled ransomware attacks that compromised over 41,000 exposed hypervisors, encrypting VM data and deleting backups. These incidents highlight the urgent need to secure hypervisors as a single breach can expose entire virtualized environments.

Best practices for keeping hypervisors secure include:

  • Staying on top of updates – While most hypervisors have automated features that check periodically for updates, it’s not a bad idea to manually check occasionally as well.
  • Using thin (bare-metal) hypervisors – These hypervisors are generally more secure than other types because their control features are separated from the operating system, the most attack-prone element on a server. If the OS is attacked, the attacker still doesn’t gain access to the hypervisor.
  • Limiting use of network interface cards (NICs) – It’s always a good practice to limit the use of physical hardware that connects to the host server and hypervisor. If you must use a management NIC, be sure to protect it with a firewall. That ensures there is one fewer possible entry point for attackers.
  • Turning off non-required services – Any program that connects guests with the host OS should be disabled once it’s no longer in use. An example of this would be file sharing between the host and other users.
  • Mandating security features on guest OSes – Any OS that interfaces with the hypervisor should be required to have a certain level of security capabilities, such as firewalls.

VM isolation

One benefit of virtualization is that one server can host multiple VMs — while isolating each VM from the others so it doesn’t even detect that the others exist. This is a security benefit as well, because if attackers gain entry into one VM, they can’t automatically get into the other VMs on that server. This VM isolation even applies to system admin roles. It’s still important to protect each VM just as you would protect physical machines, using solutions such as data encryption, antivirus applications, and firewalls.

Isolating all the hosted elements of a VM is important too. Hide elements that aren’t necessary for the user interface in subnetworks so their addresses aren’t readily visible or available to potential attackers.

Host security

To reduce the risk of a successful attack on the host server, you can define exact resource usage percentages and limits — for instance, configuring a VM to always get at least 15% of available computing resources, but never more than 25%. That way, a VM under attack — such as a denial-of-service (DoS) attack — is prevented from using up so many resources that the other VMs on the server are negatively affected.
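A hedged sketch of such a cap, using libvirt/KVM scheduler parameters as one example, appears below. Parameter names differ between hypervisors, and the domain name "web01" is hypothetical; the quota here limits the guest to roughly 25% of one CPU (25 ms out of every 100 ms scheduling period).

```python
# Sketch: cap a guest's CPU usage so a DoS-hit VM can't starve its neighbors.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("web01")  # hypothetical domain name
    dom.setSchedulerParameters({
        "cpu_shares": 1024,     # relative weight versus other guests
        "vcpu_period": 100000,  # scheduling period in microseconds
        "vcpu_quota": 25000,    # max runtime per period (~25% of one CPU)
    })
finally:
    conn.close()
```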

VM security

As stated above, you can (and should) protect each VM with firewalls, antivirus, encryption, and other security solutions to keep them protected. An unprotected VM can be used by an attacker to gain entry into the system and scan public and private address spaces. Any other unprotected VM on that host can be attacked easily as well.

VM escape attacks

VM escape is an exploit that attackers use to gain access to the hypervisor from inside a VM. The attacker runs code on a vulnerable VM that breaks out of (or “escapes”) the VM’s confines and interfaces directly with the hypervisor. With VM escape attacks, malicious actors can gain access to the host server’s OS and to the other VMs on the host. Experts recommend the following to minimize the risk of VM escape attacks:

  • Update and patch VM software.
  • Share resources only when absolutely mandatory. Limit sharing wherever possible, and turn it off when not needed.
  • Limit software installation because each new app brings new vulnerabilities.

Shared resources

As stated previously, sharing resources between VMs and/or guests creates vulnerabilities in the system. Shared folders or files are attractive targets for attackers that gain entry into the system via a compromised guest. Turn sharing off when not necessary. Also disable copy-pasting capabilities between the host and remote users.

Management interfaces

Some experts recommend separating management APIs to protect the network — keeping infrastructure management apps, as well as orchestration capabilities, separate from the services they manage. APIs are a frequent target for attackers because they act as small control programs.

When virtualizing machines in the cloud, it’s important to comply as closely as possible with the virtual network functions manager (VNFM) specifications created by the ETSI Network Functions Virtualisation Industry Specification Group, which define security standards for APIs that interface with infrastructure and orchestration tools.

Virtualization vs containerization

Virtualization and containerization are related concepts but represent different approaches to managing and deploying applications and computing resources.

Virtualization abstracts an operating system, data, and applications from the physical hardware and divides up compute, network, and storage resources between multiple VMs. Containers are sometimes described as a subset of virtualization, or OS-level virtualization. Containers virtualize software layers above the OS level, while VMs virtualize a complete machine that includes the hardware layers.

Just as each VM is isolated from the other VMs and behaves as if it were the only machine on the system, containerized applications also act as if they were the only apps on the system. The apps can’t see each other and are protected from each other, just as VMs are. So VMs and containers are similar in that they isolate resources and allocate them from a common pool.

One of the biggest differences between VMs and containers is simply scope. VMs are made up of a complete operating system, plus data and applications and all the related libraries and other resources. A VM can be dozens of gigabytes in size. Containers, on the other hand, only include an application and its related dependencies. Multiple containers share a common OS and other control features. Many isolated containers can run individually on a host OS.
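One quick way to see the scope difference is that a container reports the host’s kernel, because it shares the OS rather than booting its own. Below is a minimal sketch using the Docker Python SDK; it assumes Docker is installed and running, and the alpine:3.19 image tag is just an example.

```python
# Sketch: a container shares the host OS, so it reports the host's kernel.
import docker

client = docker.from_env()
output = client.containers.run("alpine:3.19", "uname -r", remove=True)
print("Kernel inside the container:", output.decode().strip())
# A VM on the same host would boot and report its own guest kernel instead.
```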

Key differences between VMs and containers:

  • Isolation – While VMs’ operating systems are isolated from each other, containers share an OS. That means if the host OS is compromised, all containers are at risk as well.

  • Underlying OS – VMs each have their own OSes, while containers share a common one.

  • Platform support – VMs can run practically any OS on the host server. For instance, you can have 10 VMs that run Windows and 10 VMs on the same server that run Linux. Because containers share a common OS, they must align with that OS — meaning, Linux containers have to run on Linux, and Windows on Windows.

  • Deployment – Each VM runs on a hypervisor that controls its functions and resource allocations. Containers are deployed by container applications, such as Docker, and a group of containers needs an orchestration app, such as Kubernetes (see the sketch after this list).

  • Storage – VMs use separate virtual hard disks, or shared volumes when a single pool of storage resources is shared between multiple servers. Containers are non-persistent by default, or use a container volume for persistent data.

  • Load balancing – Virtualization relies on VM high availability mechanisms unique to the hypervisor for load balancing. Typically, groups of connected servers or nodes provide redundancy if one of the machines fails. Containers rely on the overarching orchestration application, such as Kubernetes, to balance resources and loads.

  • Virtual networks – VMs are networked via virtual network adaptors (VNAs) that are connected to a physical NIC, usually through a virtual switch in the hypervisor. With containers, a VNA is broken down into many separate views for a lightweight form of networking.
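To illustrate the orchestration side, here is a minimal sketch using the official Kubernetes Python client to list the workloads a cluster is balancing. It assumes a reachable cluster and a local kubeconfig; none of this is specific to any one vendor.

```python
# Sketch: ask the orchestrator (not a hypervisor) what it is scheduling.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig, e.g. ~/.kube/config
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```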

Despite their differences, it’s possible to combine containers and VMs. You can run containers within a VM to experiment with system-on-chip (SoC) deployments, for instance, and try them out on a virtual machine before trying them on physical hardware.

Most of the time, however, organizations primarily use one or the other. It all depends on the resources you need, individual use cases, and preference for the trade-offs.

Virtualization in cloud computing

Virtualization in cloud computing creates an abstracted ecosystem of a server operating system and storage devices. It allows people to use separate VMs that share a physical instance of a specific resource, whether networking, compute, or storage. Cloud virtualization makes workload management significantly more scalable, cost-effective, and efficient. In fact, virtualization is one of the elements that makes cloud computing work.

With cloud computing, you store data and applications in the cloud but use virtualization to let users share common infrastructure. While cloud service providers such as AWS or Microsoft Azure handle the physical servers, resources, and other hardware, organizations can use virtualization in concert with cloud computing to keep costs down.
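In practice, provisioning a cloud VM is a single API call against the provider’s virtualized fleet. The sketch below uses AWS EC2 via boto3 as one example; the AMI ID is a placeholder, and configured AWS credentials are assumed.

```python
# Sketch: launch a small virtual machine on shared cloud hardware via boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder; substitute a real AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```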

What are the benefits of cloud virtualization?

  • Efficient, flexible resource allocation
  • More productive development ecosystem
  • Reduced IT infrastructure costs
  • Remote access to data and apps
  • Fast, simple scalability
  • Cost-effective pay-per-use IT infrastructure
  • Support for simultaneous OSes on a single server

Virtualization solutions and examples

Nutanix offers innovative products that transform virtualization for modern enterprises. At the core is Nutanix AHV, a modern, secure virtualization platform built on open source KVM that powers VMs and containers for applications and cloud-native workloads in hybrid multicloud environments. It addresses complex IT challenges and has helped customers like the Texas A&M University System eliminate hypervisor licensing costs, separate backup software for their virtual environment, and off-site storage hardware.

Nutanix Kubernetes Platform (NKP) enhances the capabilities of AHV by orchestrating and running containers on top of AHV and other cloud environments. NKP simplifies platform engineering by reducing operational complexity and establishes consistency across any environment. For example, NKP enables businesses to deploy high-availability Kubernetes clusters at scale in minutes with persistent storage included.

Virtualization FAQs

Does virtualization reduce IT complexity?

Yes, virtualization reduces IT complexity by consolidating servers, storage, and networking into a unified, software-defined platform. Nutanix enhances this with centralized management, automation, and a consistent operating model across hybrid multicloud environments.

Can virtualization shrink our datacenter footprint?

Absolutely. Virtualization allows you to run multiple workloads on fewer physical servers, significantly shrinking your hardware and energy needs. Nutanix’s hyperconverged infrastructure (HCI) further optimizes space and resource utilization by integrating compute, storage, and networking into a single platform.

How does virtualization support infrastructure consolidation?

Virtualization enables multiple workloads to run on fewer physical machines, streamlining infrastructure and reducing duplication. Nutanix accelerates consolidation by integrating compute, storage, and networking into one platform.

What ROI can organizations expect from virtualization?

Virtualization delivers higher ROI through reduced hardware costs, improved energy efficiency, and faster deployment. Most organizations see a return on investment (ROI) from virtualization within 6 to 12 months, depending on workload size and scale. According to an IDC whitepaper, Nutanix customers report an average 391% ROI over three years and a seven-month payback period, thanks to reduced CapEx, simplified operations, and faster time to value.

Learn more about virtualization