
Data Center Risk Management: A Comprehensive and Effective Plan

Companies with data centers need to prepare for multiple natural and unnatural risks while maintaining compliance.

February 5, 2026

Artificial intelligence technology has permeated practically every corner of the world, fundamentally transforming data center risk management for enterprises both big and small.

According to the 2025 State of AI report by McKinsey, 88% of organizations now regularly utilize AI. That same year, a Nutanix survey of IT professionals found that 90% now consider AI a priority. The technology’s meteoric rise is increasing the strain on data centers, forcing IT teams to tighten the screws on their data center risk management.

This comes at a time when acquiring more data center capacity isn’t as easy as in past years, according to Harmail Chatha, formerly Senior Director of Cloud Operations at Nutanix.

“It's a seller's market versus a buyer's market,” he said in a video interview with The Forecast.

“There just isn't any capacity available. Hyperscalers are starting to take up all the data center capacity. They're buying dirt waiting for the data center.”


That’s just one of the emerging variables affecting data center risk management, even as some best practices remain unchanged. Smart CIOs continue to treat data centers as capital assets, with their own budgeting, management objectives and periodic upgrade requirements, while also bearing in mind new headwinds spurred by environmental, social and governance (ESG) requirements, increased wildfire activity and new regulatory frameworks such as the Digital Operational Resilience Act (DORA), which joins existing compliance obligations such as privacy legislation.

In the meantime, cloud computing, mobile applications, the Internet of Things (IoT), end-user computing (EUC) and remote work continue unabated, which means IT leaders still must manage external and internal risks to avoid downtime that can cost millions of dollars a day.

What is Data Center Risk Management?

Since data centers in their bare form are physical facilities that house business-critical data and applications, the risks they face are immense, regardless of whether they’re built and run within the enterprise, managed by a managed service provider (MSP), or hosted off-site by a cloud service provider.

Those risks have only compounded in the age of AI, as hardware is increasingly more expensive and more difficult to come by, making intelligent risk management even more critical.  


Effective data center risk assessment requires that IT leaders identify every potential threat by assessing the role people, practices and technology play in mitigating risks to a particular data center. Those risks can include power outages, natural disasters, region-specific regulations and even political instability.

Risk assessment is followed by planning: for each threat, map out its consequences, the available mitigations and solutions, and what they will cost. Any risk mitigation strategy must be implemented without disrupting data center operations or service delivery to customers.

Data center risk management isn’t possible without a thorough assessment.

Identifying and mitigating all-pervasive risks involves a process called integrated risk management (IRM), a term coined by the research firm Gartner to describe a set of practices and processes supported by a risk-aware culture and enabling technologies. It's a process for improving decision-making and business performance.

Organizations need the right tools and processes to monitor each moving part of the data center and deal with any risks that come up at any point in time, including malicious cyberattacks. Big data and analytics are instrumental in forming an accurate and comprehensive assessment of the risks to various operations that the data center enables, such as data access, application mobility, and DevOps. They also enable the implementation and execution of dynamic disaster recovery plans.


But it’s people and processes that play the central role in creating these plans, said Tuhina Goel, senior product marketing manager of business continuity and disaster recovery at Nutanix, starting with specialists such as IT admins who are responsible for day-to-day IT operations to ensure uptime.

“But decision makers such as the CIO, VP or Director of IT are ultimately responsible for data center risk management,” Goel said. “They own the budget and other resources to invest in the right security measures, tooling and employee training.”


For organizations that need to comply with legal, contractual, or regulatory requirements, periodic data center risk assessments and disaster testing are inevitable. Not having a risk management plan in place can lead to the whole data center going down because of a single point of failure anywhere in the architecture, leading to significant disruptions to operations and consequent losses in revenue.

Assessment Comes Before Management

Any risk management plan needs to be in place before a disaster occurs. Risk assessment and auditing is the first step here. This begins with an evaluation of your existing owned and operated facilities from the point of view of facility design, IT architecture and topology, as well as operational sustainability.


It’s also important to learn from past outages by conducting a postmortem to find the root cause so that you can identify and address any inadequacies specific to the parts of the ecosystem that were affected. If the organization has a hybrid infrastructure with multiple data centers in place and there are plans for data center expansion or consolidation, each asset needs to be individually assessed for resiliency.

It helps to create a chart or sheet for handy reference that lists the major risk categories, mentions all the crucial systems each category affects, estimates the damage and recovery costs, and makes it clear what to do in case of an incident.
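As a rough illustration of such a reference sheet (not a Nutanix tool; the categories, dollar figures and responses below are hypothetical placeholders), a risk register can start as a simple structured table sorted by estimated impact:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    category: str            # e.g., "Power outage", "Fire"
    systems_affected: list   # crucial systems the category touches
    est_damage_usd: float    # estimated damage plus recovery cost
    response: str            # what to do when an incident occurs

# Hypothetical entries for illustration only.
register = [
    Risk("Power outage", ["racks", "cooling"], 250_000,
         "Fail over to UPS, start backup generators, notify on-call"),
    Risk("Water seepage", ["storage arrays"], 120_000,
         "Isolate affected rows, engage facilities, restore from backup"),
]

# Review the costliest risks first.
for r in sorted(register, key=lambda r: r.est_damage_usd, reverse=True):
    print(f"{r.category:15} ${r.est_damage_usd:>10,.0f}  {r.response}")
```

In practice the estimates would come from the assessment and postmortem exercises described above, and the sheet would be kept where on-call staff can reach it during an incident.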

Chatha said it’s critical to check in with IT teams to understand what data they are managing so it can be protected and accessible with the right mechanisms, including multi-factor authentication (MFA) and VPNs, as well as applying ISO controls as part of a holistic security mindset. 

Enduring Data Center Risks

It isn’t easy to categorize or even list all the kinds of risks a data center faces, which leaves CTOs and IT teams with many uncertainties to worry about.

Geographic threats: Topological and climate risks should be evaluated at the time of choosing a data center location and then again during the facility planning phase. If areas at higher risk of natural disasters such as earthquakes, hurricanes, floods, and bushfires can’t be avoided, consider the use of stronger construction material in the buildings to offset the risk.

Luckily, many natural disasters can be forecast and prepared for, and AI is an effective tool for both predicting potential disasters and preparing for them.

Further, data centers built in cooler climates have natural, renewable options for energy savings and cooling, which is why Nordic countries are a popular destination for building data centers.


In addition to natural hazards, data center managers should also consider man-made dangers. Make sure airports, power grids, chemical plants, military bases, and water bodies are a safe distance away. On the other hand, it helps if there is a fire station, hospital, and police station nearby.

Power outage: Power disruption can pose an existential threat to a mission-critical data center. Organizations need to make sure there is enough resilience built in, with UPS-backed power routes to each rack and cooling system. Dual power sources with a direct connection to a multi-substation power grid provide minimal protection against local substation failure. On top of that, backup generators can be on standby as a last resort.
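A back-of-the-envelope availability model shows why layered power redundancy matters (the per-path uptime figures below are illustrative assumptions, not vendor data): independent paths fail simultaneously far less often than any one path fails alone.

```python
# Hypothetical per-path availabilities (fraction of time available).
feed_a = 0.999        # utility feed A
feed_b = 0.999        # utility feed B (independent substation)
generator = 0.98      # standby generator

def combined_availability(*paths):
    """Probability that at least one independent power path is up."""
    p_all_down = 1.0
    for p in paths:
        p_all_down *= (1.0 - p)
    return 1.0 - p_all_down

print(f"single feed: {feed_a:.6f}")
print(f"dual feeds:  {combined_availability(feed_a, feed_b):.6f}")
print(f"with genset: {combined_availability(feed_a, feed_b, generator):.9f}")
```

The model assumes the paths fail independently, which is exactly what siting decisions (separate substations, on-site fuel) try to guarantee; correlated failures would erase most of the benefit.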

For example, Cloudflare experienced multiple power outages at one of its Portland, Oregon facilities, an episode it named Code Orange. In that scenario, operations were temporarily moved to redundant facilities in the Portland area and were ready to shift to Europe if need be. 

Power demands at data centers are steadily multiplying, with their share of total U.S. energy consumption expected to grow from 3-4% today to 12% by 2030.
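The growth rate implied by that projection is steep. Assuming a jump from roughly 4% to 12% of U.S. consumption over six years, and holding total consumption flat (both simplifying assumptions for illustration), the data center share must triple, which works out to about 20% compound growth per year:

```python
share_now = 0.04     # assumed current share of U.S. energy use
share_2030 = 0.12    # projected share
years = 6            # assumed window to 2030

growth_factor = share_2030 / share_now          # 3x overall
cagr = growth_factor ** (1 / years) - 1         # implied compound annual growth
print(f"{growth_factor:.1f}x overall, ~{cagr:.1%} per year")
```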

Water seepage: Water is a double-edged sword for data centers. Even a few drops on critical hardware can cause irreparable and permanent damage. At the same time, water supply and storage for cooling and fire control systems need to be maintained at optimal levels.

Data centers’ fresh water needs are predicted to quadruple, further straining access to a finite resource, particularly in water-scarce areas such as deserts. Consistent access to water is imperative to smooth data center operations.

Acoustics: Exposure to high-decibel sounds for prolonged periods of time is one of the most overlooked risks when building data centers. Hard drives and storage systems are particularly susceptible to loud sounds – high-frequency sound vibrations can significantly lower read and write performance, possibly compromising data quality and integrity.


It follows that the data center should be located far away from airports, arenas, and the like. Acoustic suppression technologies play a critical role in reducing equipment exposure to sonic shockwaves from high-decibel noise sources such as security and fire alarms or other apparatus on and around the premises.

Fire: Fires in data centers are mostly caused by power surges in the electrical equipment. One fire could destroy thousands of dollars’ worth of devices if not detected and put out immediately. In the early stages of a fire, the amount of smoke is so low that it can’t be detected by smoke detectors. Further, air conditioning and circulating systems disperse it quickly. The solution is Aspirating Smoke Detectors (ASD) that detect smoke at a very early stage and alert users as soon as minimum thresholds are crossed. In wildfire hazard areas, AI can help predict danger and better prepare the data center.

Recently, in South Korea, a lithium-ion battery fire at a data center crashed the entire nation’s digital services, wreaking havoc on government functions and disrupting emergency services. Some 647 systems reportedly went offline, and many were ultimately unrecoverable. The incident is a stark reminder of the importance of fire preparedness and robust digital infrastructure.

Security: Security failures in a data center could include anything from a network breach to sabotage and damage caused by individuals present at the site. One of the biggest threats is cyberattacks that result in leakage of account data or personally identifiable information (PII) belonging to customers.

Certain application or system failures may result in security personnel being unable to verify card holders’ identity or authorize them to go to certain areas. Video cameras and doors with access control might lose their connection to the central system too.

Breaches and threats caused by ransomware can only be dealt with using a multilayered approach to data protection, which has three aspects: prevention, detection, and recovery. Specific defense mechanisms include educating end users, regular vulnerability scanning, role-based access control, and regular data backups (the proverbial last line of defense).
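The recovery layer in particular can be verified programmatically. Below is a minimal sketch of a 3-2-1-style backup check (enough copies, more than one media type, at least one off-site); the backup inventory and field names are hypothetical, not a real backup product’s API:

```python
from datetime import datetime, timedelta

# Hypothetical backup inventory for illustration.
backups = [
    {"media": "disk",  "offsite": False, "taken": datetime(2026, 2, 4)},
    {"media": "tape",  "offsite": True,  "taken": datetime(2026, 2, 1)},
    {"media": "cloud", "offsite": True,  "taken": datetime(2026, 2, 5)},
]

def check_3_2_1(backups, max_age_days=7, now=None):
    """Return a list of problems with the backup set (empty means healthy)."""
    now = now or datetime(2026, 2, 5)
    problems = []
    if len(backups) < 3:
        problems.append("fewer than 3 copies")
    if len({b["media"] for b in backups}) < 2:
        problems.append("fewer than 2 media types")
    if not any(b["offsite"] for b in backups):
        problems.append("no off-site copy")
    if all(now - b["taken"] > timedelta(days=max_age_days) for b in backups):
        problems.append("no recent backup")
    return problems

print(check_3_2_1(backups) or "backup set looks healthy")
```

A check like this belongs in the detection layer too: running it on a schedule turns a silent backup gap into an alert before ransomware makes the gap matter.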


Artificial intelligence is also playing an essential role in modern data center security through threat detection. Machine learning algorithms can scan and analyze enormous quantities of data, identifying potential threats faster than their human counterparts.

The technology empowers IT professionals to have more proactive cybersecurity that spots patterns and anomalies and reduces response times. These tools can be both predictive and preventative and can automatically initiate potential responses in case of attack. 
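The pattern-spotting these tools rely on can be surprisingly simple at its core. A toy sketch of anomaly detection using a rolling z-score over a metric stream (the threshold, window size and login-failure data are illustrative assumptions):

```python
import statistics

def find_anomalies(readings, window=10, threshold=3.0):
    """Flag points that deviate strongly from the trailing window."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # avoid division by zero
        z = (readings[i] - mean) / stdev
        if abs(z) > threshold:
            anomalies.append((i, readings[i]))
    return anomalies

# Steady login-failure counts with one sudden spike (illustrative data).
metrics = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6, 48, 6, 5]
print(find_anomalies(metrics))  # → [(10, 48)]
```

Production systems use far richer models, but the principle is the same: learn what normal looks like, then alert (or automatically respond) when the stream departs from it.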

Hardware should be reinforced with security best practices as well. Chip-level security means ensuring that devices are insulated against threats, whether through encryption or secure boot. By protecting against tampering and other malicious acts, hardware can serve as a more secure root of trust for data.

Compartmentalization and containment are key to reducing risk in a catastrophic episode, so problems can be isolated instead of cascading.

Security breaches consistently fetch headlines and have infamously exposed millions of users’ most sensitive data, at companies from Facebook to Equifax.

Emerging AI Risks: As powerful a tool as artificial intelligence is in preventing cyberattacks, it can also be weaponized against a data center. AI can be deployed to create deepfakes that target personnel with important access and credentials. And as the technology begins to make more and more infrastructure decisions within a data center, human supervision and involvement are vital.

System failure: This is where the greatest number of things can go wrong, and with the highest frequency. It is important to identify and fix every single point of failure in the entire IT infrastructure that might affect the data center.

This starts with a resilient network architecture and connectivity. Redundant fiber optic connectivity is the gold standard for data centers. Then come servers with multiple tenants or multiple applications running on them. Clustering, mirroring, and duplication help in ensuring continuous access and delivery and minimize the possibilities of downtime.
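Single points of failure can be hunted mechanically: remove each component from the dependency graph in turn and check whether clients can still reach the service. A brute-force sketch (the topology below is a hypothetical example, not any particular network):

```python
def reachable(links, start, end, removed=None):
    """Breadth-first search over directed links, skipping removed nodes."""
    removed = removed or set()
    seen, queue = {start}, [start]
    while queue:
        node = queue.pop()
        if node == end:
            return True
        for a, b in links:
            if a == node and b not in seen and b not in removed:
                seen.add(b)
                queue.append(b)
    return False

def single_points_of_failure(links, start, end):
    """Nodes whose removal disconnects start from end."""
    nodes = {n for link in links for n in link} - {start, end}
    return [n for n in sorted(nodes)
            if not reachable(links, start, end, removed={n})]

# Hypothetical topology: two redundant routers, but only one core switch.
links = [("client", "router-a"), ("client", "router-b"),
         ("router-a", "core-switch"), ("router-b", "core-switch"),
         ("core-switch", "server")]
print(single_points_of_failure(links, "client", "server"))  # → ['core-switch']
```

The redundant routers survive the test, but the shared core switch does not, which is exactly the kind of finding that clustering, mirroring and duplication are meant to eliminate.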


Both security and system failures are influenced by the hardware running in the data center. “Hardware lifecycle management is one approach that we're taking to address security,” Chatha said, as gear that is near end of life isn’t supported by the latest and greatest operating systems. 

Modern HCI-powered data centers now pack everything together and deliver IT infrastructure as a resilient, secure, and self-healing platform.

Another risk is when software applications go rogue on the data center and take down systems and servers with them. IT needs to make sure that these applications can run seamlessly over the entire infrastructure without causing any glitches in servers located in the data center or any other environment.

Backing up data and files is a routine procedure for most organizations, but immediate recovery of real-time or transactional data in the event of downtime should be a priority for data centers. This is done in different ways in different companies according to the regulatory standards applicable to their industry. Again, by consolidating multiple backup solutions into a single turnkey platform such as Nutanix Mine, organizations can simplify data lifecycle management and get complete visibility and control over their data.


Poor disaster recovery planning: Identifying and minimizing risks isn’t the end of the story. Any risk management plan worth its salt should lay out exactly what to do when (not if) disaster strikes and include a step-by-step recovery plan for every imaginable undesirable event. This starts with having systems in place that monitor key environmental factors and alert the relevant people when certain thresholds are crossed.

Failing this, the situation might quickly get out of hand and losses will escalate in the event of a sudden disaster.
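The threshold-and-alert machinery described above can be sketched in a few lines; the sensor names, limit values and notification hook below are all hypothetical illustrations:

```python
# Hypothetical environmental limits per sensor type (low, high).
THRESHOLDS = {
    "temperature_c": (18.0, 27.0),   # assumed recommended band
    "humidity_pct":  (40.0, 60.0),
    "water_level":   (0.0, 0.0),     # any detected water is an alert
}

def evaluate(readings):
    """Compare sensor readings to limits; return alert messages."""
    alerts = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS[sensor]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {sensor}={value} outside [{low}, {high}]")
    return alerts

# Simulated readings; in practice these would come from facility sensors.
for msg in evaluate({"temperature_c": 31.5, "humidity_pct": 45.0,
                     "water_level": 0.0}):
    print(msg)  # a real system would page on-call staff here
```

The hard part is not the comparison but the operational follow-through: who gets paged, in what order, and what runbook they execute, all of which belongs in the recovery plan.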

Having a disaster recovery plan in place is essential for data center risk management, said Chatha. 

“If a primary location goes down, is it going to impact the entire company or subset of the company? How critical is that?”

Platforms that are flexible and automated are critical for non-disruptive recovery in the event of a disaster. Nutanix Xi Leap is a DR orchestration solution that is simple to deploy and manage, as well as adaptable to on-premises or cloud sites. It eliminates data silos and facilitates replication and recovery from a single user interface.

What New Complexities and Threats are on the Horizon?

The data center business is more dynamic thanks to emerging technologies and workloads, and data center risk management is now facing new challenges.

Energy demands: Accounting for the cost of data center power consumption has never been more important, especially with rising energy costs and power-hungry AI workloads. As Vince Kellen, CIO of the University of California, San Diego told EdTech’s Tom Mangan, “We’re seeing that with every wave of hardware expansion in the supercomputer center, the type of computing is much more intensive, both from an energy and a heat standpoint.”

The article notes that it’s not just AI putting pressure on power: data-driven processes of all kinds are on the rise.

Kellen told EdTech that if energy consumption continues to climb at current rates, some areas of the United States may not have enough energy to keep the computers humming, with a state being advantaged or disadvantaged based upon how it regulates its cost of energy.

If you're doing business in Europe, your data center risk management must factor in rising energy costs across the continent.

Sustainability Demands: Despite increased power demands on data centers, owners and operators are expected to continue to focus on sustainability, which adds to risk management complexity. 


Carbon offsets have fallen out of favor, noted Fixate.IO technology analyst Christopher Tozzi in Data Center Knowledge in 2024, with more investment required in "green AI infrastructure" within data centers, such as processors designed to reduce the energy consumed by AI workloads, along with an increase in water efficiency.

Reporting and compliance: Sustainability and ESG demands will increasingly require more metrics reporting and disclosures, especially around water use efficiency, wrote Tozzi, which means data center risk management must include methods and processes for releasing hard data about sustainability outcomes. In 2023, Amazon Web Services became the first cloud provider to release metrics related to water use, setting a precedent for other data center operators, and California has set another by implementing regulations that lay out climate-related disclosure requirements for data centers.

Not all regulatory compliance is related to environmental concerns. Privacy legislation across different jurisdictions, whether the General Data Protection Regulation (GDPR) in Europe or the California Consumer Privacy Act (CCPA), places demands on security if companies are to avoid the penalties for non-compliance. DORA, which took effect in 2025 and covers cloud providers because they are considered critical third-party platforms, must also be factored into data center risk management.

Location, Location, Location

If there’s something all data center risk management factors have in common, it’s location – proximity to areas prone to more wildfire activity, higher energy costs and expensive real estate.

Data centers have traditionally been located close to company headquarters, but it also makes sense to place them near the IT staff who will monitor them as part of a data center risk management strategy, or near a third-party managed services provider.


Safety from natural disasters isn’t a new requirement, but increasing wildfire activity has the potential to encroach on areas that have previously been a safe location for a data center.

Chatha said that once natural disasters are taken into account, whether wildfires or flood zones, data center risk management is heavily influenced by municipalities, including zoning and whether the local power grid can supply the power needed.

Location also determines proximity to connectivity and the quality of network providers in the area; great reliability and speed are needed to avoid latency for end users.

“Connectivity used to be the biggest barrier in site selection, but that's much easier to manage and deploy these days versus the power constraint,” Chatha said.

Real estate costs are also a consideration when locating your data center, not just for a new build but when considering expansions. If you use a third party managed service provider, their real estate pressures could affect the cost of your services as well as available capacity. 

Balancing the Ecosystem with Data Center Risk Management

A data center has a thousand moving parts, and it is itself a cog in the organizational wheel, so to speak. One small misalignment can upset the equilibrium of the whole organization, across departments.

Risk mitigation, therefore, is a shared responsibility. Each employee or stakeholder can help keep the facility operating at its optimal level by following the rules, enforcing them, and learning how to do both better. IT leaders should know exactly where and how much it costs to keep everyone trained and to ensure they have access to the resources needed for any task involving the data center. The responsibility falls on the CTO or CIO to set expectations and provide clarity on these operations.

Of course, data centers and the IT infrastructure they house don’t function in isolation. Spending money on data center risk management may not be a top priority for all managers; most departmental objectives pale in comparison to meeting revenue targets.

“Conflicting goals can be hard to address, but one of the most effective methods of doing so is to have a highly efficient process for continuously identifying where a risk resides. You also need a predictable, reliable method of updating systems without impacting the overarching business goals of the organization,” said Gavin Millard, VP of Product Marketing at Tenable.

And in a competitive seller’s market, data center risk management has become increasingly dynamic, with power now the key consideration, Chatha said.

Chatha has been in the data center industry for decades, and he said even the sellers are constrained by power companies. “Small guys like us are just kind of going wherever we can find a little bit of power here and there and deploy.”

Those power demands are predicted to rapidly accelerate as more of the world turns to artificial intelligence technology and the powerful tools that it has to offer.

This is an updated version of the article originally published on April 15, 2021. This article was also revised on November 7, 2024. 

Gary Hilson has more than 20 years of experience writing about B2B enterprise technology and the issues affecting IT decision makers. His work has appeared in many industry publications, including EE Times, Embedded.com, Network Computing, EBN Online, Computing Canada, Channel Daily News, and Course Compare. Find him on Twitter.

Dipti Parmar wrote the original article. Find her on X @dipTparmar and LinkedIn.

© 2026 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.
