Artificial intelligence has supercharged security threats, creating a new era of enterprise AI security. One in six data breaches in 2025 reportedly occurred with the help of AI, according to IBM.
“Attackers can use generative AI … to both perfect and scale their phishing campaigns and other social engineering attacks,” IBM reported in its 2025 “Cost of a Data Breach” report, which found that among breaches involving attacker use of AI, AI-generated phishing (37%) and deepfake impersonation (35%) were the most common techniques.
“IBM previously found [generative AI] reduced the time needed to craft a convincing phishing email from 16 hours down to only five minutes. This year’s report shows the impact: On average, 16% of data breaches involved attackers using AI.”
There’s good news, too.
“Global data breach costs have declined for the first time in five years … due to faster breach containment that was driven by AI-powered defenses,” IBM continued.
In other words: While attackers’ use of AI likely will continue to grow, enterprises can thwart bad actors by fighting fire with fire, or rather, by fighting AI with AI.
AI creates both internal and external security risks for enterprises. Among the latter are bad actors using AI to launch more and better cybersecurity attacks.
Specifically, AI is accelerating traditional threats to enterprise security, including phishing and social engineering, by helping criminals produce more convincing emails and text, including grammatically perfect, context-aware messages and chats produced at scale.
“Tasks that used to take attackers days or weeks, such as crafting convincing phishing emails or developing new malware variants, now happen in minutes because generative models do the heavy work for them,” said Amit Shingala, CEO of Moatadata, which specializes in AI-driven advertising and demand generation platforms.
What’s most concerning, Shingala shared in an interview with The Forecast, is that malware no longer just hides. With the help of AI-generated polymorphic code, it actively learns and adapts.
“Attackers are using AI systems that observe how endpoint security tools behave and then modify their execution patterns to blend in with normal activity,” Shingala continued.
Polymorphism, where systems continuously change and mutate to evade detection and mitigation, has been an issue since the early days of malware, echoed Trevor Horwitz, CISO at cybersecurity and compliance company TrustNet.
“AI has just really accelerated that,” Horwitz told The Forecast.
Although threat actors are using AI to learn from repeated failures, organizations maintain a slight edge over attackers, Horwitz asserted.
“We have the ability to train our models on all the different combinations of the type of mutations coming in, and that will ultimately provide us with a better set of detection capabilities,” he said.
Aimee Simpson agreed. Because they have access to the same technology stacks as criminals, she observed, enterprise cybersecurity teams are well-positioned to fight back and ultimately win.
“AI-based threats are a threat and need to be taken seriously, but we’re not battling these threats in the industry with outdated technology,” said Simpson, director of product marketing at Huntress, a cybersecurity company founded by veterans of the National Security Agency.
“Cybersecurity defense teams also have access to AI tools, which are phenomenal at detecting these evolving threats in real time.”
In fact, leading cybersecurity organizations have already begun rolling out AI-first security systems that embed AI within existing defense mechanisms.
“Embracing AI may be a useful approach in many organizations, but it’s only an effective choice if security teams can ensure that the products brought into a company are safe, secure and not going to present a data breach risk,” Simpson said.
If external criminals using AI are one type of risk, internal employees using AI are another.
Indeed, the rapid implementation of AI by enterprises also creates security risks by expanding attack surfaces, exposing sensitive data and introducing model‑specific vulnerabilities that existing controls don’t cover.
According to survey data from research and advisory firm Gartner, 62% of organizations have experienced a deepfake attack involving social engineering or exploiting automated processes. Another 32% of organizations reported experiencing an attack on AI applications that leveraged the application prompt.
“As adoption accelerates, attacks leveraging [generative AI] for phishing, deepfakes and social engineering have become mainstream, while other threats — such as attacks on [generative AI] application infrastructure and prompt-based manipulations — are emerging and gaining traction,” Akif Khan, an industry analyst on Gartner's cybersecurity and AI team, said in September 2025 at the Gartner Security & Risk Management Summit in London.
Currently, only one in 10 organizations is prepared to protect itself from AI-augmented cyber threats, according to consulting giant Accenture. In a June 2025 report, it said 63% of companies worldwide lack both a cohesive cybersecurity strategy and necessary technical capabilities to defend against AI-driven threats.
Those companies need “a fit-for-purpose security governance framework and operating model … to establish clear accountability and align AI security with regulatory and business objectives,” Accenture noted.
The importance of AI governance is particularly pronounced given the growth of shadow AI: the unsanctioned use of AI tools and applications by employees seeking to save time or boost productivity, Simpson suggested.
“Enterprises need to outline the risks of shadow AI in their organization and create clear compliance structures around AI usage,” she said.
Oleg Naumenko agreed. Because many companies aren’t managing AI access properly, employees often use AI tools without sufficient oversight, he observed.
“Sometimes they grant access to the company's internal data through APIs without the company even knowing,” said Naumenko, CEO at Hideez, a provider of password-free authentication and identity access management (IAM) solutions.
Naumenko recommends integrating AI access management directly into IAM systems with a zero-trust security mindset.
“Instead of letting individual employees manage AI on their own, IT admins should be the ones setting the rules,” he continued, adding that organizations should give AI systems access only to the enterprise data and infrastructure that they need, and only for a limited time.
Shingala echoes the importance of zero-trust principles, including strict access controls, visibility into what data models consume, continuous monitoring of inputs and outputs, and complete auditability.
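These zero-trust principles — least-privilege grants, time-limited access and an auditable record of every request — can be made concrete in code. The sketch below is a minimal, hypothetical illustration (the gateway class, agent names and scope strings are invented for this example, not taken from any vendor's product):

```python
import time
from dataclasses import dataclass


@dataclass
class AccessGrant:
    # Hypothetical grant: which data scopes an AI system may use, and until when.
    agent_id: str
    scopes: frozenset
    expires_at: float


class AIAccessGateway:
    """Toy zero-trust gateway: every AI data request is checked and logged."""

    def __init__(self):
        self._grants = {}
        self.audit_log = []  # append-only record of requests and outcomes

    def grant(self, agent_id, scopes, ttl_seconds):
        # Least privilege: only the named scopes, only for a limited time,
        # and set by an administrator rather than the individual employee.
        self._grants[agent_id] = AccessGrant(
            agent_id, frozenset(scopes), time.time() + ttl_seconds
        )

    def request(self, agent_id, scope):
        grant = self._grants.get(agent_id)
        allowed = (
            grant is not None
            and scope in grant.scopes
            and time.time() < grant.expires_at
        )
        # Auditability: record every request, whether allowed or denied.
        self.audit_log.append((agent_id, scope, allowed))
        return allowed


gateway = AIAccessGateway()
gateway.grant("summarizer-bot", {"crm:read"}, ttl_seconds=3600)
print(gateway.request("summarizer-bot", "crm:read"))  # in scope and unexpired: True
print(gateway.request("summarizer-bot", "hr:read"))   # never granted: False
```

A production system would delegate the grant store and expiry to an existing IAM platform; the point of the sketch is the shape of the control: deny by default, scope narrowly, expire automatically and log everything.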
“AI should not be treated as an isolated project or a nice-to-have enhancement. It needs to be treated as a fundamental layer of the enterprise architecture,” he said. “Because AI touches so many systems and so much data, its security model must be equally comprehensive.”
Rather than viewing AI adoption as simply deploying new software, organizations should consider AI, and agentic AI in particular, as a new class of employee, Horwitz suggested.
“If you think about agentic AI and how it interacts, you have similar characteristics to employees,” he said. “Employees have intentions, they have autonomy and they need access to resources.”
Employees and AI also have something else in common: They can make consequential mistakes. Therefore, thinking of AI as a new kind of employee “will help organizations make better decisions,” Horwitz said.
In the face of external and internal risks alike, effective organizations are the ones that treat AI not as a productivity tool, but rather as a core security capability, Shingala observed.
“Adopting AI without building an AI-aware security posture does not just improve business operations. It can unintentionally strengthen the capabilities of attackers, as well,” he said, summarizing the future of enterprise security as a contest of AI against AI.
“Attackers are using AI to scale and evolve, and defenders must rely on AI to predict, identify and neutralize threats just as quickly.”
Gary Hilson has more than 20 years of experience writing about B2B enterprise technology and the issues affecting IT decisions makers. His work has appeared in many industry publications, including EE Times, Fierce Electronics, Embedded.com, Network Computing, EBN Online, Computing Canada, Channel Daily News and Course Compare.
© 2026 Nutanix, Inc. All rights reserved.