Podcast

Ungoverned AI Can Cause Harm

i-GENTIC AI CEO Zahra Timsah says agentic AI makes it easier to enforce guardrails for safe and responsible AI.
  • Key Play: Enterprise AI, Thought Leadership
  • Nutanix-Newsroom: Article, Podcast

March 10, 2026

Artificial Intelligence demands strong guardrails. And yet, conventional strategies for creating and enforcing robust governance tend to be cumbersome and error-prone.

So suggests i-GENTIC AI, a company that’s exploring the intersection of compliance and innovation. According to its research, some 76% of organizations still rely on manual enforcement of AI governance, 87% can’t keep up with regulation updates and 93% detect compliance violations only after damage is done.

AI itself may offer a way forward, argues i-GENTIC AI CEO Zahra Timsah. In an interview with The Forecast, she said agentic AI, in particular, can help businesses implement effective AI governance at the scale and speed needed in today’s dynamic regulatory environment.

This Tech Barometer podcast segment is part of a series on AI industry leaders.

The Governance Imperative

AI has dominated discourse for the past few years. After exhaustive conversation and speculation, companies that talked about AI until they were blue in the face are finally starting to use it, according to Timsah.

“We have crossed a threshold where AI is no longer experimental,” she said. “AI models are embedded in credit scoring, healthcare diagnostics and legal reviews.”

That makes strong governance essential. 

“When you reach that scale, governance is not optional,” Timsah continued. 

“Ungoverned AI … can create real harm. You’re talking about biased hiring systems, misinformation loops, intellectual property violations and even opaque decision paths for decision makers.”

More than 80% of all AI projects fail, RAND Corporation reported in a 2024 analysis. That’s twice the rate of failure for IT projects that do not involve AI, it said.

The reason so many AI endeavors crash, Timsah suggested, is that many organizations view governance as just another compliance box to check. Instead, governance should “function like an ecosystem of accountability” that connects technical safeguards, organizational policies and human judgment, she said.

Unfortunately, that kind of ecosystem can be difficult to construct and impractical to sustain, Timsah acknowledged. With AI potentially operating across multiple platforms, governance becomes costly and time-consuming. And humans can make mistakes, putting organizations at risk.

That’s especially true in the healthcare arena, Timsah’s specialty, with its myriad privacy regulations.

“You are dealing with patient data,” she said, adding that AI-driven applications potentially put all that very sensitive data at risk.

Many other industries face similar challenges as they ramp up their AI use cases.

“Today’s business leaders are operating in a world where every single AI decision, whether you’re talking about a model output or an automated recommendation, is carrying two things: opportunity and liability,” Timsah explained.

Good AI governance can help offset the liabilities by ensuring not only regulatory compliance, but also data privacy and ethical and unbiased outputs.

But conventional approaches, meaning manual, human-driven ones, may not be able to keep pace. With businesses “hiring armies of auditors, or maybe relying on post-incident reviews,” manual enforcement of AI governance is labor-intensive and often comes too late, said Timsah, who argues that organizations need a modernized approach. “They need situational awareness at scale.”

Agentic AI to the Rescue?

Experts urge consistency. 

“Effective AI governance requires simple, actionable frameworks that integrate data quality standards and accountability into development and deployment,” CompTIA said in an October 2025 blog post.

As an automated means to monitor and enforce an organization’s AI governance, AI agents can help execute that vision. 

“They can run 24/7 and handle a humongous amount of data points,” Timsah said. “You are getting consistent and fast results.”

To that end, i-GENTIC AI has developed an AI operating system that’s tailor-made for governance. The platform delivers autonomous agents that act as compliance officers, monitoring systems and applying rules in real time. Agents can deliver intelligent oversight by making suggestions, or even by proactively enforcing policy.

With AI agents supporting governance, “you have a single source of truth and a single pane of glass that can handle regulations for you at a process level,” Timsah explained, adding that AI agents tasked with governance can help organizations “bring all those moving pieces into one place and then produce audit-ready reports and flag risks for you.”

The standard AI caveat applies: Agents shouldn’t be left to manage governance entirely on their own. Oversight remains essential. 

“You as a human are the decision maker,” Timsah said. “You need someone to continuously refine those agents and get the results that are needed.”

Looking Ahead

Given the rapid adoption of AI across myriad industries, the need for robust governance will only continue to increase. Along the way, organizations will be tasked with continuously raising the bar on how they oversee AI-informed processes.

“Best practices include cross-functional governance teams, regular risk reviews and dedicated responsible AI offices with authority to influence AI strategy, design and deployment,” CompTIA said in its aforementioned blog post.

“Continuous monitoring for model drift and unexpected autonomous behavior is essential to sustain trust and alignment with ethical standards.”

Timsah pointed to the automation inherent in agentic AI as a key enabler in that effort. Agents’ ability to deliver clear reporting will likewise help AI stay on track. 

“Imagine a boardroom: You have executives, and they can see their entire AI landscape in front of them as a living system,” she said.

Imagine that those same executives receive alerts when models are drifting out of ethical or regulatory bounds. When that happens, they can ask questions about AI behaviors and train the system to constantly improve.

“This is what agentic AI can provide,” Timsah concluded. “We’re creating a digital nervous system that’s uniting ethics, risk and intelligence into one living operating system.”

Podcast transcript:

Zahra Timsah: We have crossed a threshold where AI is no longer experimental at this point, right? It's kind of like an infrastructure. AI models are embedded in credit scoring, healthcare diagnostics, legal reviews. Even if you look at national defense, you have AI. They're no longer tools. They're decision makers in digital form. This is how you can think about that. When you reach that scale, governance is not optional anymore. It's kind of existential, so to speak.

Jason Lopez: Zahra Timsah, co-founder and CEO of i-GENTIC AI, asserts that the deployment of AI in an organization requires a real-time governance layer to help see what an AI system is doing. This is the Tech Barometer podcast. I'm Jason Lopez. What you're about to hear from her is a part of our Thought Leader series on AI. And while there's the debate about AI in the news headlines, at The Forecast, we're going deeper, talking to technologists who are filling us in on what they're seeing in the industry and what they're working on in artificial intelligence.

Zahra Timsah: Ungoverned AI, and this is from experience, can create harm, like real harm. You're talking about biased hiring systems, misinformation loops, intellectual property violations, and even opaque decision paths for decision makers. They are operating in a world where every single AI decision, whether you're talking about a model output, a data merge, automated recommendation, whatever, is getting two things, opportunity and liability.

Jason Lopez: She says organizations have to ensure their AI systems are transparent, accountable, and ethical. And that's why emerging regulations are rapidly shifting the conversation from optional best practices to enforceable requirements for explainability, fairness, and traceability.

Zahra Timsah: If you look at regulations that are accelerating, like look at the EU AI Act, look at the US AI executive order, look at the GCC frameworks, AI governance has really evolved. It's no longer just a compliance checkbox. It's kind of a trust infrastructure. We're seeing companies kind of form AI governance councils, I think is a very good idea. And they're including in it CEOs, CIOs, general counsels, and tech leadership.

Jason Lopez: Timsah says without coordination, organizations risk managing AI through fragmented tools and disconnected processes, which will struggle to keep pace with change. Bringing stakeholders together is a great step.

Zahra Timsah: You're talking about folks that specialize in GRC, governance, risk, and compliance. You're talking about legal departments, even technical experts as well. Because not only do you have people, you also have platforms. You know what they say. It's people, process, and platform. All of these are like siloed tools to manage the GRC.

Jason Lopez: This is where, she says, agentic AI can deliver. And just to highlight that agentic AI isn't so much about AI agents. Agentic AI is the operating model that sets direction, plans the work, and brings team members together to achieve a larger goal.

Zahra Timsah: With agentic AI, you have systems that are learning from human decisions and improving their governance reflexes. With an agent, they can run 24-7. They can handle a humongous amount of data points that a human cannot even imagine. You're getting consistent and fast results.

Jason Lopez: In this model, AI becomes an active oversight layer rather than just a passive reporting tool. Instead of waiting for periodic reviews, leaders can monitor performance, detect risks, and respond to governance issues.

Zahra Timsah: Let's imagine a boardroom. You have executives, and they can see their entire AI landscape in front of them as a living system. This is what agentic AI can provide. Models that flag when, let's say, some sort of an AI system is drifting out of ethical or regulatory bounds. Privacy agents can mask PII, cyber agents detecting anomalies, and the list goes on and on and on. It's governance that's talking back to you. Leaders can ask it questions, and then they can give you back answers quickly, not generate reports like humans would do with the answers hidden in them.

Jason Lopez: It moves governance efforts from static oversight to real-time awareness. But every organization interprets risk differently.

Zahra Timsah: There is no such thing as one-size-fits-all. Every company has its own risk appetite, understanding, and interpretation of these regulations. Then you have a team that's going to review the regulations and try to understand what they mean for your company.

Jason Lopez: Timsah says internal expertise remains essential. Even with advanced automation, meaningful oversight still depends on people.

Zahra Timsah: You always have to have a human in the loop. Always. Just like you're driving a Tesla, and it can just drive itself. But you can also receive guidance so that you are the decision maker. You, as a human, are the decision maker. Agentic AI is powerful, and it can get rid of mundane tasks. But it will still generate errors, and there is nothing that can replace human experience.

Jason Lopez: And here's a part of the story that underlines what she says. Zahra Timsah did not arrive at agentic AI from a computer science trajectory. She came up through healthcare, studying cancer biology and drug discovery. She did postdoc work at places like the MD Anderson Cancer Center, and she discovered that personalized medicine was becoming too complex for manual analysis. Early on, she got involved in healthcare AI technologies, and one of her goals was to design and test patient-specific therapies using neural networks.

Zahra Timsah: What we're doing is creating a digital nervous system that's uniting ethics, risk, and intelligence into one living operating system.

Jason Lopez: It's about embedding governance directly into how AI systems operate, so oversight happens continuously rather than after problems arise.

Zahra Timsah: Agentic is really the world's first agentic AI operating system for governance, and it didn't take us a day or two, a month or two to build Agentic. It took us 17 years of experience to fine-tune these agents to act on our experience as founders to achieve the results. It's a platform where you have autonomous agents, but these are in reality digital chief compliance officers that can monitor in real time, which a person cannot do, enforce, and even learn, you know, like you're learning compliance, whether that relates to AI, data, privacy, and cybersecurity. So it's running 24-7, and it's instant. It's proactive. It's not reactive. So these agents think of them as intelligent layers of oversight. That's what we're doing. So everyone who's touching or benefiting from AI carries responsibility. Governance cannot be delegated to a single compliance officer, or you can't bury it inside IT.

Jason Lopez: Zahra Timsah is the co-founder and CEO of i-GENTIC AI, a governance platform that uses AI agents to manage and enforce governance, risk, and compliance. Our story with her is part of The Forecast's reporting on thought leaders in the AI industry. You might check out some of our stories from other thought leaders, such as our profile of Greg Diamos. Go to theforecastbynutanix.com. That's all one word, theforecastbynutanix dot com. This is the Tech Barometer Podcast. I'm Jason Lopez. Thanks for listening.

Adam Stone is a journalist with more than 20 years of experience covering technology trends in the public and private sectors.

The podcast was produced by Jason Lopez, executive producer of Tech Barometer, the podcast outlet for The Forecast. He’s the founder of Connected Social Media. Previously, he was executive producer at PodTech and a reporter at NPR.

© 2026 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.
