Podcast

Get Laser-Focused on AI Governance

ModelOp CTO Jim Olsen explains why strong governance will build trust and deliver business value for enterprises that are implementing AI agents.

March 3, 2026

Agentic artificial intelligence systems act autonomously to achieve pre-defined goals. That means they can make decisions and take actions without constant human oversight. In enterprises that deploy AI agents, this promises to impact everything from customer support to software development to HR processes.

Jim Olsen is chief technology officer (CTO) at ModelOp, a software company specializing in AI operations. He helps develop software that automates the governance around agentic AI, and he thinks as much about the risks as he does the opportunities.

“What kind of information are you sending out?” he asked in an exclusive interview with The Forecast. “Is it information that’s OK to disclose outside of your company?”

In short: “What are these tools doing?”


This is part of the AI Thought Leaders series based on interviews with people pioneering enterprise AI. 

In this Tech Barometer podcast segment, Olsen said that governance, meaning the systems and policies that guide how companies deploy AI agents, will be the secret to ensuring safe and effective agentic AI operations.

Understanding Agentic AI Risks

Agentic AI generally relies on the Model Context Protocol (MCP), an open standard that connects AI agents and large language models to external tools and data sources. MCP is a two-way street, according to Olsen.

“It can absolutely just pull information, but it also makes things happen,” he said.

That means MCP needs to be handled with care. 
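That two-way distinction can be sketched in code. The snippet below is illustrative Python, not the real MCP SDK; the tool names (`lookup_order`, `issue_refund`) and the audit hook are invented for this example. It shows the difference between a tool that only pulls information and one that makes things happen, with a place where a governance layer could observe or block the latter.

```python
# Illustrative sketch (not the real MCP SDK): an MCP server exposes "tools"
# an agent can invoke. Some only read data; others take actions with side
# effects -- the "two-way street" Olsen describes.

class Tool:
    def __init__(self, name, capability, handler):
        self.name = name              # tool name exposed to the agent
        self.capability = capability  # "read" or "execute"
        self.handler = handler        # the function that actually runs

def invoke(tool, *args):
    # A governance layer can observe or gate side-effecting tools here.
    if tool.capability == "execute":
        print(f"[audit] side-effecting tool '{tool.name}' invoked")
    return tool.handler(*args)

# One hypothetical tool pulls information; the other makes something happen.
lookup_order = Tool("lookup_order", "read",
                    lambda oid: {"id": oid, "status": "shipped"})
issue_refund = Tool("issue_refund", "execute",
                    lambda oid: f"refund issued for {oid}")
```

The design choice worth noting: the read/execute distinction lives in metadata, so a policy layer can treat the two classes of tools differently without inspecting each handler.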

“We’ve seen instances in the wild of people using MCP tools that have access to potentially proprietary company information, [including] your entire hard drive,” continued Olsen, who recalled one instance in which a company operationalized an MCP tool that promptly disclosed all user passwords.

When things like that happen, it’s plain to see: Agentic AI comes with real-life risks. 

“You need to understand, when you use these tools, or allow people to use these tools, what impact that could have, what kind of information could go out, what kind of information could come in, what kind of damage could be done,” Olsen said.

Practical Steps for Agentic AI Governance

From Olsen’s point of view, there are steps organizations can take to develop effective agentic AI governance that sidesteps the inherent risks posed by AI agents.

First, define the agentic AI use case: What business goal will the agents support? Then, understand what information will feed those processes. 

“What pieces are they using? What tools [and] what model?” asked Olsen, who said organizations need to track all that. He said they also need an approvals process to ensure that AI agents are utilizing the right data, in the right ways.
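The tracking-and-approvals process Olsen describes can be sketched as a simple registry. This is a minimal illustration, not ModelOp's implementation; the use case name, model name, and tool names are all hypothetical. Each agentic use case declares what it uses, and nothing runs until a reviewer signs off.

```python
# Hypothetical sketch of an approvals registry: each agentic AI use case
# declares its model and tools, and must be approved before any tool runs.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_goal: str
    model: str
    tools: list
    approved: bool = False

registry = {}

def register(uc):
    registry[uc.name] = uc

def approve(name):
    registry[name].approved = True

def may_run(name, tool):
    # An agent may invoke a tool only if its use case is approved
    # AND that specific tool was declared up front.
    uc = registry.get(name)
    return bool(uc and uc.approved and tool in uc.tools)

register(UseCase("support-triage", "deflect tier-1 tickets",
                 model="example-llm", tools=["kb_search", "ticket_update"]))
approve("support-triage")
```

The point of the sketch: approval applies to a declared configuration (model plus tools), so swapping in an undeclared tool fails the check even for an approved use case.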


In these early stages, Olsen noted, it makes sense to assemble a formal or informal review board to guide the development of agentic AI solutions. That board should cast a wide net. 

“Your IT department understands SecOps and what needs to go on there,” so they need to be at the table, Olsen said.

But governance is also shaped by business-line owners who determine whether the solution actually adds value. 

“You need all of those players working together,” Olsen stressed.

As governance structures evolve, organizations must also review the infrastructure supporting agentic AI applications. 

“Everyone is doing some form of a cloud,” noted Olsen, who said the sensitive data involved means this most often will be “an Azure- and AWS-hosted private cloud area, where there’s access to their internal networks.”

Even with a private cloud, however, agentic AI raises new security concerns. 

“What kind of guarantees are there about the data and what they’re sending out there, so that they know it’s not being used to train the next model?” asked Olsen, who said agentic AI governance ensures that “you’re not taking your secret sauce and giving it to competitors through a retraining effort.”

Watch for Warning Signs

In these early days of agentic AI, organizations looking to implement AI agents may decide to forgo agentic-specific governance, or they may try to build the rudimentary guardrails themselves. If that describes your enterprise, there are warning signs you can watch for to determine when it’s time to up your governance game.

First and foremost are “disasters,” where agents do something wrong like giving customers an erroneous discount. “Those are fairly obvious,” Olsen said.

More subtle is the potential for organizational disconnect. 

“People are rightfully a little paranoid about letting an autonomous agent that doesn’t have a clear, reproducible path loose in their organization,” Olsen said. “They want to have the trust that there are safeguards in place, that they can see how it’s behaving and can judge how it’s performing.”


When an agentic application is launched absent strong governance, it undermines that trust. 

“Maybe IT pushes back because they can’t understand it, or don’t have the information necessary to feel comfortable with it,” Olsen continued.

On the flip side, IT may move too fast for the comfort of business-line leaders who don’t yet see the added value. “If you don’t have a clear process in place that everybody can understand and follow, then you lose that trust and the solutions just don’t happen,” Olsen said.

The Case for Automation

ModelOp’s solutions help to automate key aspects of governance, ensuring agentic AI adheres to both internal policies and regulatory requirements.

Absent automation, Olsen has witnessed organizations spend up to a year ramping up their agentic AI applications. 


“If you don’t have an automated way of doing that, it becomes a very manual, laborious process and it really slows down the effort,” he said, adding that automated governance tools can accelerate innovation and improve time to market. 

“It’s a clear, well-defined process. You’re not bogged down in paperwork.”

Robust agentic AI governance requires some organizational commitment, but it’s well worth the effort, according to Olsen. Agentic AI backed by strong governance can deliver business value today, he said, especially in customer support, where it can reduce costs while increasing customer satisfaction.

Even more importantly, agentic AI that’s bolstered by strong governance can help future-proof organizations. 

“If you have a truly resilient agentic system, it can adapt to new business needs,” concluded Olsen, who cited use cases like pivoting the supply chain and shifting to address evolving customer demands. “If you can actually get agents to just automatically do that stuff, and do it reliably, it obviously increases the success of your business.”

This is part of the AI Thought Leaders series based on interviews with people pioneering enterprise AI. Explore the series and related podcasts:

AI Shakes Up Software Development

Agentic AI Reconfigures Customer Service

Data Protection Gets Its Uber Moment

Creating AI to Give People Superpowers

MLPerf Scientist at the Intersection of Healthcare and AI

Quest to Improve Cancer Therapeutics with AI and Computer Science

Guiding Enterprise IT Hardware Buyers into the AI Future

Swarms of AI Agents Powering Businesses and Daily Life

Podcast transcript:

Jason Lopez: How do companies control, govern, and deploy AI safely at scale? When Jim Olsen, the Chief Technology Officer at ModelOp, talks about AI adoption, the theme of what he says rests on the idea of restraint.

Jim Olsen: Try not to just build things for the heck of it. That's why you have to tie all this back to a use case to understand, what is my goal and what does success look like?

If you don't have a clear process in place that everybody can understand and follow, then you lose that trust, and the solutions just don't happen. That's, of course, a missed opportunity.

If you have a truly resilient agentic system, it can adapt to new business needs, new things that just pop up. Given that autonomy, how do you actually control and make sure it doesn't disclose all your user passwords? 

Jason Lopez: This is the Tech Barometer podcast. I’m Jason Lopez. This story is part of our ongoing thought leader series with people at the cusp of AI technological development, like Jim Olsen of ModelOp, a platform that helps govern, monitor, and manage AI and machine learning models to ensure they are compliant, reliable, and aligned with regulatory and ethical standards. Agentic AI is the operating model that sets direction, plans the work, and brings team members together to achieve a larger goal. When you talk to Jim Olsen, before he gets into the AI conversation, he’s laser-focused on why you need it in your organization. What’s the use case?

Jim Olsen: If you can actually get agents to automatically do that stuff and do it reliably, it obviously increases the success of your business at its core. That's why it really comes down to what is your business value, what is your use case, and the success is going to look very different based on that. Ultimately, my business is successful, everyone's happier, and I've reduced my overall costs. 

Jason Lopez: But that promise only holds if the system performs as expected. The models need to be trusted and safe.

Jim Olsen: The nature of generative AI is that it does go out and perform differently based on very minor changes or even sometimes no changes at all. If you don't have some insight into that, naturally, people are concerned and paranoid about what could happen. You need that clear, transparent process in place in order to build that trust that we can see what's going on. We do know we've put the research in behind this to make sure it's going to behave okay.

Jason Lopez: Olsen says that while a clearly defined use case is the bedrock of an organization’s deployment of AI, another critical part of the strategy has to be managing AI’s behavior.

Jim Olsen: When you use these tools or allow people to use these tools, what impact that could have? What kind of information could go out? What kind of information could come in? What kind of damage could be done? So you need an approval mechanism in place to actually do that. 

You do need some automated process in order to scale this in an appropriate manner, because what you really need to know is, okay, what are the use cases out there that need to use agentic AI? Is it appropriate for them to be using agentic AI? Then what pieces are they using? What tools? What model? How many tokens are they actually using? Are you getting your value back out of your investment in these areas? You do need to track all of this information. You do need some approvals in place to make sure you have that process to do that. You can try to do it on a spreadsheet or something like that. We see that quite often, but we find that really gets lost. 

Jason Lopez: One of those mechanisms is MCP, or Model Context Protocol, an open protocol designed to let AI systems securely connect to external tools, data sources, and services in a standardized and predictable way.

Jim Olsen: But now you introduce these MCP tools and true agents that can act autonomously. How do you ensure that they only have access to the tools and data that they're supposed to? That's where one piece we're providing that we saw a lot lacking, that ties directly into our overall automated approval process, is an MCP gateway slash proxy that only allows specific use cases to use specific tools and blocks access to tools otherwise, so that way you know how they're being used. As A2A gets in there, we're going to see even more pieces that need to go in there and monitor and understand what's going on.

Jason Lopez: Olsen says he’s seen cases of MCP tools that have access to a company’s proprietary information. What ModelOp does is essentially act as a control plane for AI across an organization. It provides insight into what MCP tools are doing, where they’re enabled with agents, AI models, workflows, and governance.

Jim Olsen: What we've done at ModelOp is we've actually created that kind of a gateway or a proxy where you can deploy approved tools and actually monitor what use cases in your organization use those, and are they approved to use those tools, and block them if they aren't, and put protections in place that can do things like detect PII and say, hey, you can't send PII out of our company, these kinds of things. You can get some control around these MCP tools and understand their usage.

Jim Olsen: Reducing costs, increasing customer satisfaction, increasing accuracy of processes, et cetera, are all kind of standard business goals that a true agentic system can deliver on by being adaptable.

Jason Lopez: Jim Olsen is the CTO of ModelOp, a software platform that helps organizations govern, manage, and scale AI systems responsibly. This is the Tech Barometer podcast, I’m Jason Lopez. We’ve got some other great stories in our thought leader series on AI. Check out our profile of David Kanter, co-founder of ML Commons. That’s at theforecastbynutanix dot com.

Adam Stone is a journalist with more than 20 years of experience covering technology trends in the public and private sectors.

The podcast was produced by Jason Lopez, executive producer of Tech Barometer, the podcast outlet for The Forecast. He’s the founder of Connected Social Media. Previously, he was executive producer at PodTech and a reporter at NPR.

© 2026 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.
