
Solving the Public Sector’s AI Agent Safety Puzzle

Automation expert and industry consultant Scott Robohn, CEO of Solutional and a consulting CTO for Carahsoft, explains how government agencies can advance agentic AI initiatives, minimize risk and build trust.

January 29, 2026

Stymied by byzantine processes and red tape, the public sector has much to gain from agentic AI. AI agents could help automate inefficient manual workflows that government entities regularly perform, saving time and money.

AI agents can be thought of as a team of employees. They are autonomous systems that work together to achieve goals by reasoning over data, making plans, and calling on tools to execute functions. They are expected to automate many high-level government processes, including permit issuance, procurement of goods and services, policymaking and review, and more.
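To make that reason-plan-act pattern concrete, here is a minimal Python sketch of a single agent loop. Everything in it is a hypothetical placeholder, including the model call, the tool, and the permit example; it is not any vendor's API, just an illustration of the shape of the workflow.

```python
# Minimal sketch of an agent loop: reason over a goal, plan an action, call a tool.
# All functions and names here are hypothetical placeholders, not a real product API.

def call_model(prompt: str) -> dict:
    """Placeholder for a language-model call that returns the next planned action."""
    # A real system would send the prompt to an LLM and parse its structured reply.
    if "Permit P-1234: pending review" in prompt:
        return {"done": True}
    return {"tool": "lookup_permit_status", "args": {"permit_id": "P-1234"}, "done": False}

def lookup_permit_status(permit_id: str) -> str:
    """Placeholder tool: fetch the status of a permit application."""
    return f"Permit {permit_id}: pending review"

TOOLS = {"lookup_permit_status": lookup_permit_status}

def run_agent(goal: str, max_steps: int = 5) -> list:
    """Loop: ask the model for the next action, execute the tool, record the result."""
    history = []
    for _ in range(max_steps):
        plan = call_model(f"Goal: {goal}\nHistory: {history}")
        if plan.get("done"):
            break
        tool = TOOLS[plan["tool"]]          # only registered tools can be called
        result = tool(**plan["args"])
        history.append(result)
    return history

if __name__ == "__main__":
    print(run_agent("Check the status of permit P-1234"))
```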

But challenges stand in the way. At present, the technology is overly complex, prone to malfunction, and requires long incubation periods; without the appropriate guardrails and blueprints, it could jeopardize the essential services and operations on which the public depends, according to Scott Robohn of the IT consultancy Solutional.

“We're so early in the development and adoption of agents for all sorts of tasks. There's a lot of experimentation going on, which is great. But that doesn't mean the technology is enterprise-hardened and ready for wide-scale adoption,” Robohn told The Forecast.

An automation maven with a distinguished background in networking and operations, Robohn serves as consulting CTO for the government IT solutions provider Carahsoft.

As an IT consultant, he advises organizations on how to generate long-term value with agentic AI. At present, he recommends proceeding at a measured pace.

“We see the advantages for our clients in moving toward agentic AI,” he said, but at the same time, “we want to help them do it safely in ways that build trust” in the technology. 

Current Challenges to Agentic Government

In Robohn’s estimation, the challenges facing public sector IT leaders include a lack of clarity about which workflows to automate and how to proceed safely.

“From our client interaction and industry engagement, our observation is that agentic AI is so early that there is no typical deployment,” he said. 

“Most agencies are just getting started and figuring out what to do with agentic tools. We see use case identification, proof of concept testing around specific use cases, and training people on how to use agentic tools as the bigger issues.”

Related: Swarms of AI Agents Powering Businesses and Daily Life
In this Tech Barometer podcast, disruptive technology investor and analyst Jeremiah Owyang explains the rise of AI agents and a future shaped by a multiplying AI-first mindset.

June 4, 2025

To his point, only 7 percent of organizations across the enterprise and public sectors are currently implementing agentic solutions, according to a recent industry report from Deloitte, a small fraction of those already leveraging GenAI and other types of AI in their workflows.

As a result, details on real-world deployments are scarce. 

“It’s a challenge to get vendors to talk about real customer examples, which is a gate on assessing the state of real deployments,” said Robohn.

“Also, when seeing reports of large agent deployments, I would advise anyone to ask questions that go below the surface,” he said. “Look for the difference between, one, interest and investigation, and two, actual testing, trials, and deployments.”

The lack of transparency into the plans and designs of agentic systems obscures the path forward, as many organizations are unfamiliar with agentic tooling and the automation and orchestration layers in which those tools live. Muddling the issue further, many of the agentic solutions currently on the market require extensive configuration to align them with internal workflows and processes, according to Robohn.

“Smaller munis will need shrink-wrapped platforms and tools that work out of the box with minimal customization. We’re not there yet,” he said.

Related: Ready for AI Agents?
Forward-looking enterprises are preparing for the latest advancement in generative artificial intelligence: agentic AI.

April 10, 2025

Furthermore, public sector solutions often need to be tailored and, in some cases, forked to meet security and data compliance requirements native to federal and state workloads. Administrations may mandate private instances, jurisdictional compliance standards, and offline operation in air-gapped environments.

In the 2025 Global Public Sector Enterprise Cloud Index (ECI) report, 99% of industry respondents identified data privacy as a priority for their organization when implementing GenAI. The concern is equally pressing for agentic technologies.

“The federal government is not monolithic, so it's a heavier lift for vendors,” he said. “They're being asked to make a copy of [their product], a smaller copy for just a single agency. More cost and more complexity goes along with that.”

Solving the Safety Puzzle

Since the early days of dial-up internet, automated systems have been a major area of focus for Robohn. In his experience, it is essential to have guardrails in place before deploying any type of automated process in an IT environment.

“This is especially true with AI agents,” he said. “They have agency. They can do things. And you are enabling them to do so within a limited scope without human intervention.”

Related: What Agentic AI Means for IT Operations
Business trend analyst Scott Steinberg reports on how autonomous, artificially intelligent agents are shaping modern enterprises.

August 13, 2025

Absent the appropriate safety measures, AI agents can cause significant damage to vital analytical and operational systems.

“We've already seen some interesting examples of agents without the proper guardrails doing things like deleting production databases unintentionally,” he said.

He says the IT industry is still coming to grips with how to deploy AI agents reliably and safely. The answer it has gravitated toward is a combination of policy and oversight.

He gives the example of an agentic system for government procurement, where a contractor is up for review for an award. One agent would verify that the contractor is eligible to receive the award. A second agent would confirm that the contract does not violate any legal requirements.

To do this safely, each agent should be assigned strict permissions in much the same way that human employees are granted privileges, limiting the functions they can perform and the data sources they can access.

As for oversight, the results of the process should be reviewed by an intermediary, “either a software process or a human. Today it's probably a human, and it's probably a contract lawyer,” he said. 
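In code, that division of labor and oversight could be sketched roughly as follows. This is an illustrative Python sketch only: the agent names, tools and data sources are assumptions made for the example, not drawn from any real procurement system or platform.

```python
# Illustrative sketch: two procurement-review agents with scoped permissions
# and a human sign-off step. All names, tools and data sources are hypothetical.

from dataclasses import dataclass

@dataclass
class AgentScope:
    name: str
    allowed_tools: set
    allowed_data_sources: set

# Each agent gets only the tools and data it needs, nothing more.
ELIGIBILITY_AGENT = AgentScope(
    name="eligibility-checker",
    allowed_tools={"read_vendor_registry"},
    allowed_data_sources={"vendor_registry"},
)
COMPLIANCE_AGENT = AgentScope(
    name="compliance-checker",
    allowed_tools={"read_contract_terms"},
    allowed_data_sources={"contract_database"},
)

def enforce_scope(scope: AgentScope, tool: str, data_source: str) -> None:
    """Block any tool call or data access outside the agent's allow-list."""
    if tool not in scope.allowed_tools or data_source not in scope.allowed_data_sources:
        raise PermissionError(f"{scope.name} may not use {tool} on {data_source}")

def send_to_human_review(agent: str, finding: str) -> None:
    """Oversight step: results go to a reviewer (today, likely a contract lawyer)."""
    print(f"[NEEDS HUMAN REVIEW] {agent}: {finding}")

# Each agent works inside its scope, and its conclusion is queued for a human
# before any award decision is made.
enforce_scope(ELIGIBILITY_AGENT, "read_vendor_registry", "vendor_registry")
send_to_human_review("eligibility-checker", "Contractor meets award eligibility criteria")

enforce_scope(COMPLIANCE_AGENT, "read_contract_terms", "contract_database")
send_to_human_review("compliance-checker", "No legal conflicts found in contract terms")
```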

Varying approaches to agentic systems can cloud the picture. Organizations may choose to develop their own AI agents or license an agentic platform from a software vendor. 

“That's part of the big puzzle here,” he said. “I think platforms are going to be earlier in the ability to limit agent functionality based on policy.” 

Related: Empowering Government with AI: Policies, Challenges and Solutions
Local, state and federal governments are harnessing the power of AI for the public good.

July 11, 2024

While most third-party platforms include administrative controls, APIs to enforce policies across open, modular systems are still a work in progress.

“It'll happen eventually,” he said. 

“I'm not a naysayer, but it gets trickier when you're crossing company lines, or technology lines, or open source boundaries. More work is involved in getting that to work.”

Steps to Building Trust in Agentic AI

When deploying agentic technology, Robohn preaches the importance of assiduous planning and monitoring outcomes. 

“Organizations need to identify specific use cases and build trust,” he said. “Do a proof of concept, take first steps in read-only mode, and have humans review the results.” 

He recommends not granting the agents any write functionality at the outset. Only after an agent’s reasoning has been proven reliable should the read-only restrictions be lifted. 
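One way to implement that staging is to gate every tool call behind a trust flag, as in the Python sketch below. The tool names and the gate itself are assumptions made for illustration, not features of any specific agent platform.

```python
# Sketch of a read-only gate for agent tool calls.
# The tool registry, tool names and trust flag are illustrative assumptions.

READ_ONLY_TOOLS = {"query_records", "summarize_document"}
WRITE_TOOLS = {"update_record", "send_notification"}

class ToolGate:
    def __init__(self, agent_name: str, trusted_to_write: bool = False):
        self.agent_name = agent_name
        # Flipped to True only after humans have reviewed a trial period of results.
        self.trusted_to_write = trusted_to_write

    def call(self, tool_name: str, execute, *args, **kwargs):
        """Run read tools freely; block write tools until trust has been earned."""
        if tool_name not in READ_ONLY_TOOLS | WRITE_TOOLS:
            raise ValueError(f"Unknown tool: {tool_name}")
        if tool_name in WRITE_TOOLS and not self.trusted_to_write:
            raise PermissionError(
                f"{self.agent_name} is still in read-only mode; '{tool_name}' requires human approval"
            )
        return execute(*args, **kwargs)

# Usage: during the proof of concept, the agent can query but not change anything.
gate = ToolGate("permit-review-agent")
print(gate.call("query_records", lambda: "3 permits pending"))   # allowed
# gate.call("update_record", lambda: "status=approved")          # blocked until trusted
```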

Related: University of Canberra Puts AI Infrastructure at Researchers' Fingertips
A self-service portal slashed IT resource provisioning from weeks to hours, giving researchers and students secure, cost-efficient access to on-premise data center resources for AI and computational projects.

November 20, 2025

Autonomy, he believes, should be doled out piecemeal. Give the agent limited capacity to make changes, such as writing data to a database. Run it for a few months to half a year to verify that the results are repeatable.

“You better test and make sure you're satisfied that you can trust what the agent is doing,” he said. 

Part of the reason for the prolonged testing phase, he says, is to account for corner cases. When a policy is authored, an anomalous set of conditions can be overlooked, allowing an agent to bypass a stricture.
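One practical way to hunt for those gaps before granting more autonomy is to test the policy itself against unusual inputs. The sketch below assumes a toy purchasing policy; the rule and its thresholds are invented for illustration, not drawn from any agency's actual guidance.

```python
# Sketch: unit-testing a hypothetical agent policy against corner cases.
# The policy and thresholds are illustrative, not a real agency rule.

def purchase_allowed(amount: float, vendor_approved: bool) -> bool:
    """Toy policy: an agent may approve purchases under $10,000 from approved vendors."""
    return vendor_approved and 0 < amount < 10_000

def test_policy_corner_cases():
    # Routine case behaves as expected.
    assert purchase_allowed(500.00, vendor_approved=True)
    # Corner cases that are easy to overlook when the policy is written:
    assert not purchase_allowed(0, vendor_approved=True)          # zero-dollar order
    assert not purchase_allowed(-50, vendor_approved=True)        # negative amount
    assert not purchase_allowed(9_999.99, vendor_approved=False)  # unapproved vendor just under the cap
    assert not purchase_allowed(10_000, vendor_approved=True)     # exactly at the limit

test_policy_corner_cases()
print("Policy held up against the corner cases tested")
```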

“Make sure it doesn’t cause any trouble before trusting it with even more autonomy,” he said. “It really is a matter of building trust over time.”

Jason Johnson is a contributing writer. He is a longtime content and copywriter for tech and tech-adjacent businesses. Find him on LinkedIn.

© 2026 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.
