For more than two years, large language models (LLMs) have captured the public imagination, and companies have raced to adopt AI applications that will make their employees more efficient and productive. But the conversation is already shifting from LLMs to AI agents that can do more than answer questions and create content.
“An AI agent is an application that can sense the world around it, make decisions on its own, and complete actual tasks with execution,” explained Jeremiah Owyang, a partner at Blitzscaling Ventures and host of the Llama Lounge series of AI startup events, in an interview with The Forecast.
“As they become more advanced, they operate in sequence, in combination with each other – called agent ‘crews’ or ‘swarms.’ In many cases, they are like low-level human employees.”
As the face of AI evolves from GenAI into AI agents, business leaders must grapple with a new set of opportunities, challenges, and even philosophical questions. Who is responsible, for example, when an AI agent makes a mistake or causes harm? And what do companies gain or give up when they deploy AI agents in place of humans?
Deloitte chief futurist Mike Bechtel spoke with The Forecast about the impact of agentic AI, which he expects to completely reshape how people think about the technology in the coming years.
“For the first couple years of the generative AI movement, the paradigm felt like one chatbot to rule them all,” he said. “But most business processes don’t require super intelligence. Teams are built to have role players, and AI is becoming a bit of a team sport.”
As companies look to the future of agentic AI, Bechtel said, they will need to weigh a number of key questions.
Bechtel likened the future of AI agents to the cast of domestic servants on the British historical drama Downton Abbey. “The Earl on the show wouldn’t tell his staff to shine his shoes, make his lunch, gas the car, and straighten his cravat,” Bechtel noted.
“He would say, ‘I’m going to town,’ and then the staff would negotiate and orchestrate between each other, based on their understanding of their scope and their roles. To me, the practical instantiation of agents is digital Downton Abbey. It’s saying, I’ve got this group of domain-specific intelligences that can coordinate and meet my needs, without me having to be overly prescriptive.”
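The "digital Downton Abbey" pattern Bechtel describes, in which a high-level request is decomposed and routed among role-specific agents, can be sketched in a few lines of Python. All names here (the `Agent` and `Coordinator` classes, the skills) are hypothetical illustrations, not a real agent framework's API:

```python
# Minimal sketch of a coordinator routing sub-tasks to domain-specific
# agents. Class names and skills are illustrative, not a real framework.

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)  # tasks this agent knows how to perform

    def perform(self, task):
        return f"{self.name} completed: {task}"


class Coordinator:
    """Routes each sub-task of a high-level goal to a capable agent."""

    def __init__(self, agents):
        self.agents = agents

    def handle(self, subtasks):
        results = []
        for task in subtasks:
            # Pick the first agent that claims this skill.
            agent = next((a for a in self.agents if task in a.skills), None)
            if agent is None:
                results.append(f"unassigned: {task}")
            else:
                results.append(agent.perform(task))
        return results


# "I'm going to town" decomposes into domain-specific chores, and the
# coordinator, not the principal, decides who does what.
staff = [
    Agent("valet", {"shine shoes", "straighten cravat"}),
    Agent("cook", {"make lunch"}),
    Agent("chauffeur", {"fuel the car"}),
]
coordinator = Coordinator(staff)
print(coordinator.handle(["shine shoes", "make lunch", "fuel the car"]))
```

The point of the sketch is the division of labor: the requester states a goal, and orchestration among narrow, role-scoped agents happens without the requester being "overly prescriptive."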
Owyang said he recently spoke with a large videoconferencing company that is building AI agents to serve almost as “colleagues” during calls.
“We could have four or five assistants that are not just recording like they are now, but are actually talking, collaborating and taking actions in real time,” he said.
Owyang noted that big tech companies tend to predict that people will only need one “super” AI agent: theirs. Meanwhile, some startup founders predict that people will soon be using thousands of different agents. Owyang himself thinks that people will tend to use the same number of AI agents as their current number of email accounts, which, for many people, is around three.
One challenge, Bechtel warned, will be the “walled garden” problem of vendor-specific tools that may not work together seamlessly, at least at first.
“Are people going to be caught in one environment?” he wondered. “Or is there going to be a layer north of that, that allows for interoperability and federation?”
RPATech, a company specializing in robotic process automation (RPA) and intelligent automation services, warns that AI systems may not be equipped to make ethical decisions in fields like healthcare or criminal justice, and that it may be difficult to assign accountability in scenarios involving multiple AI tools.
“For instance, if an autonomous vehicle causes an accident, it may not be clear whether the fault lies with the manufacturer, the software developer, or the owner of the vehicle,” the company wrote in a 2024 article. “This ambiguity can delay or prevent appropriate legal action.”
It’s not a new question, Bechtel noted. He cited a widely circulated quotation that is often attributed to a 1970s IBM slide deck: “A computer can never be held accountable; therefore, a computer must never make a management decision.”
For trust, risk and ethics reasons, Bechtel said, humans should continue to be “in the loop” on important decisions as AI agents begin to take on more tasks.
“The ultimate responsibility lies with the human and the organization deploying the agent,” he said.
“But there’s a chain of accountability. You have developers who work for the employer, and they may be indemnified. These are decisions that are going to require thoughtfulness on regulation and employment law, and that’s one reason companies are mostly starting with internal agents that are overseen by humans.”
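The human-in-the-loop oversight Bechtel recommends is often implemented as an approval gate: the agent acts autonomously on low-risk tasks but queues higher-risk actions for a human decision, with an audit trail for the chain of accountability. The sketch below is a hypothetical illustration of that pattern, not any vendor's implementation:

```python
# Hypothetical human-in-the-loop gate: the agent runs low-risk actions on
# its own, but high-risk actions require an explicit human approval, and
# every decision is recorded for accountability.

LOW, HIGH = "low", "high"


class GatedAgent:
    def __init__(self, approver):
        self.approver = approver  # callable standing in for a human reviewer
        self.audit_log = []       # record of who decided what

    def execute(self, action, risk):
        if risk == HIGH and not self.approver(action):
            self.audit_log.append(("rejected", action))
            return "blocked pending human approval"
        self.audit_log.append(("executed", action))
        return f"executed: {action}"


# A reviewer who declines everything, for demonstration.
agent = GatedAgent(approver=lambda action: False)
print(agent.execute("draft status email", LOW))   # runs autonomously
print(agent.execute("wire $50,000", HIGH))        # held for the human
```

This mirrors the sequencing in the article: internal agents handle narrow tasks on their own, while consequential decisions stay with the humans and the organization deploying the agent.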
One of the immediate benefits of AI agents, Owyang said, will be the ability for workers to offload their “busy work” and turn their focus to higher-value activities.
“The average enterprise user is dealing with hundreds of emails and Teams and Slacks and Zooms and a million apps and data everywhere,” he said.
“An executive assistant is typically only provided to those at the top of the pyramid. Now imagine a world where every worker has one assistant, or two assistants, or 10 assistants, to help with data or meetings or scheduling. Imagine the level of productivity that can increase for those knowledge workers. I think we’re on the cusp of that.”
However, these efficiencies come at a cost. Bechtel noted that outsourcing tasks to AI can result in a loss of both control and precision, an issue already seen with LLM use.
“If you ask the same question six times, you get six slightly different answers,” he said. “I’m not talking about hallucinations or deeply problematic things. I’m talking about nuanced divergence. When you embrace probabilistic models, you’re giving up that nanometer-level control.”
Christian Buckner, senior vice president of data analytics and IoT at Altair, told IoT World that AI agents will be “crucial for helping organizations unlock deeper understanding and connections within their data.”
However, he also stressed the importance of systems and processes to validate the work of AI tools. “Businesses need to establish guardrails to control AI-driven suggestions and maintain trust in the results,” he said.
Often, conversations about the rise of agentic AI quickly turn to how many human workers the technology will be able to replace, and whether those displaced workers will be able to regain their footing in a rapidly changing economic landscape. However, Bechtel said companies that focus on shrinking their workforce are poised to miss the opportunity provided by agentic AI.
“The companies that say, ‘Oh great, we don’t need Toby anymore, because we can do today’s work with fewer people’ – those are the ones that are going to have a harder time,” Bechtel said. “Our clients who say, ‘Oh, good news, we freed up Toby to work on higher-stakes tasks’ – they’re the ones who are going to win. We see automation as a license for elevation.”
Of course, it remains to be seen how quickly, and to what extent, AI tools will be able to take over tasks that humans currently perform.
“Agentic AI will continue to be hyped as the next big thing that will replace the majority of human tasks, but that won’t happen in 2025, if ever,” Kjell Carlsson, head of AI strategy at Domino Data Lab, told IoT World.
“Instead, organizations are starting to set their sights on a more practical variant of agentic AI where AI automates narrow, highly controlled tasks. In short, AI will transition from being amazing and impractical to being boring and impactful.”
Owyang said that Silicon Valley AI leaders are already adopting what he calls an “AI first” mindset.
“If you have a problem in your life, you first see if there's an AI tool that exists off the shelf – whether it’s an app you download or enterprise software,” he said. “If it doesn’t exist, then you try to build it. And step three, if you can’t build it, then you hire somebody to build it. It’s always about leveraging AI first.”
With AI agents just beginning to come to market, some companies are scrambling to find ways to implement whatever is available. For example, tools from OpenAI allow developers to build agents that can perform tasks like web searches and file navigation, and Salesforce’s Agentforce platform enables organizations to build autonomous agents that can handle some basic customer service, sales, and marketing tasks.
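The tool-using agents described above generally follow the same shape: a plan is executed step by step, with each step dispatched to a registered tool. The sketch below uses mock functions in place of real web-search or file-system APIs; the tool names and the plan format are assumptions for illustration, not the API of OpenAI's or Salesforce's products:

```python
# Generic tool-calling loop with mock tools. The tool registry and plan
# format are hypothetical; real platforms supply their own tool APIs.

def web_search(query):
    return f"results for '{query}'"    # stand-in for a real search API


def list_files(path):
    return ["report.txt", "notes.md"]  # stand-in for file navigation


TOOLS = {"web_search": web_search, "list_files": list_files}


def run_agent(plan):
    """Execute a plan given as a list of (tool_name, argument) steps."""
    outputs = []
    for tool_name, arg in plan:
        tool = TOOLS.get(tool_name)
        if tool is None:
            outputs.append(f"unknown tool: {tool_name}")
        else:
            outputs.append(tool(arg))
    return outputs


print(run_agent([("web_search", "Q3 market data"), ("list_files", "/reports")]))
```

In production systems, an LLM typically produces the plan and interprets each tool's output before deciding the next step; the fixed plan here simply isolates the dispatch mechanism.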
Bechtel said that companies will have to overcome an “imagination gap” if they want to deploy AI agents in a way that will transform their business.
“You’ve got a genie lamp, and you can conjure up anything you wish,” he said. “People kind of pause, and that demonstrates that their imagination muscle, or their curiosity muscle, has atrophied a bit. That’s partly due to a cycle where things have been all about cost reduction and efficiency strategies, as opposed to pioneering strategies.”
Bechtel said he views the AI space as an archery target with three rings.
“The outermost ring is efficiency plays: doing today’s tasks with less time, talent, and treasure,” he explained. “The middle ring is effectiveness plays: doing today’s tasks better than we used to, thanks to AI. And then there’s the inner ring, the gold, which is enabling tomorrow’s work for the first time.”
Calvin Hennick is a contributing writer. His work appears in BizTech, Engineering Inc., The Boston Globe Magazine and elsewhere. He is also the author of Once More to the Rodeo: A Memoir. Follow him @CalvinHennick.
© 2025 Nutanix, Inc. All rights reserved.