Since the debut of public large language models (LLMs) in late 2022, many have predicted the imminent burst of the AI “bubble.” However, the rapid evolution of AI technology is creating an innovator's paradise, where continuous micro-cycles of change are opening the door to new techniques and better business models, rather than crashing the global economy.
According to Daemon Behr, field chief technology officer for Nutanix, the technology is changing so fast that miniature bubbles are popping constantly.
“It’s not the same as the dot-com bubble,” Behr told The Forecast at the Nutanix .NEXT event in Chicago.
“Things are happening in a much shorter time span. You might be at the peak point of expectations, and that could pop the next day. But then a whole new thing pops up, so the cycle of innovation is a lot more frequent. The bubbles are popping on a very regular basis, but the bubbles are smaller.”
Behr believes the best way to keep up with AI’s dizzying pace of change is by actively using it. He leans on AI agents to constantly research the space and curate custom daily reports.
“I also have a swarm of agents that I use to deploy new software that I come across,” he said. “I’ll tell one of my agent teams to go download it, to deploy it in a sandbox, and tell me if it’s valuable to me. If it is, then I will adopt it. If not, then I can just kill it.”
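Behr's "download it, sandbox it, evaluate it, then adopt or kill it" loop can be sketched in a few lines. This is a hedged illustration, not his actual tooling: `deploy_in_sandbox` and `evaluate` are hypothetical stand-ins for what his agents do, and the "sandbox" here is just a throwaway temp directory.

```python
# Hedged sketch of the deploy-evaluate-adopt-or-kill loop.
# deploy_in_sandbox and evaluate are hypothetical stand-ins for
# agent actions; a real setup might use containers or VMs instead.
import shutil
import tempfile
from pathlib import Path

def deploy_in_sandbox(tool_name: str) -> Path:
    """Stand-in for an agent deploying a new tool into isolation."""
    sandbox = Path(tempfile.mkdtemp(prefix=f"{tool_name}-"))
    (sandbox / "install.log").write_text(f"installed {tool_name}\n")
    return sandbox

def evaluate(sandbox: Path) -> bool:
    """Stand-in for an agent judging whether the tool is valuable."""
    return (sandbox / "install.log").exists()

def triage(tool_name: str) -> str:
    sandbox = deploy_in_sandbox(tool_name)
    try:
        return "adopt" if evaluate(sandbox) else "kill"
    finally:
        shutil.rmtree(sandbox)  # the sandbox is always torn down

print(triage("new-cli-tool"))
```

The key design point is the `finally` block: whether the verdict is adopt or kill, the sandbox itself is ephemeral and always cleaned up.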
If it’s challenging for a single tech expert to stay current on AI, it may seem impossible for enterprises to plan their infrastructure investments and hiring cycles to hit this moving target. But by putting flexible frameworks in place, Behr said, organizations can innovate both rapidly and securely.
“The technical limitations of the past are gone now,” he said. “The constraints have to do with creativity: What do you think is possible? What can you strive for, and how can you learn things along the way?”
Behr paused, then posited: “It’s an innovator’s paradise.”
“The biggest thing I’m seeing right now is that there’s a huge fear of missing out with AI,” Behr said.
“There’s a certain amount of time that’s required for operations to catch up with best practices, security practices, and governance. It’s a real challenge for organizations.”
Cross-functional AI centers of excellence can help accelerate adoption, moving teams from simply working with chatbots to using agentic AI to develop software and other assets, Behr said. As organizations mature, their roles and teams are likely to evolve. Behr pointed to the shift from DevOps to DevSecOps to platform engineering: DevOps unified development and operations teams for faster, more reliable software releases; DevSecOps embedded security throughout the entire development lifecycle; and platform engineering created “golden paths” that reduce cognitive load for software engineering teams.
The next step, Behr said, is “ChatOps,” or the ability for developers to talk directly with their IT environments using natural language prompts.
Similarly, he said, prompt engineering (which guides how users engage with LLMs and agents) has quickly evolved into context engineering (which frames entire AI working spaces). Now, organizations are adopting “harness engineering,” which applies these principles to AI harnesses like Claude Code and Codex.
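The distinction between the first two stages can be made concrete. In this hedged illustration, the message structure loosely mirrors common chat-completion APIs; the field names are illustrative, not tied to any specific vendor or to the harnesses named above.

```python
# Illustrative contrast: prompt engineering tunes a single string,
# while context engineering shapes everything the model sees around it.
# Field names ("system", "tools", "documents") are illustrative only.

prompt = "Summarize this incident report in three bullet points."

context = {
    "system": "You are an SRE assistant. Answer tersely; cite log lines.",
    "tools": ["search_logs", "query_metrics"],        # capabilities the agent may call
    "documents": ["runbook.md", "incident-4312.log"], # the working set of files
    "messages": [{"role": "user", "content": prompt}],
}

# The prompt is now just one field inside a larger engineered workspace.
print(len(context["messages"]))
```

Harness engineering, in turn, applies the same discipline one level up, to the tool that assembles and manages this context on the developer's behalf.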
While this is an extraordinary amount of change to manage, Behr stressed that the potential benefits are commensurate with the effort.
“If you have an idea of something that you’d like to see in the world that doesn’t exist yet, you’re now able to have a conversation with an agent to develop a minimum viable product,” he said.
“Once you create a spec, you can feed it into a team of agents, and they can do the front-end development, the back-end development, the testing, the evaluation. At the end, you end up with a fully baked application that can go into production.”
Behr has an expansive view of how AI will continue to change the enterprise. Although many teams are still getting their arms around rapidly evolving LLMs and designing chat workflows that yield more predictable outputs and limit token use, Behr predicted that agentic AI will soon change the way enterprises interact with software. And recent developments in the space, he said, hint at the emergence of artificial general intelligence (AGI), which could match or surpass human cognitive capabilities and even solve novel problems without task-specific instructions.
“I think that AI agents will be a lot quicker to spin up and destroy,” Behr said.
“It’ll be more ephemeral. You may have a manager agent that will create an instant swarm of different agents that can do specific tasks. That way, you’re able to parallelize the workflows and complete things a lot more quickly, compared to a serial process.”
This would essentially result in “on-demand” software, Behr said: applications and dashboards created for a specific need, connected to enterprise data, and then destroyed when they are no longer needed. “It’s very different from the way that we’ve previously approached things.”
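The manager-and-swarm pattern Behr describes can be sketched with ordinary concurrency primitives. This is a minimal illustration under stated assumptions: `worker_agent` is a hypothetical stand-in for an ephemeral task-specific agent, and a thread pool stands in for whatever runtime would actually host the swarm.

```python
# Minimal sketch of the manager-and-swarm pattern: a manager fans a
# job out to short-lived worker "agents" that run in parallel, then
# the swarm is torn down when the work is done.
from concurrent.futures import ThreadPoolExecutor

def worker_agent(task: str) -> str:
    """Hypothetical stand-in for an ephemeral agent doing one task."""
    return f"{task}: done"

def manager_agent(tasks: list[str]) -> list[str]:
    # Spin up one worker per task, run them concurrently, collect results.
    with ThreadPoolExecutor(max_workers=len(tasks)) as swarm:
        results = list(swarm.map(worker_agent, tasks))
    return results  # the swarm no longer exists at this point

print(manager_agent(["front end", "back end", "tests"]))
```

Parallel fan-out is what makes the workflow faster than a serial process; the `with` block is what makes the swarm ephemeral, mirroring the "create, use, destroy" lifecycle of on-demand software.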
Behr noted that Anthropic, the maker of Claude, recently delayed the release of a new model called Mythos due to security concerns. Reportedly, the model can identify zero-day vulnerabilities and execute multi-stage attacks against vulnerable networks.
“There’s a lot of concern from security researchers that this is going to change the entire industry in ways that nobody understands,” Behr said.
“We’ve reached a point where we’re pre-AGI in terms of the capabilities of the models, and Anthropic isn’t the only one. They’re all getting to that same point.”
Editor's note: Get more insights from Daemon Behr at his Designing Risk in IT Infrastructure site and his podcasts available on YouTube.
Calvin Hennick is a contributing writer. His work appears in BizTech, Engineering Inc., The Boston Globe Magazine and elsewhere. He is also the author of Once More to the Rodeo: A Memoir. Find him on LinkedIn.
Ken Kaplan contributed to this story. He is Editor in Chief for The Forecast by Nutanix. Find him on X @kenekaplan and LinkedIn.
© 2026 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.