IT companies that manage online games face a hostile threat landscape. They must contend with a host of bad actors attempting to knock servers offline, run software hacks that compromise the competitive integrity of the game, and prey upon paying customers.
The attacks are diverse, relentless, and can be incredibly disruptive to live game environments, alienating the player base if they are not intercepted.
“The most important factor is player satisfaction. You absolutely want to keep all your players and make sure the player experience is well,” said Stefan Ideler, co-founder and CIO at i3D.net, a game hosting and infrastructure provider based in Rotterdam.
A chipper industry veteran who got his start hosting multiplayer matches in the early 2000s, Ideler manages the infrastructure that runs some of the world’s most-played online games, including offerings from Electronic Arts, Ubisoft, and Epic Games. These games can cost hundreds of millions of dollars to create and are often expected to generate billions in revenue over their lifetimes. Keeping them secure is a top priority, as cyberattacks, cheating, and criminality greatly imperil a game’s chances of success.
“If [the player experience is compromised] once, okay, they will return the next day. If that happens twice in a week, they’re likely going to consider switching to another game. If it happens more often, you've lost that person,” Ideler explained.
Online games have protections in place to mitigate predictable threats. At the same time, attacks have gotten larger, sneakier, and easier for the average person to carry out. Compounding the issue, the attack surface to defend has expanded as games add new services and features.
“Live-ops events, ranked ladders, cross-play, and community-hosted servers all expand the number of possible areas that can be attacked,” said Mathieu Duperré, founder and CEO of Edgegap.
Editor’s note: Read the full Q&A with Mathieu Duperré published on the Edgegap blog.
As cyberattacks escalate and evolve, game hosting providers are relying on “default deny” software protections, self-healing infrastructure, and artificial intelligence (AI) and machine learning (ML) to defend their data centers and servers.
Online multiplayer games face an onslaught of distributed denial-of-service (DDoS) attacks. These attacks flood services with malicious traffic, consuming bandwidth and resources and making the game inaccessible.
While old-school DDoS attacks are readily blocked with basic filtering, newer types of attacks have become a menace as attackers change tactics to push past firewall barriers. Multi-vector attacks, for instance, strike at the network and application layers simultaneously, making them harder to guard against.
These attacks are often executed by botnets, large networks of hijacked cloud instances, proxies, and local devices (such as smart refrigerators or hacked Ring doorbells). They are capable of inflicting widespread disruptions, as witnessed in a major, coordinated attack on Steam, AWS, PlayStation, Riot Games, and Xbox in October 2025.
“The volumes these attacks can generate is truly massive. We had one last week, which was more than 10 terabits of traffic per second. It broke a record on our score listing,” said Ideler.
“We managed to deal with it. But a lot of companies would have had serious trouble … It was really unusual. We think that there must have been a significant payment out there to cause this attack.”
Ideler ascribes the rise in attacks to young people becoming well versed in cryptocurrencies while isolated at home during the COVID-19 pandemic. Attacks can be purchased on the dark web starting at around ten dollars. The accessibility of these attacks is creating big headaches for game hosting providers.
“A single annoyed person with enough capabilities can disrupt the game for complete continents, impacting hundreds of thousands, or even millions, of concurrent users. One disgruntled person. That's the unfortunate reality that we have to deal with,” he said.
The most advanced attacks use cunning means to evade detection. Attackers are learning to analyze game code and reverse engineer real network packets, such as those sent when a player presses a key or clicks a mouse button. Disguised as legitimate traffic, malicious data passes through open ports incognito, defying security measures such as filters and byte matching.
“Of all the stuff we do, the cat-and-mouse with cheaters, the multi-terabyte DDoSes, that's the kind of thing that keeps me awake at night,” said Ideler.
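To see why such mimicry is so hard to filter, consider a toy byte-matching check of the kind these attacks defeat. The sketch below is purely illustrative; the signature bytes and payload layout are invented for the example and do not come from any real game protocol or from i3D.net's tooling.

```python
# Illustrative only: a naive byte-matching filter, and why packet mimicry defeats it.
EXPECTED_MAGIC = b"\x7fGAME"   # hypothetical protocol signature, not a real game's

def passes_byte_match(payload: bytes) -> bool:
    """Admit packets whose leading bytes match the expected signature."""
    return payload.startswith(EXPECTED_MAGIC)

# A genuine "player pressed a key" packet...
legit_packet = EXPECTED_MAGIC + b"\x01\x00KEY_W"
# ...and a flood packet built by replaying captured legitimate traffic.
spoofed_packet = EXPECTED_MAGIC + b"\x01\x00KEY_W"

assert passes_byte_match(legit_packet)
assert passes_byte_match(spoofed_packet)   # the filter cannot tell them apart
```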
To keep the bad guys out, game hosting providers are rethinking their approach to cybersecurity, building out custom protections and adopting stateless, cattle-style architecture.
i3D.net, for example, built a custom software tool in house to counter increasingly sophisticated flooding attempts. Called Warden, the tool creates a dynamic whitelist that is updated in real time across four privately owned data centers and over sixty points of presence (PoPs). The application denies unauthenticated traffic by default. Only verified players gain access to the server.
Engineers conceived of the idea after a DDoS war between two feuding clans shut down their services across much of Latin America. When a player signs in to their account, either on the Steam platform or the game’s backend, Warden captures their IP address. The address is pushed to distributor nodes that disseminate it throughout the network over a high-speed backbone.
“It talks to many devices, many routers, and many data centers to open a little pathway from the internet to the specific game server where the player wants to play,” said Ideler.
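Mechanically, that approach can be pictured as a default-deny allowlist keyed on authenticated player IPs. The sketch below is a simplified, single-node approximation of the idea, not i3D.net's actual Warden code; the class name, methods, and timeout value are assumptions made for illustration, and a real deployment pushes these entries out to routers and PoPs across many data centers.

```python
import time

class DynamicAllowlist:
    """Default-deny traffic gate: only IPs of authenticated players pass.
    Simplified single-node sketch; a real system distributes entries
    to routers and PoPs over a backbone and expires them aggressively."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._entries: dict[str, float] = {}   # ip -> expiry timestamp

    def register_login(self, ip: str) -> None:
        """Called when the platform backend confirms a player sign-in."""
        self._entries[ip] = time.time() + self.ttl

    def allow(self, ip: str) -> bool:
        """Deny by default; admit only unexpired, registered IPs."""
        expiry = self._entries.get(ip)
        if expiry is None or expiry < time.time():
            self._entries.pop(ip, None)
            return False
        return True

gate = DynamicAllowlist(ttl_seconds=300)
gate.register_login("203.0.113.7")           # verified player
assert gate.allow("203.0.113.7") is True
assert gate.allow("198.51.100.99") is False  # unauthenticated traffic is dropped
```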
Others address the problem with cloud native technologies. Edgegap, a platform-as-a-service (PaaS) provider focused mainly on mobile and indie titles, uses containerized game instances and a custom orchestration engine to automatically steer players away from embattled infrastructure.
“By redeploying game servers across hundreds of edge locations, we can sidestep attacks in real time, effectively turning the attack surface into a moving target,” said Duperré.
When monitoring tools detect anomalous traffic patterns or declining server health, the target is quarantined, and new matches are spun up at other eligible locations.
“Instead of trying to heal compromised or degraded resources, we simply terminate them and respawn fresh instances elsewhere across our network of cloud, edge, and private data center partners,” he said.
Redirecting players to the next fastest, safest nodes minimizes downtime and keeps the game accessible.
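In rough terms, the control loop amounts to: check each instance's health and traffic, quarantine anything anomalous, and schedule a clean replacement at another eligible location. The sketch below is a deliberately simplified stand-in for that loop, not Edgegap's orchestration engine; the threshold, data structures, and location names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ServerInstance:
    location: str
    packets_per_sec: float
    healthy: bool = True

# Hypothetical threshold; real systems tune this per game and per region.
TRAFFIC_ANOMALY_PPS = 500_000

def reconcile(instances: list[ServerInstance],
              candidate_locations: list[str]) -> list[ServerInstance]:
    """Terminate degraded instances and respawn fresh ones elsewhere
    instead of trying to heal them in place ("cattle, not pets")."""
    survivors = []
    for inst in instances:
        if inst.healthy and inst.packets_per_sec < TRAFFIC_ANOMALY_PPS:
            survivors.append(inst)
            continue
        # Quarantine: drop the embattled instance entirely...
        replacement_site = next(
            loc for loc in candidate_locations if loc != inst.location
        )
        # ...and spin up a clean replacement at the next eligible location.
        survivors.append(ServerInstance(location=replacement_site,
                                        packets_per_sec=0.0))
    return survivors

fleet = [ServerInstance("montreal-edge-1", packets_per_sec=620_000)]
fleet = reconcile(fleet, ["chicago-edge-2", "montreal-edge-1"])
print(fleet[0].location)   # chicago-edge-2: players are matched onto the new node
```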
Along with fighting cyberattacks, hosting providers grapple with rampant hacking, cheating, and cybercrime. AI/ML tools are instrumental in enforcing order.
In online games, cheaters and spoilsports commonly use questionably legal third-party software to modify game code, files, and packets. Though seemingly harmless, this mischief is a major source of player defection. To keep competition fun and fair, hosting providers run anti-cheat software, often enhanced with advanced machine learning monitoring capabilities.
i3D.net built an automated detection system to help nab cheaters. It captures server-side screenshots of the active game state. A cheater might be using a program that lets them see through walls, fire shots at impossible angles, or view an overlay that exposes where other players are hiding. A computer vision model analyzes the images for irregularities, assigning points that accrue into a player’s cheat score.
“If you're suddenly, like, five times better during a session, that generates one point. That might lead us to request a few more screenshots to see what's going on,” said Ideler.
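Conceptually, each detector feeds evidence into a running per-player score, and crossing a threshold triggers escalation, such as pulling additional screenshots. The sketch below illustrates only that scoring pattern; the signal names, point weights, and threshold are invented for the example and are not i3D.net's model.

```python
from collections import defaultdict

# Hypothetical weights per suspicious signal; not i3D.net's actual values.
SIGNAL_POINTS = {
    "impossible_shot_angle": 3,
    "wallhack_overlay_detected": 5,
    "sudden_skill_spike": 1,       # e.g. "five times better during a session"
}
REVIEW_THRESHOLD = 5

cheat_scores: dict[str, int] = defaultdict(int)

def record_signal(player_id: str, signal: str) -> None:
    """Accrue points from a detector (e.g. a vision model flagging a screenshot)."""
    cheat_scores[player_id] += SIGNAL_POINTS.get(signal, 0)

def needs_escalation(player_id: str) -> bool:
    """Above the threshold, request more screenshots or a human review."""
    return cheat_scores[player_id] >= REVIEW_THRESHOLD

record_signal("player_42", "sudden_skill_spike")
record_signal("player_42", "impossible_shot_angle")
record_signal("player_42", "sudden_skill_spike")
print(needs_escalation("player_42"))   # True -> pull additional screenshots
```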
i3D.net also uses an AI-enabled solution to thwart identity-based attacks. Publishers typically keep their customers’ personal data on a separate server, according to Ideler, as that data is too sensitive to store on potentially porous dedicated game servers. What game servers tend to encounter instead is social engineering.
“We've seen that social engineering takes place in in-game chats,” he said. “We've had chat logs captured where people are trying to sell drugs in a game server. People trying to [get someone to reveal personal information]. People trying to get into conversations with minors.”
To sniff it out, they use a natural language understanding (NLU) model to analyze every exchange within the game environment. Whereas automatic keyword filters can only detect fixed patterns, their solution looks at complete sentences, interpreting the context and intent of what is said and flagging anything nefarious.
“It's really easy to filter out if you have the right components and the right analysis,” he said.
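The distinction Ideler draws can be sketched as the difference between matching fixed keywords and judging the intent of a whole sentence. The toy example below illustrates that contrast only; classify_intent is a stand-in for whatever NLU model a provider actually runs, and its heuristics and labels are invented for the example.

```python
BANNED_KEYWORDS = {"credit card", "home address"}   # what a fixed filter can catch

def keyword_filter(message: str) -> bool:
    """Old-style moderation: flag only if a known phrase appears verbatim."""
    text = message.lower()
    return any(keyword in text for keyword in BANNED_KEYWORDS)

def classify_intent(message: str) -> str:
    """Stand-in for an NLU model that reads the whole sentence in context
    and returns an intent label such as 'benign', 'solicitation', or
    'personal_info_phishing'. A real system would call a trained model here."""
    # Toy heuristic so the sketch runs end to end; NOT a real classifier.
    text = message.lower()
    if "meet up" in text and "don't tell" in text:
        return "grooming_risk"
    if "what's your" in text and "address" in text:
        return "personal_info_phishing"
    return "benign"

msg = "Hey, what's your parents' address? I can send you free skins."
print(keyword_filter(msg))     # False: no banned phrase matches verbatim
print(classify_intent(msg))    # 'personal_info_phishing': intent-level flag
```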
Editor’s note: Learn about Nutanix Enterprise AI and Cloud Native technologies.
Jason Johnson is a contributing writer. He is a longtime content and copywriter for tech and tech-adjacent businesses. Find him on LinkedIn.