The Deep Shift in Ad Fraud: AI Bots vs AI Defenders

Only a few years ago, digital ad fraud often meant relatively crude tricks (think click farms or simple bots generating bogus clicks). Today, the battle has escalated dramatically, and digital advertising faces a new breed of fraud operating at unprecedented scale. Industry research indicates that about 22% of online ad spending (roughly $84 billion) was wasted on fraudulent or non-human traffic in 2023, and that figure is on track to reach around $100 billion by the end of 2024, underscoring how quickly the problem is growing. Much of this surge is driven by artificial intelligence, and so far it is the fraudsters, not the defenders, who have made the most aggressive use of these tools. Scammers are increasingly leveraging AI to generate fake websites, build botnets, and mimic human behavior in clicks and impressions. The result is an arms race in ad fraud, pitting AI bots against AI-powered defenders across the digital marketing ecosystem.

This deep shift is especially evident in markets like India, which have become hotbeds for such schemes. “We’re seeing an avalanche of AI-led fake traffic,” says Dhiraj Gupta, founder of the fraud detection firm mFilterIt. He notes that generative AI tools now let bad actors spin up thousands of websites in hours, while black-market services offer millions of bot hits at little cost. The barrier to entry for ad fraud has plummeted: what once required an organized click-farm operation can now be automated by a single operator armed with open-source AI models. In fact, DoubleVerify’s fraud lab observed a 23% jump in new ad fraud schemes worldwide in 2023 compared with the previous year, a rise the company partly attributes to fraudsters’ use of generative AI to falsify online activity patterns.

These trends are reflected in internet traffic patterns: for the first time, automated bots outnumber humans online, accounting for over half of all web traffic. A large share of that is malicious. Security research shows that roughly 37% of total internet traffic in 2024 was driven by “bad bots” (automated programs with harmful or deceitful intent), up from about 32% the year before. In the advertising world, this translates into a deluge of fake ad impressions and clicks that look legitimate. AI has made these bots far harder to spot, as they behave more and more like real users. “With AI, attackers can generate thousands of seemingly authentic user agents and mimic human behavior, making the pattern of bot traffic more difficult to detect,” explains Roy Rosenfeld, head of DoubleVerify’s fraud lab. In other words, fraudulent traffic today can blend seamlessly with genuine audiences, often slipping past traditional filters.
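To make Rosenfeld’s point concrete, consider one heuristic a defender might apply: when many distinct user-agent strings all share an identical behavioral fingerprint, the apparent diversity is probably synthetic. The sketch below is a minimal illustration under assumed log fields (ua, screen, dwell_ms) and an invented threshold; it is not DoubleVerify’s actual method.

```python
from collections import defaultdict

# Hypothetical ad-request log records; the field names (ua, screen, dwell_ms)
# are assumptions for illustration, not any vendor's real schema.
requests = [
    {"ua": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "screen": "1920x1080", "dwell_ms": 212},
    {"ua": "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_1)", "screen": "1920x1080", "dwell_ms": 214},
    # ... more records
]

def behavior_key(req):
    """Reduce a request to a coarse behavioral fingerprint.

    Bots that rotate AI-generated user agents often run on the same
    underlying automation, so screen size and dwell time cluster tightly
    even while the UA string varies.
    """
    return (req["screen"], req["dwell_ms"] // 50)  # bucket dwell time into 50 ms bins

def suspicious_fingerprints(requests, ua_threshold=20):
    """Flag fingerprints seen with an implausible variety of user agents."""
    uas_per_fp = defaultdict(set)
    for req in requests:
        uas_per_fp[behavior_key(req)].add(req["ua"])
    # A real browser population shows many fingerprints per UA string,
    # not dozens of distinct UA strings per identical fingerprint.
    return {fp for fp, uas in uas_per_fp.items() if len(uas) >= ua_threshold}

print(suspicious_fingerprints(requests, ua_threshold=2))
```

Real verification systems fuse many such weak signals rather than relying on any single rule, but the shape of the problem is the same: statistical uniformity hiding behind superficial variety.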

Beyond bots, fraudsters are also using generative AI to create fake ads and personas. Cybersecurity experts have witnessed deepfake video ads featuring famous celebrities promoting scams, ads realistic enough to fool viewers and even automated vetting systems. “There have been fake betting ads using AI-generated videos of Sachin Tendulkar, Shah Rukh Khan, and Virat Kohli which look alarmingly real, even fooling detection systems,” warns Pratim Mukherjee, a senior engineering director at McAfee. In another instance, McAfee’s team detected over 36,000 fraudulent websites impersonating Amazon and more than 75,000 scam text messages blasted out around Amazon’s Prime Day sale, an orchestrated scheme to dupe shoppers during the retail rush. Such examples show how AI is being used to fabricate not just fake clicks but fake people and bogus brands, lending fraudulent schemes an air of legitimacy.

The impact of these AI-fueled fraud schemes is being felt by advertisers and publishers across the board. Insiders say even major companies have been caught off guard. In one case, a popular streaming service in India saw its platform overwhelmed by a surge of bot-driven traffic; in fact, 100% of the views on a particular online ad campaign turned out to be fake, causing the site to crash under the load. Such incidents highlight that ad fraud is no longer just about quietly siphoning ad budgets; it can directly disrupt businesses and damage reputations. Industry leaders are accordingly changing how they view the problem. “While some brands still absorb these hidden losses, there’s growing recognition that ad fraud is not just a marketing inefficiency but a cybersecurity concern,” observes Tarun Wig, co-founder and CEO of Innefu Labs. In other words, bot fraud isn’t seen merely as wasted ad spend; it’s being treated as an active threat that demands the same vigilance as a cyberattack. This mindset shift is pushing companies to invest more in preventive measures and to scrutinize their digital ad buys far more closely.

On the defensive side, the advertising industry is deploying its own AI arsenal to counter the bots. Fraud detection firms, media agencies, and ad platforms are increasingly using machine learning models to analyze traffic patterns and weed out illegitimate activity in real time. “You need AI to fight AI,” says Gupta, whose company’s systems look for clusters of ad traffic that behave identically: patterns no human analyst could easily catch. By training on vast datasets of ad impressions, these systems can flag anomalous behavior that indicates bots, such as groups of devices all clicking the same way or generating uncanny levels of activity at odd hours. Advanced verification tools now scan for everything from mismatched device IDs and geolocation signals to unnatural spikes in clicks. And the effort is paying off: according to DoubleVerify, advertisers that implemented verification and AI-based safeguards have seen tangible improvements. In India, for example, overall ad fraud rates dropped by roughly 36% in 2023 after many advertisers began using verification platforms, even though new fraud tactics continue to emerge. The message is clear: machine-driven oversight can dramatically reduce a brand’s exposure to fraud.
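As an illustration of the “clusters that behave identically” signal Gupta describes, the toy sketch below compares devices’ hourly click histograms and flags pairs that are near duplicates. The data shapes and the 0.99 similarity threshold are invented for the example; this is not mFilterIt’s or DoubleVerify’s actual pipeline.

```python
import math
from itertools import combinations

def hourly_histogram(click_hours, bins=24):
    """Count clicks per hour of day for one device."""
    hist = [0] * bins
    for h in click_hours:
        hist[h % bins] += 1
    return hist

def cosine(a, b):
    """Cosine similarity between two histograms (1.0 means identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def near_identical_pairs(device_clicks, threshold=0.99):
    """Return device pairs whose click-timing profiles are suspiciously alike.

    Humans browsing independently rarely produce near-duplicate hourly
    profiles; large groups of devices that do are a classic botnet signal.
    """
    hists = {d: hourly_histogram(hours) for d, hours in device_clicks.items()}
    return [
        (d1, d2)
        for d1, d2 in combinations(hists, 2)
        if cosine(hists[d1], hists[d2]) >= threshold
    ]

# Example: three devices, two of them scripted to click in lockstep at 3-4 a.m.
clicks = {
    "device_a": [3, 3, 3, 4, 3, 3],
    "device_b": [3, 3, 3, 4, 3, 3],
    "device_c": [9, 13, 18, 20, 22],
}
print(near_identical_pairs(clicks))  # [('device_a', 'device_b')]
```

Production systems would cluster millions of devices with approximate methods rather than pairwise comparison, but the underlying signal, improbable behavioral sameness at scale, is what the quote points to.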

Major ad platforms themselves are also ramping up AI-based defenses. Google, for instance, has integrated large language models into its advertising safety systems, helping it suspend fraudulent advertisers preemptively and at scale: in 2024 alone, Google blocked over 39 million advertiser accounts for policy violations and suspected fraud, more than triple the number it suspended the previous year. Many of these bogus accounts were detected and taken down before they could spend a dollar on ads, thanks to AI models that flag telltale signs of fraud such as business impersonation or illegitimate payment details. Beyond account bans, platforms are using AI to catch malicious ads and fake content. Google says that after deploying new AI-driven countermeasures and policies, reports of deepfake scam ads on its services dropped by 90% within a year. This kind of progress comes from pairing automated algorithms with human oversight. “These AI models are very important to us and have delivered impressive improvements, but we still have humans involved throughout the process,” notes Alex Rodriguez, Google’s general manager for Ads Safety, emphasizing that expert teams continue to monitor and refine the results. In practice, AI handles the heavy lifting of scanning billions of ad impressions and clicks, while human experts investigate complex fraud patterns and adapt the defenses as needed.
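The division of labor Rodriguez describes, models at scale plus humans on the ambiguous cases, can be pictured as a simple triage policy over model risk scores. The thresholds and the Account fields below are invented for illustration; Google has not published its actual decision logic.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Account:
    advertiser_id: str
    risk_score: float  # assumed output of an upstream ML model, in [0, 1]

@dataclass
class TriageResult:
    suspended: List[str] = field(default_factory=list)
    review_queue: List[str] = field(default_factory=list)
    approved: List[str] = field(default_factory=list)

def triage(accounts, suspend_at=0.95, review_at=0.60):
    """Route accounts by model risk score.

    The model does the heavy lifting at scale; only the ambiguous middle
    band consumes human reviewer time. Thresholds are illustrative.
    """
    result = TriageResult()
    for acct in accounts:
        if acct.risk_score >= suspend_at:
            result.suspended.append(acct.advertiser_id)     # blocked before spending
        elif acct.risk_score >= review_at:
            result.review_queue.append(acct.advertiser_id)  # a human investigates
        else:
            result.approved.append(acct.advertiser_id)
    return result

batch = [Account("adv-001", 0.98), Account("adv-002", 0.72), Account("adv-003", 0.10)]
print(triage(batch))
```

The design choice this sketch illustrates is keeping the human review band narrow: the automated thresholds absorb the clear-cut cases so reviewers can focus on the genuinely novel fraud patterns.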

From the advertiser’s perspective, the rise of AI-driven fraud has made vigilance a top priority. Major brands are starting to audit their campaign traffic and demand greater transparency from ad partners. Many are also turning to specialized anti-fraud solutions. Indian companies like Swiggy, for example, have partnered with cybersecurity firms to deploy AI-based systems that detect fake accounts, bogus app installs, and other abuses on their platforms in real time. Global advertisers such as Unilever, working through industry groups, have likewise advocated for stricter standards and invested in technology to ensure their marketing spend isn’t siphoned off by fraud. The overarching trend is that marketing teams now work more closely with IT and security departments, treating ad fraud as a board-level issue that warrants constant monitoring.

As Akshay Mathur, co-founder of marketing AI startup Unpromptd, aptly puts it, “AI is both the accelerant and the antidote” in this fight. The same technologies enabling an explosion of bot-driven scams are also powering the solutions to detect and defeat them. It is a technological arms race, and the hope is that by leveraging advanced AI tools and collaboration, the digital advertising ecosystem can stay one step ahead of the fraudsters’ ever-evolving playbook. With global ad spend continuing to grow and fraud tactics evolving in step, the stakes have never been higher, but neither has the resolve to fight back with the latest technology.

Disclaimer: All data points and statistics are attributed to published research studies and verified market research. All quotes are either sourced directly or attributed to public statements.