Google claims that its security teams work around the clock using its Gemini AI models to detect and stop harmful ads.
“Bad actors are using generative AI to create deceptive ads at scale, and Gemini helps us detect and block them in real time,” said Keerat Sharma, VP and GM of Ads Privacy and Safety at Google.
“Our models analyze hundreds of billions of signals — including account age, behavioral cues and campaign patterns — to stop threats before they reach people,” added Sharma.
Malvertising remains an ongoing issue on Google’s ad network, with attackers abusing paid ads to pose as legitimate brands and lure users into malware downloads or phishing sites.
Google serves billions of ads on millions of websites. Even with automated checks, some malicious ads slip through due to the sheer volume.
“In 2025, we blocked or removed over 8.3 billion ads and suspended 24.9 million accounts, including 602 million ads and 4 million accounts associated with scams,” the company stated in its 2025 Ads Safety Report.
In regional data, the company reported removing 1.7 billion ads and suspending 3.3 million advertiser accounts in the US, 675.7 million ads and 593,000 accounts in the UK, and 1.6 billion ads and 2 million accounts in the EU.
According to Google, its latest AI models go beyond older keyword-based systems by analyzing intent, which lets the company detect more complex and evasive threats, including ads deliberately crafted to bypass automated checks.
The company said Gemini has improved how it processes user feedback, helping teams take action on more than four times as many reports in 2025 as in the previous year.
This faster response helps stop threats sooner when they slip through, and lets safety teams spend more time on cases that need human judgment.