Fake adverts featuring celebrities and public figures remain the most common type of scam adverts appearing online, according to new figures from the Advertising Standards Authority (ASA).
Data from the watchdog’s Scam Ad Alert System found that scam ads containing famous figures made up the “vast majority” of the alerts it sent to platforms in 2024.
The scam adverts often contain doctored or deepfake images of celebrities or public figures, with the ASA noting it saw scam ads depicting Prime Minister Sir Keir Starmer and Chancellor Rachel Reeves last year, as well as ads using the likeness of Stacey Solomon and Strictly Come Dancing judge Anton Du Beke.
Many experts have warned that the rise of AI-powered deepfakes is now making these adverts more convincing and therefore more dangerous to consumers.
In its latest update, the ASA said it sent 177 Scam Ad Alerts to social media and other online platforms, asking them to remove the ads and take further action – up from the 152 sent in 2023.
Jessica Tye, regulatory project manager at the ASA, told the PA news agency that a key factor for scammers was homing in on the public desire for celebrity gossip, as well as on who is currently in the news.
“I think the public is very interested in stories about celebrities, fake stories about their downfall or bad things happening to them. They’re also interested in endorsement,” she said.
“Though it’s a long-running trend, we see new celebrities being used in these scam ads all the time, depending on perhaps who is in the news or just who scammers have alighted on.”
“Scam ads online are not a new problem, and scams themselves are not a new problem either – for as long as people have been selling something, scams have existed.
“Obviously, they develop and evolve over time, and we know that scammers use very sophisticated techniques to avoid detection.”
The ASA’s figures showed that in 2024, X failed to respond to 72% of the 22 alerts it was sent by the ASA.
However, the watchdog said a discrepancy between it and X over contact arrangements was identified and resolved, after which X responded to 80% of alerts within 48 hours of reporting for the final three months of the year.
Ms Tye said scammers were using sophisticated digital techniques to make it hard for platforms to respond.
Those techniques included one known as “cloaking” where scammers present different content to platforms or their moderation system than they do to potential victims, enabling them to evade detection unless reported.
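In its simplest form, the cloaking technique described above amounts to a scam site serving different pages depending on who appears to be asking. The following is a minimal illustrative sketch, not drawn from any real scam infrastructure or from the ASA's report; the crawler names and page contents are assumptions for illustration only:

```python
# Minimal sketch of "cloaking": a server shows moderation systems benign
# content while ordinary visitors get the scam page. All names and page
# contents here are illustrative.

# Substrings often found in crawler/review-system user agents (illustrative).
KNOWN_CRAWLERS = ("googlebot", "facebookexternalhit", "adsbot")

def serve_ad(user_agent: str) -> str:
    """Return different content depending on who appears to be requesting it."""
    ua = user_agent.lower()
    if any(bot in ua for bot in KNOWN_CRAWLERS):
        # The platform's ad-review crawler sees an innocuous page...
        return "<h1>Everyday kitchen tips</h1>"
    # ...while an ordinary visitor sees the scam landing page.
    return "<h1>Celebrity reveals secret investment scheme</h1>"
```

Real cloaking operations typically also key on IP ranges, referrer headers and geolocation rather than just the user agent, which is why an ad can pass a platform's review yet still reach potential victims – and why user reports matter.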
“This can make it quite difficult for platforms and others involved in the ad ecosystem to stop these ads appearing,” she told PA.
“And we do know from our work with platforms that they have all the measures in place to stop these ads appearing.
“Ultimately, that is not always successful, and that’s one reason why we run the Scam Ads Alert System so that consumers can report to us when they do see these ads, and so that we can play a small part in helping to disrupt these scam ads.”
She added that it was important the public did report scam ads to platforms and the ASA – which can be done via a quick reporting form on the watchdog’s website.
“Public reporting doesn’t solve scam ads, and it’s not the public’s responsibility to solve scam ads, but they can play their part,” she said.
“So we would definitely encourage the public, if they see a scam ad in paid-for space, to report it to us, because they can be confident that we will assess those reports swiftly and do everything we can to stop similar ads appearing in the future.”
Rocio Concha, director of policy and advocacy at Which?, said: “A flood of celebrity deepfakes and other scam adverts reinforces why the current slow, reactive and toothless system for tackling fraud online is woefully inadequate.
“The biggest online platforms have shown they’re unwilling to take effective action to stop scam ads appearing in the first place, which is why specific requirements in the Online Safety Act for platforms to stop scam ads from appearing are so desperately needed.
“It is extremely disappointing that these protections have been kicked into the long grass and may not take effect until 2027. Ofcom must speed up the full implementation of the Online Safety Act so platforms that fail to tackle fraudulent ads can be hit with tough penalties – including multimillion-pound fines.”