Scam adverts: are you still being targeted?

Despite major legislative changes to make the internet safer, scam adverts continue to proliferate online

Last year the law toughened up on scams with two major pieces of new legislation, yet a Which? investigation has found that scam adverts continue to litter social media feeds and online search results.

In November and December 2023, after the passing of the Online Safety Act, we combed the biggest social media sites – Facebook, Instagram, TikTok, X (formerly Twitter) and YouTube – as well as the two biggest search engines, Google and Bing, and found blatant fraudulent advertising.

Here, we explain our findings and why, when it comes to scams, there are simply no more excuses.

Sign up for scam alerts

Our emails will alert you to scams doing the rounds, and provide practical advice to keep you one step ahead of fraudsters.


The law on scams

Last year was momentous in the fight against fraud with the passing of two key pieces of legislation:

  • The Online Safety Act finally makes online platforms, including social media networks and search engines, responsible for the hosting of harmful content, such as scam adverts.
  • The Financial Services and Markets Act will force banks and payment providers, in most cases, to reimburse people tricked into sending their money to scammers.

Both of these achievements were the culmination of several years of exhaustive Which? campaigning against the scourge of scams, beginning with our 2016 super-complaint to the Payments Systems Regulator (PSR) on behalf of victims of bank transfer scams and continued via our dogged research, investigations and lobbying in the intervening years.

The fight against fraud is far from over. These new laws are not yet in force and we continue to wait for practical guidelines from the regulators tasked with enforcing the new powers, which could take months.

Key information

How Which? has campaigned on scams

  • 2016 Which? used its super-complaint powers to call on the regulators to ensure banks better protect customers who are tricked into transferring money to a fraudster.
  • 2019 Which? journalist Faye Lipson investigated Action Fraud, the UK’s fraud reporting centre, and found that scam reports have no guarantee of ever being read by a person.
  • May 2021 The Online Safety Bill (OSB) was first published; however, scam adverts weren’t included.
  • September 2021 Which? launched a public petition demanding that online platforms be held responsible for removing harmful content.
  • February 2022 Which? alongside other groups called on Nadine Dorries, the culture secretary at the time, to include scam adverts in the OSB.
  • March 2022 The government confirmed that it would include scam ads in the OSB. In May, Which? gave evidence in Parliament to make the OSB tougher on scams.
  • July 2022 The OSB suffered a setback and was withdrawn from Parliament following the appointment of Prime Minister Liz Truss.
  • June 2023 The Financial Services and Markets Bill, which requires reimbursement for scam victims, became law.
  • September 2023 The OSB passed in Parliament and in November it became law.

Fraudulent adverts

In November and December last year, we used a variety of methods, including setting up fresh social media accounts for the purposes of our investigation. We tailored these accounts to interests frequently targeted by scammers, such as shopping with big-name retailers, competitions and money-saving deals, investments, weight loss gummies and help to recover money after a scam.

We also scoured the ad libraries – the searchable databases of adverts available for Facebook, Instagram and TikTok – and investigated scams reported by some of the 26,000 members of our Which? Scam Action and Alerts community on Facebook.

Finally, our researchers also captured scams they came across in the course of everyday personal browsing and scrolling.

Our research collected more than 90 examples of potentially fraudulent adverts. Whenever we were confident an advert was a scam and on-site scam reporting tools were available, we reported it.

Most platforms didn’t update us on the outcome of these reports, with the exception of Microsoft, Bing’s parent company, which confirmed an advert had violated its standards and said it would act, but didn’t specify how.

Social media ads

A scam advert impersonating Currys promoting a non-existent sale

On Meta’s ad library, we found Facebook and Instagram hosting multiple copycat adverts impersonating major retailers around the time of the Black Friday sales, including electricals giant Currys, plus clothing brands River Island and Marks & Spencer.

Each advert attempted to lure victims to bogus sites in a bid to extract their payment details.

On TikTok and YouTube, we found sponsored videos in which individuals without Financial Conduct Authority (FCA) authorisation gave often highly inappropriate investment advice.

While these aren’t necessarily scam videos and wouldn’t come under the remit of the new laws, they are extremely concerning and we reported these examples to the platforms.

A scam website impersonating the BBC and falsely using Martin Lewis' image

On X, an account posted: ‘People are excited to hear about this opportunity for the new year’, alongside an advert featuring a video where Martin Lewis supposedly shared his top financial tips.

Beneath the advert was a contextual note added by other users of the site, a feature known on X as Community Notes.

It warned: ‘This is yet another crypto scam using celebrities’. Astonishingly, despite the warning, the advert remained live. However, the account which posted the advert was later suspended for violating the X Rules.

This advert led to a fake BBC website and featured an article falsely using Martin Lewis to endorse a dodgy company called Quantum AI, which promotes itself as a crypto get-rich-quick platform.

This is a name that has been repeatedly flagged as a crypto scam, and is known to circulate an AI-generated deepfake video of Martin Lewis advertising its services. Martin Lewis has warned multiple times that he doesn’t do adverts.

Search engine adverts

When we posed as drivers searching on Google for the 'paybyphone app' to pay for parking, we were confronted with two adverts for impostor websites – onlytelephone.com and homeautomationinnovators.com – appearing at the top of search results using PayByPhone's logo without permission.

Both websites claimed to offer a 'free download', but included identical small print at the bottom of their websites revealing a monthly charge of £24.99. 

We reported both adverts and PayByPhone confirmed that the advertisers had nothing to do with the genuine parking app.

Weight loss gummy scam site


Over on Microsoft-owned Bing, a search for ‘weight loss gummies’ turned up a sponsored result for ‘official-comparison.com’, a site blocked as a security risk.

On a different occasion, the same search term served up foryourwell.com, a website mocked up to look like a Daily Mail news article, complete with fake product endorsement by celebrity Dragon's Den entrepreneur Deborah Meaden.

What the platforms said

  • Google (also the parent company of YouTube) told us that protecting users is a top priority and it has strict policies that govern ads and advertisers on its platform. It added that it removes ads that violate its policies. It also explained that it invests significant resources to stop bad actors. In 2022, it removed more than 5.2bn ads, restricted more than 4.3bn ads and suspended 6.7m advertiser accounts.
  • Meta, which owns both Facebook and Instagram, told us that scams are an industry-wide issue and increasingly sophisticated. It explained that it has systems in place to block scams and that financial services advertisers are now required to be authorised by the Financial Conduct Authority to target UK users. Fake or fraudulent accounts and content can be reported in a few clicks, with reports reviewed by a trained team with the power to remove offending content. Meta also said it will work with the police and support their investigations.
  • TikTok explained that its Community Guidelines prohibit fraud and scams, as well as content that coordinates, facilitates or shares instructions on how to carry out scams. It added that it had removed all the videos we shared with it for violating its Community Guidelines, plus the related accounts. It told us that it employs 40,000 safety professionals dedicated to keeping TikTok safe, using technologies and moderation teams to identify, review and remove content or accounts that violate its Community Guidelines. According to TikTok, it removed 88.6% of videos that violated its fraud and scams policy before the content was reported, with 83.1% removed within 24 hours. Users who encounter suspicious content are encouraged to report it under ‘Frauds and Scams’.
  • Microsoft, owner of Bing, told us that its policies prohibit advertising content that is deceptive, fraudulent or that can be harmful to users. It confirmed that the content we reported had been removed and that multiple advertisers were blocked from its networks. It added that it will continue to monitor its ad network for similar accounts and will take action to protect its customers.

  • X: 'Our teams work around the clock to safeguard the platform and enforce our policies consistently and quickly. Our financial scam policy prohibits the use of X's services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience on X. We want X to be a place where people can make human connections and find reliable information. For this reason, you may not use X's services to deceive others into sending you money or personal financial information via scam tactics, phishing, or otherwise fraudulent or deceptive methods. Furthermore, X's advertising onboarding process requires UK regulated financial services advertisers to be authorised by FCA prior to serving financial services adverts on their sites. We are proud to be a signatory of the UK Government's Online Fraud Charter and recently announced our support for the UK Government's National Campaign Against Fraud.'

The Payment Systems Regulator and official-comparison.com were also contacted for comment. Foryourwell.com could not be reached.

What the regulators are doing

The communications regulator Ofcom is responsible for implementing and overseeing the new rules on online fraud.

When we spoke to Ofcom about our findings it said that it has already recruited and trained expert teams to hold tech firms to account. It added that it will set new standards to make sure websites and apps are safer by design, and that it is ready to meet the scale and urgency of the challenge.

At the time of writing, Ofcom is consulting on what the rules should look like in reality, but has published proposals and guidance in the interim.

Platforms that fail to protect their users will face fines of up to 10% of global annual revenue or £18m, whichever is greater. Ofcom says that it expects to publish the new rules in early 2025.

Meanwhile, the PSR confirmed that banks must start reimbursing bank transfer scam victims under its new rules from October this year.



This article was originally published on 20th February 2024 and was updated on 27th February 2024 to include a response from X.