Fraud prevention

How AI is changing the game for financial scams

Online scams have changed, fast

Until a couple of years ago, with a little knowledge it was relatively easy to spot many of the scam emails, texts and social messages sent by criminals.

The spelling would often be poor. The text was muddled, with odd syntax. The English in emails purporting to be from your bank in Britain would sound strangely American.

Stranger still, scammers offering too-good-to-be-true investments often included such spelling errors deliberately. The strategy was clever: the recipients who couldn’t spot them were also unlikely to spot other warning signs later, and would invest more.

Not that such scams were always easy to spot. But if you looked critically, there were clues.

AI enters the chat

AI has changed that.

The explosion of cheap AI applications like ChatGPT has put the tools for convincing English into the hands of investment scammers worldwide.

Suddenly, scammers with little grasp of language specifics, tone or formats can send large volumes of flawless, fluent, official-sounding emails to British accounts. These emails recreate the ‘feel’ of the tone and style of well-known, trusted institutions – from investment houses, advisors, banks and governments to HMRC – with little effort.

Whole investment brokerages, staff teams, video meetings, even convincing fake online trading platforms for fictitious investments, can be spun up with little outlay.

No wonder 86% of British adults are concerned that “rapid developments in AI will give criminals new ways to con people”. 


Matt Potter is a cybersecurity consultant and journalist whose work appears in the Washington Post and BBC. He lectures on disinformation, cybercrime and conflict, and is author of We Are All Targets: How renegade hackers invented cyber war and unleashed an age of global chaos.

The age of Deepfakes

Scammers using generative AI can create images, sound and video that appear to show people you trust, but are fakes, manipulated to voice the scammers’ script.

These are known as Deepfakes. They are often convincing enough to fool even users’ own family and friends. They are one of the fastest-growing scams in Britain this year.

Consider what that means.

One classic approach was the ‘Hi Mum’ scam. Scammers would send out messages claiming to be from a relative – say, one of your children – who had lost their own phone abroad and urgently needed money wired to them. They would count on your worry, and sense of urgency. You wouldn’t call to check because – per the message – their phone had been stolen.

Only there never was a stolen phone. Your child was fine, and unaware of the drama. The cash had gone straight to the scammers’ own account.

All that from one message.

Imagine how much more effective it becomes using AI to clone the voice of that relative.

The same techniques are increasingly being used to lure people into far deeper financial commitments under the guise of investment opportunities. The cost to scammers of generating Deepfakes has come down, while the potential rewards are higher than ever.

Rise of the bot armies

AI allows investment scammers to create not just one, but whole armies of fake people.

‘Bot networks’ of connected LinkedIn profiles may all seem to work for a large, global investment company, and their recommendations glow. Only the company does not exist, and neither do they.

These profiles are AI-generated. They post, respond, send out connection requests, just like regular users. They may reach out to you about promising investments and opportunities. Their references check out, of course. They are from other AI profiles.

In the past, this would have been a complex operation. Yet AI tools mean these networks can be created swiftly and at low cost by scammers.

In the second half of 2023 alone, some 46 million fake accounts were removed from LinkedIn during registration, with 17 million more proactively restricted, and 232,000 more reported by users for being fake. But for every one reported and deleted, dozens persist.

What can you do?

As a NatWest Premier customer, you always have a single source of truth to turn to.

Cutting communication until you have spoken to us is a useful card to play if you find yourself being pressured. We’re here for you, whenever you want to talk.

Inbound callers can disguise their numbers, so even a call purporting to be from your bank is worth treating with caution. 

If you’re ever in any doubt about an approach, no matter how real it seems, just take a moment, call NatWest Premier 24, and talk to us.

It’s always the same number, and Premier 24 lines are open 24/7.

Telephone: 0333 202 3330

International: +44 161 933 7239

Relay UK: 18001 0333 202 3330 

Why not add Premier 24 to your contacts?

Set up your security profile in the NatWest app

Meanwhile, a great start is to set up your security profile in the NatWest app. Your profile has the latest tips to help you keep pace with the changing nature of scams, and lets you set up protective measures and verification that are unique to you.

 

Our app is available to personal and business banking customers aged 11+ using compatible iOS and Android devices. You'll need a UK mobile number, or an international mobile number from specific countries.
