Advance fee fraud campaigns are using generative AI in both text and video to speed up responses, evade filters, and make scams more convincing.
Large Language Models and other forms of Generative AI (GenAI) promise to make many people more productive, and cybercriminals are no exception. Fraudsters are using GenAI to enhance all kinds of scams, from consumer-focused bulk campaigns to highly targeted business email compromise attempts. High-profile cases have involved losses of tens of millions of dollars.
Over the last six months, Netcraft has observed an increase in advance fee fraud emails showing signs of ChatGPT-generated text, as well as a new pattern of deepfake videos designed both to convince would-be victims and to evade the existing filters used to block scams, including examples impersonating the FBI, UN, and World Bank.
Advance fee fraud is a long-popular scheme among scammers. It typically starts with an unsolicited message indicating that the recipient is due money—for example, that a businessman has offered them a fortune, or that they have won a competition such as an “internet lottery”—and that by paying a comparatively small fee, they will gain access to it. The supposed fortune does not exist: once the fee is paid, the scammer makes off with it and either invents more roadblocks to continue extracting funds or moves on to the next victim.
Another type of advance fee fraud we often see is the compensation scam. Playing on how common the lures above are, fraudsters claim they can reunite the victim with funds supposedly promised to them earlier. This further capitalizes on people who have already fallen for other scams: the fraudster again asks for a fee to be paid, while the fund never existed in the first place.
“Sandra Steven” deepfakes used to evade email filters
Netcraft recently received a deepfake video as an initial lure for a compensation scam. The email simply said, “Try to open this important video and comply.” By removing typical scam indicators from the text, this tactic may evade some spam filters, which rely primarily on text classification for filtering.
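To illustrate why stripping scam language from the email body matters, here is a toy sketch of keyword-based text classification, the simplest form of the filtering described above. The keyword list and scoring are invented for illustration and do not represent any real vendor's filter:

```python
# Toy illustration of keyword-based spam scoring (invented terms, not a
# real filter): a classic advance fee lure trips multiple scam keywords,
# while the deepfake lure's sparse text contains none of them.
SCAM_TERMS = {"lottery", "inheritance", "beneficiary", "transfer fee", "compensation fund"}

def text_score(body: str) -> int:
    """Count how many known scam phrases appear in the message body."""
    body = body.lower()
    return sum(term in body for term in SCAM_TERMS)

classic_lure = ("You have won the internet lottery! Pay the transfer fee "
                "to claim your compensation fund.")
video_lure = "Try to open this important video and comply."

print(text_score(classic_lure))  # several keyword hits: likely flagged
print(text_score(video_lure))    # zero hits: the scam content lives in the video
```

Real filters use far richer signals (sender reputation, URL analysis, statistical models), but the principle is the same: with nothing incriminating in the text itself, a text-only classifier has little to work with.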
The deepfake video itself contains other indicators of fraud, most notably the content of the script, with “Mrs. Sandra Steven, Federal Bureau of Investigation” asking the listener to “transfer the sum of dollar one hundred and twenty” to gain access to their supposed fund “approved by the United Nations” and now controlled by the “World Bank.” The video states that “The total fund is here with me in my office,” suggesting the poorly edited pile of cash on the desk represents the fund in question.
An additional indicator of fraudulent activity is the watermark from the commercial AI avatar generator Wondershare Virbo. File metadata indicates that the video was originally generated on March 1st, 2024, showing criminals actively adapting to new products and technologies. If generated videos include watermark data, it may be possible to identify the abusing account, and hence the criminal infrastructure behind it.
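As a sketch of the kind of metadata check involved, the snippet below reads the creation time that MP4 files record in their 'mvhd' (movie header) box. This is a minimal, illustrative parser assuming a well-formed box; real investigations would use mature tooling such as exiftool or ffprobe:

```python
import datetime
import struct

# MP4/QuickTime timestamps count seconds from this epoch, not 1970.
MP4_EPOCH = datetime.datetime(1904, 1, 1, tzinfo=datetime.timezone.utc)

def mvhd_creation_time(data: bytes):
    """Return the creation time recorded in the first 'mvhd' box, if any.

    Minimal sketch: scans for the box type, then decodes the creation-time
    field that follows the version and flag bytes.
    """
    idx = data.find(b"mvhd")
    if idx == -1:
        return None
    version = data[idx + 4]  # 1 version byte, then 3 flag bytes
    if version == 0:
        # version 0: 32-bit seconds since 1904-01-01 UTC
        (secs,) = struct.unpack(">I", data[idx + 8:idx + 12])
    else:
        # version 1: 64-bit seconds since 1904-01-01 UTC
        (secs,) = struct.unpack(">Q", data[idx + 8:idx + 16])
    return MP4_EPOCH + datetime.timedelta(seconds=secs)
```

Timestamps like these are trivially forgeable, so they corroborate rather than prove when a file was produced; watermark data tied to a vendor account is a stronger lead.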
The underlying image for the deepfake is not of “Sandra Steven,” or even an FBI officer: it is instead based on a photo of Satu Koivu, Assistant Police Commissioner and Senior Police Advisor of the UN Peacekeeping Force in Cyprus, originally posted on Facebook in September 2021. Notably, the scam does not rely on her actual identity, replacing it with a generic advance fee fraud narrative: anyone in a position of authority could be impersonated in this way.
By interacting with the fraudster claiming to be “Sandra Steven,” Netcraft gained further details of the scam, including information about the supposed “fund” and requests for bank account and Bitcoin address details.
While this video included numerous fraud indicators, we expect deepfakes will only become more convincing as scammers grow more practiced at producing them.
ChatGPT-assisted crime
Netcraft has also found numerous indicators of other advance fee fraudsters using Large Language Models to make their messages more convincing, eliminating common tells such as poor grammar and including bespoke information beyond what could be included through a traditional mass-email campaign.
While there are specific generative AI tools marketed to criminals, including FraudGPT or WormGPT, there are also ChatGPT “jailbreaks” that unlock a so-called “Do Anything Now” mode that can bypass some of the moderation constraints put in place by OpenAI. However, as ChatGPT can be legitimately used for drafting emails, it is trivial to use the tool to generate these emails without requiring any criminal-specific infrastructure.
Errors are sometimes thought to be deliberately included in initial lure emails as a way of filtering out recipients likely to realize they are being scammed later on. By correcting errors with an LLM in subsequent messages, the fraudster avoids losing victims who are already on the hook.
How Netcraft can help
Netcraft processes hundreds of millions of spam emails every month, identifying all kinds of malicious sites, attachments, and conversational scams being sent to victims around the world.
When your brand is impersonated in advance fee fraud emails, Netcraft can take down the impersonating email address and other infrastructure, such as compromised servers used to send the emails. This intervention prevents the same infrastructure from being used to send out new lures and disrupts ongoing conversations to stop further fraud.
Another common venue is impersonation on social media, where fake profiles of your executives can both act as the initial stage of a scam and carry it out once people are hooked. Netcraft searches for and takes down these impersonating profiles for our customers.
Book a demo and see for yourself how Netcraft can protect your business and brand from cybercrime.