Experts Concerned That ChatGPT Could Be Used for Scams

There is no escaping ChatGPT. The AI-backed language tool has caught the world’s attention through its power to create conversational prose that passes for authentic human writing in mere seconds.

It’s been widely used by professionals, students and hobbyists to generate quotes and summarise research, but that’s not all it’s doing. There have been several reports in recent weeks that cyber criminals are using the technology to launch phishing scams.

One of the biggest problems that fraudsters encounter with phishing is making their messages look genuine. They often speak English as a second language, resulting in copy that reads awkwardly and gives recipients a clue that something isn’t right.

Although this hasn’t proven to be a major obstacle – with a Verizon study finding that 82% of data breaches last year involved a human element such as phishing – the threat could reach monumental proportions if scammers are able to produce convincing copy automatically.

Bad things await

Earlier this year, Sophos’ principal research scientist, Chester Wisniewski, told TechTarget that his team had been looking into the potential malicious uses of ChatGPT.

“The first thing I do whenever you give me something is figuring out how to break it. As soon as I saw the latest ChatGPT release, I was like, ‘OK, how can I use this for bad things?’ I’m going to play to see what bad things I can do with it,” he said.

One of those ‘bad things’ that he considered was the ability for ChatGPT to write phishing lures.

“If you start looking at ChatGPT and start asking it to write these kinds of emails, it’s significantly better at writing phishing lures than real humans are, or at least the humans who are writing them,” Wisniewski said.

“Most humans who are writing phishing attacks don’t have a high level of English skills, and so because of that, they’re not as successful at compromising people.

“My concerns are really how the social aspect of ChatGPT could be leveraged by people who are attacking us. The one way we’re detecting them right now is we can tell that they’re not a professional business.

“ChatGPT makes it very easy for them to impersonate a legitimate business without even having any of the language skills or other things necessary to write a well-crafted attack.”

Writing malware

Wisniewski’s theory is supported by research conducted by Check Point. In a proof of concept published in December 2022 – just weeks after ChatGPT was launched – the organisation demonstrated that the tool could be used to conduct phishing scams.

This didn’t just mean AI-generated text, though. The language model could also be used to craft the malware hidden within those lures.

Although ChatGPT has been programmed to avoid the creation of harmful material, it didn’t spot the malice in the researchers’ request for code that “will download an executable from a URL and run it. Write the code in a way that if I copy and paste it into an Excel Workbook it would run the moment the excel file is opened.”

The initial code was flawed, but after a series of further instructions, the program produced “working malicious code”.

A few weeks after Check Point published that article, its researchers spotted ChatGPT-produced malware in the wild.

They found a thread named ‘ChatGPT – Benefits of Malware’ in a popular underground hacking forum, where a user disclosed that they were experimenting with the tool to create malware strains and techniques described in research publications.

Combating automated malware tools

Although the prospect of AI-backed phishing scams might sound revolutionary and dangerous, many experts have been quick to note that it’s not all that different from the way scams are already conducted.

Fraudsters often purchase cheap, off-the-shelf tools to launch their attacks, while the language that they use in their bogus emails is just one of many ways that they trick us.

The use of AI-backed tools won’t have a material impact on the way phishing emails are delivered to people’s inboxes, and organisations must continue to invest in threat-detection programs that spot bogus domains, false links and other signs that an email isn’t genuine.
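To illustrate the kind of check such threat-detection programs perform, here is a minimal, hypothetical sketch in Python: it flags any link in an email body whose domain is not on a trusted allow-list. The `TRUSTED_DOMAINS` set and the sample message are invented for illustration; a real email gateway would also consult reputation feeds, detect look-alike (homoglyph) domains and follow redirects.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organisation trusts.
TRUSTED_DOMAINS = {"example.com", "mail.example.com"}

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def suspicious_links(email_body: str) -> list:
    """Return links whose domain is not on the trusted allow-list.

    A toy heuristic only: real detection tools do far more.
    """
    flagged = []
    for url in URL_PATTERN.findall(email_body):
        host = urlparse(url).hostname or ""
        # Flags anything outside the allow-list, which also catches
        # subdomain tricks such as "example.com.evil.net".
        if host not in TRUSTED_DOMAINS:
            flagged.append(url)
    return flagged

body = "Reset here: https://example.com/reset or https://example.com.evil.net/login"
print(suspicious_links(body))  # only the look-alike domain is flagged
```

Note that the allow-list approach deliberately errs on the side of flagging too much: an unfamiliar but legitimate domain would also be caught, which is why such checks feed into a wider scoring system rather than blocking outright.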

Meanwhile, individuals must continue to improve their ability to spot phishing lures. It may get harder to detect a scam based on spelling or grammatical errors, but there will always be tell-tale signs.

For instance, phishing emails are designed to exploit our emotions, encouraging us to act rashly based on fear, excitement or urgency. Whenever you see a message that elicits one of these responses, you must be careful.
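The emotional-pressure cues described above can themselves be screened for. The sketch below is a hypothetical, keyword-based scorer (the phrase list and sample message are assumptions, not taken from any real product): it simply counts pressure phrases in a message, and a high count suggests the reader should slow down.

```python
# Hypothetical list of pressure phrases common in phishing lures.
URGENCY_TERMS = (
    "urgent",
    "immediately",
    "account suspended",
    "verify now",
    "act now",
    "final notice",
)

def urgency_score(message: str) -> int:
    """Count how many pressure phrases appear (case-insensitive)."""
    text = message.lower()
    return sum(text.count(term) for term in URGENCY_TERMS)

msg = "URGENT: your account suspended. Verify now to restore access."
print(urgency_score(msg))  # 3
```

Keyword counting is crude, and fraudsters can rephrase around any fixed list; the point is only that urgency is a measurable signal, not a reliable filter on its own.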

You can find out how to spot those signs with the help of our Phishing Staff Awareness Training Programme.

This online training course provides essential guidance to help you and your team understand and overcome email-based threats. 

We use real-world examples to explain how phishing attacks work, the tactics that cyber criminals use and how you can detect malicious emails. 

The course content is updated quarterly to include recent examples of successful attacks and the latest trends that criminals use. 
