
Cybercriminals are exploiting AI tools like ChatGPT to craft more convincing phishing attacks, alarming cybersecurity experts

If you’ve noticed a spike in suspicious-looking emails over the last year or so, one of our favorite AI chatbots - ChatGPT - may be partly to blame. I know - plenty of us have had candid, private conversations with ChatGPT, and we don’t want to believe the same tool could help scam us.

According to cybersecurity firm SlashNext, ChatGPT and its AI cohorts are being used to pump out phishing emails at an accelerated rate. The report draws on the firm’s threat research and a survey of more than 300 cybersecurity professionals in North America. It claims that malicious phishing emails have increased by 1,265% since the fourth quarter of 2022 - with credential phishing in particular rising by 967%. Credential phishing goes after your personal information - usernames, IDs, passwords, or PINs - by impersonating a trusted person, group, or organization via email or a similar communication channel.

Malicious actors are using generative artificial intelligence tools, such as ChatGPT, to compose polished and specifically targeted phishing messages. Alongside phishing, business email compromise (BEC) messages are another common type of cybercriminal scam, aiming to defraud companies of funds. The report concludes that these AI-fueled threats are ramping up at breakneck speed, growing rapidly in both volume and sophistication.

The report indicates that phishing attacks averaged 31,000 per day, that roughly half of the surveyed cybersecurity professionals reported receiving a BEC attack, and that 77% reported receiving phishing attacks.

small business security

(Image credit: Getty Images)

The experts weigh in

SlashNext’s CEO, Patrick Harr, relayed that these findings “solidify the concerns over the use of generative AI contributing to an exponential growth of phishing.” He elaborated that AI generative tech enables cybercriminals to turbocharge how quickly they pump out attacks, while also increasing the variety of their attacks. They can produce thousands of socially engineered attacks with thousands of variations - and you only need to fall for one. 

Harr goes on to point the finger at ChatGPT, which saw momentous growth towards the end of last year. He posits that generative AI bots have made it much easier for novices to get into the phishing and scamming game, and have become an extra tool in the arsenal of more skilled and experienced attackers - who can now scale up and target their attacks more easily. These tools can help generate more convincing, persuasively worded messages that scammers hope will reel people in.

Chris Steffen, a research director at Enterprise Management Associates, confirmed as much when speaking to CNBC, stating, “Gone are the days of the ‘Prince of Nigeria’”. He went on to expand that emails are now “extremely convincing and legitimate sounding.” Bad actors persuasively mimic and impersonate others in tone and style, or even send official-looking correspondence that looks like it’s from government agencies and financial services providers. They can do this better than before by using AI tools to analyze the writings and public information of individuals or organizations to tailor their messages, making their emails and communications look like the real thing.

What’s more, there’s evidence that these strategies are already paying off for bad actors. Harr points to the FBI’s Internet Crime Report, which attributes around $2.7 billion in business losses to BEC attacks, along with $52 million in losses to other kinds of phishing. With payoffs this lucrative, scammers are only further motivated to multiply their phishing and BEC efforts.

Person writing on computer.

(Image credit: Glenn Carstens-Peters / Unsplash)

What it will take to subvert the threats

Some experts and tech giants push back, with Amazon, Google, Meta, and Microsoft having pledged to carry out testing to fight cybersecurity risks. Companies are also harnessing AI defensively, using it to improve detection systems, filters, and the like. Harr countered that SlashNext’s research shows the concern is entirely warranted, as cybercriminals are already using tools like ChatGPT to carry out these attacks.

SlashNext found one BEC attack in July that used ChatGPT alongside WormGPT. WormGPT is a cybercrime tool that’s publicized as “a black hat alternative to GPT models, designed specifically for malicious activities such as creating and launching BEC attacks,” according to Harr. Another malicious chatbot, FraudGPT, has also been reported to be circulating. Harr says FraudGPT has been advertised as an ‘exclusive’ tool tailored for fraudsters, hackers, spammers, and similar individuals, boasting an extensive list of features.

Part of SlashNext’s research has looked into the development of AI “jailbreaks” - ingeniously designed prompts that, when entered, strip away an AI chatbot’s safety and legality guardrails. Jailbreaking is also a major area of investigation at many AI research institutions.

Workers at computers in an office

(Image credit: Unsplash / Israel Andrade)

How companies and users should proceed

If you’re feeling like this could pose a serious threat professionally or personally, you’re right - but it’s not all hopeless. Cybersecurity experts are stepping up and brainstorming ways to counter and respond to these attacks. One measure many companies take is ongoing end-user education and training, testing whether employees and users are actually being caught out by these emails.

The increased volume of suspicious and targeted emails means that an occasional reminder may no longer be enough; companies will have to work persistently to instill security awareness in their users. End users should not just be reminded, but actively encouraged, to report emails that look fraudulent and to discuss their security-related concerns. This applies not only to companies and company-wide security, but to us as individual users as well. If tech giants want us to trust their email services for our personal email needs, they’ll have to keep building out their defenses in these sorts of ways.

As well as this culture-level change in businesses and firms, Steffen also reiterates the importance of email filtering tools that can incorporate AI capabilities and help prevent malicious messages from even making it to users. It’s a perpetual battle that demands regular tests and audits, as threats are always evolving, and as the abilities of AI software improve, so will the threats that utilize them. 
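To illustrate the kind of signal-based filtering Steffen describes, here is a minimal, hypothetical sketch. Real AI-powered filters are far more sophisticated, and every keyword, threshold, and function name below is made up for illustration - the point is only that filters combine several weak signals into a risk score.

```python
import re

# Hypothetical urgency/credential-harvesting vocabulary (illustrative only)
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(subject: str, body: str, sender: str) -> int:
    """Return a rough risk score for a message; higher means more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()
    # Signal 1: urgency and credential-harvesting language
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Signal 2: links that point at raw IP addresses instead of domains
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 3
    # Signal 3: message talks about a brand the sender domain doesn't match
    if "paypal" in text and not sender.lower().endswith("@paypal.com"):
        score += 2
    return score

# A message demanding an urgent password reset, linking to a bare IP
print(phishing_score(
    "Urgent: verify your password",
    "Your account is suspended. Click http://192.0.2.1/login immediately.",
    "security@paypa1-support.com",
))  # prints 8
```

A production filter would feed dozens of such signals - plus language-model analysis of the text itself - into a trained classifier rather than a hand-tuned score, but the layered approach is the same.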

Companies have to improve their security systems, and no single solution can fully address all the dangers posed by AI-generated email attacks. Steffen puts forth that a zero-trust strategy can help fill control gaps caused by the attacks and provide a defense for most organizations. Individual users should be more alert to the possibility of being phished and tricked, because that possibility has genuinely grown.

It can be easy to give in to pessimism about these types of issues, but we can be more wary of what we choose to click on. Take an extra moment, then another, and check all the details - you can even search the email address a message came from and see if anyone else has reported problems with it. It’s a tricky mirror world online, and it’s increasingly worthwhile to keep your wits about you.
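Part of that sender-address check can even be automated. The sketch below flags domains that sit one typo away from a domain you actually trust - a classic lookalike trick, like “paypa1.com” for “paypal.com”. The trusted-domain list and function names are invented for illustration.

```python
# Hypothetical trusted-domain list; a real one would come from your own contacts
TRUSTED = {"paypal.com", "google.com", "microsoft.com"}

def within_one_edit(a: str, b: str) -> bool:
    """True if a and b differ by at most one substitution, insertion, or deletion."""
    if a == b:
        return True
    if abs(len(a) - len(b)) > 1:
        return False
    for i in range(min(len(a), len(b))):
        if a[i] != b[i]:
            # Allow one substitution, or one insertion/deletion at position i
            return a[i + 1:] == b[i + 1:] or a[i + 1:] == b[i:] or a[i:] == b[i + 1:]
    return True  # one string is a prefix of the other, off by one character

def looks_spoofed(sender: str) -> bool:
    """Flag senders whose domain is a near-miss of a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED:
        return False  # exact match: genuinely from a trusted domain
    return any(within_one_edit(domain, good) for good in TRUSTED)

print(looks_spoofed("billing@paypa1.com"))   # True: one character off paypal.com
print(looks_spoofed("friend@example.org"))   # False: not near any trusted domain
```

This catches only one narrow spoofing technique; it complements, rather than replaces, the human habit of pausing before you click.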
