Update, Dec. 25, 2024: This story, originally published Dec. 23, now has additional insight into how attackers are using AI, along with expert advice on how to tackle these threats, as well as details of newly published research from the Palo Alto Networks Unit 42 security group on how an innovative adversarial large language model technique could help protect Gmail and other users from attacks involving LLM-driven, at-scale JavaScript malware production and obfuscation.
The single most popular free email platform on the planet is under attack from hackers wielding AI-driven threats. With 2.5 billion users, according to Google’s own figures, Gmail isn’t the only target of such attacks, but it sure is the biggest. Here’s what you need to know and do to protect yourself. Right now.
The AI Threat To Billions Of Gmail Users Explained
Gmail is most certainly not immune to advanced attacks from threat actors looking to exploit the treasure trove of sensitive data to be found in the average email inbox. As I recently reported, there’s an ongoing Google Calendar notification attack that relies upon Gmail to succeed, and Google itself has warned about a second wave of Gmail attacks that include extortion and invoice-based phishing, for example. With Apple also warning iPhone users about spyware attacks, and an infamous ransomware gang rising from the dead and claiming Feb. 3 as its next attack date, now is not the time to be cyber-complacent. Certainly not when McAfee, a giant of the security vendor world, has issued a new warning confirming what I have been saying about the biggest threat facing Gmail users: AI-powered phishing attacks that are frighteningly convincing.
“Scammers are using artificial intelligence to create highly realistic fake videos or audio recordings that pretend to be authentic content from real people,” McAfee warned. “As deepfake technology becomes more accessible and affordable, even people with no prior experience can produce convincing content.” So, just imagine what threat actors, scammers and hackers with prior experience can produce by way of an AI-driven attack: attacks that can get within a cat’s whisker of fooling a seasoned cybersecurity professional into handing over credentials that could have seen his Gmail account hacked, with all the consequences that would carry.
The Convincing AI-Powered Attacks Targeting Gmail Users
In October, a Microsoft security solutions consultant called Sam Mitrovic went viral after I reported how he had very nearly fallen victim to an AI-powered attack. It was so convincing, and so typical of the latest wave of cyberattacks targeting Gmail users, that it is worth recounting briefly again. It started a week before it started; let me explain:
Mitrovic got a notification about a Gmail account recovery attempt, apparently from Google. He ignored this, along with the phone call purporting to come from Google that followed a week later. Then it all happened again. This time, Mitrovic picked up: an American voice, claiming to be from Google support, confirmed that there was suspicious activity on his Gmail account. To cut a long story short (please do go read the original, it is very much worth it), the number the call was coming from appeared, from a quick search, to check out as being Google’s, and the caller was happy to send a confirmation email. However, being a security consultant, Mitrovic spotted something that a less experienced user might well not have done: the “To” field contained a cleverly obfuscated address that wasn’t really a genuine Google one. As I wrote at the time, “It’s almost a certainty that the attacker would have continued to a point where the so-called recovery process would be initiated,” which would have served to capture login credentials and quite possibly a session cookie to enable 2FA bypass as well.
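For readers who would rather make the same check programmatically than by eye, here is a minimal sketch, in Python, of pulling the From, To and Reply-To addresses out of a saved message and flagging any that merely look like Google. The list of genuine domains and the saved-message filename are illustrative assumptions, not a complete or authoritative check.

```python
# A minimal sketch: flag addresses that mention "Google" or "Gmail" but don't
# resolve to a real Google domain. The domain list and the saved-message
# filename are illustrative assumptions, not an exhaustive verification.
from email.parser import BytesParser
from email.utils import getaddresses

GENUINE_GOOGLE_DOMAINS = {"google.com", "gmail.com", "googlemail.com"}

def is_genuine_google_domain(domain: str) -> bool:
    """True if the domain is a known Google domain or a subdomain of one."""
    return any(domain == d or domain.endswith("." + d) for d in GENUINE_GOOGLE_DOMAINS)

def suspicious_addresses(raw_email: bytes) -> list[str]:
    """Return From/To/Reply-To addresses that look like Google but use another domain."""
    msg = BytesParser().parsebytes(raw_email)
    flagged = []
    for header in ("From", "To", "Reply-To"):
        for _display_name, addr in getaddresses(msg.get_all(header, [])):
            if "@" not in addr:
                continue
            domain = addr.rsplit("@", 1)[-1].lower()
            if ("google" in domain or "gmail" in domain) and not is_genuine_google_domain(domain):
                flagged.append(f"{header}: {addr}")
    return flagged

if __name__ == "__main__":
    # "suspect_message.eml" is a hypothetical copy of the message, saved from
    # Gmail's "Show original" view.
    with open("suspect_message.eml", "rb") as f:
        for finding in suspicious_addresses(f.read()):
            print("Suspicious address ->", finding)
```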
Research from Sharp U.K. has also concluded that “AI is being weaponized for cyber attacks,” pointing to six specific attack methodologies that account for much of this weaponization. “While AI offers great benefits in various fields,” the report stated, “its misuse in cyber attacks represents a significant and growing threat.” Those threats were:
- The use of AI in password cracking—AI is taking over from brute-force password cracking strategies for good reason, as machines get better at learning the patterns used in password creation. As the report stated, “AI algorithms can analyze millions of passwords and detect common trends, allowing hackers to generate highly probable password guesses.” It’s far more efficient than bog-standard brute-forcing, allowing hackers to complete this stage of an attack far quicker and at less cost in terms of time and resources. “AI-driven password-cracking tools are also capable of bypassing two-factor authentication,” the report claimed, “by learning from failed attempts and improving their chances of success over time.” A brief illustration of why pattern-aware guessing is so effective appears after this list.
- Cyberattack automation—anything that can be automated will be automated when it comes to determined hackers and cybercriminals looking for ways into your network and data, from vulnerability scanning to attack execution at scale. By deploying AI-powered bots to scan thousands of websites or networks simultaneously, the Sharp U.K. report said, weaknesses can be found and exploited. And that exploitation process can also be automated with the help of AI. “AI-powered ransomware can autonomously encrypt files, determine the best way to demand ransom, and even adjust the ransom amount based on the perceived wealth of the target,” the researchers said.
- Deepfakes—as already mentioned, these are being used in attacks targeting Gmail users. “In one high-profile case,” the report said, “a deepfake audio of a CEO’s voice was used to trick an employee into transferring $243,000 to a fraudster’s account. As deepfake technology continues to evolve, it becomes increasingly difficult for people and organizations to distinguish between real and fake, making this a powerful tool for cyber attackers.”
- Data mining—because AI can enable an attacker to not only collect but also analyze data at scale and at speeds that would have been considered impossible just a couple of years ago, it’s hardly surprising that this is a resource that’s being used and used hard. “By using machine learning algorithms, cybercriminals can sift through public and private databases to uncover sensitive information about their targets,” the report warned.
- Phishing attacks—the methodology most applicable to the Gmail attack threat: the use of AI in constructing and delivering authentic-looking and believable social engineering attacks. “AI tools can analyze social media profiles, past interactions, and email histories,” the report warned, “to craft messages that seem legitimate.”
- The evolution of malware, at scale—AI-powered malware is a thing in its own right, often coming with the ability to adapt behavior in an attempt to evade detection. “AI-enhanced malware can analyze network traffic to identify patterns in cyber security defenses,” the report said, “and alter its strategy to avoid being caught.” Then there’s the small matter of code-changing polymorphism to make it harder for security researchers to recognize and, as we’ll explore in a moment, the use of large language models to create these subtle malware variations at speed and scale.
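Returning to the password-cracking point above: pattern-aware guessing works because most “complex” passwords follow habits that models can learn. The minimal sketch below, which assumes the open-source zxcvbn strength estimator (installable as the zxcvbn Python package) rather than any tool named in the Sharp U.K. report, illustrates how differently a pattern-heavy password and a genuinely random one score.

```python
# A rough illustration of why pattern-aware guessing beats blind brute force:
# the zxcvbn estimator scores passwords by the patterns attackers (and their
# models) have learned, not by raw character-set size.
# Assumes the third-party "zxcvbn" package: pip install zxcvbn
from zxcvbn import zxcvbn

samples = [
    "P@ssw0rd2024!",                 # looks "complex", but follows well-known substitution habits
    "correct horse battery staple",  # a widely reused passphrase
    "x7#Qz!9vLp$2",                  # closer to genuinely random
]

for pw in samples:
    result = zxcvbn(pw)
    print(
        f"{pw!r}: ~{result['guesses']:.0e} guesses, score {result['score']}/4, "
        f"offline fast-hash estimate: {result['crack_times_display']['offline_fast_hashing_1e10_per_second']}"
    )
```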
“The findings of Sharp’s recent study highlights the need for organizations to take a different approach to cybersecurity awareness training,” Lucy Finlay, director of secure behavior and analytics at ThinkCyber Security, said. “This shift is crucial to protecting people from emerging threats like deepfake phishing designed to very effectively manipulate employees.” Finlay also noted that one in three workers claim to feel “confident in spotting cyber threats,” a statistic quoted without a source, and pointed out that it is, in any case, a self-reported metric. I don’t doubt the numbers, to be honest; if anything, my experience suggests the figure could be even higher, as people tend to overestimate their own capabilities. “In reality,” Finlay concluded, “it is likely they would struggle to recognize a sophisticated deepfake scam if confronted with one.”
Unit 42 Researchers Develop New Adversarial Machine Learning Algorithm That Could Help Gmail And Other Users Defend Against AI-Powered Malware
Newly published research from the Unit 42 group at Palo Alto Networks has detailed how an adversarial machine learning algorithm that employs large language models to generate malicious JavaScript variants at scale can be turned to the defenders’ advantage, improving detection of these AI-powered threats in the wild by as much as 10%. One of the big problems facing both users and those who work to defend them against cyber threats is that while “LLMs struggle to create malware from scratch,” Unit 42 researchers Lucas Hu, Shaown Sarker, Billy Melicher, Alex Starov, Wei Wang, Nabeel Mohamed and Tony Li said, “criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect.” It’s relatively easy for defenders to detect existing off-the-shelf obfuscation tools because their fingerprints are well known and their actions already cataloged. LLMs have changed the obfuscation game, swinging the odds in favor of the attackers because, using AI prompts, they can “perform transformations that are much more natural-looking,” the report stated, “which makes detecting this malware more challenging.” The ultimate aim, through multiple layers of such transformations, is to fool malware classifiers into thinking malicious code is, in fact, totally benign.
Unit 42 managed to create an algorithm using LLMs themselves to rewrite malicious JavaScript code, continually applying a number of rewriting steps to fool static analysis models. “At each step,” the researchers said, “we also used a behavior analysis tool to ensure the program’s behavior remained unchanged.” Why is this important? Because, given the availability of generative AI tools for attackers, as we’ve seen in various attacks against Gmail users for example, the scale of malicious code variants and the difficulty in detecting them will continue to grow. The Unit 42 work shows how defenders “can use the same tactics to rewrite malicious code to help generate training data that can improve the robustness of ML models.” Indeed, Unit 42 said that by using the rewriting technique mentioned, it was able to develop a new deep learning-based malicious JavaScript detector, which is currently “running in advanced URL filtering detecting tens of thousands of JavaScript-based attacks each week.”
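Unit 42 has not published its code here, so the following is only a schematic sketch of the loop the researchers describe: an LLM rewrites a malicious JavaScript sample step by step, a behavior-analysis check confirms nothing functional changed, and any variant that now slips past the static classifier is kept as hard training data. The llm_rewrite, behavior_unchanged and classifier_score functions are placeholders for illustration, not real APIs.

```python
# A schematic sketch of the iterative rewriting loop described by Unit 42, not
# their implementation. llm_rewrite(), behavior_unchanged() and classifier_score()
# are placeholders for an LLM call, a behavioral-analysis harness and a static
# ML detector respectively.
from typing import Callable, List

def generate_adversarial_variants(
    sample_js: str,
    llm_rewrite: Callable[[str], str],
    behavior_unchanged: Callable[[str, str], bool],
    classifier_score: Callable[[str], float],
    max_steps: int = 10,
    benign_threshold: float = 0.5,
) -> List[str]:
    """Apply successive LLM rewriting steps to a malicious JavaScript sample.

    Each accepted step must preserve the original behavior; any variant the
    static classifier now scores as benign is collected as a hard training example.
    """
    variants = []
    current = sample_js
    for _ in range(max_steps):
        candidate = llm_rewrite(current)          # e.g. rename variables, split strings, reorder code
        if not behavior_unchanged(sample_js, candidate):
            continue                              # reject rewrites that changed what the code does
        current = candidate
        if classifier_score(current) < benign_threshold:
            variants.append(current)              # the detector was fooled: valuable retraining data
    return variants

# Downstream, per the Unit 42 write-up: fold these variants back into the training
# set, labeled as malicious, to harden the deep learning JavaScript detector.
```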
What Gmail And McAfee Recommend You Do To Mitigate Ongoing AI Attacks
When it comes to mitigation advice, some can be more relevant than others. Take the recent advice from the Federal Bureau of Investigation, of all people, which suggested verifying phishing emails by checking for spelling errors and grammatical inconsistencies. This, as I have pointed out, is very outdated advice and, as such, pretty pointless in the AI-driven threatscape of today.
McAfee’s advice, to “protect yourself by double-checking any unexpected requests through a trusted, alternate method and relying on security tools designed to detect deepfake manipulation,” is much better.
Best of all, however, is the advice from Google itself when it comes to mitigating attacks against Gmail users, which can be broken down into these main points:
- If you receive a warning, avoid clicking on links, downloading attachments or entering personal information. “Google uses advanced security to warn you about dangerous messages, unsafe content or deceptive websites,” Google said. “Even if you don’t receive a warning, don’t click on links, download files or enter personal info in emails, messages, web pages or pop-ups from untrustworthy or unknown providers.”
- Don’t respond to requests for your private info by email, text message or phone call and always protect your personal and financial info.
- If you think that a security email that looks as though it’s from Google might be fake, go directly to myaccount.google.com/notifications. “On that page,” Google said, “you can check your Google Account’s recent security activity.”
- Beware of urgent-sounding messages that appear to come from people you trust, such as a friend, family member or person from work.
- If you click on a link and are asked to enter the password for your Gmail, Google account or another service: don’t. “Instead, go directly to the website that you want to use,” Google said, and that includes your Google/Gmail account login. A short sketch of how to see where an email’s links really point follows this list.
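That last point is easier to follow when you can see where an email’s links actually lead, since the visible link text and the real destination often differ. Here is a minimal sketch, assuming the message’s HTML body has been saved to a local file (the filename is hypothetical), that lists each link’s text alongside its true domain.

```python
# A minimal sketch: list the real destinations behind the links in an HTML email,
# so "click here to secure your account" can be compared against the domain you
# actually expect. The saved filename is an illustrative assumption.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect (link text, href) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []            # finished (text, href) pairs
        self._current_href = None  # href of the anchor we are currently inside, if any
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append(("".join(self._current_text).strip(), self._current_href))
            self._current_href = None

if __name__ == "__main__":
    with open("suspect_message.html", encoding="utf-8") as f:  # hypothetical saved HTML body
        parser = LinkExtractor()
        parser.feed(f.read())
    for text, href in parser.links:
        domain = urlparse(href).netloc.lower()
        print(f"link text: {text!r:40} -> actual domain: {domain}")
```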