The fundamentals of social engineering attacks – manipulating humans – may not have changed much over the years. What is evolving are the vectors – how these methods are deployed. And, as in most industries these days, AI is accelerating that evolution.
In this article, we will explore how these changes will affect your business and how cybersecurity leaders can respond.
Impersonation Attacks: Exploiting Trusted Identities
According to Thomson Reuters, traditional defenses have long struggled against social engineering, which is behind most data breaches. With next-generation AI-powered tools, threat actors can now launch these attacks with unprecedented speed, scale and realism.
Old Method: Silicone Mask
Two fraudsters extracted more than 55 million euros from multiple victims by impersonating French government ministers, wearing a silicone mask of then-foreign minister Jean-Yves Le Drian during video calls. To add a layer of credibility, they also sat in a recreation of his ministerial office, complete with a photo of then-President François Hollande.
The pair reportedly contacted more than 150 prominent figures, seeking money for ransom payments or counter-terrorism operations. The largest single transfer was 47 million euros, after one target was persuaded to act on behalf of two journalists supposedly held in Syria.
New Method: Video Deepfake
Many of the money requests failed – after all, a silicone mask cannot perfectly replicate the look and movement of a person’s skin. AI video technology now offers attackers a way to overcome this limitation.
This was seen in Hong Kong last year, where attackers created a video deepfake of a company’s CFO to carry out a $25 million scam. An employee was invited to a video conference call populated by deepfaked colleagues, where the fake CFO persuaded them to make a series of multi-million-dollar transfers to the fraudsters’ accounts.
Live Calls: Voice Phishing
Voice phishing, often known as vishing, builds on the power of traditional phishing with live audio, persuading people over the phone to hand over information that compromises the organization.
Old Method: Fraudulent Phone Calls
An attacker impersonates an authority figure, or someone from another trustworthy background, and calls the target.
They inject a sense of urgency into the conversation, demanding immediate payment to avoid negative consequences such as losing account access or missing a deadline. Victims lost a median of $1,400 to this type of attack in 2022.
New Method: Voice Cloning
Traditional vishing defenses include advising people not to act on unsolicited requests and to call back using the organization’s official phone number. This mirrors the zero trust principle of “never trust, always verify.” The problem is that this instinct to verify naturally falls away when the voice on the line belongs to someone you know.
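The callback advice can be sketched as a simple policy: the number you dial back must come only from a trusted directory, never from caller ID or from the caller themselves. This is a minimal illustrative sketch – the directory entries and function names are assumptions, not a real API:

```python
from typing import Optional

# Hypothetical directory of verified contact numbers, maintained
# out-of-band (e.g. from contracts or the official website).
OFFICIAL_DIRECTORY = {
    "bank_support": "+1-800-555-0100",
    "it_helpdesk": "+1-800-555-0199",
}

def callback_number(claimed_org: str) -> Optional[str]:
    """Return the number to call back, taken ONLY from the trusted
    directory -- never from caller ID or a number the caller supplies."""
    return OFFICIAL_DIRECTORY.get(claimed_org)

def should_act_on_request(claimed_org: str, caller_supplied_number: str) -> bool:
    """Act only after verifying against the official number. A
    caller-supplied number that differs from the directory entry
    is a red flag."""
    official = callback_number(claimed_org)
    return official is not None and caller_supplied_number == official
```

The design point is that verification depends on data the attacker cannot influence during the call, which is exactly the property that a cloned voice or spoofed caller ID defeats in purely human workflows.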
AI presents a major challenge here: attackers now use voice cloning technology, which often needs only a few seconds of audio of the target speaking. In one case, a mother received a call using a clone of her daughter’s voice, with the attackers demanding a $50,000 ransom.
Phishing Emails
At one point, almost everyone with an email address was a lottery winner – or at least received an email informing them that they had won millions, perhaps in exchange for an advance fee, often citing a king or prince who needed help releasing the funds.
Old Method: Spray and Pray
Over time, these phishing attempts have become far less effective, for several reasons. They are mostly sent in bulk with little personalization and many grammatical errors, and people have grown wise to “419 scams” and their requests to use particular money-transfer services. Other variants, such as fake bank login pages, can often be blocked by safe-browsing protection and spam filters, and people can be educated to check URLs closely.
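The advice to check URLs closely can be automated to a degree: fake login pages frequently use lookalike domains one character away from the real one. This is a minimal sketch under stated assumptions – the allowlist, the similarity threshold, and the use of a simple string-similarity ratio are all illustrative choices, not how any particular filter product works:

```python
from urllib.parse import urlparse
from difflib import SequenceMatcher

# Hypothetical allowlist of domains the organization actually uses.
TRUSTED_DOMAINS = {"example-bank.com", "mail.example.com"}

def extract_domain(url: str) -> str:
    """Pull the hostname out of a URL; empty string if none."""
    return urlparse(url).hostname or ""

def is_suspicious(url: str, threshold: float = 0.8) -> bool:
    """Flag URLs whose domain is NOT on the allowlist but closely
    resembles a trusted domain -- a common fake-login-page trick.
    Unrelated domains are not flagged; they simply are not lookalikes."""
    domain = extract_domain(url)
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )
```

For example, `examp1e-bank.com` (with a digit 1 replacing the l) scores well above the threshold against `example-bank.com` and is flagged, while the legitimate domain passes.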
Even so, phishing remains the most common form of cybercrime. The FBI’s Internet Crime Report 2023 recorded 298,878 complaints of phishing/spoofing. To put that in context, the second-highest category (personal data breaches) registered 55,851 complaints.
New Method: Large-scale Realistic Conversations
Rather than relying on crude templates and basic translation, threat actors can now use large language models (LLMs) as a complete copywriting toolkit. AI also lets them launch these campaigns at scale across huge numbers of recipients, while customization enables more targeted forms of spear phishing.
Additionally, these tools work across multiple languages, opening the door to regions where targets may be less familiar with traditional phishing techniques and what to check for. Harvard Business Review warns that “the entire phishing process can be automated using LLMs, achieving equal or greater success rates while reducing the cost of phishing attacks by more than 95%.”
Reinvented Threats Mean Reinventing Defenses
Cybersecurity has always been an arms race between defense and attack. AI, however, has added a new dimension: when an attacker is trying to manipulate them, targets now have no reliable way of telling what is real and what is fake.
An attacker can exploit trust by impersonating a coworker to get an employee to bypass security protocols, or pose as the company’s CFO and, by creating a sense of urgency and panic, pressure an employee into completing an emergency financial transaction. All the while, the employee has little reason to question whether the person they are talking to is real.
These instincts around trust are a core part of human nature, evolved over thousands of years. Naturally, they cannot evolve at the same speed as malicious actors’ methods or advances in AI. Traditional awareness training, built on online courses and Q&A sessions, was not designed for this AI-powered reality.
So, part of the answer is to let your workforce experience simulated social engineering attacks, especially while technical protections are still catching up.
Your employees may not remember everything you tell them about defending against cyberattacks, but they will remember how a simulated one felt. So when a real attack occurs, they will know how to respond.