![Social engineering attack Social engineering attack](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghHeUeBlQPIEk7bAhBzs6BcCykCAr_3hjM4hWS-nkV5Piu1HLjkTeUsPlDb3NPUSfQ7ltZE_JS6shegXyyDKtDXRrTIlerMDIpx-HuwMieFvy9gVwzZPSC9fZOvHRhVHc84JQByjiKGjrxXLWWqXOT8OpWZtBgFxJBtVy0Kh13u5cj9yNRLpp_AoXZLji9/s728-rw-e365/arsen.png)
Social engineering has long been an effective tactic because of how it focuses on human vulnerabilities. There's no brute-force, "spray and pray" password guessing. No hunting through systems for unpatched software. Instead, it simply relies on manipulating emotions such as trust, fear, and respect for authority, usually with the goal of gaining access to sensitive information or protected systems.

Traditionally that meant researching individual targets manually, which took time and resources. With the advent of AI, however, social engineering attacks can now be launched in new ways, at scale, and often without any psychological expertise. This article covers five ways that AI is powering a new wave of social engineering attacks.
The deepfake audio that may have influenced Slovakia's election
Ahead of Slovakia's parliamentary election in 2023, an audio recording surfaced that appeared to feature well-known journalist Monika Tódová in conversation with Michal Šimečka, leader of the Progressive Slovakia party. The two-minute clip included discussion of buying votes and raising the price of beer.

After spreading online, the conversation was revealed to be fake, with the words spoken by an AI trained on the speakers' voices.
However, the deepfake was released only days before the election, during a pre-election moratorium that made it difficult to debunk. As a result, many wondered whether AI had affected the outcome and contributed to the defeat of Michal Šimečka's Progressive Slovakia party.
The $25 million video call that wasn't
In February 2024, reports emerged of an AI-powered social engineering attack on a finance worker at the multinational firm Arup. The employee had attended an online meeting with people they believed to be the company's CFO and other colleagues.

During the video call, the finance worker was asked to transfer $25 million. Believing the request came from the actual CFO, the worker followed the instructions and completed the transaction.

Initially, they had reportedly received the meeting invitation by email, which made them suspect they were the target of a phishing attack. However, after seeing what appeared to be the CFO and colleagues on the call, their trust was restored.

The only problem was that the worker was the only real person present. Every other attendee was digitally created using deepfake technology, and the money went to the fraudsters' accounts.
The $1 million ransom demand to a mother for her daughter
Many of us have received a random SMS with some variation of "Hi mom/dad, this is my new number. Can you transfer some money to my new account?" When it arrives as a text, it's easy to take a step back and think, "Is this message real?" But what if you get a phone call, hear the person, and recognize their voice? And what if they sound like they've been kidnapped?

That's what happened to a mother who testified before the US Senate in 2023 about the risks of AI-generated crime. She had received a phone call that sounded like her 15-year-old daughter. After answering, she heard the words "Mom, these bad men have me," followed by a male voice threatening to act on a series of terrible threats unless a $1 million ransom was paid.

Overwhelmed by panic, shock, and urgency, the mother believed what she was hearing, until it emerged that the call had been made using an AI clone of her daughter's voice.
The fake Facebook chatbot that harvests usernames and passwords
Facebook advises: "If you see a suspicious email or message claiming to be from Facebook, don't click any links or attachments." Yet social engineering attackers keep using this tactic because it gets results.

They may play on people's fear of losing access to their account, asking them to click a malicious link and appeal a fake ban. They may send a link asking, "Is this you in this video?" to trigger curiosity, concern, and the natural desire to click.

Attackers are now adding another layer to this type of attack in the form of AI-powered chatbots. Users receive an email pretending to be from Facebook and threatening to close their account. After clicking the "Appeal here" button, a chatbot opens and requests their username and password details. The support window is Facebook-branded, and the live interaction comes with a demand to "act now," adding urgency to the attack.
"Lay down your weapons," says the deepfake Zelensky
As the saying goes, the first casualty of war is the truth. With AI, the truth can now be digitally remade. In 2022, a fake video appeared to show Zelensky urging Ukrainians to surrender and stop fighting in the war with Russia. The recording aired on Ukraine 24, a TV station that had been hacked, and was later shared online.
[Image: still from the Zelensky deepfake video, showing a difference in color between the face and neck]
Many media reports highlighted that the video contained a wide range of easy-to-spot errors. Among them: the president's head was too large for his body and set at an unnatural angle.

We're still in the relatively early days of AI in social engineering, but videos like these don't need to be perfect; they often only need to add an element of doubt to what the other side believes. In some situations, that may be all an attacker needs to win.
AI takes social engineering to the next level: how to respond
The big challenge for organizations is that social engineering attacks target emotions and evoke the responses that make us human. After all, we're used to trusting our own eyes and ears, and we want to believe what we're being told. These are all natural instincts that can't simply be deactivated, downgraded, or placed behind a firewall.

Add in the rise of AI, and it's clear that these attacks will keep appearing, evolving, and expanding in volume, variety, and velocity.
That's why employee education needs to focus on controlling and managing the reaction to an unusual or unexpected request: encouraging people to stop and think before completing what they're being asked to do, and showing them what AI-based social engineering attacks actually look like. That way, no matter how fast AI develops, the workforce can act as a first line of defense.

Here's a three-point action plan you can use to get started:

1. Talk to your employees and colleagues about these cases, and train them specifically on deepfake threats, to raise awareness and explore how they would respond.
2. Set up social engineering simulations for your employees, so they can experience common emotional manipulation techniques and recognize their instinctive reactions, just as in a real attack.
3. Review your organization's defenses, account permissions, and role privileges, to understand how a potential threat actor could move through the organization after gaining initial access.
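As a complement to the training above, the "stop and think" step can also be baked into tooling. Below is a minimal, hypothetical Python sketch of a policy check that flags inbound requests matching common social engineering patterns, so they trigger out-of-band verification (such as calling the requester back on a number already on file). All names, keywords, and thresholds are illustrative assumptions, not part of any specific product or of the attacks described in this article.

```python
# Hypothetical sketch: flag requests that combine urgency with a
# sensitive action, or sensitive requests from outside the org, so a
# human verifies them via a known-good channel before acting.
URGENCY_KEYWORDS = {"act now", "immediately", "urgent", "final notice"}
SENSITIVE_ACTIONS = {"wire transfer", "gift cards", "password", "account closure"}

def needs_out_of_band_verification(sender_domain, trusted_domains, body):
    """Return True if the request should be confirmed out-of-band."""
    text = body.lower()
    external = sender_domain.lower() not in trusted_domains
    urgent = any(k in text for k in URGENCY_KEYWORDS)
    sensitive = any(k in text for k in SENSITIVE_ACTIONS)
    # Urgency plus a sensitive action is the classic manipulation combo;
    # any sensitive request from an external domain is also escalated.
    return (urgent and sensitive) or (external and sensitive)

# Example: an "urgent wire transfer" email from a lookalike domain is flagged.
flagged = needs_out_of_band_verification(
    "example-lookalike.com",
    {"corp.example.com"},
    "URGENT: please complete the wire transfer immediately.",
)
print(flagged)  # True
```

A rule this simple would obviously miss a well-crafted deepfake call; the point is the process it enforces, namely that certain request types always get verified through a second, trusted channel, regardless of how convincing the first channel looks or sounds.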