Cybercriminals have a new target: Your mind.

In January 2024, a finance employee at a multinational company based in Hong Kong received an email from the firm’s Chief Financial Officer (CFO) in the UK. The message mentioned the execution of confidential transactions. The request seemed unusual, but a video call would clarify the situation.

The call appeared to include several of the organization’s senior executives, so the employee in Hong Kong proceeded to make 15 payments, totaling HK$200 million (approximately US$25.6 million), to five local bank accounts. The deception was uncovered only when the employee later mentioned the transactions to headquarters.

The CFO had never requested the transfers. The people on the call weren’t even real. Everything had been orchestrated by a cybercriminal.

Hong Kong Police Superintendent Baron Chan Shun-ching told Radio Television Hong Kong that the scammer likely downloaded videos of the executives in advance and used artificial intelligence to add fake voices to the video conference.

This is not an isolated case. Hong Kong police have identified at least 20 cases in which machine learning was used to create deepfakes and defraud victims, as reported by CNN. Experts warn that this trend is only beginning.

“It’s escalating,” says information security expert Todd Wade. “Criminal gangs are setting up call centers around the world and running them like businesses. And they’re growing.”

Technological advancements and the use of artificial intelligence (AI) have enabled scams that evade traditional defenses and target the weakest link in any cybersecurity strategy: humans.

Nick Biasini, head of outreach at Cisco Talos, states that “social engineering is taking up an increasing part of this landscape. We are seeing more and more threat actors who are not necessarily technically sophisticated, but are good at manipulating people.”

The growing sophistication of AI-based threats is another driving factor. Over the past year, the technology has advanced to the point where distinguishing a deepfake from reality is increasingly difficult.

While deepfakes could once be spotted by strange speech patterns or poorly rendered hands, these telltale flaws are quickly being overcome. Even more concerning, AI can now create realistic deepfakes from tiny training sets.

“There are many call centers that call you just to record your voice,” says Luke Secrist, CEO of the ethical hacking firm BuddoBot. “Those calls you receive where no one answers are attempts to record you saying ‘Hello, who is this?’ They only need a snippet.”

According to cybersecurity expert Mark T. Hofmann, “thirty seconds of raw material, whether voice or video, is enough to create deepfake clones of a quality that not even your wife, husband, or children could distinguish from you. No one is safe.”

In many cases, a cybercriminal doesn’t even need to call you. Social media is filled with audio and video material. Additionally, “there are also a ton of data breaches that include personal information like your address, phone number, email, social security number… For social engineering attacks, they can use this information to impersonate someone with authority,” Wade says.

Once a social engineering attack is underway, cybercriminals exploit psychological weaknesses to get what they want. They may convince you that your child has been kidnapped or that your job is at risk unless you do a favor for your boss.

Standard cyber defenses can do little to prevent this. That is why, “when we talk about social engineering and deepfakes, the human firewall is more important than ever,” says Hofmann. “We need to educate people about the new risks, without scaring them.”

A good rule of thumb in the world of deepfakes is to be wary of any unusual requests, no matter who they seem to come from. Hofmann suggests that families agree on a keyword to use in case of doubt.

In corporate environments, “asking security questions or calling back to the real number is good advice,” he adds. “They can steal your voice, but not your knowledge.”

Biasini agrees that the best way to defeat the threat of deepfakes is through education, at least until authentication technology finds a way to distinguish real identities from fake ones. “When we find these types of activities, we make sure they are exposed,” he says.

“One of the most important things that can be done is to bring this knowledge to the masses, because not everyone is aware of these types of threats.”

Source: Cisco.
