A sophisticated operation reveals how synthetic identities generated with artificial intelligence are being used to infiltrate companies through global hiring processes.
Palo Alto Networks’ Unit 42 group has uncovered a concerning infiltration campaign orchestrated by IT workers linked to the North Korean regime, who are using real-time deepfake technology to secure remote jobs at companies around the world. The ultimate goal: to bypass international sanctions and gain privileged access to corporate environments.
A new frontier in digital social engineering
According to the report, these North Korean operators create false identities using real-time face-swapping during online job interviews, even simulating multiple candidates from a single device. Using affordable hardware (in one case, an NVIDIA GeForce RTX 3070 GPU in a five-year-old laptop), a user with no prior experience was able to set up a functional fake identity in just over an hour.
This technological accessibility raises the risk level for organizations, which may face security breaches, data leaks, and even internal sabotage by state-sponsored actors.
Technical weaknesses of deepfakes: an opportunity for detection
While deepfake technology has evolved rapidly, it still exhibits flaws that can be leveraged for detection. The Unit 42 team identified several key indicators, illustrated in the sketch after this list:
- Temporal inconsistencies: fast head movements cause visual artifacts due to issues in tracking facial features.
- Occlusion handling: if a hand passes in front of the face, the system fails to reconstruct the hidden part.
- Lighting issues: sudden changes in light can reveal that the face is generated.
- Lip-sync issues: slight delays between audio and lip movements can indicate manipulation.
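As a rough illustration of the first two indicators, the Python sketch below (not part of Unit 42's tooling) scans a recorded interview for frames where the detected face jumps abruptly or briefly disappears. The input path, the pixel threshold, and the use of OpenCV's stock Haar-cascade face detector are all illustrative assumptions.

```python
# Minimal sketch: flag frames where the detected face jumps abruptly between
# frames (temporal inconsistency) or briefly disappears (possible occlusion
# failure). The input path and thresholds are illustrative assumptions.
import cv2

# Haar-cascade face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("interview.mp4")  # hypothetical recorded interview
prev_center = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) == 0:
        # A face that vanishes for a few frames may indicate the swap failing
        # under occlusion (e.g. a hand passing in front of the face).
        print(f"frame {frame_idx}: no face detected (possible occlusion artifact)")
        prev_center = None
        continue

    x, y, w, h = faces[0]
    center = (x + w / 2, y + h / 2)
    if prev_center is not None:
        jump = ((center[0] - prev_center[0]) ** 2 +
                (center[1] - prev_center[1]) ** 2) ** 0.5
        if jump > 80:  # arbitrary pixel threshold for "fast head movement"
            print(f"frame {frame_idx}: abrupt face jump of {jump:.0f}px")
    prev_center = center

cap.release()
```

A heuristic like this only surfaces frames worth a second look by a human reviewer; production-grade detection would combine it with lip-sync analysis and dedicated liveness-detection tooling.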
Strategies to mitigate risk
Experts recommend a layered defense to protect hiring processes from these emerging threats. Some key recommendations include:
For Human Resources teams:
- Record interviews (with prior consent) for subsequent forensic analysis.
- Incorporate identity verification processes with liveness detection.
- Train interviewers to identify signs of visual manipulation or desynchronization.
For Security teams:
- Monitor suspicious IP addresses or unusual locations.
- Verify the origin of candidate phone numbers (VoIP or virtual lines).
- Block the use of virtual cameras and unauthorized video-manipulation software (see the screening sketch after this list).
- Establish information-sharing alliances with other companies and government agencies.
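As a rough sketch of the phone-number and virtual-camera checks above, the snippet below flags VoIP numbers with the open-source phonenumbers library and lists Linux video devices whose names suggest a virtual camera. The keyword list, the default region, and the Linux-only device path are assumptions for illustration, not recommendations from the report.

```python
# Illustrative screening helpers: flag VoIP/virtual phone numbers and
# virtual-camera devices. Library choice, paths, and keywords are assumptions.
from pathlib import Path

import phonenumbers
from phonenumbers import PhoneNumberType, number_type

VIRTUAL_CAMERA_KEYWORDS = ("virtual", "obs", "v4l2loopback", "manycam")


def is_voip_number(raw_number: str, region: str = "US") -> bool:
    """Return True if the number parses as a VoIP line."""
    parsed = phonenumbers.parse(raw_number, region)
    return number_type(parsed) == PhoneNumberType.VOIP


def find_virtual_cameras() -> list[str]:
    """List video devices whose names suggest a virtual camera (Linux only)."""
    suspicious = []
    for name_file in Path("/sys/class/video4linux").glob("*/name"):
        name = name_file.read_text().strip()
        if any(keyword in name.lower() for keyword in VIRTUAL_CAMERA_KEYWORDS):
            suspicious.append(name)
    return suspicious


if __name__ == "__main__":
    print(is_voip_number("+14155552671"))  # hypothetical candidate number
    print(find_virtual_cameras())
```

On Windows or macOS the camera check would need a platform-specific API, and any such screening should feed a human review rather than trigger an automatic rejection.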
Additionally, experts advise implementing gradual access controls for new employees and strengthening internal policies against identity fraud, including incident response protocols and cybersecurity awareness campaigns.
A new front in cyber warfare
This phenomenon not only represents an advancement in social engineering techniques but also a new avenue for North Korea to evade economic restrictions and obtain funding through fraudulent remote employment. Manipulating interviews using deepfakes adds an unprecedented layer of complexity to the talent selection process.
In an increasingly digital and decentralized environment, HR and cybersecurity departments must collaborate closely to establish advanced defense mechanisms that ensure the integrity of hired personnel. Deepfake technology, once a technological curiosity, is becoming an established attack tool in the hands of state actors. And the best defense begins with knowing how to detect it.
Image source: Daniel Grek Sanchez Castellanos and Bettina Liporazzi.
Source: GBHackers and Palo Alto Networks.