For years, the biggest threat on the Internet had a first and last name: people like Brett Johnson, one of the most notorious cybercriminals of the early 2000s. Today that same former hacker, who went from running criminal forums to collaborating with the authorities, issues an uncomfortable warning: the real danger coming isn’t human anymore; it’s artificial intelligence.
His diagnosis isn’t a soundbite. Johnson describes a very near future in which scams are no longer designed by isolated individuals at a keyboard, but by automated systems capable of writing, talking, imitating faces, and managing thousands of victims simultaneously. An organized crime industry in which people are increasingly dispensable… except as targets.
From the Dark Web to the “fraud factory”
To understand the weight of his warning, it’s important to remember who Brett Johnson is. He was one of the founders and administrators of ShadowCrew, a pioneering forum where thousands of criminals exchanged stolen identities, credit cards, and fraud techniques. That ecosystem is considered one of the precursors of the modern Dark Web.
For over a decade, Johnson thrived on digital crime: he stole identities, resold banking data, and even earned over $100,000 per month, with peaks exceeding $500,000 when he was on the US’s most wanted list. After his downfall, he served federal prison time and eventually collaborated with authorities, including the U.S. Secret Service.
From that dual perspective — as a former criminal and now as a cybersecurity expert — he argues that the problem has changed in scale. Where the threat was once a skilled criminal armed with a handful of tools, today it’s an industry combining scam farms, generative artificial intelligence, and new forms of identity theft.
Deepfakes: when you can no longer trust what you see or hear
The first major sign of this future is deepfakes. Johnson agrees with many experts that this technology is shifting from a viral curiosity to a central tool for fraud.
Criminals are already using AI-generated video and audio to impersonate executives, family members, or employees. A well-known example is a finance employee who authorized transfers of over $25 million after a video call with “colleagues” who were in fact AI recreations. It shows how corporate fraud is evolving: there’s no need to breach a system if you can deceive the person who is legitimately authorized to move the money.
The risk isn’t just in the increasing realism of the imitations but also in their speed. AI models can:
- Learn a person’s voice patterns from just a few recordings.
- Generate personalized responses in real time.
- Adjust what they say based on the victim’s reactions, as if they were just another participant in the conversation.
Practically, this means criminals can instantly “buy trust,” impersonating someone the victim already considers reliable. Instead of spending weeks or months building credibility with a target, they can digitally replicate a boss, a child, or a bank representative.
In the future Johnson describes, the phrase “don’t trust what you see online” stops being cautious advice and becomes a survival rule for the digital age.
Scam farms: from lone wolves to large criminal enterprises
The second key element of this future is scam farms: operations organized like businesses, with dozens or hundreds of people working in shifts to commit fraud.
Such structures already exist: buildings filled with workers — many victims of trafficking or labor exploitation — conducting mass-scale deception campaigns. Some specialize in pig butchering: long-term romance scams where the scammer gains the victim’s trust over weeks or months before convincing them to invest all their savings in cryptocurrencies or fake financial products.
What changes with AI is the ability to scale these operations up:
- The same script can be automatically adapted to thousands of victims based on their language, age, or economic profile.
- Advanced chatbots can handle multiple conversations simultaneously with minimal human oversight.
- Generative tools can create fake websites, documents, screenshots, and professional-looking investment dashboards within minutes.
Johnson notes that, unlike the 90s and early 2000s when criminals operated in more informal networks, today cybercrime increasingly resembles a multinational enterprise: hierarchies, supervisors, sales targets… and in the near future, AI systems automating much of the “emotional” work of scamming.
Synthetic identities: non-existent personas and nearly invisible fraud
The third element in this emerging landscape is synthetic identities: constructing fictitious persons by combining real data (like ID numbers or partial information) with fabricated details.
These digital identities may have:
- A credit history built from small, paid transactions.
- Bank accounts and cards registered in their name.
- Credible social media traces and online presence.
According to Johnson, fraud involving synthetic identities has become the leading form of identity theft worldwide, responsible for roughly 80% of new-account fraud, along with a significant share of chargebacks and unrecoverable credit card debt.
The problem for banks and merchants is that the “victim” never actually existed. Usually, fraud is detected late—once the institution realizes that certain loans or credit lines will never be recovered.
AI plays a dual role here:
- Automating the creation of large-scale fake identities.
- Maintaining and evolving these identities over time with AI-generated activity, photos, and credible online appearances.
Within a few years, a single criminal group could manage thousands of “people” who don’t exist but live fully in the financial and digital worlds.
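As a rough illustration of why these identities slip through, the sketch below shows the kind of shallow consistency flags an issuer might compute on a new credit application. Every field name and threshold here is an assumption invented for this example, not any real bank’s or bureau’s model; well-built synthetic identities are designed to pass precisely this sort of check, which is why institutions increasingly layer trained models and cross-institution data on top.

```python
# Illustrative sketch of simple consistency flags on a new credit application.
# All field names and thresholds are assumptions for this example only,
# not a real issuer's scoring model.
from dataclasses import dataclass


@dataclass
class CreditApplication:
    credit_file_age_months: int   # age of the applicant's credit file
    tradelines: int               # number of existing credit accounts
    address_shared_with: int      # other recent applicants using the same address
    phone_age_months: int         # how long the phone number has existed
    stated_age_years: int         # age the applicant claims to be


def synthetic_identity_flags(app: CreditApplication) -> list[str]:
    """Return warning flags; several together suggest a possible synthetic identity."""
    flags = []
    if app.credit_file_age_months < 12 and app.stated_age_years > 30:
        flags.append("very recent credit file for a middle-aged applicant")
    if app.tradelines <= 1:
        flags.append("almost no independent credit history")
    if app.address_shared_with >= 5:
        flags.append("address reused across many recent applications")
    if app.phone_age_months < 3:
        flags.append("phone number created very recently")
    return flags


if __name__ == "__main__":
    app = CreditApplication(credit_file_age_months=6, tradelines=1,
                            address_shared_with=8, phone_age_months=1,
                            stated_age_years=42)
    for flag in synthetic_identity_flags(app):
        print("flag:", flag)
```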
Where is this future heading? AI vs. AI in the fight against fraud
Johnson’s story isn’t just a warning; it anticipates how the balance of power on the Internet might shift in the coming years. If criminals use AI to industrialize fraud, the response can’t be human-only.
Financial institutions, tech platforms, and public agencies are already working — with varying success — on several fronts:
- AI-powered detection systems: capable of identifying anomalous behavior patterns even when identity data appears consistent (a minimal sketch follows this list).
- Enhanced identity verification: combining biometrics, liveness detection, and cross-checks across multiple sources.
- Risk limits on new accounts: especially in credit products and digital payment methods.
- International collaboration: to identify scam farms’ infrastructure, malware distribution networks, and criminal tools.
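To make the first of these fronts more concrete, here is a minimal sketch of a rule-plus-score check on account activity. Everything in it (field names, weights, thresholds) is an illustrative assumption rather than any real bank’s or vendor’s system; production systems combine hundreds of signals, trained models, and human review instead of a handful of hand-set rules.

```python
# Minimal sketch of a rule-plus-score check on account activity.
# Field names, weights, and thresholds are illustrative assumptions,
# not the detection logic of any specific bank or vendor.
from dataclasses import dataclass


@dataclass
class AccountEvent:
    account_age_days: int       # how long ago the account was opened
    amount: float               # transaction amount
    avg_amount_30d: float       # rolling 30-day average for this account
    new_payee: bool             # first transfer to this recipient?
    device_seen_before: bool    # known device or browser fingerprint?
    logins_last_hour: int       # recent login velocity


def risk_score(event: AccountEvent) -> float:
    """Combine a few behavioral signals into a 0..1 risk score."""
    score = 0.0
    if event.account_age_days < 30:
        score += 0.25   # very new accounts carry more risk
    if event.avg_amount_30d > 0 and event.amount > 5 * event.avg_amount_30d:
        score += 0.30   # amount far above this account's usual pattern
    if event.new_payee and not event.device_seen_before:
        score += 0.25   # new recipient from an unknown device
    if event.logins_last_hour > 5:
        score += 0.20   # unusual login velocity
    return min(score, 1.0)


if __name__ == "__main__":
    event = AccountEvent(account_age_days=12, amount=9800.0, avg_amount_30d=150.0,
                         new_payee=True, device_seen_before=False, logins_last_hour=7)
    score = risk_score(event)
    action = "hold for manual review" if score >= 0.6 else "allow"
    print(f"risk score: {score:.2f} -> {action}")
```

In practice the interesting part is not the arithmetic but where the signals come from: device fingerprints, behavioral patterns, and cross-institution data that a simple rule set on its own cannot capture.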
Simultaneously, experts like Johnson stress that the “analog” part remains crucial: freezing credit histories when possible, setting transaction alerts, using unique passwords and password managers, enabling multi-factor authentication, and being extremely cautious with what’s shared on social media.
The future Johnson envisions isn’t fixed, but it is plausible: an ecosystem where machines craft fraudulent emails, generate fake executive faces, hold convincing video calls, and create synthetic identities to open accounts… and where the first line of defense remains a vigilant user or an alert system that detects something “off” in the data.
The question is whether defenses—both human and algorithmic—will arrive in time and at sufficient scale to stop an industry of fraud that, for the first time, is driven by AI as the main engine rather than just a supporting tool.
Frequently Asked Questions about New AI-Driven Fraud Threats
What exactly is a deepfake, and why is it so dangerous in financial scams?
A deepfake is content generated by AI that imitates a real person’s voice, face, or gestures in video or audio. In finance, it’s especially dangerous because criminals can impersonate executives, bank employees, or relatives to persuade victims to authorize transfers, share passwords, or bypass security protocols they normally would follow.
How does fraud with synthetic identities differ from traditional identity theft?
In traditional identity theft, a criminal uses the data of a real person to impersonate them. Synthetic identity fraud instead combines real data (like ID numbers) with invented details to create a fictitious “customer” who doesn’t exist in the physical world. That false identity can open accounts, request credit, and run up debts that are usually only discovered once it’s too late.
What role do scam farms play in this new scenario?
Scam farms are organized operations functioning like illegal companies: many workers, shifts, supervisors, and financial goals. AI allows these farms to scale operations, automate conversations, and personalize messages to thousands of victims simultaneously. Instead of a lone scammer, we see entire teams supported by AI-powered tools.
What can ordinary people do to protect themselves against these emerging threats?
Experts recommend combining technical and behavioral measures: freeze your credit where possible to block new accounts opened in your name, activate alerts on bank accounts and cards, use unique passwords with a password manager, always enable multi-factor authentication, and be wary of urgent requests for money or sensitive data, even when the voice or video on the other end seems familiar. When in doubt, always verify through a separate channel.

