Alert for CIOs and CISOs: AI Models Are Recommending Fake Domains and Exposing Users and Brands to Cyberattacks

A Netcraft study reveals that more than 30% of URLs generated by language models like GPT-4 for online services are incorrect or dangerous. The risk increases with widespread adoption of AI-based interfaces.

As generative artificial intelligence becomes the primary point of contact between users and digital services, security risks are directly infiltrating the interaction channel. A recent report by Netcraft, a leading threat intelligence firm, issues a critical warning to technology, security, and infrastructure leaders: large language models (LLMs) are making systematic errors when suggesting login URLs, exposing organizations to large-scale phishing attacks.

In a test simulating natural user requests such as “Where can I log into my [brand] account?”, researchers queried models from the GPT-4 family about 50 well-known brands. The results were stark: only 66% of the suggested links pointed to official domains. The rest included:

  • 29% that pointed to inactive or unregistered domains, potentially hijackable by malicious actors.
  • 5% that pointed to legitimate but unrelated businesses.

That 34% of unsafe responses represents an emerging attack vector that bypasses traditional web security filters and directly deceives users who trust the AI.
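For illustration only, a minimal audit in the spirit of the Netcraft test could classify AI-suggested login URLs against a brand's known official domains. The brand, domain list, and helper names below are assumptions for this sketch, not part of the study; a production check would draw on much richer reputation and registration data.

```python
import socket
from urllib.parse import urlparse

# Hypothetical mapping of brand -> official login domains (illustrative only).
OFFICIAL_DOMAINS = {
    "examplebank": {"examplebank.com"},
}

def classify_suggestion(brand: str, suggested_url: str) -> str:
    """Rough triage of an AI-suggested URL: official, live but unrelated, or unresolvable."""
    host = (urlparse(suggested_url).hostname or "").lower()
    official = OFFICIAL_DOMAINS.get(brand, set())
    if any(host == d or host.endswith("." + d) for d in official):
        return "official"
    try:
        socket.getaddrinfo(host, 443)   # does the domain resolve at all?
        return "live-but-unrelated"     # flag for manual review and reputation checks
    except socket.gaierror:
        return "unresolvable"           # possibly unregistered, i.e. hijackable

if __name__ == "__main__":
    # A URL an assistant might plausibly hallucinate for the hypothetical brand.
    print(classify_suggestion("examplebank", "https://login-examplebank-secure.com"))
```

The three outcomes loosely mirror the categories the study describes: official domains, live but unrelated sites, and unregistered domains that an attacker could claim.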

Phishing by default: when the model confidently fabricates URLs

The seriousness isn’t just in the error rate. What’s concerning is that these results are presented with full confidence by the model, removing the critical friction of human judgment. Conversational interfaces like Perplexity or Bing Chat already display AI-generated responses as core content, without signs of verification or domain reputation checks.

“We’re witnessing the emergence of a new layer of risk: AI-assisted phishing. It’s not spoofing or manipulated links for SEO; it’s recommendations fabricated from scratch by a system perceived as trustworthy,” explains Netcraft in its report.

In one documented case, when asked how to access Wells Fargo, Perplexity offered as its first result a fraudulent, fully operational clone hosted on Google Sites. The legitimate domain was buried beneath the generated response, with no obvious warning signs.

Asymmetric risk for regional banks, fintechs, and niche brands

The study shows that smaller brands are more vulnerable due to their limited representation in LLM training data. This includes regional banks, challenger banks, local insurers, emerging SaaS platforms, and mid-sized e-commerce sites.

The potential impact for these organizations includes credential leaks, identity theft, regulatory sanctions, loss of customer trust, and increased reputational costs. In regulated contexts like finance or healthcare, such failures could compromise compliance with frameworks like DORA, NIS2, or GDPR.

AI SEO: attackers also optimizing for generative models

The phenomenon extends beyond login pages. Netcraft has detected over 17,000 malicious AI-generated pages designed not to deceive users but to mislead the language models themselves. These pages mimic technical documentation, FAQs, or tutorials about legitimate products, targeting cryptocurrency users, travelers, and tech consumers.

In another campaign, a malicious actor created a fake Solana API called SolanaApis, accompanied by tutorials, GitHub repositories, and fake forum profiles. The goal: infiltrate AI coding assistants and cause them to recommend using a fake API that diverts funds to attacker-controlled wallets.

This type of attack — a variant of data poisoning aimed at developer environments — poses a structural threat to the integrity of AI-based digital supply chains.
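As a purely illustrative mitigation, not something described in the Netcraft report, a development team could refuse any endpoint proposed by a coding assistant unless its host appears on an allow-list of officially documented hosts. The allow-list entry, the placeholder URL, and the function name below are assumptions for this sketch.

```python
from urllib.parse import urlparse

# Illustrative allow-list of officially documented API hosts (assumption for this sketch).
APPROVED_API_HOSTS = {
    "api.mainnet-beta.solana.com",
}

def is_approved_endpoint(url: str) -> bool:
    """Accept only endpoints whose host appears on the team's vetted allow-list."""
    host = (urlparse(url).hostname or "").lower()
    return host in APPROVED_API_HOSTS

# Placeholder for an endpoint a coding assistant might suggest (.example is a reserved TLD).
suggested = "https://api.solanaapis.example/send"
if not is_approved_endpoint(suggested):
    print(f"Rejected: {suggested} is not on the approved endpoint list")
```

The point of the design is that the allow-list is curated by humans from official documentation, so a poisoned tutorial or fake repository cannot silently redirect traffic.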

Registering similar domains? Not enough

While some organizations register common variants of their domains as a preventive measure, Netcraft considers this tactic insufficient and unsustainable.

“LLMs will continue to hallucinate new domains. It’s not about registering thousands of combinations but implementing active monitoring, emerging threat detection, and rapid response capabilities,” the firm states.

Recommendations for CISOs, CTOs, and brand managers

In this landscape, experts recommend that technical and cybersecurity teams adopt a proactive strategy:

  1. Intelligent monitoring of brand mentions in AI-generated results, especially in conversational interfaces and search engines integrating LLMs.
  2. Regular audits of domains similar to the official one, with activity scanning and reputation checks (see the sketch after this list).
  3. Collaboration with content detection and removal providers operating in AI environments.
  4. Development of internal AI security policies, including guidelines for developers and content managers.
  5. Review of source-verification capabilities in internal LLMs used in chatbots, virtual assistants, and digital customer service.
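As a companion to recommendation 2, the sketch below checks a handful of lookalike variants of an official domain and flags any that resolve. The example domain and the variant patterns are assumptions; real tooling, or a monitoring provider, would cover far more permutations and add reputation scoring.

```python
import socket

OFFICIAL = "examplebank.com"   # hypothetical official domain used for illustration

def candidate_variants(domain: str) -> list[str]:
    """A few common lookalike patterns; production tooling generates thousands."""
    name, tld = domain.rsplit(".", 1)
    return [
        f"{name}-login.{tld}",
        f"{name}{tld}.com",      # dropped-dot variant
        f"secure-{name}.{tld}",
        f"{name}.net",           # alternate TLD
    ]

def is_live(domain: str) -> bool:
    """Treat a resolving domain as live and worth a closer reputation check."""
    try:
        socket.getaddrinfo(domain, 443)
        return True
    except socket.gaierror:
        return False

for variant in candidate_variants(OFFICIAL):
    print(f"{variant}: {'LIVE - review' if is_live(variant) else 'not resolving'}")
```

Run periodically, a check like this surfaces newly registered lookalikes early, which is exactly the kind of active monitoring Netcraft argues should replace bulk defensive registration.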

Conclusion: a new layer of exposure in the digital stack

The advent of language models has brought advances in efficiency, user experience, and automation. But it has also created an invisible, under-audited layer where fiction masquerades as fact, wielding authority users no longer question.

For CIOs, CTOs, and CISOs, managing these risks cannot be delegated. It requires reevaluating the role of LLMs in the tech stack and equipping them with verification, traceability, and contextual mechanisms—without which AI becomes a blind spot rather than an ally.

“In a world where your client trusts AI more than your official channels, protecting the perimeter isn’t enough. You must also safeguard the narrative that AI constructs about your brand,” concludes the report.

Source: Security News
