Lovable: When Artificial Intelligence That Builds Websites Becomes a Cybercrime Tool

What began as a tool designed to democratize web creation has become a new front in cybersecurity battles. Lovable, an AI-powered website generator, allows anyone—without technical skills—to create an attractive, functional website in minutes. However, this same ease of use is being exploited by cybercriminals, who use it to run phishing campaigns, commit fraud, and distribute malware with a level of professionalism that would have seemed unthinkable a few years ago.

According to a report published by cybersecurity firm Proofpoint and corroborated by Guardio Labs, abuse of Lovable has surged in recent months. Attackers aren't just creating fake websites; they're using them to impersonate companies such as Microsoft, UPS, and cryptocurrency platforms. The result: thousands of users deceived, and a lower barrier to entry for cybercrime.

The appeal of Lovable is clear: just describe what kind of website you want—such as a corporate page for a restaurant or an online store—and the AI generates the design, the copy, and even the images needed. The service includes hosting, security certificates, and a professional finish that's hard to distinguish from a custom-made site.

This is great for small businesses, freelancers, or users launching personal projects quickly. The problem arises when criminals realize they can use the same tools to create fraudulent sites that look official and deploy them globally within minutes.

Proofpoint recently documented four large-scale campaigns where Lovable was the main tool used to deceive users and companies.

1. Phishing against Microsoft and Okta
Thousands of emails containing a link reached employees. Clicking it led to a Lovable-hosted page protected by a CAPTCHA, which made it appear legitimate. After solving it, users were redirected to fake Microsoft Azure or Okta portals that asked for their corporate usernames and passwords. In reality, attackers were collecting credentials, MFA codes, and session cookies, enabling them to access company systems as if they were legitimate employees. The campaign targeted over 5,000 organizations.

2. UPS Branded Package Scam
In another operation, attackers impersonated UPS, sending around 3,500 emails asking recipients to provide personal and credit card details to “manage a pending shipment.” The Lovable-designed portal requested SMS verification codes, which were forwarded to a Telegram channel controlled by scammers. With this information, they could siphon funds directly from victims’ bank accounts.

3. Cryptocurrency Targeting: Attack on Aave
More than 10,000 emails sent via SendGrid directed users to Lovable sites mimicking the DeFi platform Aave. The trick was simple but effective: persuade users to connect their digital wallets. Once connected, attackers could drain funds in seconds. This reflects how digital crime follows the money—first credit cards, then PayPal, and now crypto assets.

4. Malware Disguised as Invoices
In this campaign, the goal was to install a remote access Trojan (RAT) on victims’ computers. Scammers sent links to fake billing portals created with Lovable, from which users could download ZIP files from Dropbox. The package contained a legitimate executable and a malicious file that installed zgRAT, giving attackers complete control over the device—allowing them to spy, steal files, or spread more malware.
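The campaign above relied on victims extracting a ZIP whose contents they never inspected. As a purely illustrative sketch (the extension list and file names are my own assumptions, not from the Proofpoint report), a defender or cautious user could list an archive's members and flag executable-looking files before opening anything:

```python
import io
import zipfile

# Extensions commonly used to deliver executables on Windows.
# Illustrative and far from exhaustive — real attachment scanning
# inspects file contents, not just names.
RISKY_EXTENSIONS = {".exe", ".dll", ".scr", ".js", ".vbs", ".bat", ".cmd"}

def flag_risky_members(zip_bytes: bytes) -> list[str]:
    """Return archive members whose extension suggests executable content."""
    flagged = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            lower = name.lower()
            if any(lower.endswith(ext) for ext in RISKY_EXTENSIONS):
                flagged.append(name)
    return flagged

# Build a small in-memory archive to demonstrate: a harmless document
# next to a classic double-extension lure.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("invoice.pdf", b"harmless")
    z.writestr("invoice.pdf.exe", b"not harmless")

print(flag_risky_members(buf.getvalue()))  # ['invoice.pdf.exe']
```

A name-based check like this is trivially evaded, which is precisely why the mixed "legitimate executable plus malicious file" packaging described above is effective against casual inspection.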


Why is Lovable so attractive to hackers?
The case of Lovable isn't isolated; any fast, simple web creation platform could be misused in the same way. But Lovable's combination of features makes it particularly attractive to attackers:

  • Speed: Full websites ready in minutes.
  • Professional look: Modern, convincing templates.
  • Secure infrastructure: HTTPS certificates and hosting on trusted clouds make fraudulent sites harder to detect.
  • Low or no cost: Attackers can easily create multiple accounts.
  • Difficult to detect: An HTTPS site with appealing design appears more trustworthy than a decade-old amateur site.

Limitations of Lovable’s current safeguards
Lovable has implemented real-time detection of malicious sites and runs daily automatic scans that remove fraudulent projects. The company claims to have taken down over 300 illegal websites and to block around 1,000 suspicious projects daily. However, researchers at Guardio Labs demonstrated that it is still possible to publish fraudulent sites undetected: in one experiment, they created a fake retail site and published it without restriction.

While the company pledges ongoing policy reinforcement, the issue runs deeper: each new AI-powered tool that simplifies legitimate processes can also be exploited maliciously.


Impact on users and companies
Lovable’s case illustrates how the line between helpful and harmful blurs with AI advancements.

  • For users: Overconfidence in web appearance makes it harder to distinguish legitimate sites from fake ones.
  • For companies: Employees may fall victim to attacks mimicking their vendors or internal tools.
  • For society: The barrier to entering cybercrime drops; sophisticated scams no longer require years of expertise.

In essence, technology that fosters innovation also facilitates deception.


What can users do?
Although platform providers like Lovable bear responsibility, users aren’t powerless. Key precautions include:

  • Always verifying URLs before entering credentials or payment data.
  • Remaining suspicious of urgent emails requesting personal info.
  • Not downloading attachments from suspicious sources—regardless of how legitimate they appear.
  • Using strong, unique passwords and enabling multi-factor authentication.
  • Keeping software and antivirus programs updated to mitigate risks from malicious downloads.
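The first precaution—verifying URLs—is where the campaigns above succeed most often, because lookalike hostnames pass a quick glance. As a minimal sketch (the allowlist of trusted domains is a hypothetical example, not a real configuration), the check amounts to comparing the link's hostname against domains you actually expect, including their subdomains:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains the user genuinely intends to log in to.
TRUSTED_DOMAINS = {"microsoft.com", "ups.com", "aave.com"}

def is_trusted(url: str) -> bool:
    """True only if the URL's hostname is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted("https://login.microsoft.com/"))             # True
print(is_trusted("https://microsoft.com.secure-login.app/"))  # False: lookalike prefix
```

The second URL illustrates the classic trick these campaigns exploit: the trusted brand appears at the *start* of the hostname, but the registrable domain—the part that matters—belongs to the attacker.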

Looking ahead: what’s next?
Lovable’s experience serves as an early warning. As more AI-driven content creation tools emerge, cybercriminals will have even more sophisticated weapons. Experts predict the rise of fully automated malicious campaigns—from emails to websites and chat messages—potentially integrating deepfake voice and video to increase credibility.

If discerning fake websites is already challenging, what will happen when AI generates convincing phone or video calls that seem legitimate?


Conclusion
Lovable was created to make online presence easier for everyone. But its misuse highlights a recurring dilemma with AI: every useful technology can become dangerous if security isn’t properly managed. The solution lies in balancing innovation with safeguards. Meanwhile, both companies and users will need to learn to navigate an internet where distinguishing truth from deceit grows ever harder.
