Meta and Artificial Intelligence: Manipulated Consent in Times of Digital Opacity

As artificial intelligence advances by leaps and bounds, so does the ambition of major tech companies to feed their models with as much data as possible. The concerning aspect is not the technology itself, but the way companies like Meta—owner of Facebook, Instagram, and WhatsApp—instrumentalize users and dilute their rights with strategies that flirt with, if not cross, the boundaries of digital ethics.

The Mirage of Consent

Meta has begun informing its European users that it will use their public data to train its generative AI models, such as Meta AI. It does this by invoking “legitimate interest,” one of the legal bases provided by the General Data Protection Regulation (GDPR). But there’s a problem: consent is not real unless it is informed, freely given, and explicit.

Instead of offering a direct question like, “Do you allow us to use your public posts to train our AI? Yes / No,” Meta chooses a more convoluted route: an email that leads to a hard-to-find form with unnecessary steps, where you must actively oppose it. To make matters worse, that opposition is not immediate: you have to confirm later via email. This is a deliberately discouraging design.

This is Not Privacy, It’s Consent Engineering

This practice is an example of what is known as a dark pattern, a design strategy aimed at confusing or making it difficult for the user to exercise their rights. This is not new; major platforms have been refining these mechanisms for years so that, in practice, consent seems to be given… even when it hasn’t truly been.

Although Meta claims to comply with the law, the reality is that it is circumventing the spirit of the GDPR, which protects informational self-determination. Using personal data to train AI models is not a technical detail; it is a crucial decision and should require clear, active, and transparent consent—not one concealed under administrative pretexts.

A Goldmine of Invisible Data

The company states that it will use posts, comments, and any other public information shared since each account was created. There are no filters or time limits. If you have been using its platforms for years, you have likely already handed over, unknowingly, a hefty amount of data that Meta will use to train systems it will then monetize.

Yes, you can modify the audience of your posts. But that does not invalidate the fact that most users are not fully aware of how their content is being used, nor do they know how to stop it.

Who Controls the Controller?

Another underlying issue is the lack of guarantees and technical transparency. Even if you oppose it, how can you verify that Meta has stopped using your data? Is there an independent audit? Are there accessible mechanisms to verify that your rights have been respected? Recent history—from Cambridge Analytica to the leaks of internal documents—demonstrates that trust cannot be taken for granted.

The Solution is Simple… But Uncomfortable for Them

Meta and other platforms could resolve this issue with a simple dialog box in their app:
👉 “Do you want to allow the use of your data to train our AI? Yes / No.”

But they do not do this because many users would say no. And that would directly affect their business. That’s why they choose a formula where, even though they claim to respect your decision, the entire system is designed so that you either do not make it, or make it too late.
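The gap between the two designs can be made concrete in a few lines. This is a minimal sketch with hypothetical names, not Meta’s actual system: under an opt-in design, a user’s data is excluded unless they have explicitly said yes; under an opt-out design, it is included unless they have completed the objection process, so silence counts as consent.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    name: str
    # "yes", "no", or None (the user was never asked, or never answered)
    consent_answer: Optional[str] = None


def eligible_opt_in(user: User) -> bool:
    """Opt-in: data is used only after an explicit 'yes'."""
    return user.consent_answer == "yes"


def eligible_opt_out(user: User) -> bool:
    """Opt-out: data is used unless the user completed the objection flow."""
    return user.consent_answer != "no"


silent_user = User("alice")  # never answered the question
print(eligible_opt_in(silent_user))   # False: silence is not consent
print(eligible_opt_out(silent_user))  # True: silence is treated as consent
```

The entire controversy fits in that last pair of lines: for a user who never answered, the two defaults give opposite results, which is exactly why the choice of default is a business decision rather than a technical detail.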

Innovation or Abuse of Position?

Technological innovation does not justify an abuse of power. If Meta wants to continue leading the development of artificial intelligence, it must do so with clear rules, without hiding its intentions or making it difficult for users to access their fundamental rights.

This is not about being against the advancement of AI, but about ensuring that this advancement does not rely on covert data mining and manipulative practices. Consent cannot be a legal illusion. It must be a free, informed, and respected act.
