Machines don’t think, they just calculate: why AI still doesn’t understand the world the way you do

A new study by AI experts uncovers a troubling truth: models like ChatGPT or Gemini don’t actually understand what they say. They just predict words. And that changes everything.

AI may have already written you a letter, answered a medical question, or even given you relationship advice. You might be surprised by its clarity, its human tone, or its seeming wisdom. But one question lingers: does it truly understand what it’s saying?

According to a recent study authored by leading researchers including Dan Jurafsky, Yann LeCun, and Ravid Shwartz-Ziv, the answer is straightforward: no. Large language models such as ChatGPT, Gemini, or Claude don’t think. They don’t reason. They only compress information.

🤖 AI: impressive on the surface, empty at its core

The research, titled “From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning”, compares how humans organize knowledge versus how language models do. The unsettling conclusion: while these systems can generate coherent and even brilliant answers, they lack genuine understanding.

While humans grapple with contradictory ideas, intuition, context, and ambiguity, AIs simplify the world based on statistical patterns. They excel at predicting what word comes next, but not at understanding why a duck and a penguin—both birds—don’t fly the same way or live in the same environment.
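To make the “predicting what word comes next” idea concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small public "gpt2" checkpoint (neither is mentioned in the study; ChatGPT, Gemini, and Claude are far larger, but the principle is the same):

```python
# Minimal sketch of next-word prediction with a small causal language model.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Penguins are birds, but they cannot"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire "opinion" is a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.3f}")
```

The output is simply a ranked list of likely continuations, derived from statistical patterns in the training text; nothing in it represents what a penguin actually is.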

Digital communicator Corti (@josek_net) explained it clearly in a viral X post:

“LLMs are optimized for compression, not understanding. They lack intuition. They don’t live in ambiguity like humans do.”

🧠 Humans: chaotic but brilliant

By contrast, humans aren’t computationally efficient. Our minds are a whirlwind of fuzzy associations, memories, emotions, and exceptions. We understand that not all birds fly, that a word can carry a double meaning, and that a glance can say everything without words.

This mental messiness, as the study authors note, is precisely what makes us unique. While AIs try to reduce everything to predictability, humans embrace nuance, exceptions, and unexpected details.

The study demonstrates this with metrics: LLMs form broad categories similar to human ones but fail at discerning what’s typical or atypical. They can’t prioritize context. They don’t understand what “normal” or “rare” means in a conversation because they lack experience, intention, and emotions.
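As a rough illustration of what “typical” versus “atypical” means here, the toy sketch below ranks category members by how close their embeddings sit to a category prototype and compares that ranking with human ratings. This is not the study’s actual methodology; the words, vectors, and ratings are hypothetical stand-ins, used only to show the kind of comparison involved.

```python
# Toy illustration of "typicality": how close is each bird to the category prototype?
# NOT the study's methodology; the embeddings and human ratings below are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
words = ["robin", "sparrow", "duck", "penguin", "ostrich"]

# In a real analysis these would be word embeddings taken from an LLM.
embeddings = {w: rng.normal(size=8) for w in words}

# Category prototype: the mean embedding of all members of "bird".
prototype = np.mean(list(embeddings.values()), axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

model_typicality = [cosine(embeddings[w], prototype) for w in words]

# Hypothetical human typicality ratings (1 = very atypical bird, 7 = very typical).
human_typicality = [6.8, 6.5, 5.2, 2.9, 2.4]

# A low correlation would mean the model's notion of "typical" diverges from ours.
rho, _ = spearmanr(model_typicality, human_typicality)
print(f"Spearman correlation, model vs. human typicality: {rho:.2f}")
```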

🎯 What does this mean for you?

It means an AI can write you a poem but doesn’t understand love. It can describe an existential crisis but has never felt anxiety. It can give you a recipe but can’t taste the dish. It imitates, but doesn’t live.

And that’s important, especially if you work with or rely on AI for key decisions. As Corti warns:

“Don’t fall into the trap of thinking a good output equals understanding.”

🌍 A lesson in humility… and humanity

This research not only puts limits on AI but also celebrates human richness. Our imperfect minds aren’t a design flaw—they’re an evolutionary marvel.

Not having a clear answer. Changing opinions. Perceiving the invisible in a conversation. Intuiting when someone’s lying. Crying without knowing why. All of this can’t fit into an algorithm. It can’t be compressed.

That’s why, in a world where algorithms learn quickly, our advantage may not be knowing more but understanding better. And for now, that’s still where we win.

📖 You can read the full study here: arxiv.org/abs/2505.17117

Machines calculate. Humans understand. And that difference, though technical, is profoundly human.

Source: Noticias inteligencia artificial
