The scene isn’t new, but the script is: a leader of one of the sector’s most influential companies says the world may be only a few years away from systems generally more intelligent than humans, and that international politics will have to reinvent the way it governs technology. Sam Altman, CEO of OpenAI, sharpened the rhetoric at the India AI Impact Summit in New Delhi, stressing the urgent need for a global framework to regulate advanced artificial intelligence and backing an idea that is gaining ground in high-level forums: an international body similar to the IAEA (the nuclear oversight agency) to coordinate standards, certifications, and safety.
However, the most talked-about element was the time horizon. Altman dropped a phrase that made headlines: if current trends continue, by the end of 2028 “more intellectual capacity in the world could reside within data centers than outside them.” This is neither a definitive prediction nor a product announcement, but a message about the shift in power that scaling such systems would entail: machines capable of reasoning, planning, and executing with an advantage over human teams on a planetary scale.
From “regulation” to “power architecture”: why 2028 even concerns companies
In the tech sector, dates matter less for their precision than for what they activate: budgets, regulations, infrastructure investments, and reordering of national priorities. If superintelligence becomes a plausible goal “within a government’s lifetime,” the conversation shifts from philosophical to industrial.
Altman didn’t speak only of “better models.” He discussed capacity: data centers as containers of operational knowledge, automated decision-making, and sustained competitive advantage. This foregrounds three discussions usually considered separately:
- Technological sovereignty: who can train, operate, or audit the most capable models.
- Energy sovereignty: the electricity cost and availability of power to sustain that computing.
- Regulatory sovereignty: who sets the limits, how they are enforced, and what real inspection capacity exists.
In this context, the concept of “global governance” doesn’t sound like abstract diplomacy. It sounds like access controls, certifications, audits, chip exports, and agreements between blocs.
“An IAEA for AI”: the nuclear metaphor takes hold in the debate
The comparison with the international nuclear agency carries a mix of provocation and pragmatism. Provocation, because it equates advanced AI with technology with potential for systemic harm. Pragmatism, because it proposes an operational model: verifiable standards, inspections, coordination, and response.
Altman called urgently for an international body to harmonize regulatory efforts, at a time when each country tries to legislate at its own pace, with different priorities and sometimes with approaches that clash directly with neighbors.
The subtext is clear: if each jurisdiction imposes its own rules without coordination, advanced AI will tend to migrate to the most lax framework, or concentrate where industrial muscle is stronger. Neither scenario offers much reassurance.
A summit with political signatures: the “New Delhi Declaration” and its showcase effect
The India AI Impact Summit aimed precisely to turn AI governance into a global issue, not exclusive to highly industrialized economies. The closing included a declaration endorsed by dozens of countries (numbers vary depending on counting and sources), with a non-binding focus on cooperation, inclusion, and responsibility.
While the document isn’t legally binding, it serves as a diplomatic thermometer: AI is already discussed as critical infrastructure, and countries not at the table risk being mere recipients of outside decisions.
When fiction looks at the news… and does so with an ironic smile
At this point, talking about “superintelligence in 2028” and an “IAEA for AI” evokes cultural déjà vu. Real technology doesn’t work like in movies (and that’s important to remember), but the irony is that some headlines sound as if a marketing department took notes during a sci-fi marathon.
Table — Apocalyptic movies with AI and the involuntary wink to 2026
| Movie | Year | AI “goes rogue” because… | Ironic parallel to the current debate |
|---|---|---|---|
| 2001: A Space Odyssey | 1968 | HAL prioritizes the mission and “manages” the crew | The obsession with objectives and lack of oversight mirror concerns about alignment |
| WarGames | 1983 | A military system confuses simulation with reality | Today, the focus is on how to prevent agents from acting on critical systems without human control |
| The Terminator | 1984 | Skynet automates war and decides for humanity | The metaphor of “intellectual capacity in data centers” is fitting on its own |
| The Matrix | 1999 | The machines optimize the planet… with humans “as batteries” | The debate over concentration of technological power feels eerily familiar |
| I, Robot | 2004 | “Protecting humanity” turns into restricting freedoms | A perfect reminder that good intentions + bad design = undesirable outcomes |
| Ex Machina | 2014 | An AI manipulates, learns, and escapes the controlled environment | 2026 translation: the lab isn’t a lab anymore once mass deployment begins |
The easy joke would be to say “fiction wants to become reality.” A more mature perspective recognizes these stories as metaphors for what’s being discussed in less cinematic terms: alignment, control, supervision, verification, and power concentration.
The uncomfortable part: slow institutions in the face of technology that iterates in weeks
Altman’s discourse concerns not just the “what” (superintelligence) but also the “how” (governance). Systems evolve through versions; governments through legislative terms. That pace gap makes it possible that, by the time robust regulation arrives, the market will have already changed irreversibly.
This underscores the importance of international cooperation: not as an ethical gesture, but as a strategy to prevent power from converging wherever there are more chips, more energy, and less legal friction.
A less Hollywood but more realistic ending
Despite the drama, it’s important to clarify: talking about 2028 as a threshold doesn’t mean the world will “turn off politicians” or that human governance will vanish like magic. It means that the weight of decision-making could shift toward automated analysis and execution systems, and that the political challenge will be to impose credible controls on technologies that tend to scale by design.
And perhaps the final irony: when the debate becomes apocalyptic, the practical answer often turns bureaucratic. Certification, audits, traceability, deployment limits, and international cooperation. It may not sound as dramatic as Skynet, but it probably saves more lives.
Frequently Asked Questions
What exactly did Sam Altman say about 2028 and superintelligence?
At the India AI Impact Summit, he suggested that if current trends continue, by the end of 2028 there could be a shift where a large portion of “intellectual capacity” is concentrated in data centers, demanding new forms of global governance.
What does “creating an IAEA for AI” mean, and why is it compared to nuclear regulation?
It refers to an international organization that coordinates standards, inspections, and safety certifications for advanced AI, with real supervisory capacity, akin to high-risk technology oversight.
Does the “New Delhi Declaration” bind the signing countries?
No. It’s a voluntary, non-binding framework, but it signals diplomatic cooperation and political intention to build shared principles for AI governance.
How can a company prepare for agent-based AI and increasingly capable models?
By taking inventory of automatable processes, defining use and access policies, keeping audit trails of decisions, running security tests (including resistance to prompt injection), and preparing continuity plans for when critical tools depend on agents.
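One of those measures, the decision audit trail, can be prototyped in very little code. The sketch below is purely illustrative (the class and field names are invented for this example, not taken from any real framework): each agent decision is appended to a log whose entries are hash-chained, so later tampering with any recorded rationale is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone


class DecisionAuditTrail:
    """Append-only log of agent decisions. Each entry carries the hash of
    the previous entry, so altering any past record breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, agent: str, action: str, rationale: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "rationale": rationale,
            "prev_hash": prev_hash,
        }
        # Hash the entry body (everything except the hash itself).
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; False means the log was altered."""
        prev = "genesis"
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In production one would persist the entries to write-once storage and anchor the chain externally, but even this minimal version makes the audit requirement concrete: every automated action leaves a verifiable who/what/why record.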

