At Davos, during a public conversation with Larry Fink at the World Economic Forum, Satya Nadella didn’t unveil a new chip or a spectacular partnership. Instead, he did something more uncomfortable for the “fast product” ecosystem: he set the bar for what will be considered a competitive advantage in the AI era.
And when the bar shifts, many business models are effectively left without a foundation.
For years, a segment of the market has relied on a simple pattern: wrap the APIs of foundation models, add a nice interface and a layer of prompts, and sell it as a “solution.” That wave worked because the novelty was enormous, adoption friction was low, and business urgency was high. But in Davos, the implicit message was different: the value is no longer in “using AI,” but in turning it into your own infrastructure, one that is governable and amortizable.
From “Data Sovereignty” to “Enterprise Sovereignty”
It’s not that Nadella literally used the expression “enterprise sovereignty” as an official slogan. What matters is the shift in focus that emerges from his arguments: the discussion no longer boils down to where the data resides, but to who controls the performance, costs, availability, and organizational learning of AI systems.
Put less abstractly: if a company depends on an external API for every decision, query, and automation, it is renting its new layer of intelligence. That may be acceptable for experimentation, but it becomes fragile when AI moves from “pilot” to “critical process.”
And that’s where the silent blow to wrappers lands: if your product doesn’t build assets of its own, your differentiation evaporates.
“Token factories,” energy, and the reminder that AI is physical
Another point discussed at Davos, and one that explains why platforms will absorb a large share of the wrapper market, is that AI is no longer treated as ethereal software but as a physical utility: data centers, energy, the power grid, efficiency per watt, and the ability to scale without usage economics becoming a permanent tax.
When Microsoft’s CEO talks about “token factories” and connects growth to energy and infrastructure, he’s describing a world where the winners are those who:
- control the unit cost per query, agent, and flow (sketched below),
- dominate operations (changes, resilience, continuity),
- and own the platform (integration, identity, data, observability).
This terrain is hostile to products that simply “resell” the magic of an external model.
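To make “control the unit cost” concrete, here is a minimal back-of-the-envelope sketch in Python. The model names, the per-million-token prices, and the example flow are illustrative assumptions, not real vendor pricing.

```python
from dataclasses import dataclass

# Illustrative, made-up prices in USD per million tokens; real vendor pricing differs.
PRICE_PER_MTOK = {
    "frontier-model": {"input": 5.00, "output": 15.00},
    "small-distilled-model": {"input": 0.20, "output": 0.60},
}

@dataclass
class Call:
    model: str
    input_tokens: int
    output_tokens: int

def call_cost(call: Call) -> float:
    """Cost of a single model call under the assumed price table."""
    price = PRICE_PER_MTOK[call.model]
    return (call.input_tokens * price["input"] + call.output_tokens * price["output"]) / 1_000_000

def flow_cost(calls: list[Call]) -> float:
    """Unit cost of one business flow, e.g. answering one support ticket."""
    return sum(call_cost(c) for c in calls)

# Hypothetical flow: classify with a small model, reason with a large one, summarize cheaply.
ticket_flow = [
    Call("small-distilled-model", input_tokens=3_000, output_tokens=200),
    Call("frontier-model", input_tokens=12_000, output_tokens=800),
    Call("small-distilled-model", input_tokens=2_000, output_tokens=400),
]

per_ticket = flow_cost(ticket_flow)
print(f"Cost per ticket: ${per_ticket:.4f}  (~${per_ticket * 100_000:,.0f} per 100k tickets)")
```

The specific numbers don’t matter; the discipline does. Once cost is tracked per flow, decisions about routing, caching, and distillation stop being abstract.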
Why wrappers don’t die… but stop being an “easy” business
An important nuance: not every startup that uses external models disappears. What shrinks sharply is the space for companies whose core value is essentially:
- “I have the same model as everyone else, with similar prompts, and a better UI.”
- “I add a chatbot layer over your documentation.”
- “I connect to your CRM and call it a copilot.”
By 2026, that will tend to become a feature, not a standalone business.
The pressure comes from three fronts:
- Giant companies integrate from the top down. If AI lives inside productivity suites, operating systems, corporate directories, and security layers, competing from the outside becomes more expensive and slower.
- Companies demand control. Not ideologically, but for operational survival: costs, compliance, latency, vendor dependency, business continuity.
- Monthly model improvements cut margins. What looks like a full product today may soon be a native capability of the model or the cloud provider. Today’s “shortcomings,” as Andrej Karpathy noted in his reflections on assisted programming, are often just a list of what will improve in a few months. And that is devastating for proposals without a moat.
The technical strategy that survives
If the framework of “enterprise sovereignty” (control over learning, costs, and process) is accepted, the defensible strategy leans less toward “wrapping” and more toward building assets. Many companies are shifting toward combinations like these (a rough code sketch follows the list):
- Context engineering: organizing and versioning internal, largely unstructured knowledge, with traceability and quality controls.
- Multi-model architectures: using different models for different tasks (reasoning, extraction, classification, coding) without locking into a single one.
- Distillation and specialization: using large models to train/tune smaller models that you can run with more control and predictable costs.
- Tacit knowledge capture: turning senior team practices (criteria, heuristics, decisions) into evaluations, datasets, tools, and reproducible policies.
- Observability + continuous evaluation: measuring quality, biases, drift, hallucinations, and cost per flow as if it were a critical service.
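As an illustration of what “multi-model architectures” plus “observability” can look like in code, here is a minimal sketch. The model names are placeholders and ModelClient is an assumed interface; in a real system it would wrap a provider SDK or a self-hosted model, and the log would feed an evaluation pipeline rather than stdout.

```python
import time
from dataclasses import dataclass, field
from typing import Protocol

class ModelClient(Protocol):
    """Placeholder interface; in practice this wraps a provider SDK or a local model."""
    def complete(self, prompt: str) -> str: ...

@dataclass
class Route:
    client: ModelClient
    model_name: str

@dataclass
class CallRecord:
    task: str
    model: str
    latency_s: float
    prompt_chars: int
    output_chars: int

@dataclass
class Router:
    """Multi-model routing plus minimal observability: every call is logged."""
    routes: dict[str, Route]                      # task type -> model route
    log: list[CallRecord] = field(default_factory=list)

    def run(self, task: str, prompt: str) -> str:
        route = self.routes[task]
        start = time.perf_counter()
        output = route.client.complete(prompt)
        self.log.append(CallRecord(
            task=task,
            model=route.model_name,
            latency_s=time.perf_counter() - start,
            prompt_chars=len(prompt),
            output_chars=len(output),
        ))
        return output

# Stub clients standing in for a large reasoning model and a small specialized one.
class EchoModel:
    def __init__(self, name: str):
        self.name = name
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt[:40]}..."

router = Router(routes={
    "reasoning":  Route(EchoModel("large-reasoning-model"), "large-reasoning-model"),
    "extraction": Route(EchoModel("small-tuned-model"), "small-tuned-model"),
})

router.run("extraction", "Pull the invoice number and total from this email: ...")
router.run("reasoning", "Given these three contract clauses, which one conflicts with policy X?")

for record in router.log:
    print(record)
```

The same log is the natural hook for continuous evaluation: attach quality scores, drift checks, and cost per flow to each record, and the system starts being measured like any other critical service.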
This isn’t marketing: it’s what differentiates a “friendly chatbot” from a system a company can deploy seriously.
So, did Nadella “kill” the wrappers?
More than killing them, he set an expiration date on the lazy model: one that doesn’t produce proprietary intellectual property, operational control, a distribution advantage, a unique dataset, or such deep integration that replacing it is difficult.
AI is entering its less romantic, more industrial phase: governance, unit cost, resilience, and infrastructure. In this world, the lingering question—and what should concern every builder—is very simple:
If, tomorrow, OpenAI or Anthropic change their prices, cut capabilities, or turn off their APIs, does your product keep its “intelligence”… or does it lose its heart?
That’s the border between creating value and creating dependence.

