OpenAI has released its first State of Enterprise AI 2025 report, which puts real figures behind what many CTOs, CIOs, and data teams already suspected: artificial intelligence is shifting from being just another “app” to becoming an infrastructure layer on which products, workflows, and even new business models are built.
The snapshot is especially relevant for a tech-savvy audience: it discusses tokens, APIs, agents, connectors, continuous evaluations, and, most importantly, the growing divide between organizations that are industrializing AI and those still experimenting.
From Proof of Concept to Infrastructure: AI as a New Layer of the Stack
In just three years, more than 1 million enterprise customers have adopted OpenAI tools, with over 7 million ChatGPT “seats” in work environments. In the past 12 months, the weekly volume of enterprise customer messages has increased eightfold, and reasoning-token consumption per organization has grown roughly 320-fold.
Translated into infrastructure language: AI models are no longer used sporadically but as core services integrated into products, backends, and internal tools. More than 9,000 organizations have processed over 10 billion tokens through OpenAI’s API, with nearly 200 surpassing the one-trillion token threshold.
For the tech world, the message is clear: AI is establishing itself as another “tier” of the stack (alongside databases, message queues, or observability systems), complete with its own metrics for consumption, performance, and governance.
GPTs, Projects, and API: Where AI Truly Integrates
One of the most revealing data points in the report is the explosive growth of Custom GPTs and Projects (configurable interfaces on top of ChatGPT with instructions, context, and personalized actions). The weekly user count for these capabilities has grown 19-fold this year, and they now account for around 20% of all enterprise customer messages.
Practically, this means many companies are no longer just “asking” models but:
- Encoding internal knowledge into reusable assistants.
- Connecting GPTs to corporate systems via APIs and tool calling (see the sketch after this list).
- Automating multi-step workflows (agentic workflows) within their applications.
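As a reference point, here is a minimal sketch of that tool-calling pattern using the OpenAI Python SDK. The `lookup_order` function and its schema are hypothetical stand-ins for an internal corporate API; they do not come from the report.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical internal function exposed to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Fetch an order's status from the internal order system.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Where is order 8812?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the internal function
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    # The application would now run the real backend call and return its
    # result to the model in a follow-up "tool" message to get the final answer.
    print(call.function.name, args)
```

This round trip (the model proposes a structured call, the application executes it, the result flows back into the conversation) is the basic building block behind the agentic workflows the report describes.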
On the API side, usage centers on building embedded assistants, advanced search, workflow automation, and developer tools. Adoption is no longer exclusive to the tech sector: API usage by non-tech companies has grown fivefold year-over-year, with use cases like customer support and content generation now representing nearly 20% of activity.
Measurable Productivity: From Minutes Saved to New Types of Work
The report also seeks to address a key business question: does AI really save time and money?
Internal OpenAI data indicates that 75% of surveyed workers say AI has improved the speed or quality of their work. ChatGPT Enterprise users report average savings of 40–60 minutes per active day, rising to 60–80 minutes daily in profiles such as data science, engineering, or communications.
Most interesting for technical teams is that AI not only accelerates known tasks but broadens the scope of what users can do:
- 75% say they can complete tasks previously out of reach: programming and code review, spreadsheet analysis and automation, developing technical tools, troubleshooting, or designing custom agents and GPTs.
- Messages related to code have increased across all departments; outside engineering, IT, and R&D, these messages have grown an average of 36% over the past six months.
In other words: AI is blurring the line between “technical” and “non-technical” roles, forcing a reevaluation of who has the permission (and the capacity) to access data, automate processes, or prototype internal tools.
The Gap Widening: Frontier Workers vs. the Rest
The report introduces a concept that should give many tech leadership teams pause: frontier workers, the top 5% of users who engage with AI most intensively within their organizations.
These workers:
- Send 6 times more messages than the median user.
- In data analysis tasks, they use the tool 16 times more than their peers.
- In coding, the gap is even larger: frontier workers send 17 times more programming-related messages than the median.
Looking at the company level, the pattern persists: firms in the 95th percentile generate twice as many messages per seat as median companies, and seven times more messages to GPTs, indicating a much deeper integration into their systems and processes.
The report also reveals an uncomfortable fact: even among active ChatGPT Enterprise users, 19% have never used data analysis tools, 14% have never engaged with advanced reasoning features, and 12% have never used integrated search. Among daily users, these figures are lower but still present.
For technology and product leaders, the conclusion is clear: the issue is no longer a lack of models or features, but the absence of widespread, systematic adoption.
Industries, Geographies, and Use Cases: AI Moves Beyond the Lab
Across sectors, technology remains the driving force behind adoption (11× customer growth in a year), but healthcare and manufacturing are emerging as some of the fastest-growing areas (8× and 7×, respectively). Most sectors have grown their customer bases more than sixfold, with even the slowest doubling year-over-year.
Geographically, the expansion is global: Australia, Brazil, the Netherlands, and France lead in the number of paying customers, with year-over-year increases above 143%, while the US, Germany, and Japan account for the highest message volumes.
The report also cites real-world examples illustrating diverse uses:
- Intercom builds Fin Voice on the Realtime API for phone support, cutting latency by 48% and resolving around 53% of calls end-to-end with a voice agent.
- BBVA automates over 9,000 legal inquiries annually with a chatbot, freeing up the equivalent of 3 full-time employees for higher-value tasks.
- Moderna reduces key steps in creating Target Product Profiles from weeks to hours, accelerating product planning and clinical decision-making.
These examples are especially relevant for technical roles because they combine data integration, internal APIs, security, compliance, and impact measurement.
What Leading AI Teams Are Doing Differently
Beyond the numbers, the report attempts to distill a “playbook” of the organizational and technical practices of companies leading the way:
- Deep system and data integration: active connectors to core tools (CRM, internal systems, knowledge repositories) so models work with real context, not just generic prompts.
- Standardization and reuse of workflows: fostering a culture of creating, sharing, and versioning GPTs, agents, and internal templates instead of relying on ad hoc prompts for each team.
- Continuous evaluations and MLOps for LLMs: implementing evaluation batteries tied to specific business cases (response quality, speed, critical errors), with feedback loops for ongoing improvement (a sketch follows this list).
- Governance and organizational change: combining a central framework for security, compliance, and training with distributed champions promoting use cases across departments.
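To make the evaluation point concrete, a battery like the one the playbook describes can start very small. The following sketch is illustrative: the test cases, the substring-based grading rule, and the 90% threshold are assumptions for the example, not practices prescribed by the report.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative cases: each pairs a prompt with a string the answer must
# contain. Real batteries are larger and often graded by a rubric or model.
EVAL_CASES = [
    {"prompt": "What is our refund window?", "must_contain": "30 days"},
    {"prompt": "Which plan includes SSO?", "must_contain": "Enterprise"},
]

def run_evals(model: str = "gpt-4o") -> float:
    """Return the fraction of cases whose answer contains the expected string."""
    passed = 0
    for case in EVAL_CASES:
        answer = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": case["prompt"]}],
        ).choices[0].message.content
        if case["must_contain"].lower() in (answer or "").lower():
            passed += 1
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    score = run_evals()
    # Gate deployments on the score, e.g. fail CI below a 90% pass rate.
    assert score >= 0.9, f"Eval pass rate too low: {score:.0%}"
```

Wired into CI, a script like this closes the feedback loop the report calls for: every prompt, model, or connector change gets checked against business-specific quality criteria before it ships.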
The report concludes with a message many CTOs and CIOs are echoing: models and tools are evolving so rapidly that, at this stage, the bottleneck is organizational rather than technical.
Frequently Asked Questions for Tech Readers
How does using ChatGPT “manually” differ from building on the API or with GPTs/Projects?
ChatGPT is helpful for one-off tasks, but the API, Custom GPTs, and Projects enable direct AI integration into applications, backends, and internal tools, with control over data, permissions, tool calling, and usage metrics.
What are the implications for the corporate data stack?
AI requires exposing data more structurally (via APIs, connectors, feature stores) and thinking about governance, auditing, and continuous evaluation as integral to the model’s lifecycle, not just the data’s.
How can we measure whether AI adoption is on the right track?
Look at usage intensity (messages, variety of tasks, use of advanced tools), time savings, and business results (conversions, resolution times, revenue), always comparing frontier workers against the average to understand internal gaps; a minimal sketch of that comparison follows.
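As an illustration of that frontier-versus-median comparison, the sketch below computes the gap from a hypothetical per-user export of weekly message counts; the numbers are invented for the example.

```python
from statistics import median, quantiles

# Hypothetical per-user weekly message counts pulled from usage logs.
messages_per_user = [4, 7, 12, 15, 22, 30, 45, 80, 140, 610]

med = median(messages_per_user)
p95 = quantiles(messages_per_user, n=20)[-1]  # 95th-percentile user
print(f"median: {med}, p95: {p95:.0f}, frontier/median ratio: {p95 / med:.1f}x")
```

Tracking that ratio over time shows whether heavy usage is spreading through the organization or staying concentrated in a small frontier group.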
Is everything decided, or is there still room to catch up?
Despite obvious disparities, OpenAI emphasizes that enterprise AI is still in its “early innings”: models can do much more than most organizations are currently leveraging. For many tech companies, the game is just beginning.