Artificial Intelligence Is Already Inside Companies… But It’s Not Yet Showing Up in Productivity

Over the past two years, many companies have moved from merely “testing” Artificial Intelligence (AI) to declaring it a central part of their strategy. The story is familiar: copilots for employees, task automation, customer service assistants, faster analytics, and an almost inevitable promise of greater efficiency. Yet when companies are asked about tangible results, the picture they paint is quite different from the marketing enthusiasm.

A new study from the National Bureau of Economic Research (NBER) quantifies that feeling of “a lot of noise and few results.” The authors present firm-level international data based on a survey of almost 6,000 CFOs, CEOs, and other executives in the United States, United Kingdom, Germany, and Australia. The most striking finding is the contrast: around 70% of companies claim to actively use AI, yet more than 80% report no observable impact on productivity or employment over the last three years.

Surprisingly low usage among decision-makers

The key may lie in a detail that challenges many assumptions. Although more than two-thirds of senior executives say they use AI regularly, average declared usage is only 1.5 hours per week, and 25% say they do not use it at all.

In other words: adoption exists, but intensity is limited. And if those with the power to redesign processes and drive change use it sparingly, AI is likely to remain superficial (quick queries, drafts, summaries) without truly transforming how things are produced, sold, served, or operated.

The “microcomputer paradox” reemerges

Economics has seen similar paradoxes before: technologies with enormous potential that take years to show up in aggregate metrics. Historically, there have been periods in which heavy investment in technology coexisted with only modest productivity gains, partly because the benefits depend less on the hardware itself and more on reorganizing processes, training staff, and rethinking workflows. According to historical BLS data, output per hour grew at an average of 2.9% annually from 1948 to 1973, then slowed markedly in the following decade before recovering somewhat in the 1980s.

The analogy is not perfect (AI is not a PC), but it is useful: when technology is used as an “add-on” to existing systems, its impact tends to be diluted. When it forces a redesign of the system (workflow, tools, incentives, measurement), significant jumps can occur.

Why AI isn’t paying off (yet)

In business environments, productivity rarely improves simply because someone installs a tool. It usually improves when several conditions are met simultaneously:

  • Clear use cases: someone is responsible for the outcome (time, cost, quality), not just for “implementing AI.”
  • Integration with actual work: AI within CRM, ERP, ticketing, CI/CD, knowledge bases… not as a separate tab.
  • Ready data and permissions: if information is fragmented or inaccessible, AI remains generic.
  • Operational training and habits: moving from “asking questions” to “delegating tasks” with control.
  • Measurement with the right metrics: not just “active users,” but resolution cycle times, ticket handling, conversion rates, rework, incidents, and more (a minimal sketch follows this list).
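
To ground that last point, here is a minimal Python sketch of “the right metrics” for a support workflow. The Ticket fields and the idea of counting reopened tickets as rework are illustrative assumptions for the example, not something prescribed by the study:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median, quantiles

@dataclass
class Ticket:
    opened_at: datetime
    resolved_at: datetime
    reopened: bool  # a reopened ticket counts as rework

def cycle_metrics(tickets: list[Ticket]) -> dict:
    """Summarize resolution speed and rework for a batch of tickets."""
    hours = [(t.resolved_at - t.opened_at).total_seconds() / 3600 for t in tickets]
    return {
        "median_hours": median(hours),
        "p90_hours": quantiles(hours, n=10)[-1],  # 90th percentile
        "rework_rate": sum(t.reopened for t in tickets) / len(tickets),
    }

# Run the same summary on tickets from before and after the AI rollout
# and compare the two dictionaries; "active users" never enters the picture.
```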

If any of these points fail, the typical result is what many teams describe: saving minutes here and there, but without systemic change that impacts overall productivity.

The interesting part: expectations remain high

The study itself shows that optimism persists: companies forecast that over the next three years, AI will boost productivity by 1.4%, increase output by 0.8%, and reduce employment by 0.7%. Additionally, a perception gap appears: surveyed employees expect a job increase of 0.5% due to AI, while executives anticipate net cuts.

This mismatch in expectations matters for any organization that wants to avoid internal friction: if management pitches AI as a substitution tool while teams see it as a way to “do more with less” (or to avoid tedious tasks), tensions grow. And real adoption, the kind that actually moves the metrics, stalls.

What tech teams should do to prevent “AI theater”

From a technological standpoint, the practical takeaway is clear: if a company seeks productivity, don’t ask “which model do we use?” but instead “which process do we redesign?” Here are some priorities that often work:

  1. Select 3–5 measurable workflows (support, sales, engineering, finance) and assign an “owner” for each.
  2. Tackle bottlenecks where human work accumulates: triage, information retrieval, repetitive drafting, classification, validations.
  3. Implement metrics before and after: cycle times, escalation rates, rework, perceived quality, unit costs.
  4. Design controls (security, permissions, traceability, human review) so AI can operate without creating vulnerabilities; see the sketch after this list.
  5. Invest in organizational change: practical training, templates, playbooks, and incentives aligned with real adoption.
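
On point 4, a simple pattern is a review gate: AI-proposed actions above a risk threshold are held for human approval, and every decision is logged for traceability. The sketch below is a minimal illustration of that pattern; the Suggestion type, the risk_score, and the threshold are assumptions made for the example, not part of the study:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-controls")

@dataclass
class Suggestion:
    action: str        # e.g. "issue_refund" (hypothetical)
    risk_score: float  # 0.0-1.0, from whatever scoring the team trusts

REVIEW_THRESHOLD = 0.3  # illustrative: anything riskier needs a human

def execute_with_controls(s: Suggestion, approved_by_human: bool = False) -> bool:
    """Execute an AI suggestion only if it passes the review gate; log everything."""
    if s.risk_score >= REVIEW_THRESHOLD and not approved_by_human:
        log.info("HELD for human review: %s (risk=%.2f)", s.action, s.risk_score)
        return False
    log.info("EXECUTED: %s (risk=%.2f, human_approved=%s)",
             s.action, s.risk_score, approved_by_human)
    # ...call the real system here (CRM, ERP, ticketing)...
    return True
```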

The conclusion today is not that AI “doesn’t work.” It’s that, in many companies, it’s still not being used with the necessary intensity, integration, and redesign to truly impact productivity. The technology is here; the operational leap is still in progress.

via: AI in the Office
