The Exodus from Public Cloud: Why Some Companies Are Going Back to On-Premises (and When It Makes Sense)

Over the past decade, the public cloud, led by AWS, Google Cloud, and Azure, has been marketed as the natural path for scaling: pay-as-you-go pricing, an almost infinite catalog of services, and an implicit promise of simplicity. But alongside it, another, less publicized conversation has been growing: companies that, having matured, are reconsidering their bills and accumulated complexity and choosing to “repatriate” part of their infrastructure.

This is neither a uniform trend nor a wholesale rejection of the cloud. Rather, it’s a strategic adjustment: once workloads stabilize, the real cost stops being a minor detail, and operational control acquires economic value.

Basecamp/37signals: from $3.2 million in cloud to $840,000 annually “all-in”

One of the most frequently cited cases is 37signals (Basecamp/HEY). Its CTO, David Heinemeier Hansson, explained that in 2022, they spent $3.2 million on cloud, with nearly $1 million associated with storing 8 PB in S3 (replicated across multiple regions). The rest—about $2.3 million—was on servers and related services.

Their calculation to “exit” that part of the bill rested on two concrete figures: a hardware investment of around $600,000 (amortized over five years) and ongoing expenses for colocation, power, and connectivity of roughly $60,000 per month for their racks in two data centers. By their own estimate, this works out to an annual cost of about $840,000 “for everything” (bandwidth, energy, and amortized servers), compared with the $2.3 million per year of cloud spend they aimed to reduce to zero.
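Those figures can be sanity-checked with a few lines of arithmetic. The sketch below uses only the numbers 37signals published; it is an illustration of their reasoning, not their actual model:

```python
# Back-of-the-envelope check of 37signals' published figures (all USD).
HARDWARE_CAPEX = 600_000       # one-time server purchase
AMORTIZATION_YEARS = 5
COLO_MONTHLY = 60_000          # colocation, power, connectivity (two data centers)
CLOUD_ANNUAL = 2_300_000       # the cloud segment they aimed to reduce to zero

annual_hardware = HARDWARE_CAPEX / AMORTIZATION_YEARS  # 120,000/yr amortized
annual_colo = COLO_MONTHLY * 12                        # 720,000/yr recurring
annual_on_prem = annual_hardware + annual_colo         # total "all-in" cost

five_year_savings = (CLOUD_ANNUAL - annual_on_prem) * AMORTIZATION_YEARS

print(f"annual on-prem: ${annual_on_prem:,.0f}")       # $840,000
print(f"5-year savings: ${five_year_savings:,.0f}")    # $7,300,000
```

The $840,000 annual figure falls out directly, and the five-year delta lands at about $7.3 million, consistent with the roughly $7 million they project.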

In the same article, 37signals projects savings of around $7 million over five years, without growing their operations team. The core argument isn’t “cloud is bad,” but that continuing to rent rather than own once the business has stabilized can be costly.

Ahrefs: when “catalog-price” cloud shifts the math

Ahrefs’ case is often viewed as even more extreme, but warrants an important nuance: Ahrefs doesn’t just say “it was too expensive”; they publish detailed technical and financial comparisons.

In a technical article (March 2023), their team estimates that they have saved around $400 million over approximately 2.5–3 years by not basing all their infrastructure on IaaS, and even claim that if their product were 100% on AWS, it might not be profitable or might not exist at all.

In a subsequent piece (May 2024), Ahrefs expands their perspective: they estimate an on-premises accumulated expenditure of $122 million since 2017 and compare that to AWS equivalents, concluding that the “equivalent price” exceeds $1 billion depending on assumptions (such as on-demand instances versus reserved for three years).

The common message from both cases isn’t that cloud is unviable, but that for compute- and storage-intensive businesses, elasticity has a price, and when demand is relatively predictable, ownership and dedicated deployment can offer a strategic advantage.

The typical mistake: designing for “infinite success” prematurely

In many startups, cloud is the right choice for a simple reason: it reduces friction. Getting a product to market and starting sales often outweighs optimizing euros per CPU.

The problem arises when the architecture is built to “scale to infinity” from day one. If the company becomes a stable business with predictable loads, it may find itself paying for elasticity it no longer needs.

This is the uncomfortable part: over-engineered infrastructure can cost money for years, just as under-provisioned infrastructure can cost customers. Hence it’s not a question of “cloud yes/no,” but of when and why.

Vendor lock-in: the trap is not only technical, but also economic

Another factor prompting reconsideration of cloud is lock-in by design: when a company builds around very specific native services (queues, managed databases, serverless functions, IAM, etc.), moving away isn’t simply “migrating machines”—it becomes “redoing systems”.

Adding to this are economic components like egress fees and data transfer costs that penalize moving data outside a platform. The OECD’s analysis of cloud market competition notes that these pricing models and switching barriers reinforce lock-in and limit true customer mobility.

Market incentives complicate matters further: startup credits. Programs like AWS Activate offer “up to $100,000” in credits, Google promotes packages of up to $200,000 (or more in certain cases), and Microsoft offers credits to startups through various programs. Early on, accepting that help makes sense. But over time, those credits can become the bait that makes leaving too costly.

And security? It’s not as simple as “cloud = secure”

Cloud security often fails where it always does: at the application layer, due to misconfigurations, permissions, and human errors. Cloud offers powerful tools and a solid baseline, but also introduces complexity: accounts, roles, policies, proliferating services, and an expanding attack surface.

Meanwhile, on-premises does not automatically mean more secure: it requires discipline, processes, patching, segmentation, robust backups, and access control. What changes is the division of responsibilities.

A local example: Acumbamail and private cloud via Stackscale

In Spain, some companies opted from the start for private cloud models to balance agility and control. Acumbamail, for instance, is showcased as a use case of private cloud in Stackscale (Grupo Aire): the approach involves dedicated resources for the client, virtualization, and operation independent of the “pay per service” model typical of hyperscalers.

Within this framework, Acumbamail emphasizes the value of network storage and options for geo-replication within Madrid. The goal is to combine the best of both worlds: virtualization flexibility and scalability, with a more predictable cost and control over high-availability architecture across two local data centers.

The practical takeaway: it’s not about “leaving cloud,” it’s about auditing the model

The common thread among these leading cases is a simple but often overlooked habit: doing the math with real data and revisiting assumptions.

For many companies, public cloud will still be the best fit—due to speed, global reach, time-to-market, or reliance on managed services. But others—especially those with stable workloads, CPU/storage investment needs, or sovereignty and predictability requirements—should consider alternatives such as:

  • dedicated private cloud resources,
  • colocation with owned hardware,
  • hybrid models (cloud for peaks and specific services; “bare-metal” for stable core).

The decision is not ideological, but operational and financial: when infrastructure becomes a line item weighing on the P&L, it’s wise to review it like any strategic cost.

