For years, the guiding slide in management committees has been the same: “We don’t want vendor lock-in. We’re going to be multi-cloud.” Translated from PowerPoint-speak into plain English: triple the complexity and double the bill. And, to top it off, when something critical fails, no one raises their hand in time. What was promised as independence too often ends up as technical debt at a premium.
This article examines an increasingly accepted critique of how multi-cloud strategy is applied in practice, the European context of digital sovereignty, and a realistic alternative: a modern private cloud based on open, European virtualization, with Proxmox as the technological pillar and 100% European providers, including Spanish ones such as Stackscale, aligned with this vision.
The foundational myth: “avoiding lock-in”
The promise sounds cautious: by distributing workloads across multiple hyperscalers, dependency risk diminishes and resilience increases. The problem is that true independence is rarely achieved. Real applications cling to specific managed services (queues, datastores, identity services, serverless, AI, analytics). In the end, the pieces that make each provider competitive are precisely what tie you down.
The consequence is familiar on the ground: theoretical portability, practical dependency. Yes, Kubernetes helps… until the app needs managed database X, proprietary feature flags Y, or AI embedding Z. Migrating is no longer just moving containers: it’s redefining services, rewriting integrations, and re-certifying compliance. The cost — financial and organizational — hits when the whiteboard gives way to reality.
The three hidden costs of multi-cloud
1) Ongoing operational complexity
Each cloud introduces its own semantics: networks, security, identity, observability, billing, quotas, with soft and hard limits. Multiplying clouds multiplies failure matrices and incident vectors. Where there was once a runbook, now there are three; where there was a single control panel, now there are four tools and a monitoring bridge nobody wants to maintain on weekends.
2) Teams of “jack-of-all-trades, masters of none”
Building excellence in AWS, GCP, or Azure takes years. Attempting to master all three simultaneously often degrades the overall level. Senior talent is spread thin; mid-level staff get frustrated; on-call rotations burn people out. The end result is the worst-case scenario: diffuse security, uncontrolled costs, and longer response times.
3) Financial costs not shown in the pitch
The promise of “optimizing prices by comparing providers” clashes with the reality of data-transfer and interconnection costs, management overhead, tool duplication, and minimum contracts; the supposed savings tend to vanish in the fine print. In practice, many companies end up paying the same to three providers instead of optimizing spending with one and seriously negotiating discounts or reserved capacity.
When theory collides with reality
In crises, nobody calls “all three.” When a service goes down and threatens the bottom line, the call goes to whoever truly bears the load. If data lives in cloud A, the payment pipeline is in B, and orchestration is in C, the incident turns into a blame game. SLAs become poetry, and recovery time, the only thing that matters, skyrockets. The final picture is well known: a Frankenstein architecture, hard to audit, expensive to insure, poorly governed.
Is multi-cloud always a bad idea? Not necessarily. But the margin’s narrow.
There are justifiable cases:
- Regulation and compliance, with explicit requirements for workload separation or jurisdictional safeguards.
- Business continuity, with serious active-standby plans across providers, replicated data, and failover exercises measured against RTO and RPO rather than just slides (see the sketch after this list).
- M&A: for groups absorbing legacy businesses, where heterogeneity is transitory collateral damage.
- AI and specialized workloads: choosing a provider solely for a specific accelerator or training capacity unavailable elsewhere… consciously accepting lock-in in exchange for the value it brings.
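To make the RTO/RPO point above concrete, here is a minimal sketch, in Python with invented timestamps, of how a cross-provider failover drill can be scored against the targets the business actually signed off on; the values and names are purely illustrative.

```python
from datetime import datetime, timedelta

# Hypothetical timestamps from a cross-provider failover drill.
# RPO (recovery point objective): how much data you can afford to lose,
# measured here as the gap between the failure and the last replicated write.
# RTO (recovery time objective): how long you can afford to be down,
# measured as the gap between the failure and the standby serving traffic again.
failure_at = datetime(2024, 5, 7, 10, 0, 0)
last_replicated_write = datetime(2024, 5, 7, 9, 58, 30)   # from replication logs
standby_serving_at = datetime(2024, 5, 7, 10, 42, 0)      # first healthy check on provider B

measured_rpo = failure_at - last_replicated_write
measured_rto = standby_serving_at - failure_at

# Targets agreed with the business, not with the slide deck.
target_rpo = timedelta(minutes=5)
target_rto = timedelta(minutes=30)

print(f"RPO: {measured_rpo} (target {target_rpo}) -> {'OK' if measured_rpo <= target_rpo else 'MISSED'}")
print(f"RTO: {measured_rto} (target {target_rto}) -> {'OK' if measured_rto <= target_rto else 'MISSED'}")
```

In this made-up drill the RPO holds but the RTO is missed by twelve minutes, which is exactly the kind of gap between slides and reality that only a measured exercise reveals.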
But for everything else, a well-designed, rock-solid single cloud (multi-zone, multi-region, IaC, mature observability) plus a trusted private cloud where it makes sense (low latency, cost control, data residency, or performance requirements) tend to offer more simplicity, control, and predictability.
Europe needs a different path: sovereignty and proximity
The multi-cloud narrative has served as an excuse to avoid foundational decisions: where data resides, who provides support, which jurisdiction applies, and what strategic dependencies we are taking on. For Europe, and especially Spain, the question is political, economic, and technological: do we want to build our industry or rent it?
Digital sovereignty isn’t just a slogan: it’s the real capacity to operate, protect, and scale critical infrastructure without external permission, aligned with the GDPR and with tightening regulatory frameworks. It means support in our time zone, local engineering, and accountable leaders who answer to our laws. It also means contributing to the European value chain instead of merely consuming from outside.
Within this context, opting for 100% European — even Spanish — companies isn’t naive protectionism: it’s industrial strategy. It translates into closeness, predictable latencies, transparent costs, clear contracts, and an ecosystem where provider and client can co-design solutions. Plus, it reduces negotiation asymmetries with hyperscalers.
The pragmatic alternative: modern (and open) private cloud
Far from nostalgic on-premises setups, the contemporary private cloud doesn’t compete with the public cloud: it complements it. For predictable, sensitive, latency-critical, or cost-volatile workloads, building or consuming a well-engineered private cloud restores control without sacrificing reasonable elasticity.
Proxmox as an open European stack
Proxmox VE is an open-source hypervisor, developed in Europe, with a mature ecosystem and real capability to sustain production environments:
- KVM virtualization and robust LXC containers.
- Ceph for distributed storage, with replication, self-healing, and horizontal scaling.
- Virtual networks with VLANs, bonding, OVS, and integrated firewall rules.
- Consistent backups, replication, and granular restores (full VMs or disks).
- Automation through the REST API and Terraform providers maintained by the community and partners (see the sketch below).
All with predictable licensing and no per-usage tolls: no billing surprises, and observability under the control of the client or the European operator.
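As a concrete illustration of the automation point in the list above, here is a minimal Python sketch against the Proxmox VE REST API using an API token; the host, token, and certificate path are placeholders, and in practice the same API is typically driven through the Terraform providers mentioned above or a client library such as proxmoxer.

```python
import requests

# Illustrative inventory script for a Proxmox VE cluster (not an official client).
# The host, API token, and CA path below are placeholders; Proxmox ships with a
# self-signed certificate by default, so adjust TLS verification to your setup.
PVE_HOST = "https://pve.example.internal:8006"
API_TOKEN = "automation@pam!inventory=00000000-0000-0000-0000-000000000000"

session = requests.Session()
session.headers["Authorization"] = f"PVEAPIToken={API_TOKEN}"
session.verify = "/etc/ssl/certs/my-pve-ca.pem"  # or True when using a public CA

# List cluster nodes, then the QEMU virtual machines running on each one.
nodes = session.get(f"{PVE_HOST}/api2/json/nodes").json()["data"]
for node in nodes:
    name = node["node"]
    vms = session.get(f"{PVE_HOST}/api2/json/nodes/{name}/qemu").json()["data"]
    for vm in vms:
        print(f"{name}: vmid={vm['vmid']} name={vm.get('name', '?')} status={vm['status']}")
```

Because everything in Proxmox is exposed through this same API, whatever a cluster shows to a ten-line script it also shows to IaC pipelines and monitoring, which is a large part of what “no surprises” means in practice.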
Stackscale: European private cloud with expert support
In Spain, Stackscale (part of Grupo Aire) operates bare-metal infrastructure and private cloud with specialized Proxmox support: foundational training, hypervisor monitoring, backups of configuration and VM state, restoration after failures in the virtualization layer, and centralized storage able to scale workloads quickly. According to David Carrero Fernández-Baillo, co-founder and VP of Sales & Marketing, their approach is complementary: “bare-metal where maximum performance or tight cost control is needed; Proxmox clusters for private elasticity and high availability; and connectivity to integrate with public clouds when it provides concrete value.” In essence, it’s about using each piece where it performs best, regardless of the logo, while upholding sovereignty and the integrity of your runbooks.