Compal and Verda Accelerate the Sovereign AI Cloud with Liquid-Cooled GPU Servers

The expansion of artificial intelligence no longer depends solely on acquiring GPUs. It also requires complete servers, advanced cooling, regional manufacturing, low-latency networks, and providers capable of deploying infrastructure on increasingly tight timelines. In this context, Compal Electronics and Verda have announced a strategic partnership to accelerate AI infrastructure development in Europe and Asia-Pacific through high-density GPU server platforms with liquid cooling.

The agreement positions Compal, one of the major Taiwanese electronics and server manufacturers, as a provider of next-generation systems for Verda, a European AI cloud company based in Helsinki, formerly known as DataCrunch. The goal is to expand Verda’s capacity to handle advanced model training and agentic inference workloads, areas that are driving demand for localized, efficient computing aligned with data residency requirements.

Liquid Cooling for Increasingly Dense AI

The technical core of the partnership lies in high-density GPU servers with liquid cooling. New AI systems are consolidating more power into less space. Platforms based on cutting-edge accelerators increase rack consumption, generate more heat, and require reevaluation of data center thermal architecture. Air cooling is beginning to fall short in many high-density deployments.

Liquid cooling allows for more efficient heat removal, supporting more compact configurations and reducing operational pressures associated with ultra-powerful racks. This is far from a minor improvement. In AI infrastructure, effective cooling capacity determines how many GPUs can be installed per rack, the sustained performance achievable, and operational costs.
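As a rough illustration of that arithmetic (all figures below are hypothetical and for illustration only, not drawn from the announcement), the number of accelerators a rack can host is bounded by how much heat its cooling can remove:

```python
def max_gpus_per_rack(cooling_capacity_kw, gpu_power_kw, overhead_kw):
    """Estimate how many GPUs fit in a rack given a cooling budget.

    cooling_capacity_kw: heat the rack's cooling system can remove (kW)
    gpu_power_kw: power draw per accelerator (kW)
    overhead_kw: CPUs, NICs, storage, power-conversion losses (kW)
    """
    usable = cooling_capacity_kw - overhead_kw
    if usable <= 0:
        return 0
    return int(usable // gpu_power_kw)

# Hypothetical figures: a 40 kW air-cooled rack vs. a 120 kW
# liquid-cooled rack, 1.2 kW per accelerator, 8 kW of overhead.
print(max_gpus_per_rack(40, 1.2, 8))    # air-cooled  -> 26
print(max_gpus_per_rack(120, 1.2, 8))   # liquid-cooled -> 93
```

Under these made-up numbers, tripling the thermal budget more than triples usable GPU density, which is why cooling, not floor space, is often the binding constraint.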

Compal will supply Verda with platforms designed for agent workloads—AI applications that operate with extensive contexts, high concurrency, and chained processes. These applications not only demand fast accelerators but also require thermal stability, networking, storage, memory, and an architecture capable of handling usage peaks without degrading performance.

The partnership also reflects a shift in the role of Taiwanese manufacturers. Companies like Compal, Wistron, Quanta, and Foxconn are no longer just assembly partners behind global brands. They are becoming key industrial players in the new AI infrastructure, designing, integrating, and manufacturing complex systems that neo-clouds and hyperscalers need to rapidly deploy capacity.

Verda Aims to Grow as a European AI Cloud

Verda positions itself as a European AI cloud provider focused on training, inference, and high-performance deployments. The company operates GPU data centers across Europe, emphasizing renewable energy and efficient Nordic locations. Its value proposition is based on a growing need among companies and government agencies: access to powerful AI infrastructure without relying entirely on large U.S.-based clouds or distant regions.

The concept of sovereign AI is gaining momentum in Europe. It’s not just about training national or regional models but also about ensuring that sensitive data, workloads, and computing capacity remain within European regulatory frameworks—providing greater control over data residency, compliance, security, and governance.

The partnership with Compal aims to strengthen this position. Verda requires physical infrastructure to scale, and Compal offers expertise in accelerated computing, advanced thermal design, and systems integration. The joint announcement also highlights the industrial expansion of Compal in Taiwan, Vietnam, and the US—an important detail as supply chains seek greater resilience and less dependence on a single region.

The agreement extends beyond Europe. Both companies also mention Asia-Pacific, signaling a broader strategic expansion. For Verda, growing outside Europe can help attract global clients seeking alternatives to established providers. For Compal, it reinforces its role as a key supplier of complete systems for AI cloud operators who need full solutions, not just components.

Neo-clouds, Sovereignty, and a Strained Supply Chain

The rise of neo-cloud providers is one of the clearest trends in the new AI economy. These vendors specialize in accelerated compute, often more nimble than traditional hyperscalers, offering GPUs, bare-metal clusters, managed inference, or dedicated capacity for labs, startups, and companies that don’t want to wait months for hardware.

Verda fits into this category. Its former brand, DataCrunch, was already known for GPU services for machine learning. Rebranding as Verda emphasizes a broader focus on sovereign AI, European cloud, and sustainable deployments. The partnership with Compal shifts the narrative from capacity resale to an industrial vision: not just selling GPU access but building a European platform capable of scaling with cutting-edge hardware.

This is crucial because AI demand is stressing the entire supply chain. There are shortages of GPUs, HBM memory, power chips, network switches, substrates, packaging capacity, and available energy. Liquid cooling joins this list as an increasingly common requirement for new high-density racks. To compete in AI infrastructure, providers must secure many interconnected links.

For European clients, vendors like Verda offer an appealing alternative when data residency, latency, or regulatory compliance matter as much as price. Sectors such as public administration, healthcare, finance, industry, defense, research, and legal cannot always transfer sensitive data freely to any cloud region. Sovereign AI doesn’t eliminate the need for global platforms but creates opportunities for regional specialized operators.

Compal also benefits by gaining visibility in a higher-value segment. While PC and consumer electronics remain important markets, growth is increasingly driven by servers, cloud, automotive, advanced communications, and AI systems. Providing liquid-cooled GPU platforms for a European AI provider positions Compal at the forefront of expanding demand areas.

The partnership between Verda and Compal embodies a recurring theme: AI deployment is not just about models, but about factories, servers, racks, cooling, power, networking, and supply agreements. Europe seeks greater autonomy in AI, but that requires real infrastructure. Regulation and funding of models alone aren’t enough; data centers, cloud providers, manufacturers, integrators, and operational capacity are essential.

This agreement doesn’t solve Europe’s broader challenge of competing with the US and China, but it points in a clear direction. Digital sovereignty also depends on physical servers, and the servers of the near future will be dense, expensive, hot, and complex to deploy. Control over this layer will confer a significant strategic advantage.

Frequently Asked Questions

What have Compal and Verda announced?
They announced a strategic partnership where Compal will supply Verda with high-density GPU server platforms with liquid cooling to accelerate AI infrastructure deployments in Europe and Asia-Pacific.

Why is liquid cooling important for AI?
Because next-generation GPU servers concentrate high power levels and generate significant heat. Liquid cooling enables denser, more efficient racks compared to air-only systems.

What is Verda?
Verda, formerly DataCrunch, is a European AI cloud provider based in Helsinki, specializing in GPU infrastructure for training and inference workloads.

How does this partnership relate to sovereign AI?
It enhances the ability to deploy localized AI compute, with greater control over data residency, regulatory compliance, and security—integral aspects of sovereign AI strategies.
