Data Center Types in 2025: From Hyperscale to Edge Frontier, What They Build, How They Operate, and What Projects Require

Data centers (CPDs) are the physical backbone of the “cloud”: industrial buildings with technical rooms, redundant power and cooling systems, fiber networks, perimeter security, and operational procedures that keep email, e-commerce, banking, social media, video games, and AI analytics running. Their design and operation have evolved rapidly: the rise of large-scale AI, grid power shortages, regulatory pressure, and the need for ever-lower latencies have given rise to families of data centers with very different missions, sizes, and success criteria.

Below is a map of the data center landscape projected for 2025, with power ranges, availability targets (Tier), efficiency (PUE), and what operators now require in any serious RFP.


1) Hyperscale Campus

Who and for what. Large public clouds (IaaS/PaaS/SaaS) and AI platforms consuming tens or hundreds of megawatts per site. They house regions with multiple availability zones and, increasingly, GPU farms with direct-to-chip liquid cooling and rack densities far higher than five years ago.

Technical keys.

  • Power: 20–100+ MW per phase; campus of 100–500+ MW.
  • Densities: from 20–60 kW/rack in “classic” IT to 80–200 kW/rack (and even >300 kW) in AI clusters with direct-to-chip or immersion cooling.
  • Availability: Tier III designs or equivalent, with multiple levels of electrical redundancy (N+1/2N) and distribution rings.
  • Efficiency: PUE target ≤ 1.2 (next-gen campuses with free cooling, thermal containment, high-efficiency UPS, advanced BMS/DCIM).
  • Operation: industrialized design-build, modular pods, microgrids, and BESS (battery energy storage systems) for peak-shaving and grid event support.

Examples: AWS, Microsoft Azure, Google Cloud, and Oracle regions, plus Meta’s AI campuses.


2) Colocation Campus (Colo)

They sell space, power, and connectivity to third parties, serving as the “public square” of the internet and the digital economy: from retail (tens of kW per customer) to wholesale (full halls of several MW for one or a few tenants). They offer meet-me rooms, interconnection fabrics, and access to IXPs, carriers, and public clouds.

Subtypes.

  • Wholesale: 5–40 MW per phase; 7–15-year contracts; delivered as shell & core or powered shell, with fit-out by the client.
  • Retail: sites of 5–20 MW; clients from 1–500 kW with managed services, remote hands, bare-metal, cages, and cross-connects.

Technical keys.

  • Availability: usual goal Tier III (concurrent maintainability, N+1).
  • PUE: new generation 1.2–1.35; retrofits of legacy sites 1.4–1.6; traditional on-premises still at 1.8–2.0.
  • Densities: general IT clients 5–15 kW/rack; pods of 30–60 kW/rack for HPC/AI with contained aisles or DLC.

Examples: Digital Realty (Interxion), Equinix, DATA4, Global Switch, Nabiax, Vantage, CyrusOne, STACK Infrastructure, NTT GDC, Colt DCS, Iron Mountain.


3) Edge / Micro-DC

Small, distributed facilities close to users or network nodes, built for very low latency and local traffic (CDN, 5G, IoT, broadcast, last-mile e-commerce). They can be prefabricated modules in business parks, neighborhood telco hotels, or mini-hubs at towers and PoPs.

  • Power: 10–500 kW per site (metropolitan edge 1–2 MW).
  • Availability: from N/N+1 up to Tier III in critical nodes.
  • Use cases: caching, game streaming, video analytics, MEC 5G, industrial control.

Examples: AtlasEdge, EdgeConneX, Cellnex (telco infrastructure and edge).


4) Enterprise On-Prem

Data centers owned by the company on its own campus or industrial park (banking, insurance, public administration, defense, trading, R&D, sensitive IP). They coexist with the cloud in hybrid models: sensitive data and workloads on-prem; scaling and managed services in the cloud.

  • Power: 0.5–10 MW (larger outliers in banking/government).
  • Goals: Tier II–III; DR/BCP with secondary site.
  • Challenges: PUE modernization (inherited values of 1.6–2.0), limited density (5–10 kW/rack), long investment cycles.


5) Telecom Data Centers

Infrastructure that telecom operators run natively for their own networks: mobile core, BSS/OSS, voice and TV platforms, caches, peering. Usually distributed sites of modest size, with strong constraints on location, power, and resilience due to mission-critical 24/7 operations.

  • Power: variable; typically ≤ 5 MW per site.
  • Design: emphasis on electrical and optical redundancy and on remote operation.

Note: The same group can operate multiple types simultaneously (hyperscale for private cloud, colo for third parties, edge for 5G, on-prem for back-office, etc.).


Availability: Tier and Redundancy (Uptime Institute)

  • Tier I–II: basic/redundant.
  • Tier III: Concurrent Maintainability (N+1) without load impact.
  • Tier IV: Fault Tolerant (2N / 2(N+1)), with compartmentalization and fully independent dual distribution paths.

The most common target in new commercial and corporate developments is a Tier III design (or equivalent) with integrated systems testing and a continuous IT operations approach.

Table — Common Redundancy Schemes

| Level | Electrical | Cooling | Operational Implication |
|---|---|---|---|
| N | One route | One train | Maintenance requires shutdown |
| N+1 | Route + 1 reserve | Train + 1 reserve | Maintenance without shutdown (always “one more”) |
| 2N | Two complete routes | Two complete trains | Tolerates total failure of one route/train |
| 2(N+1) | Double N+1 | Double N+1 | Double resilience and operational flexibility |
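
As a rough illustration of how the schemes above translate into equipment counts, the sketch below sizes power or cooling units from the IT load (a minimal example of my own; the 6 MW hall and 1.5 MW unit capacity are hypothetical figures, and real designs also account for derating, growth, and maintenance windows):

```python
import math

def units_required(it_load_kw: float, unit_capacity_kw: float, scheme: str) -> int:
    """Number of power/cooling units needed under a given redundancy scheme.

    Illustrative sizing only: N is the minimum number of units that cover
    the IT load; each scheme then adds reserves as described in the table.
    """
    n = math.ceil(it_load_kw / unit_capacity_kw)  # base N
    if scheme == "N":
        return n                   # no reserve: maintenance means shutdown
    if scheme == "N+1":
        return n + 1               # one spare on top of N
    if scheme == "2N":
        return 2 * n               # two complete, independent trains
    if scheme == "2(N+1)":
        return 2 * (n + 1)         # two independent N+1 trains
    raise ValueError(f"unknown scheme: {scheme}")

# Hypothetical example: a 6 MW data hall served by 1.5 MW cooling units
for scheme in ("N", "N+1", "2N", "2(N+1)"):
    print(f"{scheme:>6}: {units_required(6000, 1500, scheme)} units")
# Output: N: 4, N+1: 5, 2N: 8, 2(N+1): 10
```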

Efficiency: PUE (Power Usage Effectiveness)

PUE = Total Data Center Energy / IT Energy. The closer to 1.00, the less energy is wasted on “non-IT” loads (climate control, losses, UPS, lighting).

  • Traditional on-prem: 1.8–2.0.
  • Next-generation colo/hyperscale: 1.15–1.25 (favorable climates and free cooling).
  • Current benchmark in Europe: ≤ 1.2–1.3 for new developments; WUE (water usage effectiveness) and the Energy Reuse Factor (heat reuse for district heating) are increasingly valued as well.
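
A minimal sketch of how PUE is derived from metered energy, and how it maps to the share of energy that never reaches the IT load (the monthly meter readings are hypothetical):

```python
def pue(total_facility_energy: float, it_energy: float) -> float:
    """PUE = total data center energy / IT energy (dimensionless, >= 1.0)."""
    return total_facility_energy / it_energy

def overhead_share(pue_value: float) -> float:
    """Fraction of total energy consumed by non-IT loads: (PUE - 1) / PUE."""
    return (pue_value - 1.0) / pue_value

# One month of metered data: 1,050 MWh at the utility meter, 840 MWh at the IT bus
p = pue(1_050, 840)
print(f"PUE = {p:.2f}, non-IT overhead = {overhead_share(p):.0%}")
# PUE = 1.25, non-IT overhead = 20%
```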

Indicative Power Ranges (2025)

  • Hyperscale: 20–100+ MW per phase; campus 100–500+ MW.
  • Wholesale colocation: 5–40 MW per phase; campus 20–150 MW.
  • Retail colocation: site 5–20 MW; client 1–500 kW.
  • Edge/Micro-DC: 10–500 kW per site (urban edge up to 1–2 MW).
  • On-prem: 0.5–10 MW.

Q4 2025 Trends Reshaping Design

  1. High-density AI
    Massive influx of racks at 60–120 kW and above (with peaks of 200–300 kW) using liquid cooling (direct-to-chip, single- or two-phase immersion), manifolds, and advanced water/glycol management. This requires redesigning data halls, power distribution, ceiling heights, structures, and protection systems (an illustrative coolant-flow sketch follows this list).
  2. Time-to-power and grid capacity
    The bottleneck is no longer floor space but the grid connection. Transformers, switchgear, and 132/220 kV substations have long lead times. Operators demand projects with early electrical permitting, secured grid slots, and, increasingly, BESS for peak-shaving or grid services.
  3. Sustainability and reporting
    Regulatory pressure (CSRD, EU taxonomy) and the goal of 24/7 clean energy (hourly carbon-free matching). Beyond PUE: PPA portfolios, WUE, heat reuse, HVO in gensets, H₂ pilot projects, and the transition to Li-ion batteries under updated NFPA/IEC frameworks.
  4. Standardization and modularity
    Design libraries, prefabricated modular data halls, repeatable pods, and common engineering criteria per country to accelerate multinational deployments.
  5. Security and compliance
    From CCTV and perimeter security to Zero Trust for east-west traffic, microsegmentation, secure enclaves for AI, and integrated physical/digital controls (ISO 27001, ISO 22301, EN 50600, PCI DSS, TISAX, ENS in Spain).
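
To put an order of magnitude on trend 1, the sketch below estimates the coolant flow a high-density, direct-to-chip rack requires, using Q = ṁ·cp·ΔT. The water-like coolant properties and the 10 K temperature rise are assumptions of mine; real CDU sizing depends on the coolant mix, approach temperatures, and vendor specifications.

```python
def coolant_flow_lpm(heat_kw: float, delta_t_k: float = 10.0,
                     cp_j_per_kg_k: float = 4186.0,
                     density_kg_per_l: float = 1.0) -> float:
    """Coolant flow (L/min) needed to remove `heat_kw` at a given temperature rise.

    Based on Q = m_dot * cp * delta_T, with water-like properties by default.
    """
    m_dot_kg_s = (heat_kw * 1000.0) / (cp_j_per_kg_k * delta_t_k)   # kg/s
    return m_dot_kg_s / density_kg_per_l * 60.0                     # L/min

# A hypothetical 120 kW direct-to-chip rack with a 10 K coolant temperature rise
print(f"{coolant_flow_lpm(120):.0f} L/min per rack")   # ~172 L/min
```

At flow rates of this order, manifolds, leak detection, and CDU placement stop being details and start shaping the layout of the hall, which is why this trend forces the redesigns described above.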

What Operators Are Currently Looking for in a Project

  • Actual time-to-power: not just contracted power, but electrical milestones (OTC, permits, substation, backup).
  • Usage flexibility: adaptable layouts and a floor plate able to scale to hyper-density (from 10 kW/rack to 80–120 kW/rack) without rebuilding the building.
  • Clear delivery model:
    • Shell & Core: structure, envelope, exterior plant yards, and building core; the client performs the fit-out (TI).
    • Powered Shell: shell plus distributed power and basic cooling; the client fits out the data halls.
    • Turnkey: fully finished, with data halls ready to operate.
  • Multinational standardization: consistent design criteria, bill of materials, and playbooks across countries for rapid scaling.
  • Commercial versatility: if no “anchor tenant,” a versatile design that supports multiple operator profiles (cloud, AI, colo, banking) with minimal modifications.

Cheat-sheet for Investors and Project Owners

  • Electrical feasibility: critical. Without a clear connection point and a substation schedule, the civil-works schedule means nothing.
  • Tier and PUE: necessary but not sufficient. Ask about procedures, integrated systems testing, and operational KPIs (MTBF/MTTR, brown-out incidents, cooling SLAs).
  • Densities and cooling: design 20–30% of the hall as liquid-ready; plan water supply and layout for manifolds and CDUs.
  • Interconnection: plan for dual MMRs, physically separate routes, and agreements with carriers/IXPs from the start.
  • Permits and community: acoustic, water, traffic, heat reuse impacts; early dialogue with authorities; thermal reuse plan.

Frequently Asked Questions

What is the practical difference between a hyperscale and a colo?
A hyperscale facility is owned by the cloud provider or a major operator and hosts its own regions/platforms; it is optimized for volume and internal operations. A colo is multi-tenant: it sells space, power, and interconnection to third parties (banks, SaaS providers, clouds, telcos), with retail (from 1 kW) and wholesale (full halls) offerings.

What does “Tier III” really mean, and why is it the usual goal?
Tier III guarantees concurrent maintainability (N+1): planned maintenance can be performed without shutting down the IT load. It’s the cost vs. availability balance for most commercial and corporate uses. Tier IV adds full fault tolerance (2N), with higher cost and complexity, reserved for extreme mission-critical environments.

What PUE is “good” in 2025?
It depends on the mix and the climate. A new site in Europe operating at 1.15–1.30 is considered very efficient. For retrofits of existing on-prem facilities, values of 1.6–2.0 are still common; modernization plans aim to shave 0.2–0.4 points or more through containment improvements, free cooling, and high-efficiency UPS.
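
As a back-of-the-envelope illustration of what those 0.2–0.4 points are worth (the 2 MW IT load, the 1.8 to 1.5 improvement, and the 0.10 EUR/kWh electricity price are hypothetical assumptions):

```python
def annual_savings_eur(it_load_mw: float, pue_before: float, pue_after: float,
                       eur_per_kwh: float = 0.10) -> float:
    """Annual energy-cost saving from a PUE improvement at constant IT load.

    Total facility energy = IT energy * PUE, so the saving is IT energy * delta-PUE.
    """
    it_kwh_per_year = it_load_mw * 1_000 * 8_760
    return it_kwh_per_year * (pue_before - pue_after) * eur_per_kwh

# A 2 MW on-prem room improved from PUE 1.8 to 1.5 (-0.3 points)
print(f"{annual_savings_eur(2, 1.8, 1.5):,.0f} EUR/year")   # ~525,600 EUR/year
```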

What rack density should I expect for AI?
For general IT, 10–20 kW/rack is enough. For next-gen AI, plan on 40–80 kW/rack as a baseline and reserve capacity for 100–150 kW (or more) with liquid cooling. The cost and complexity come not from the servers themselves but from power, cooling, and water distribution.
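
A quick sketch of why density drives the whole layout: the same power envelope collapses into far fewer racks as density rises (illustrative only; it ignores stranded capacity, cooling limits, and aisle overheads, and the 10 MW hall is a hypothetical figure):

```python
import math

def racks_needed(hall_mw: float, kw_per_rack: float) -> int:
    """How many racks a given power envelope translates into at a design density."""
    return math.ceil(hall_mw * 1_000 / kw_per_rack)

# A hypothetical 10 MW AI hall at different design densities
for density in (20, 40, 80, 120):
    print(f"{density:>3} kW/rack -> {racks_needed(10, density):>3} racks")
#  20 kW/rack -> 500 racks
#  40 kW/rack -> 250 racks
#  80 kW/rack -> 125 racks
# 120 kW/rack ->  84 racks
```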

via: LinkedIn
