The NorthC Fire in Almere Exposes the Fragility of the Physical Cloud

The fire reported on Thursday, May 7th at NorthC’s data center in Almere, near Amsterdam, has once again highlighted a truth often lost amid discussions about the cloud, artificial intelligence, and digital sovereignty: the Internet still relies on buildings, energy, cooling, cabling, generators, and contingency plans that must function when something goes wrong.

NorthC confirmed that the fire started around 8:45 AM at its data center on Rondebeltweg, in the Sallandsekant business park. The building was evacuated after safety protocols were activated, and according to available information, no injuries have been reported. The company explained that the fire was located at the back of the building, where technical installations are housed, and that it could not yet determine the cause or the exact extent of the damage to the infrastructure.

The latest official update provided by NorthC at 8:50 PM local time indicated that the fire had been downgraded to GRIP 1 status and that the NL-Alert sent to the local population had been withdrawn. This means the fire was under control, although the company still awaited authorization to enter the building and conduct an initial inspection. Its technical teams were prepared to assess the damage and determine what would be needed to restore power and connectivity as soon as possible.

An 11 MW data center in a strategic location

The Almere data center is no small facility. NorthC describes it as a 26,000 m² site with 11 MW of installed electrical capacity, built to Tier 3 standards, powered by certified green energy, and connected to the company's other data centers in the Netherlands. The facility is located at Rondebeltweg 62, with direct access to the A6 and within Amsterdam's metropolitan area, one of Europe's major digital hubs.

The emergency response was extensive. According to Data Center Dynamics, five fire trucks, three ladder vehicles, and a firefighting robot took part in the operation. A diesel tank was also preventively cooled, with support from a specialized vehicle sent from Lelystad Airport. The situation forced authorities to keep the building inaccessible for hours while emergency personnel worked to control the fire and enable safe entry.

NorthC stated mid-afternoon that its priority was to assess the impact as soon as the building was released by emergency services. Meanwhile, its teams had prepared multiple scenarios to restore power as quickly as possible. This is critical: in a data center, even if the server rooms suffer no direct fire damage, the loss or disruption of electrical, cooling, or connectivity infrastructure can interrupt critical services.

The cause of the fire remains unclear, and the full extent of the damage is also still unknown. Out of caution, any technical conclusions about the origin must wait until the investigations by NorthC and the relevant authorities are complete.

Public services, universities, and cloud impacted

The incident was not confined to the data center's walls. Data Center Dynamics reported impacts to Utrecht University, the regional water authority Hoogheemraadschap De Stichtse Rijnlanden, Transdev, and members of SURF, the Dutch ICT service provider for education and research. IBM Cloud also reported issues related to its Amsterdam 03 data center.

The Next Web went further, highlighting the chain reaction triggered by the fire: Utrecht University was partially offline, SURF warned of disruptions for academic institutions, and Transdev experienced problems with communication systems for buses and trams in Utrecht province. According to that report, the public transportation control center had servers in the affected facility and had not migrated to a backup location.

This is arguably the most crucial point from a business continuity perspective. The fire may originate with a provider, but the actual impact depends on how clients' architectures are designed. A data center may have internal redundancy, high-availability standards, and standby equipment, but if an organization concentrates critical services in a single location without proven recovery plans, the risk remains.

This is not a critique solely of the affected entities. It's a broader lesson for any company, university, government agency, or provider operating critical services. Leasing colocation, cloud, or hosting capacity in a quality data center does not replace a high-availability strategy. Replication, verified backups, recovery plans, periodic testing, and, for sensitive services, deployments across multiple physical sites are all essential.
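To make that concrete, here is a minimal sketch of the kind of client-side failover such a strategy implies. Everything in it is hypothetical: the endpoint URLs, the health-check path, and the two-site layout are invented for illustration and do not describe NorthC or any of the affected organizations.

```python
# Minimal two-site failover sketch (hypothetical endpoints, for illustration).
# A client probes the primary site's health endpoint and, if the whole
# facility is unreachable, routes traffic to a replica in a second location.
import urllib.request
import urllib.error

PRIMARY = "https://primary.example-site-a.net/health"    # hypothetical
SECONDARY = "https://replica.example-site-b.net/health"  # hypothetical


def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers an HTTP health probe in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Covers DNS failures, refused connections, timeouts, and HTTP errors:
        # what a site-wide power or network loss looks like from the outside.
        return False


def pick_endpoint() -> str:
    """Prefer the primary site; fail over to the replica if it is down."""
    if is_healthy(PRIMARY):
        return PRIMARY
    return SECONDARY


if __name__ == "__main__":
    print("Routing traffic to:", pick_endpoint())
```

In practice this logic usually lives in DNS failover, load balancers, or orchestration layers rather than in application code, but the principle is the same: a critical service must know how to reach more than one physical site.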

Resilience isn’t just bought with certified data centers

Modern data centers are designed to mitigate risks, but they are not invulnerable. Fires, human error, flooding, connectivity incidents, or failures in batteries, generators, distribution panels, or cooling systems can all affect a facility. The key is how the damage is contained and how the operator and its clients respond.

The Almere case arrives at a time when Europe is accelerating its dependence on digital infrastructure. Cloud services, AI, e-government, connected healthcare, transportation, education, research, and financial services increasingly rely on regional data centers. When one suffers an incident, the physical layer supporting all that digitalization becomes highly visible.

NorthC is a prominent player in the European colocation market. Founded in 2019 after the merger of TDCG and NLDC, it operates data centers in the Netherlands, Germany, and Switzerland. In December 2025, Antin Infrastructure Partners announced the acquisition of NorthC from DWS, as part of a broader push into European digital infrastructure platforms.

The incident does not fundamentally call into question the security of NorthC or other European data centers. Instead, it opens a more practical conversation about physical dependency. Resilience isn't just about buildings meeting standards; it's about ensuring each critical service can survive the temporary loss of an entire facility.

For impacted clients, the coming hours will be decisive. Once NorthC gains access, it will need to assess the status of the electrical supply, connectivity, and technical equipment, identify the affected areas, and establish realistic recovery timelines. For the broader sector, this fire offers a useful warning: continuity cannot be assumed solely because infrastructure sits inside a professional data center.

The cloud is flexible, but not insubstantial. It has physical addresses, electrical panels, fuel tanks, fiber routes, cooling systems, and personnel entering buildings when firefighters permit. The Almere fire reminds us that any serious digital strategy must start with recognizing this reality.

Frequently Asked Questions

Where did the NorthC fire occur?
It happened at NorthC’s data center located at Rondebeltweg 62, Almere, within the Sallandsekant business park, near Amsterdam.

Were there any injuries?
According to NorthC, everyone present was evacuated in time, and no injuries have been reported.

What services were affected?
Directly or indirectly, the affected parties include Utrecht University, SURF, Hoogheemraadschap De Stichtse Rijnlanden, Transdev, and IBM Cloud's Amsterdam 03 data center, among others.

What lessons does this incident teach businesses?
That hosting critical systems in a professional data center is not enough on its own. Critical services require geographic redundancy, proven recovery procedures, verified backups, and clear contingency plans so they can keep operating if one site goes down.

