Europe spent the weekend counting queues, suitcases, and minutes. A “cybersecurity incident” at Collins Aerospace, a key provider of automated check-in and boarding systems, took down cMUSE, its common-use cloud platform, forcing several major airports across the continent to disconnect kiosks, reprint boarding passes by hand, and fall back on backup laptops. The platform became a single point of failure: the outage triggered delays and cancellations from Friday onward and continued to disrupt operations into Monday, especially in Brussels, Berlin, and London Heathrow. The incident comes just weeks before NIS2, the European directive expanding the scope of “critical infrastructure” to include IT providers that support essential services: precisely the type of actor that failed here.
What is cMUSE (and why its outage paralyzes half the airport)
The affected software is cMUSE, Collins’ common-use platform enabling airlines and airports to share the same check-in counters, kiosks, and boarding gates via a centralized backend. According to the provider, cMUSE offers:
- web agent stations,
- fleets of kiosks,
- mobile check-in points
… all managed from a shared control plane. The approach is cost-effective: it streamlines updates and simplifies integration with airline DCS (departure control systems). But the flip side is clear: if the central control plane goes down, the entire check-in front collapses. And not just counters: printers, scanners, and biometric devices are usually integrated directly with MUSE, so they go down with it.
Collins confirmed a “cybersecurity-related incident” but did not specify the technical vector. The operational symptoms, however, were clear: a backend outage and loss of function at kiosks and agent workstations, with airlines and airports reverting to manual processes to keep operations going.
Operational timeline: from a black Friday to an uneven Monday recovery
- Friday: major European hubs are hit. Heathrow, Berlin, and Brussels are among those affected, while other airports report normal operation.
- Weekend: Berlin and Heathrow report notable improvement on Sunday after deploying manual measures and patches.
- Monday: delays and cancellations persist at multiple points:
- Brussels is the most impacted: on Sunday, airlines were asked to cancel nearly 140 Monday departures because no secure version of the system was available. By mid-morning, around 60 flights had been canceled, with Brussels Airlines hardest hit and check-in and boarding handled entirely by hand. EasyJet and Vueling each canceled six flights.
- Berlin warns of long waits and delays of around an hour on departures, worsened by a passenger surge during the Berlin Marathon.
- Heathrow maintains a message of gradual recovery: “Most flights are still operating,” but travelers are advised to check their flight status before leaving and not to arrive more than 3 hours before departure for long-haul flights or 2 hours for short-haul.
- Dublin (T2) reports additional time needed for check-in and bag drop since some airlines still use manual solutions; T1 operates normally.
Meanwhile, Collins announced on Monday morning that it is in the final stages of the updates needed to resolve the issue.
Manual operations: resilient, but not a graceful failure
Queues, paper forms, and handwritten boarding passes saved the weekend. But, as ground staff admit, the contingency worked because of human muscle, not because the systems failed gracefully. In a process as tightly integrated as an airport's (check-in, bag tagging, security, boarding, and the connection to the DCS), every second the backend is unresponsive multiplies across counters, conveyor belts, and gates. Cloud applications are flexible; operational bottlenecks aren't.
This incident offers three operational lessons:
- SPOF (Single Point of Failure): centralizing data and logic makes life easier… until that one component fails and everything collapses.
- Insufficient graceful degradation: fallbacks should be able to detach from the central backend and keep issuing boarding passes and bag tags, and boarding passengers, from cached data when it is safe to do so (see the sketch after this list).
- Dependency spread: not only the kiosks are affected; printers, readers, and biometric devices integrated with MUSE go down with them, chaining the outage.
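To make the graceful-degradation point concrete, here is a minimal sketch (Python; the endpoint, database table, and field names are hypothetical, not the cMUSE API) of a check-in workstation that falls back to a locally cached passenger manifest when the central backend stops responding:

```python
# Minimal sketch of graceful degradation at a check-in workstation.
# All names (CENTRAL_API, manifest table, fields) are illustrative only.
import sqlite3
import time

import requests  # assumed HTTP client for the hypothetical central backend

CENTRAL_API = "https://checkin.example.internal/api/v1"  # placeholder URL
LOCAL_CACHE = sqlite3.connect("manifest_cache.db")       # refreshed while the backend is reachable


def check_in(booking_ref: str, flight: str) -> dict:
    """Return a boarding record, degrading to cached data if the backend is down."""
    try:
        resp = requests.post(
            f"{CENTRAL_API}/checkin",
            json={"booking_ref": booking_ref, "flight": flight},
            timeout=3,  # fail fast: a slow backend must not freeze the counter
        )
        resp.raise_for_status()
        return {"source": "central", **resp.json()}
    except requests.RequestException:
        # Degraded mode: look the passenger up in the locally cached manifest.
        row = LOCAL_CACHE.execute(
            "SELECT name, seat FROM manifest WHERE booking_ref = ? AND flight = ?",
            (booking_ref, flight),
        ).fetchone()
        if row is None:
            raise RuntimeError("Passenger not in cached manifest; manual handling needed")
        name, seat = row
        # Provisional record, flagged so it can be reconciled with the DCS later.
        return {
            "source": "offline-cache",
            "name": name,
            "seat": seat,
            "issued_at": time.time(),
            "needs_sync": True,
        }
```

The two design choices that matter are the short timeout, so a struggling backend cannot freeze the counter, and the explicit needs_sync flag, so provisional records can be reconciled with the DCS once the central system is back.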
A tightening regulatory context: NIS2 and EASA Part-IS
The timing is no coincidence. In October, NIS2 will come into force: a European directive expanding the scope of “critical infrastructure” to include IT providers supporting essential services in sectors such as aviation. In parallel, EASA's Part-IS pushes “aviation-grade” cybersecurity practices (risk management, incident reporting, continuity, and supply-chain security) onto shared ground platforms: baggage handling, check-in, and common-use systems.
What are the implications for incidents like cMUSE?
- Stricter notification and technical obligations for providers (not only airlines/airports).
- Risk assessment should consider third-party failures as systemic risk.
- Contingency and recovery plans should avoid reliance on a single cloud component.
The fine print of NIS2 and Part-IS fits the weekend pattern: failure of a critical provider + pan-European impact + manual contingency = insufficient resilience.
What we know (and don’t) about the attack
- Confirmed: Collins refers to a “cyber-related incident”.
- Not disclosed: attack type, scope of impact on cMUSE infrastructure (cloud and/or on-prem deployments), affected DCS links, or whether data access or exfiltration occurred.
- Established: interruption of the control layer and degradation of kiosks and agent stations, with interdependencies affecting printers and biometrics.
- Operational result: reversion to paper, backup computers, and adjusted slots to maintain schedules.
What operators and airlines can do (today)
- Reinforce graceful-failure scenarios
- Local printing of boarding passes and bag tags from cached data, with verification afterwards.
- Secure offline boarding, with synchronized lists for when connectivity is restored.
- Local bypass for bag drop and biometrics, with audit trails.
- Agree on realistic RTO/RPO with providers
- Recovery objectives for each function (kiosks, agents, self-bag drop).
- Regular drills that go beyond paper planning: simulations of backend outages.
- Reduce SPOF
- Hybrid architectures: critical functions on-premises + cloud elasticity.
- Cross-region/provider redundancies where contracts permit.
- Segmentation and least privilege
- Ensure a common platform incident doesn’t ripple out into other subsystems (baggage, gates, etc.).
- Telemetry and rate limits on DCS integrations (see the sketch after this list).
- Passenger communication
- Proactive messages: “Arrive X hours early / check in online / confirm flight before leaving”.
- Additional staffing at counters and prioritization of critical connections.
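To illustrate the “telemetry and rate limits on DCS integrations” item above, here is a minimal sketch (Python; the class, parameters, and injected call are invented for the example, not any vendor API) that wraps DCS calls with a token-bucket rate limit and a simple circuit breaker:

```python
# Sketch of a guarded DCS client: token-bucket rate limiting plus a simple
# circuit breaker. GuardedDcsClient and its defaults are illustrative only.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dcs-guard")


class GuardedDcsClient:
    def __init__(self, call_fn, rate_per_sec=10, failure_threshold=5, cooldown=30):
        self.call_fn = call_fn              # the real DCS call, injected by the caller
        self.rate_per_sec = rate_per_sec
        self.tokens = rate_per_sec          # token-bucket state
        self.last_refill = time.monotonic()
        self.failures = 0
        self.failure_threshold = failure_threshold
        self.cooldown = cooldown
        self.open_until = 0.0               # circuit stays "open" (blocking) until this time

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.rate_per_sec,
                          self.tokens + (now - self.last_refill) * self.rate_per_sec)
        self.last_refill = now

    def request(self, *args, **kwargs):
        now = time.monotonic()
        if now < self.open_until:
            raise RuntimeError("Circuit open: DCS integration temporarily disabled")
        self._refill()
        if self.tokens < 1:
            raise RuntimeError("Rate limit exceeded for DCS integration")
        self.tokens -= 1
        try:
            result = self.call_fn(*args, **kwargs)
            self.failures = 0               # healthy call resets the failure counter
            return result
        except Exception:
            self.failures += 1
            log.warning("DCS call failed (%d consecutive failures)", self.failures)
            if self.failures >= self.failure_threshold:
                self.open_until = now + self.cooldown
                log.error("Opening circuit for %ss after repeated DCS failures", self.cooldown)
            raise
```

A wrapper like this does not fix a failing backend, but it keeps one struggling integration from saturating everything wired to it, and the logged failure counts give operators an explicit signal instead of silent timeouts.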
Impact at airports (what was communicated to the public)
- Brussels: airlines were asked to cancel nearly 140 departures the day before; by early morning there were around 60 cancellations and manual operations at several airlines, with Brussels Airlines most affected. Passengers were advised to check their flights, come to the airport only if their flight is confirmed, and check in online.
- Berlin Brandenburg: “Due to a provider outage, there are longer waits.” The airport recommends using online check-in, kiosks, and fast bag drop, and warns of delays of around an hour during the busy Berlin Marathon period.
- London Heathrow: “The majority of flights are still operating; we continue resolving and recovering.” Travelers are advised to verify their flight status before heading out and not to arrive more than 3 hours before departure for long-haul flights or 2 hours for short-haul.
- Dublin (T2): some airlines are still using manual workarounds, causing longer wait times for check-in and bag drop; T1 is operating normally. Passengers should allow 2 hours for short-haul and 3 hours for long-haul flights, especially if checking in at T2.
Is this an isolated incident or a symptom of a fragile architecture?
The cMUSE case is not the first warning. Digital transformation has driven centralization in the name of efficiency: a cloud backend orchestrates everything, and aviation is no exception. The problem is that, without resilience by design, cost and operational gains turn into systemic fragility.
Three discussion points:
- Elasticity ≠ resilience: Scaling web stations helps, but safe degradation when the central layer fails is equally essential.
- Local autonomous layers: kiosks, printers, and validators should operate in degraded mode without constant reliance on central control (a sketch follows this list).
- Supply chain as attack surface: NIS2 reminds us that the weak link can be a provider; auditing and testing should include external vulnerabilities too.
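As a sketch of what a “local autonomous layer” could look like at a boarding gate, the following Python snippet (all file names and functions are hypothetical) validates boarding passes against a periodically synchronized local passenger list and queues every scan for reconciliation once central control is reachable again:

```python
# Sketch of a gate validator in degraded mode: passes are checked against a
# locally synchronized passenger list, and every scan is queued as an audit
# trail so it can be reconciled with the central system later.
import json
import time
from pathlib import Path

LOCAL_LIST = Path("flight_passengers.json")       # refreshed while connectivity lasts
OFFLINE_LOG = Path("offline_boarding_log.jsonl")  # append-only queue for later sync


def validate_offline(boarding_pass_id: str) -> bool:
    """Accept or reject a scan using only the local passenger list."""
    passengers = set(json.loads(LOCAL_LIST.read_text()))
    accepted = boarding_pass_id in passengers
    # Record every decision, accepted or not, for post-incident reconciliation.
    with OFFLINE_LOG.open("a") as log:
        log.write(json.dumps({
            "pass_id": boarding_pass_id,
            "accepted": accepted,
            "ts": time.time(),
        }) + "\n")
    return accepted


def reconcile(send_fn) -> None:
    """Replay the offline log against the central backend once it is back."""
    if not OFFLINE_LOG.exists():
        return
    for line in OFFLINE_LOG.read_text().splitlines():
        send_fn(json.loads(line))   # send_fn is the (restored) central API call
    OFFLINE_LOG.unlink()            # clear the queue only after a successful replay
```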
Collins’ response and the day after
On Monday, Collins stated it is in the final stages of the updates needed to resolve the issue. The industry now awaits technical details: the attack vector, the scope (cloud and/or on-prem), containment measures, the security controls activated, and patch timelines. Under NIS2, authorities will demand clarity in notifications, corrective actions, and plans to prevent future breaches.
Passenger message: what to do if you still need to fly
- Check your flight status on your airline’s website/app or at the airport before leaving home.
- Check in online and carry your boarding pass on your phone (and print as backup if possible).
- Allow extra time:
- Heathrow: up to 3 hours (long haul) and 2 hours (short haul). Don’t arrive too early.
- Berlin: plan for about 1 hour of wait during busy periods.
- Dublin T2: add buffer time for check-in and bag drop.
- Luggage: if using self-drop, follow instructions; if not, prepare documentation and labels as requested.
- Patience: some processes may still be manual; ground staff are working hard to maintain operations.
Conclusion: a storm accelerating the cybersecurity resilience agenda
The cyberattack on Collins did not stop Europe, but it stretched operational capacity and exposed a systemic weakness: excessive reliance on a single backend. Recovery was possible thanks to the skill of thousands of ground professionals and good old pen and paper. In the short term, the sector will patch; in the medium term, NIS2 and EASA Part-IS will drive a rethink of fallbacks, contracts, drills, and hybrid architectures. Passengers will demand explanations; the industry will seek guarantees. The next time the common control plane shakes, operations should degrade gracefully, not fail outright.
Frequently Asked Questions
What is cMUSE and why does its failure cause long queues?
cMUSE is Collins Aerospace’s common-use platform that centralizes check-in, kiosks, and boarding for several airlines at an airport. When its backend fails, web stations and kiosks stop functioning properly, requiring manual processes that slow down operations.
Which airports remain affected and what measures are recommended?
On Monday, Brussels was the most impacted (nearly 140 cancellations requested the previous day, around 60 flights actually canceled by mid-morning); Berlin warned of longer waits and delays of about an hour; Heathrow reported a gradual recovery and advised confirming flights before traveling; Dublin T2 asked for extra buffer time for check-in and bag drop. All recommend verifying flight status and checking in online.
Has this been a confirmed cyberattack? Was data stolen?
Collins described it as a “cyber-related incident” and said they are finalizing updates to rectify it. The attack vector and data exfiltration are not publicly disclosed; investigations will clarify scope and response measures.
What changes with NIS2 for these providers?
NIS2, effective in October, extends the scope of “critical infrastructure” to IT providers supporting essential services such as aviation. It requires risk management, incident notification, and continuity planning. In aviation, this is complemented by EASA Part-IS, which mandates “aviation-grade” cybersecurity practices for shared systems such as baggage handling, check-in, and common-use ground systems.
How can airports reduce reliance on a single backend?
By combining hybrid architectures (critical functions on-premises plus cloud elasticity), degraded modes (offline check-in and boarding), cross-region or cross-provider redundancies, subsystem segmentation, and regular drills against agreed RTO/RPO. All of this should be reflected in contracts and SLAs.
Sources: euronews