Cisco Raises the Stakes in AI Networks with Silicon One G300 and 102.4 Tbps Switches

The race to scale data centers for artificial intelligence is no longer just about who has the most GPUs, but about who can keep them running at full throttle without waiting on the network. At that point the bottleneck shifts from compute to data movement: latency, congestion, packet loss, and increasingly complex operations as deployments multiply beyond hyperscale environments.

At Cisco Live EMEA (Amsterdam), Cisco unveiled a new set of solutions aimed at this “second phase” of AI in data centers: the Silicon One G300, a switching chip capable of 102.4 Tbps, new Cisco N9000 and Cisco 8000 systems built around this silicon, high-density optics for next-generation links, and an evolution of their operational layer with Nexus One, designed to simplify deployment and management of fabrics for AI, whether on-premise or in the cloud.

The network is now part of the “compute”

Cisco’s message aligns with a concept almost all major infrastructure providers are emphasizing: in large-scale training and inference, the network ceases to be just an accessory and becomes a core component of the cluster’s overall performance. If data doesn’t arrive on time, the GPU doesn’t produce; and if the GPU doesn’t produce, hourly costs skyrocket.

Cisco quantifies this tension: its "Intelligent Collective Networking" — integrated into Silicon One G300 — promises a 33% improvement in network utilization and a 28% reduction in job completion time compared with non-optimized route selection. In business terms, that means more work completed on the same hardware: more productivity per GPU-hour.
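To make the GPU-hour argument concrete, here is a back-of-the-envelope sketch using only Cisco's quoted 28% job-completion-time figure; the cluster size, job duration, and hourly rate are illustrative assumptions, not Cisco numbers.

```python
# Illustrative cost math for a 28% reduction in job completion time (JCT).
# All inputs below except the 28% figure are assumptions for illustration.
baseline_jct_hours = 10.0    # assumed duration of one training job
gpus = 512                   # assumed cluster size
cost_per_gpu_hour = 2.0      # assumed $/GPU-hour

optimized_jct_hours = baseline_jct_hours * (1 - 0.28)
baseline_cost = baseline_jct_hours * gpus * cost_per_gpu_hour
optimized_cost = optimized_jct_hours * gpus * cost_per_gpu_hour

print(f"Job time: {baseline_jct_hours:.1f} h -> {optimized_jct_hours:.1f} h")
print(f"Cost per job: ${baseline_cost:,.0f} -> ${optimized_cost:,.0f}")
print(f"Jobs per week on the same hardware: "
      f"{168 / baseline_jct_hours:.1f} -> {168 / optimized_jct_hours:.1f}")
```

Under these assumptions, the same 512-GPU cluster finishes each job in 7.2 hours instead of 10 and runs roughly 39% more jobs per week — which is what "more productivity per GPU-hour" cashes out to.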

What is Silicon One G300 and what does it change?

Silicon One G300 is described as a switch silicon designed for massive and distributed clusters, with three main priorities: performance, security, and reliability. Cisco highlights several features targeting classic AI traffic issues (bursty, sensitive to microinterruptions, heavily penalized by packet losses):

  • Shared high-capacity buffer to absorb traffic spikes without causing drops that “stall” jobs.
  • Route-based load balancing to improve traffic distribution and better handle link failures.
  • Proactive telemetry to ensure operations aren’t flying blind as the fabric grows.
  • Programmability to evolve functionalities post-deployment, protecting investments.
  • Hardware-integrated security to apply controls at line rate without sacrificing throughput.

Collectively, this approach aims to address a critical point: as AI “democratizes” and reaches neo-clouds, sovereign clouds, operators, and enterprises, the network must scale without requiring an unfeasible number of specialists.

New N9000 and 8000: from silicon to system (and rack)

Cisco complements the announcement with new N9000 and 8000 systems (fixed and modular) based on G300, designed to meet the thermal and electrical demands of these environments. One of the most notable features is the option for 100% liquid cooling, which Cisco associates with energy efficiency improvements of “nearly 70%” and bandwidth density that can consolidate what previously required multiple systems into a single unit.

The takeaway is pragmatic: when a cluster starts measuring power in megawatts and talking about dense racks, cooling ceases to be just a facilities concern and becomes part of network design.

Optics for the 1.6T era and watt savings

Connectivity is also evolving. Cisco announced two key product lines:

  • 1.6 Tbps OSFP optics, aimed at 1.6T switch-to-NIC links, with switch-to-server connectivity options at 1.6T/800G/400G/200G.
  • 800G LPO (Linear Pluggable Optics), promising to reduce module power consumption by 50% compared to retimed optics. Cisco adds an important side benefit: combining LPO with the new systems could cut up to 30% of the switch’s total power consumption, improving reliability and sustainability.
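To see why the quoted "50% lower module power" matters at fabric scale, here is a rough fleet-level calculation; the per-module wattage and link count are assumptions for illustration, not Cisco figures.

```python
# Rough fleet-level savings from halving optical-module power (800G LPO
# vs retimed optics). Per-module wattage and link count are assumptions.
retimed_module_watts = 15.0   # assumed power draw of a retimed 800G module
links = 10_000                # assumed number of optical links in the fabric
modules_per_link = 2          # one pluggable module at each end of a link

lpo_module_watts = retimed_module_watts * 0.5   # the quoted 50% reduction
retimed_total_kw = retimed_module_watts * links * modules_per_link / 1000
lpo_total_kw = lpo_module_watts * links * modules_per_link / 1000

print(f"Optics power: {retimed_total_kw:.0f} kW -> {lpo_total_kw:.0f} kW "
      f"(saves {retimed_total_kw - lpo_total_kw:.0f} kW)")
```

Even with these modest assumed numbers, the optics alone swing on the order of 150 kW — continuous draw that also has to be cooled, which is why Cisco ties LPO to total switch power and reliability.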

In a market where every watt counts — due to cost and electrical supply limits — these figures are more than marketing; they’re a direct lever on total operational costs.

Nexus One and “AgenticOps”: less friction for IT teams

Beyond the hardware, Cisco promotes a key idea: scaling AI can’t be a handcrafted process every time a fabric expands. That’s why Nexus One is positioned as a unified management platform linking silicon, systems, optics, software, and automation under a seamless operational experience.

Among the new features, Cisco mentions:

  • Unified Fabric for deploying and adapting networks even across multiple sites, with API-driven automation.
  • “Job-aware” observability: correlating network telemetry with AI workload behavior.
  • Native integration with Splunk planned for March, especially relevant for sovereign environments: analyzing telemetry “where the data lives,” without moving it outside.

Additionally, Cisco introduces AI Canvas as an interface for “AgenticOps” in data center networks: a guided troubleshooting approach with “human-in-the-loop” conversations translating complex problems into concrete actions.

A strategy beyond hyperscale

Cisco emphasizes that this generation is designed for a broader client landscape: hyperscale providers, neo-clouds, sovereign clouds, operators, and enterprises. To bolster this message, they list partnerships and validations within the ecosystem (AMD, Intel, NVIDIA, NetApp, DDN, VAST, among others), highlighting that in AI, value comes from the integrated ecosystem — combining network, compute, storage, and operations.

Cisco confirms that G300, systems based on G300, and the new optics will ship this year, marking the start of a new phase where AI “backend networking” becomes a strategic product in its own right.


Frequently Asked Questions

What does it mean for a switching chip to be 102.4 Tbps?
It indicates the chip's aggregate switching capacity: the silicon can forward up to 102.4 terabits per second of traffic across all of its ports, enabling more high-speed ports per system and greater bandwidth density for large-scale AI fabrics.
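A quick way to ground that figure is to divide the aggregate capacity by common port speeds, assuming (for illustration) that all of it is exposed as front-panel ports:

```python
# Port counts implied by 102.4 Tbps of switching capacity, assuming the
# full capacity is exposed as front-panel ports at a single speed.
capacity_tbps = 102.4

port_counts = {speed: int(capacity_tbps * 1000 / speed)
               for speed in (1600, 800, 400)}   # port speeds in Gbps

for speed, ports in port_counts.items():
    print(f"{ports} ports at {speed} Gbps")
```

That works out to 64 ports of 1.6T, 128 ports of 800G, or 256 ports of 400G — which is why a single G300-based system can consolidate what previously required several boxes.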

Why is liquid cooling now appearing in network equipment?
Because density and power consumption increase with 800G and 1.6T links. Liquid cooling allows sustained higher performance and can improve energy efficiency in demanding deployments.

What is LPO 800G and why is it important for AI data centers?
LPO (Linear Pluggable Optics) omits the retiming electronics found in conventional retimed optics, lowering module power consumption. Across networks with thousands of links, that means less heat to remove, a lower power bill, and more operational headroom.

What benefits does unified management like Nexus One bring to an AI fabric?
It reduces deployment and operational complexity, enhances end-to-end visibility (from network to GPU), and facilitates automation and troubleshooting, especially as environments grow across multiple sites or require strict data sovereignty.

via: newsroom.cisco
