SpiNNcloud Systems is revolutionizing AI with supercomputers inspired by the human brain.

SpiNNcloud Systems has announced the launch of SpiNNaker2, a brain-inspired chip that promises to transform artificial intelligence (AI) and high-performance computing (HPC). The design is based on the concept of mesh computing and aims to deliver large-scale, energy-efficient AI for a wide range of applications.

The Concept of Brain-Inspired Mesh Computing

The concept of large-scale mesh computing, built from many low-power nodes, was initially developed at the University of Manchester in the UK. TU Dresden later extended the idea, incorporating functional principles of the human brain into the design of modern chips. Mesh computing mirrors the brain’s structure: a large number of small computing nodes operate in an event-driven fashion, and the connections between them adapt over time to learn new concepts, much as synapses do in the brain.
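The principle can be sketched in a few lines of Python. The sketch below is purely illustrative, assuming a toy mesh of integrate-and-fire nodes with a simple Hebbian-style weight update; the class names, parameters, and learning rule are invented for this example and are not SpiNNcloud’s actual software stack.

```python
# Illustrative sketch of brain-inspired mesh computing: many small nodes,
# event-driven operation, and adaptive connections. All names, parameters,
# and the learning rule are invented for this example; this is not
# SpiNNcloud's actual software stack.
import random

class Node:
    """A tiny integrate-and-fire node in the mesh."""
    def __init__(self, node_id, threshold=0.5):
        self.node_id = node_id
        self.threshold = threshold
        self.potential = 0.0
        self.out_edges = {}  # target node id -> adaptive connection weight

    def receive(self, weight):
        """Integrate an incoming event; fire only when the threshold is crossed."""
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True  # emit a spike event
        return False

def run(nodes, events, steps=10, lr=0.05):
    """Propagate events through the mesh; idle nodes do no work at all."""
    for step in range(steps):
        next_events = []
        for src, dst in events:
            w = nodes[src].out_edges[dst]
            if nodes[dst].receive(w):
                # Hebbian-style adaptation: strengthen connections that
                # contributed to a spike, so the mesh "learns" pathways.
                nodes[src].out_edges[dst] = min(1.0, w + lr)
                next_events.extend((dst, t) for t in nodes[dst].out_edges)
        print(f"step {step}: {len(events)} events processed")
        if not next_events:
            break  # no events -> no computation, the key to energy efficiency
        events = next_events

random.seed(1)
nodes = {i: Node(i) for i in range(8)}
for i in nodes:  # each node connects to three random peers
    for j in random.sample([k for k in nodes if k != i], 3):
        nodes[i].out_edges[j] = random.uniform(0.3, 0.8)
run(nodes, [(0, t) for t in nodes[0].out_edges])
```

Note how the loop exits as soon as no events remain: in an event-driven mesh, silence costs nothing, which is the basis of the energy-efficiency argument.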

The Human Brain Project

During the inception of the “Human Brain Project” in 2013, Professor Steve Furber, one of the original creators of the first Arm processor, presented these ideas. Subsequently, the research team led by Dr. Christian Mayr at TU Dresden expanded and improved upon these concepts, leading to the establishment of SpiNNcloud Systems in 2021, a startup dedicated to commercializing this innovative technology.

SpiNNaker Products

The first product based on this philosophy was the SpiNNaker1 chip (Spiking Neural Network Architecture), developed under the leadership of Professor Steve Furber. This Arm-based chip was used to build a supercomputer with a million cores, capable of running brain simulations in real time. The breakthrough was pivotal in the field of neuromorphic computing and attracted interest from national labs and research and development centers.

With SpiNNaker2, SpiNNcloud took the mesh computing concept to a new level. The next-generation chip connects Arm Cortex-M4 cores through a lightweight on-chip network and promises far more efficient execution of AI workloads than conventional hardware. It also integrates accelerators designed to run event-based Deep Neural Networks (DNNs), combining neuromorphic principles with conventional DNNs for significant energy savings.
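To make the idea of event-based DNNs concrete, here is a minimal sketch assuming a standard post-ReLU layer: because many activations are zero, an event-driven formulation touches only the weights connected to active inputs and skips the rest. The array shapes, sparsity level, and variable names are invented for illustration and do not reflect SpiNNaker2’s actual accelerator interface.

```python
# Hedged sketch of the event-based DNN idea: after a ReLU most activations
# are zero, so an event-driven layer only processes the nonzero ("spiking")
# inputs and skips the rest. This illustrates the principle, not
# SpiNNaker2's actual accelerator interface.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))          # weight matrix: 512 inputs -> 256 outputs
x = np.maximum(rng.normal(size=512), 0)  # post-ReLU activations, roughly half zero

# Conventional dense layer: every weight participates, zeros included.
dense_out = W @ x

# Event-based layer: only the nonzero ("spiking") inputs trigger work.
event_out = np.zeros(256)
for i in np.flatnonzero(x):              # one "event" per active input
    event_out += W[:, i] * x[i]

assert np.allclose(dense_out, event_out)
skipped = 1 - np.count_nonzero(x) / x.size
print(f"identical result; {skipped:.0%} of input columns skipped")
```

Both paths produce the same output; the event-based path simply performs no work for inactive inputs, which is where the savings come from on sparse workloads.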

Applications of SpiNNaker2

SpiNNaker2 offers large-scale, real-time AI with outstanding energy efficiency for applications such as production optimization, logistics, pharmaceutical discovery, Large Language Models (LLMs), robotics, smart agriculture, and cognitive cities. According to the company, SpiNNaker2 is ten times more energy efficient than the latest GPUs used in HPC.

Hector Gonzalez, co-founder and co-CEO of SpiNNcloud, emphasizes the company’s strategic role in reducing the energy footprint of AI workloads. “Key strategic partners and the entire Arm technology ecosystem are moving towards a future where AI models are deployed in the most energy-efficient way possible,” Gonzalez states. “This is the value of SpiNNaker2 and the mesh concept for large-scale AI.”

Leadership in Low Power and Flexible Licensing Models

Arm’s technology, known for its energy efficiency, plays a crucial role in SpiNNaker2’s success. Although the Cortex-M line is not a typical choice for large-scale AI, it was ideal for SpiNNaker2’s low-power mesh concept. Arm Flexible Access for Startups provided early access to Arm’s intellectual property at no cost, allowing SpiNNcloud to advance rapidly in the development of its processor designs.

Towards a Future of Innovative AI

The story of SpiNNcloud highlights the benefits that Arm offers startups in their early stages. The Arm Flexible Access for Startups program is propelling companies like SpiNNcloud to a technological level comparable to leading tech firms, helping to realize the innovative vision of brain-inspired AI in silicon.

With SpiNNaker2, SpiNNcloud Systems is poised to lead a new era in artificial intelligence, delivering energy-efficient and highly effective solutions for a variety of uses in high-performance computing and beyond.
