Sabi Challenges Neuralink With a BCI Hat That Wants to Write Thoughts

Sabi has just exited stealth mode with a promise that, on paper, sounds like science fiction: turning inner speech into text without surgery, microphones, or keyboards. The Palo Alto startup aims to do it with a non-invasive wearable shaped like a cap or hat, built on ultra-high-density EEG, custom ASICs, and a foundation model trained on neural data. The ambition is hard to overstate: if it works, computers would no longer rely solely on clicks, voice, or touch, gaining a new input layer linked directly to verbal thought.

What makes Sabi interesting isn’t just the headline, but the contrast with the rest of the BCI market. While Neuralink, Synchron, Precision Neuroscience, and Paradromics are advancing mainly with implantable or minimally invasive interfaces for people with paralysis or severe motor impairments, Sabi wants to operate on a different level: a potentially mass-market, everyday wearable interface closer to headphones or a cap than to an operating room. That ambition fundamentally changes the conversation about scalability.

Sabi’s Bet: Less Precision per Sensor, More Density and More AI

According to Wired, Sabi claims its cap will incorporate between 70,000 and 100,000 miniaturized sensors, a figure far surpassing most conventional EEG devices, which usually range from a dozen to a few hundred sensors. The company’s initial goal is to reach about 30 words per minute, supported by a “brain foundation model” trained on 100,000 hours of brain data collected from 100 volunteers. On its website, the company talks about neuroimaging sensors powered by custom ASICs and a foundation model designed to map brain signals to thoughts.
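
To get a feel for the engineering scale those numbers imply, here is a back-of-envelope sketch in Python. The sensor count comes from Sabi’s public claim; the sampling rate and bit depth are assumptions for illustration only, since the company has not disclosed either.

```python
# Rough estimate of the raw data rate a 100,000-sensor EEG cap would produce.
# Sensor count: the upper end of Sabi's public claim. Sampling rate and ADC
# resolution are assumed values typical of EEG hardware, not disclosed specs.

SENSOR_COUNT = 100_000     # upper end of the claimed 70,000-100,000 sensors
SAMPLE_RATE_HZ = 250       # assumed; common for EEG acquisition
BITS_PER_SAMPLE = 16       # assumed ADC resolution

bits_per_second = SENSOR_COUNT * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
mb_per_second = bits_per_second / 8 / 1e6

print(f"Raw throughput: {mb_per_second:.0f} MB/s")
# -> Raw throughput: 50 MB/s, roughly 180 GB per hour before any compression
#    or on-device feature extraction.
```

Under these assumptions, the cap would generate data far faster than it could reasonably stream raw, which hints at why custom ASICs sit at the core of the design: aggressive on-device processing becomes less a choice than a necessity.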

The technical thesis is logical. The main challenge with non-invasive EEG is that the signal reaching the surface of the head arrives attenuated and spatially blurred after passing through the skull and scalp. Recent literature continues to cite this signal loss, low signal-to-noise ratio, and high variability between users as the main bottlenecks for non-invasive BCIs, especially for decoding imagined speech. In other words, the challenge is not just “reading” the brain, but doing so with enough resolution, repeatability, and robustness for a person to use the system naturally.

This is where Sabi seeks to stand out. Instead of competing with Neuralink on raw signal quality, it aims to compensate for an external wearable’s weaknesses with higher capture density and a large-scale AI layer. It’s a very different strategy from traditional intracortical implants: less fidelity per channel, but more coverage, more data, and, in theory, far lower adoption friction. That also explains why Vinod Khosla, an investor in the company, argues that a BCI designed for billions of people cannot rely on surgery.
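
The statistical intuition behind that trade can be shown with a toy NumPy sketch: if many sensors see the same underlying signal but carry independent noise, simple averaging improves SNR by roughly 10·log10(N) dB. The sensor counts and noise level below are illustrative, not Sabi’s numbers, and real scalp noise is spatially correlated, so actual gains are far smaller.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5_000
signal = np.sin(np.linspace(0, 40 * np.pi, n_samples))  # shared "neural" signal

def snr_db(clean, noisy):
    """Signal-to-noise ratio in decibels."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

for n_sensors in (1, 100, 10_000):
    # Every sensor records the same weak signal plus independent noise,
    # with a noise level that buries a single channel far below usability.
    sensors = signal + rng.normal(scale=10.0, size=(n_sensors, n_samples))
    averaged = sensors.mean(axis=0)
    print(f"{n_sensors:>6} sensors -> SNR {snr_db(signal, averaged):6.1f} dB")
# ~ -23 dB with 1 sensor, ~ -3 dB with 100, ~ +17 dB with 10,000:
# independent noise cancels out as the channel count grows.
```

That caveat about correlated noise is exactly where the large AI layer enters Sabi’s pitch: a learned model is supposed to exploit structure in the signal that naive averaging cannot.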

What the BCI Map Looks Like in 2026

Not all companies in the field compete for exactly the same use case, but together they sketch a pretty accurate picture of the state of the BCI market in 2026:

| Company | Type of Interface | Invasiveness | Main Focus | Current Status |
|---|---|---|---|---|
| Sabi | EEG wearable in a cap/hat | Non-invasive | Inner speech to text; everyday computer input | Exited stealth in April 2026; first product promised by late 2026; initial goal of ~30 words per minute |
| Neuralink | N1 intracortical implant | Invasive | Computer control and speech restoration for people with paralysis or severe speech loss | Clinical trials ongoing; implant with 1,024 electrodes on 64 threads, placed by a surgical robot |
| Synchron | Endovascular Stentrode | Minimally invasive | Digital control for motor-impaired individuals without open-skull surgery | Clinical platform underway; the interface is implanted via blood vessels, avoiding open brain surgery |
| Precision Neuroscience | Layer 7 cortical surface array | Implantable, but reversible and less invasive than deep intracortical electrodes | High-resolution cortical surface recording for future clinical BCIs | Research device; arrays of 1,024 electrodes with a modular architecture scaling to thousands of channels |
| Paradromics | Connexus implantable BCI | Invasive | Speech restoration and computer control in severe motor impairment | FDA IDE approval since Nov 2025 for early human feasibility studies |
| Neurable | EEG in MW75 Neuro headphones | Non-invasive | Monitoring focus, fatigue, and cognitive state, not thought-to-text | Commercial product with 12-channel EEG and textile sensors, aimed at productivity and wellness |

The table makes one split clear: today the market divides into two camps. On one side, clinical BCIs pursuing maximum bandwidth at the cost of invasive procedures. On the other, non-invasive wearables that are more scalable but mostly limited to basic functions such as measuring focus, fatigue, or simple motor intent. Sabi is aiming for a very ambitious middle ground: staying outside the skull while going after a high-value function, writing by thought alone.

What Truly Sets Sabi Apart from Neuralink

The comparison with Neuralink is inevitable, but it shouldn’t be oversimplified. Neuralink remains focused on high-need clinical cases. Its N1 aims to record neural activity with much higher fidelity and translate it into device control for people with tetraplegia or severe speech loss. This approach has a clear advantage: intracortical signals are far richer and more direct than anything an EEG cap can capture from outside the skull.

Sabi, by contrast, starts from the wearable side, not the hospital. That nuance is huge. If Neuralink represents the “maximum signal, maximum clinical complexity” route, Sabi embodies the “lower fidelity per sensor, but greater social acceptability and consumer potential” path. Wired’s report frames it the same way: for a daily-use computer interface, the challenge isn’t just reading the brain well, but making the device frictionless, calibration-free, and comfortable enough that users aren’t turned into patients.

The Big Question: From Demo to Real Product

The leap from demo to real product remains the ultimate test. Sabi has launched with impressive figures, but it has not yet shown independent public validation at a level that would justify talk of a personal computing revolution. The history of BCIs is filled with impressive prototypes that collide with reality: noise, user fatigue, calibration, latency, cost, privacy, and, most critically, the difficulty of replicating results outside highly controlled environments. Recent EEG-based imagined-speech research continues to make progress, but it also highlights that robust cross-subject generalization remains one of the field’s biggest open challenges.
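
For context, cross-subject generalization is usually measured with a leave-one-subject-out protocol: train on all subjects but one, test on the held-out subject, and rotate. The sketch below shows that protocol with scikit-learn on synthetic stand-in data; the features, labels, and classifier are placeholders, not anything Sabi or the cited research has published.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in: 10 subjects x 50 trials, 64 features per trial
# (e.g. band-power features), with binary imagined-speech labels.
n_subjects, n_trials, n_features = 10, 50, 64
X = rng.normal(size=(n_subjects * n_trials, n_features))
y = rng.integers(0, 2, size=n_subjects * n_trials)
groups = np.repeat(np.arange(n_subjects), n_trials)  # subject ID per trial

# Leave-one-subject-out: train on 9 subjects, evaluate on the held-out one.
scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y,
    groups=groups, cv=LeaveOneGroupOut(),
)
print(f"Cross-subject accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
# On random data this hovers near chance (0.50). With real EEG, the gap
# between within-subject and cross-subject accuracy is the number papers
# report when discussing how poorly decoders transfer to new users.
```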

Even so, it would be a mistake to underestimate the move. Sabi may not end up being “the non-invasive Neuralink” many headlines suggest, but it has introduced a powerful idea: the next major interface doesn’t have to look like a smartphone, a medical implant, or a keyboard. It could resemble a piece of clothing. And if it succeeds in translating verbal thought with reasonable utility, its impact wouldn’t be limited to the clinical market. It would influence productivity, accessibility, AI agents, and ultimately, the very definition of what it means to “use” a computer.

Frequently Asked Questions

Is Sabi directly competing with Neuralink?
Not entirely. Neuralink is focused on clinical implants for people with paralysis or severe speech loss, while Sabi envisions a non-invasive BCI with much greater mass-market potential, enabling interaction with computers through inner speech converted to text.

What makes Sabi technically different from other EEG devices?
Its declared approach combines 70,000 to 100,000 miniaturized sensors with custom ASICs and a foundation model trained on 100,000 hours of neural data. Most conventional EEG devices operate with between a dozen and a few hundred electrodes.

Are there any commercially available non-invasive BCIs today?
Yes, but with much more modest goals. Neurable sells headsets with 12-channel EEG for focus and fatigue measurement, and Emotiv markets 14-channel EEG headsets aimed at research and development. Neither currently offers a generalizable thought-to-text system like what Sabi promises.

What’s the biggest obstacle for non-invasive inner speech BCIs now?
Measured from outside the skull, the EEG signal arrives attenuated and noisy, and it varies significantly between users and between sessions. That is why robust decoding of imagined speech remains one of the hardest technical problems in the field.
