Backblaze Launches B2 Neo for the “Neocloud” Boom: White-Label Object Storage Ready in Weeks

The race for Artificial Intelligence is not won solely with GPUs. As emerging computing platforms—known as neoclouds—compete to deliver training and inference capacity, the bottleneck appears in an unglamorous place: storage. With this in mind, Backblaze has announced B2 Neo, a solution designed specifically to enable these platforms to add enterprise-level object storage to their catalog “in weeks, not years,” without the cost and complexity of building a proprietary backend.

The company, listed on Nasdaq under the ticker BLZE, presents B2 Neo as a high-performance, white-label product: the end customer consumes it as a native service within the neocloud, with endpoints branded under the partner’s name and pricing controlled by the partner. Behind the scenes, Backblaze provides the storage engine, operations, and nearly two decades of cloud storage expertise. The move rests on figures Backblaze emphasizes from the start: it manages more than five exabytes of data and promises throughput of up to 1 terabit per second (1 Tb/s), designed for data-intensive AI, HPC, and media workloads.

The driving market: neoclouds on the rise

The announcement is accompanied by an estimate illustrating why the term “neocloud” has entered industry vocabulary: the market is projected to grow from $35.22 billion in 2026 to $236.53 billion in 2031, a compound annual growth rate (CAGR) of 46.37%. With such growth, the question shifts from whether neoclouds will exist to which ones will become comprehensive platforms capable of supporting full workloads rather than just renting out GPU capacity as a commodity.
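As a quick sanity check, the cited growth rate can be reproduced from the two market figures alone (five compounding periods, 2026 to 2031):

```python
# Back-of-envelope check of the cited CAGR, using only the numbers above.
start_b, end_b, years = 35.22, 236.53, 5

cagr = (end_b / start_b) ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 46.4%, consistent with the cited 46.37%
```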

Other sector analyses reinforce the idea that these providers—specialized and more agile than hyperscalers in certain niches—are attempting to move from a “GPU-first” approach toward full-stack platforms, especially for enterprise clients. This transition requires not only compute but also network, orchestration, observability, identity… and storage.

The real problem: GPUs sitting idle due to data “not close enough”

In AI workloads, storage is not just a passive repository: it determines training speed, operational costs, and, crucially, actual infrastructure utilization. Neocloud customers need to store datasets, model checkpoints, and pipeline artifacts (including media). Without integrated storage, teams end up moving large volumes of data in and out of the platform, with predictable consequences: latency, delayed pipelines, and idle GPUs, the most expensive assets in the stack.

Backblaze argues that, for many neoclouds, building and operating scalable object storage competes directly with their main goal: expanding GPU capacity and differentiating on performance and delivery speed. Its thesis is that B2 Neo removes this distraction, sparing the company from allocating capital and engineering effort to a backend that, while critical, does not define its core value proposition.

What B2 Neo offers: white-label, API integration, and partner control

The core of the announcement is the product model: Backblaze aims to be “the storage layer” for neocloud platforms that already have compute, networking, and user experience but don’t want to build storage from scratch.

According to the company, B2 Neo enables:

  • Offering storage as a native service, with branded endpoints and pricing controlled by the partner.
  • Provisioning accounts, managing permissions, and handling billing through existing platform tools—without relying on separate consoles or manual setups.
  • Integrating as a storage layer designed for high-performance data pipelines, with throughput capabilities up to 1 Tb/s.

The company also cites enterprise-grade credentials to build confidence in a service meant to serve as foundational infrastructure: SOC 2 Type II compliance and a declared durability of eleven nines (99.999999999%), in line with typical production-grade object storage standards.

Business validation: “multiple platforms” and Backblaze’s largest TCV commitment

Beyond technical figures, Backblaze emphasizes that B2 Neo is not an experiment: it’s built in collaboration with neocloud platforms already running production workloads on Backblaze, with “multiple” major platforms signed on. The company further underscores that these partnerships include its largest total contract value (TCV) commitment to date, demonstrating commercial traction beyond mere product narrative.

The release cites the case of a global edge services platform that chose Backblaze after an evaluation and now uses B2 Neo as a core component of its AI, HPC, and media storage strategy. Meanwhile, Backblaze CEO and co-founder Gleb Budman summarizes the positioning in a phrase aligned with market needs: B2 Neo enables launching storage “in weeks, not years,” letting neoclouds focus on their GPU roadmap.

The announcement also includes commentary from analyst Rob Strechay of theCUBE Research, who interprets B2 Neo as an almost immediate “value-add”: it lets a neocloud offer cloud storage without the burden of building it, while helping end customers improve ROI on AI projects by reducing data pipeline friction.

A deeper perspective: “full-stack” as a new competitive edge

In today’s landscape, many neoclouds were built to address a specific problem: quick access to GPUs and specialized compute capacity. But the market is shifting toward comprehensive platforms where customers want not only GPUs but a ready-to-operate environment: nearby data, efficient workflows, identity control, and a seamless end-to-end experience.

In this context, B2 Neo is Backblaze’s strategic move to establish itself as a leading provider of the often bottlenecked layer: object storage. For neoclouds, it’s a way to compete more effectively without turning their entire roadmaps into storage engineering projects.


Frequently Asked Questions (FAQ)

What is B2 Neo and how does it serve a neocloud GPU platform?
B2 Neo is a white-label object storage offering that enables a neocloud to provide “native” storage for its customers (datasets, AI checkpoints, artifacts) without building a backend from scratch.

Why is object storage critical in AI training and inference?
Because data flows determine GPU utilization: moving datasets outside the platform introduces latency, delays pipelines, and increases overall project costs.

What advantages does white-label storage offer AI and HPC providers?
It allows launching a complete service under the provider’s brand, with control over pricing and integrated management (accounts, permissions, billing)—improving customer experience while reducing operational complexity.

What should a company consider when choosing object storage for AI workloads in a neocloud?
Look at available throughput, API integration, access controls, durability/compliance standards, and the ability to keep data close to compute resources to prevent pipeline bottlenecks.
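On the throughput criterion, a rough calculation makes the stakes tangible. This is a generic back-of-envelope helper with illustrative figures, not a vendor benchmark:

```python
# Back-of-envelope: how long a dataset takes to stream at a given throughput.
def transfer_seconds(dataset_tb: float, throughput_tbps: float) -> float:
    """Seconds to move `dataset_tb` terabytes at `throughput_tbps` terabits/s."""
    return dataset_tb * 8 / throughput_tbps  # 1 byte = 8 bits

# A 10 TB training set at a 1 Tb/s ceiling streams in ~80 seconds; over a
# more typical 10 Gb/s (0.01 Tb/s) link it takes more than two hours --
# time during which expensive GPUs may sit idle waiting for data.
print(transfer_seconds(10, 1.0))   # 80.0
print(transfer_seconds(10, 0.01))  # 8000.0
```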

via: backblaze
