The promise of enterprise Artificial Intelligence (AI) often clashes with an uncomfortable reality: the closer data stays to its source, the harder it becomes to process it with low latency and strong security guarantees. In this context, Datavault AI Inc. (Nasdaq: DVLT) has announced an expansion of its collaboration with IBM to deploy AI capabilities at the network edge in New York and Philadelphia, leveraging SanQtum AI, a platform operated by Available Infrastructure and built on a fleet of synchronized micro edge data centers.
The stated goal is ambitious: enable cyber-secure storage and computing, real-time data scoring, tokenization, and ultra-low latency processing in two metropolitan areas that the companies describe as especially dense in data generation. The infrastructure will run the IBM watsonx product portfolio within a zero-trust network, in an approach aimed at reducing reliance on centralized cloud pipelines for certain sensitive use cases.
From “cloud-first” to “edge-first” for critical workloads
In recent years, most organizations have shifted their data pipelines and analytics toward centralized architectures, typically in public clouds. However, this approach doesn’t always fit when strict requirements for latency, data sovereignty, confidentiality, or integrity are in place. The solution announced by Datavault AI addresses exactly that: process data, and extract value from it, at the moment it’s created, avoiding the need to first transfer it to an external platform.
According to the announcement, Datavault AI will deploy its Information Data Exchange (IDE) and DataScore agents built with watsonx within the SanQtum AI zero-trust environment. The idea is that data will be processed, scored, and tokenized at the edge, so it transitions from “raw input” to digitally authenticated property almost instantaneously. The company claims this mechanism paves the way for high-security data trading models and scenarios where traceability and tamper resistance are essential, not optional.
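Neither company has published implementation details, but the pipeline described above can be illustrated with a minimal Python sketch: an incoming record is scored for quality and freshness, hashed, and wrapped in a signed envelope so it can be verified and traded without the raw payload leaving the edge site. Every name here (`score_record`, `tokenize_record`, `EDGE_SIGNING_KEY`) is an illustrative assumption, not Datavault AI’s IDE or DataScore code.

```python
import hashlib
import hmac
import json
import time
import uuid

# Hypothetical signing key held inside the edge site's zero-trust enclave.
# A real deployment would keep this in an HSM or key-management service.
EDGE_SIGNING_KEY = b"hypothetical-edge-site-key"


def score_record(record: dict) -> float:
    """Toy stand-in for a quality/risk scoring agent: rewards completeness
    and recency. The real DataScore logic is not public."""
    completeness = sum(1 for v in record.values() if v not in (None, "")) / max(len(record), 1)
    age_seconds = time.time() - record.get("created_at", time.time())
    freshness = max(0.0, 1.0 - age_seconds / 3600.0)  # decays to zero over one hour
    return round(0.7 * completeness + 0.3 * freshness, 3)


def tokenize_record(record: dict, score: float) -> dict:
    """Wrap the raw record's hash, its score, and a signature into an
    'authenticated asset' envelope that never has to leave the edge site."""
    payload = json.dumps(record, sort_keys=True).encode()
    envelope = {
        "asset_id": str(uuid.uuid4()),
        "sha256": hashlib.sha256(payload).hexdigest(),
        "score": score,
        "issued_at": time.time(),
    }
    envelope["signature"] = hmac.new(
        EDGE_SIGNING_KEY, json.dumps(envelope, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return envelope


if __name__ == "__main__":
    raw = {"sensor": "cam-17", "created_at": time.time(), "event": "vehicle_count", "value": 42}
    print(tokenize_record(raw, score_record(raw)))
```

In a real deployment the scoring step would be the watsonx-built DataScore agent rather than a toy heuristic, but the shape of the flow is the same: score, hash, sign, and only then expose the resulting asset.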
Real-time tokenization and “digital ownership”
The term “tokenization” has been overused in the industry, but here it is presented with a practical meaning: representing data as authenticated digital assets, ready to be verified, valued, and shared according to defined access and control rules. The announcement emphasizes four concrete goals for this deployment:
- Reduce dependence on centralized cloud pipelines.
- Eliminate delays between data creation and its monetization or utilization.
- Avoid manipulation by keeping data within a zero-trust local network.
- Enable companies to treat data as negotiable digital property in real time.
In the companies’ framing, that “instantaneous” is not just a detail: it’s the core of the pitch. Nathaniel Bradley, CEO of Datavault AI, links the approach to a shift in the data economy, combining the “intelligence” of watsonx with the “speed” of SanQtum AI. Available Infrastructure, for its part, argues that its platform unifies speed, resilience, and protection within a single technological framework.
What does IBM watsonx bring to this equation?
IBM provides the enterprise AI layer: models, tooling, and watsonx products running on this distributed infrastructure. The announcement also includes a strategic perspective: the deployment aligns with IBM’s “ecosystem approach” to bringing scalable AI to organizations with complex needs.
Additionally, IBM had previously noted that Datavault AI uses watsonx.ai to build agents such as DataScore, a component focused on scoring data quality and risk, with references to regulatory frameworks such as GDPR and CCPA. Such capabilities are consistent with the project’s central message: it’s not just about inference, but about transforming data into assets with context, controls, and verifiability.
Micro edge data centers: a rapidly accelerating trend
SanQtum AI is described as a cyber-secure AI platform built on instances deployed in ultra-low-latency distributed data centers around large urban areas. Available Infrastructure claims its environment incorporates a private-mesh zero-trust model and quantum-resistant encryption, a clear positioning toward sensitive workloads, critical infrastructure, and data that must not “exit” into open environments.
In practice, this approach responds to growing pressure: AI in production demands proximity to the data, especially for real-time scoring, media analytics, identity and credential verification, or automation processes that cannot tolerate delays.
Timeline: from Q1 2026 to full deployment
The announcement places the capability to operate at scale in New York and Philadelphia in the first quarter of 2026, with plans to expand into other metropolitan areas. However, Datavault AI has also shared other corporate announcements within the same timeframe, mentioning later milestones such as completion of deployment in the second quarter of 2026, as part of a broader expansion strategy. In any case, the message remains consistent: building an urban AI corridor at the edge and replicating it across new cities.
A true edge approach, not just marketing hype
For years, the industry has talked about the edge, but many initiatives have remained at pilot or hybrid architecture stages. The difference here lies in the integration of three key components: micro data centers (for low latency), zero trust (for security), and watsonx (for enterprise AI), with a specific focus on monetization and data verification.
If successful, this project could become a practical example of how some companies are rethinking the AI landscape: less reliance on macro cloud regions, more computation close to the data, and greater control over how information is authenticated, valued, and exploited in real time.
Frequently Asked Questions (FAQ)
What does deploying “edge AI” with ultra-low latency mean?
It involves running AI models and services near where data is generated (e.g., in urban micro data centers), reducing delays and improving response times for real-time use cases.
What is a zero trust network and why is it important in enterprise AI?
Zero trust is a security approach that does not implicitly trust any user, device, or network segment. In AI, it’s crucial when handling sensitive data, as it minimizes lateral movement, exposure, and tampering risks.
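As a minimal sketch of that idea (unrelated to SanQtum AI’s actual controls, which are not public): every request carries a short-lived signed credential, and the service re-checks identity, signature, expiry, and policy on every single call, granting nothing on the basis of network location. `IDP_KEY`, `issue_token`, and `verify_request` are hypothetical names.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret issued by an identity provider; real zero-trust
# deployments typically use mTLS certificates or signed JWTs instead.
IDP_KEY = b"hypothetical-idp-key"


def issue_token(device_id: str, ttl: int = 60) -> str:
    """Short-lived credential bound to a device identity."""
    expires = int(time.time()) + ttl
    msg = f"{device_id}:{expires}"
    sig = hmac.new(IDP_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return f"{msg}:{sig}"


def verify_request(token: str, allowed_devices: set[str]) -> bool:
    """Checked on every call: identity, signature, expiry, and policy.
    Being 'inside the network' grants nothing by itself."""
    try:
        device_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(IDP_KEY, f"{device_id}:{expires}".encode(), hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(sig, expected)
        and int(expires) > time.time()
        and device_id in allowed_devices
    )


if __name__ == "__main__":
    tok = issue_token("edge-node-7")
    print(verify_request(tok, {"edge-node-7"}))  # True: valid signature, in policy
    print(verify_request(tok, {"edge-node-9"}))  # False: fails the policy check
```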
What does data tokenization mean in Datavault AI’s approach?
The goal is to transform data into digitally authenticated property in real time, enabling verification, valuation, and access control without relying on centralized pipelines that can add latency.
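Continuing the earlier sketch (and again purely illustrative, not Datavault AI’s actual mechanism), a would-be consumer of such an asset could check integrity and origin before valuing or trading it:

```python
import hashlib
import hmac
import json

# Same illustrative site key as in the earlier sketch; a real consumer would
# verify an asymmetric signature with the publisher's public key instead.
EDGE_SIGNING_KEY = b"hypothetical-edge-site-key"


def verify_asset(raw_record: dict, envelope: dict) -> bool:
    """A buyer checks, before using or paying for the data, that
    (1) the envelope really covers this payload and (2) it was issued
    by the trusted edge site."""
    payload = json.dumps(raw_record, sort_keys=True).encode()
    if hashlib.sha256(payload).hexdigest() != envelope.get("sha256"):
        return False  # payload was altered after tokenization
    unsigned = {k: v for k, v in envelope.items() if k != "signature"}
    expected = hmac.new(
        EDGE_SIGNING_KEY, json.dumps(unsigned, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(envelope.get("signature", ""), expected)
```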
Can this approach replace public cloud in AI projects?
Not necessarily. It is targeted at workloads where latency and security are critical. In many scenarios, it will coexist with public cloud in a hybrid model: edge for real-time and sensitive data, cloud for scalable and less critical analytics.
via: newsroom.ibm

