Taiwan wants to secure a front-row seat in the global race for artificial intelligence. The announcement made during Hon Hai Tech Day 2025 points precisely in this direction: Visionbay.ai, the supercomputing and cloud business unit of Hon Hai Technology Group (Foxconn), will launch the country’s largest GPU cluster and its first supercomputing center built on NVIDIA GB300 NVL72 infrastructure.
The new “AI factory,” publicly unveiled at HHTD25, is set to become a key component of Taiwan’s “sovereign AI” strategy: local computing capacity, controlled by Taiwanese companies, designed to keep sensitive data and intellectual property within the country.
A supercomputing center designed for the era of giant models
According to details shared by the company, the NVIDIA GB300 NVL72 systems will be operational during the first half of 2026. This is NVIDIA’s latest-generation acceleration platform, intended for training and deploying large-scale AI models, from language models to vision systems and advanced agents.
Visionbay operates as a Trusted NVIDIA Cloud Partner (NCP), which allows it to offer accelerated computing services compatible with the NVIDIA software ecosystem but deployed on infrastructure managed from Taiwan. Building on this, the company has introduced its new “AI Supercomputing Center & Operations Platform,” covering the end-to-end AI value cycle:
- Supercomputing infrastructure with cutting-edge GPUs and high-speed networks.
- Operational platform for workload management, monitoring, security, and billing.
- Application integration services connecting this computing capability with real-world use cases in industry, government, or research.
The goal is clear: to provide an “AI Factory” solution that covers everything from the hardware to the application layer, enabling companies and organizations to deploy large-scale AI projects without building their own infrastructure from scratch.
Sovereign AI: data stays at home
During his presentation, Visionbay.ai CEO Neo Yao was emphatic: if Taiwan wants to remain competitive in the AI era, it needs to “rapidly establish scalable and cost-effective AI infrastructure.”
For Yao, powerful and accessible computing is a necessary condition for accelerating AI adoption, expanding its applications in key sectors, and creating an attractive environment for talent and innovation. But the commitment is strategic as well as technological: building sovereign infrastructure that keeps data resident within Taiwanese territory, protects local companies’ proprietary knowledge, and reduces dependence on foreign computing resources.
In a “fireside chat” with Alexis Bjorlin, Vice President of NVIDIA DGX Cloud, both executives highlighted that organizations are increasingly taking a pragmatic approach, “existing workflows + AI”: injecting AI capabilities into already validated processes instead of reinventing everything from scratch.
To scale this strategy effectively, they pointed out three key ingredients:
- High-performance local computing
- User-friendly platforms for integration with corporate systems
- Security and regulatory compliance guarantees for data
Visionbay aims to address precisely these three points.
An “AI Factory” with GPUaaS and an AI app store
Visionbay’s business model revolves around the concept of the AI Factory: a digital factory whose “product” is AI models that are trained, fine-tuned, and deployed to solve specific problems.
Among the planned services are:
- GPUaaS (GPU as a Service): flexible, on-demand rental of computing capacity for training, fine-tuning, and inference.
- NVIDIA native software solutions, integrated with the GB300 infrastructure to facilitate model and data pipeline deployment.
- An “AI App Store” in the cloud: ready-to-use applications, assistants, and specialized models for sectors like manufacturing, healthcare, finance, and public administration.
The aim is to minimize entry barriers: companies of all sizes can access the same supercomputing technology used by large global players, without making significant hardware investments or building internal teams of infrastructure experts.
Vertical integration: from factory to data center
The Visionbay project directly leverages Foxconn’s historical strengths. The group integrates within the same organization:
- Component manufacturing (motherboards, servers, cooling systems)
- Design and assembly of high-performance servers and racks
- Global supply chain management, optimized over decades for consumer electronics
- Experience in developing large-scale IT infrastructures for clients worldwide
This vertical integration, according to the company, helps mitigate three major challenges faced by organizations deploying large-scale AI today: GPU shortages, rising costs of on-premise projects, and lack of enterprise-ready integration platforms.
Digital sovereignty and industrial transformation from Taiwan
Visionbay situates its initiative within Foxconn’s “3+3+3” strategy guiding its long-term growth: three key industries (electric vehicles, digital health, robotics), three enabling technologies (artificial intelligence, semiconductors, next-generation communications), and three intelligent platforms (smart factory, smart vehicle, smart city).
The new supercomputing center aligns with these priorities. On one hand, it provides the necessary computing power to train AI models that improve manufacturing, logistics, and electric mobility systems. On the other, it serves as national infrastructure to support sovereign AI projects, in collaboration with government, universities, startups, and large local corporations.
The company emphasizes its mission to “Empower the Future of AI,” promising to continue developing full-stack service capabilities—from hardware to applications. The ambition is for Taiwan to shift from being merely a chip manufacturing hub to also becoming an AI supercomputing and innovation hub for all of Asia.
Frequently Asked Questions about the Visionbay Supercomputing Center
What does it mean that Visionbay’s center is a “sovereign AI” infrastructure?
Sovereign AI refers to scenarios where core computing capacity, data, and models are hosted in-country and governed by local regulations. In Visionbay’s case, the GB300 GPU cluster is deployed in Taiwan and managed by a Taiwanese company, making it easier to meet data residency requirements and reduce reliance on foreign resources.
What types of companies can use the “AI Factory” services?
The platform is designed for a wide range of organizations: from large industries needing to train their own models, to SMEs or public agencies wanting to use ready-made applications from the “AI App Store.” Thanks to GPUaaS, there’s no need for upfront hardware investments; computational capacity can be rented on demand.
How does this project differ from a traditional data center?
While it draws on classic data center concepts, the focus is specifically on AI workloads. This entails high GPU density per rack, low-latency internal networks, advanced cooling systems, and software optimized for training and deployment of models. Additionally, the integration with NVIDIA’s ecosystem simplifies the use of proven AI tools.
What benefits does Taiwan gain by having its own state-of-the-art GPU cluster?
Beyond enhancing technological independence, Taiwan gains the capacity to attract research projects and international companies requiring regional compute resources. It also fosters an environment conducive to talent development in AI, supports startups, and accelerates digitalization across strategic sectors like electronics, automotive, and healthcare.

