Given the rapid adoption of artificial intelligence technologies across multiple sectors, companies face new challenges in terms of performance, data sovereignty, and cost control. In response to this reality, QNAP® Systems, Inc. has introduced its new Edge AI Storage Server solution, a high-performance, secure, and operationally flexible edge computing platform.
The solution integrates key functions such as data storage, virtualization, GPU acceleration, and advanced resource management into a single system, allowing organizations to deploy powerful and scalable AI infrastructure directly on-premises. Its main applications include AI data storage, model inference, smart manufacturing, and video analysis, with the added advantage of reducing security risks and cloud-related costs.
QNAP’s Edge AI Server supports virtual machines and containers, facilitating the implementation of private large language models (LLMs) and other AI-related workloads. This flexibility makes the solution ideal for sectors such as smart surveillance, retail, manufacturing, and smart offices, where local data processing and information protection are critical.
“The goal of AI is no longer just to create models, but to create the right infrastructure,” stated CT Cheng, product manager at QNAP. “What’s really important for companies adopting LLMs, generative AI, or virtualization is having a platform capable of managing large data sets, protecting data security, and delivering reliable performance. Our Edge AI Storage Server is much more than just a storage solution. It combines AI inference capabilities, virtualization, and backup to help businesses securely and flexibly implement AI applications at the edge.”
Key Benefits of QNAP’s Edge AI Storage Server
- Enhanced Security and Compliance
Stores and runs LLM/AI models and sensitive data solely on-premises, avoiding cloud transmission and ensuring compliance with industry-specific regulations, making it ideal for sectors such as finance, healthcare, and manufacturing.
- Integrated Platform with Lower TCO
Combines storage, virtualization, GPU acceleration, and data protection in a single system, simplifying deployment and reducing long-term maintenance costs.
- Dedicated Resource Allocation
Supports GPU and PCIe passthrough, SR-IOV for network optimization, and upcoming CPU isolation for precise allocation of system resources. This ensures near-native performance for virtual machines, with low latency and high stability.
- Virtualization and Container Deployment
Compatible with QNAP's Virtualization Station and Container Station, enabling rapid adoption of various AI environments for model deployment, intelligent application development, or virtual machine server backup.
- Optimized Deployment of Open-Source LLMs
Easily deploy open-source models like LLaMA, DeepSeek, Qwen, and Gemma through Ollama for the development of AI tools, chatbots, and internal knowledge search, lowering barriers to AI adoption.
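As a rough sketch of what such a deployment looks like, the commands below pull and query an open-source model through Ollama's standard CLI and local REST API. This assumes Ollama is already installed on the server and listening on its default port (11434); the model tag and prompts are illustrative only.

```shell
# Assumption: Ollama is installed and its service is running locally.

# Pull an open-source model and run a one-off prompt
ollama pull llama3
ollama run llama3 "Summarize the key benefits of edge AI."

# Ollama also exposes a local REST API, which internal chatbots or
# knowledge-search tools can call without any data leaving the premises
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "What is edge AI?",
  "stream": false
}'
```

Because the model and API stay on the local network, prompts and responses never transit a public cloud, which is the data-sovereignty point the release emphasizes.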