Microsoft and NVIDIA announce key integrations to accelerate generative AI for businesses worldwide.

At Monday’s GTC event, Microsoft Corp. and NVIDIA deepened their long-standing collaboration with powerful new integrations that leverage NVIDIA’s latest generative AI and Omniverse™ technologies across Microsoft Azure, Azure AI services, Microsoft Fabric, and Microsoft 365.

“Together with NVIDIA, we are realizing the promise of AI, helping to drive new benefits and productivity gains for people and organizations worldwide,” said Satya Nadella, CEO of Microsoft. “From incorporating NVIDIA’s GB200 Grace Blackwell processor into Azure to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure that customers have the most comprehensive platforms and tools at every layer of the Copilot stack, from silicon to software, to build their own groundbreaking AI capability.”

“AI is transforming our everyday lives, opening up a world of new opportunities,” said Jensen Huang, founder and CEO of NVIDIA. “Through our collaboration with Microsoft, we are building a future that unlocks the promise of AI for customers, helping them deliver innovative solutions to the world.”

**Advances in AI Infrastructure**

Microsoft will be one of the first organizations to bring the power of NVIDIA Grace Blackwell GB200 and advanced NVIDIA Quantum-X800 InfiniBand networking to Azure, delivering cutting-edge, trillion-parameter foundation models for natural language processing, computer vision, speech recognition, and more.

Microsoft is also announcing the general availability of its Azure NC H100 v5 virtual machine series, based on the NVIDIA H100 NVL platform and designed for mid-range training and inference. The NC series offers customers two classes of VMs with one or two NVIDIA H100 94GB PCIe Tensor Core GPUs and supports NVIDIA Multi-Instance GPU (MIG) technology, which allows customers to partition each GPU into as many as seven instances, providing flexibility and scalability for diverse AI workloads.
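To make the MIG capability concrete, the sketch below shows how an administrator might partition an H100 GPU on one of these VMs using the standard `nvidia-smi` tooling, driven from Python. The GPU index and the profile name (`1g.12gb`) are assumptions for illustration; the profiles actually available on a given GPU are reported by `nvidia-smi mig -lgip`.

```python
# Minimal sketch: partition an H100 GPU into MIG instances on an NC H100 v5 VM.
# Assumes the NVIDIA driver and nvidia-smi are installed and the script runs as root.
import subprocess

def run(cmd):
    """Run a shell command, echo it, and return its stdout (raises on failure)."""
    print(f"$ {cmd}")
    result = subprocess.run(cmd, shell=True, check=True,
                            capture_output=True, text=True)
    return result.stdout

# Enable MIG mode on GPU 0 (a GPU reset or reboot may be required afterwards).
run("nvidia-smi -i 0 -mig 1")

# List the GPU instance profiles the driver supports on this GPU.
print(run("nvidia-smi mig -lgip"))

# Create seven of the smallest GPU instances plus their compute instances.
# The "1g.12gb" profile name is an assumption; use one reported by -lgip above.
run('nvidia-smi mig -cgi "1g.12gb,1g.12gb,1g.12gb,1g.12gb,1g.12gb,1g.12gb,1g.12gb" -C')

# Confirm the resulting MIG devices and their UUIDs.
print(run("nvidia-smi -L"))
```

Once created, each MIG device appears with its own UUID and can be targeted individually (for example via `CUDA_VISIBLE_DEVICES`), so several smaller training or inference jobs can share one physical GPU.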

**Advances in Health and Life Sciences**

Microsoft is expanding its collaboration with NVIDIA to transform healthcare and life sciences through the integration of cloud, AI, and supercomputing technologies. By harnessing the power of Microsoft Azure along with NVIDIA DGX™ Cloud and the NVIDIA Clara™ microservices suite, healthcare providers, pharmaceutical and biotechnology companies, and medical device developers will soon be able to innovate rapidly in clinical research and deliver care more efficiently.

Industry leaders such as Sanofi and the Broad Institute of MIT and Harvard, ISVs such as Flywheel and SOPHiA GENETICS, academic medical centers such as the University of Wisconsin School of Medicine and Public Health, and health systems such as Mass General Brigham are already leveraging cloud computing and AI to drive transformative changes in healthcare and improve patient care.

**Industrial Digitization**

NVIDIA Omniverse Cloud APIs will be available first on Microsoft Azure later this year, allowing developers to enhance data interoperability, collaboration, and physics-based visualization in existing software applications. At NVIDIA GTC, Microsoft is showcasing a preview of what is possible using Omniverse Cloud APIs on Microsoft Azure. Using an interactive 3D viewer in Microsoft Power BI, factory operators can view real-time factory data overlaid on a 3D digital twin of their facility to gain new insights that can accelerate production.

**NVIDIA Triton Inference Server and Microsoft Copilot**

NVIDIA GPUs and the NVIDIA Triton Inference Server™ help serve AI inference predictions in Microsoft Copilot for Microsoft 365. Copilot for Microsoft 365, soon available as a dedicated physical key on Windows 11 PC keyboards, combines the power of large language models with proprietary enterprise data to deliver contextual intelligence in real time, enabling users to enhance their creativity, productivity, and skills.
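As a rough illustration of the serving layer described above, the snippet below sends an inference request to a model hosted by Triton Inference Server using NVIDIA's `tritonclient` Python package. The endpoint, model name, tensor names, and shapes are placeholders for illustration, not details of the actual Copilot deployment.

```python
# Illustrative sketch of querying a model served by NVIDIA Triton Inference Server.
# The endpoint, model name, and tensor names below are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Prepare a batch of token IDs as the model input (shape and dtype are assumptions).
token_ids = np.array([[101, 2009, 2003, 102]], dtype=np.int64)
infer_input = httpclient.InferInput("input_ids", token_ids.shape, "INT64")
infer_input.set_data_from_numpy(token_ids)

# Request the output tensor by name and run the inference call.
requested_output = httpclient.InferRequestedOutput("logits")
response = client.infer(model_name="example_llm",
                        inputs=[infer_input],
                        outputs=[requested_output])

print(response.as_numpy("logits").shape)
```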

**From AI Training to Deployment**

NVIDIA NIM™ inference microservices will come to Azure AI to accelerate AI deployment. Part of the NVIDIA AI Enterprise software platform, also available on Azure Marketplace, NIM offers cloud-native microservices for optimized inference on more than two dozen popular foundation models, including models built by NVIDIA that users can experience on ai.nvidia.com. For deployment, the microservices provide pre-built containers ready to run anywhere, powered by NVIDIA AI Enterprise inference software — including Triton Inference Server, TensorRT™, and TensorRT-LLM — to help developers accelerate the time to market of optimized production AI applications.
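For a sense of what calling a deployed NIM container can look like in practice, here is a minimal sketch that assumes the microservice exposes an OpenAI-compatible chat endpoint, as NIM containers typically do; the base URL, API key handling, and model identifier are placeholders rather than details from the announcement.

```python
# Minimal sketch of calling a NIM inference microservice, assuming it exposes an
# OpenAI-compatible chat endpoint (URL and model name below are placeholders).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",      # assumed local NIM endpoint
    api_key="not-needed-for-local-deployment",  # placeholder credential
)

completion = client.chat.completions.create(
    model="meta/llama2-70b",  # placeholder model identifier
    messages=[{"role": "user",
               "content": "Summarize the benefits of GPU partitioning in one sentence."}],
    max_tokens=128,
)

print(completion.choices[0].message.content)
```

Because the interface is OpenAI-compatible, existing applications can often be pointed at a NIM endpoint by changing only the base URL and model name.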

**Software and Expert Support to Scale AI Production**

All NVIDIA DGX platforms include NVIDIA AI Enterprise software for enterprise-grade development and deployment. DGX customers can expedite their work with NVIDIA's pretrained foundation models, frameworks, toolkits, and the new NIM microservices included in the software platform.

NVIDIA DGX experts and selected NVIDIA-certified partners support DGX platforms at every stage of deployment, enabling customers to quickly move AI to production. Once systems are operational, DGX experts continue to support customers in optimizing their AI pipelines and infrastructure.

The collaboration between Microsoft and NVIDIA is a testament to the power and promise of generative artificial intelligence. With these new integrations, companies of all sizes are better positioned to harness the latest innovations in AI, accelerating the development of AI solutions and transforming how they operate in an ever-evolving digital world.
