Fortinet and NVIDIA Bring AI Security to Runtime

Fortinet has strengthened its integration with NVIDIA to protect AI workloads, data, and autonomous agents across data centers, cloud environments, hybrid setups, and edge locations. The centerpiece is FortiAIGate, a solution that sits between applications and AI models to inspect traffic, enforce policies, detect abuse, and reduce the risk of data leaks without relying solely on external inference controls.

This move comes at a time when many companies are moving from chatbot testing to deploying more complex, agent-based AI systems. In these scenarios, models do more than answer questions: they query tools, call APIs, interact with MCP servers, access corporate data, and perform tasks within business processes. This shift makes runtime security—the point where the model receives instructions and takes action—an increasingly critical layer.

A security gateway for models, data, and agents

FortiAIGate is presented as a dedicated security gateway for AI environments. Its role is to monitor model usage, control inbound and outbound traffic, log suspicious incidents, and enforce guardrails against threats such as prompt injection, unauthorized content generation, tool abuse, or data exfiltration.

The NVIDIA integration aims to resolve a practical challenge: securing AI without adding latency that would hinder production deployment. To achieve this, Fortinet supports FortiAIGate on GPU-accelerated platforms, including NVIDIA Blackwell and Hopper architectures, and leverages the open framework NVIDIA Dynamo for distributed inference serving. Dynamo is designed to deploy generative models in distributed, low-latency environments with optimized memory management, request routing, and scaling across GPU fleets.

The goal is for security inspection to operate at the performance levels required by modern AI applications. Traditional controls often rely on CPU-based processes designed for web traffic, APIs, or conventional enterprise loads. Generative AI and agents introduce new patterns: long prompts, lengthy responses, tool calls, persistent context, and chained decisions. When security controls cannot keep pace with these patterns, teams tend to disable them, route around them, or place them too far from the actual risk point.

Fortinet emphasizes inline deployment, meaning the solution can be placed directly in the communication path between the application and the model. This enables blocking or modifying interactions before a malicious query reaches the model or before a response containing sensitive data leaves, although effectiveness depends on configuration, policies, and understanding of the enterprise context.
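To make the inline pattern concrete, here is a minimal sketch of a gateway sitting between an application and a model. All names and regex patterns are illustrative assumptions, not Fortinet's implementation: a real gateway would use trained classifiers and enterprise policy engines, not simple pattern matching.

```python
import re

# Illustrative guardrail patterns (assumptions, not a real ruleset).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like identifier
]

def inspect_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to the model."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def redact_response(response: str) -> str:
    """Mask sensitive data before the response leaves the boundary."""
    for p in SENSITIVE_PATTERNS:
        response = p.sub("[REDACTED]", response)
    return response

def gateway(prompt: str, model_call) -> str:
    """Inline inspection: check the prompt, forward it, filter the reply."""
    if not inspect_prompt(prompt):
        return "Request blocked by policy."
    return redact_response(model_call(prompt))
```

Because the gateway sits in the request path, it can block a malicious prompt before the model ever sees it, or rewrite a response before it reaches the user, which is exactly the property the article attributes to inline deployment.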

Sovereign AI and infrastructure control

A key argument in this announcement is AI sovereignty. Fortinet positions FortiAIGate as a solution deployable on private infrastructure, in the cloud, hybrid environments, or at the edge, with options for physical appliances, virtual appliances, or containers on NVIDIA-certified systems.

This aligns with growing concerns in Europe and regulated sectors. Companies want to leverage language models and agents but cannot always send sensitive data to external services without controlling its residency, traceability, provider, jurisdiction, or compliance. A self-hosted gateway allows organizations to enforce corporate policies on AI use within their technical and legal boundaries.

Additionally, Fortinet mentions using NVIDIA Nemotron security models to assist in interaction monitoring. In practice, this separates two layers: one model handles the business task, while a second evaluates each interaction for risks, problematic content, suspicious instructions, or potential leaks.

The company also links FortiAIGate with multi-tenant environments, particularly relevant for service providers, large enterprises, and AI data centers. Techniques like NVIDIA Multi-Instance GPU enable partitioning a single GPU into isolated instances with quality of service and separation. For shared deployments, this can prevent one AI application from interfering with another or ensure different business units share infrastructure without data or execution overlap.

However, sovereignty isn’t just about location. Hosting AI in your own or a national data center isn’t sufficient if models, training data, logs, credentials, or connectors aren’t well-governed. True sovereignty depends on architecture, contracts, audits, operational controls, and incident response capabilities.

The new frontier: protecting acting agents

AI cybersecurity can no longer be limited to filtering offensive prompts or preventing a chatbot from saying something inappropriate. The greater risk emerges when an agent has permissions to act—opening tickets, querying databases, placing orders, modifying configurations, or invoking connected tools in internal systems.

Attacks such as indirect prompt injection, tool misuse, secret extraction through seemingly legitimate responses, manipulation of RAG contexts, or misuse of MCP servers become critical concerns. Fortinet has experience inspecting MCP interactions via FortiWeb, and FortiAIGate extends that logic to a broader ecosystem protection model for AI environments.
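Indirect prompt injection deserves a concrete illustration: an attacker plants instruction-like text inside a document that a RAG pipeline later retrieves, so the "data" arrives carrying commands. The sketch below, with entirely illustrative heuristics (production detectors are trained classifiers, not regexes), shows where a gateway could screen retrieved chunks before they enter the model's context:

```python
import re

# Illustrative heuristics for instruction-like text smuggled into
# retrieved documents (assumptions for the sketch, not a real detector).
SUSPICIOUS = [
    re.compile(r"(?i)\b(ignore|disregard|override)\b.*\binstructions\b"),
    re.compile(r"(?i)\byou (must|should) now\b"),
    re.compile(r"(?i)\binvoke\b.*\btool\b"),
]

def is_suspicious(doc: str) -> bool:
    """Flag a retrieved chunk that reads like an instruction to the model."""
    return any(p.search(doc) for p in SUSPICIOUS)

def build_context(retrieved_docs: list[str]) -> list[str]:
    """Drop suspicious chunks before they reach the model's context window."""
    return [d for d in retrieved_docs if not is_suspicious(d)]
```

The design point is that the filter runs at the retrieval boundary, the same chokepoint where a runtime gateway would also log the dropped chunk for audit, rather than trusting the model to ignore injected instructions on its own.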

The NVIDIA collaboration aims to embed this protection into an enterprise performance layer. Organizations deploying serious AI solutions want security and speed—balancing Zero Trust policies, auditing, data control, and real-time response with low latency and high inference volume.

Fortinet’s commercial messaging is ambitious, but the problem it addresses is real. Many companies are adopting AI faster than their security teams can manage. Some users connect external tools unchecked. Others develop internal agents with excessive permissions. Many still lack the ability to log, audit, or explain what an agent did in a series of decisions.

FortiAIGate seeks to fill this gap: a governance and security layer for production AI that acts beyond the development phase. Its success will depend on how well it integrates with existing security infrastructure, the quality of policies, visibility it provides, and adaptability to rapidly evolving models, agents, and workflows.

The market trend is clear. Just as organizations adopted firewalls, WAFs, EDR, CNAPP, and DLP for earlier security layers, specialized controls for AI are now emerging. But agents are not just another class of application: they can become intermediaries between humans, data, and critical systems. Protecting them requires understanding context, permissions, intent, sensitive data, and the potential consequences of every action.

Frequently Asked Questions

What is FortiAIGate?

FortiAIGate is a Fortinet solution designed to protect AI workloads, data, and autonomous agents. It is deployed as a gateway between applications and models to enforce policies, monitor usage, and block threats during runtime.

What does the NVIDIA integration add?

The integration enables acceleration of FortiAIGate on NVIDIA platforms, including Blackwell and Hopper GPUs, and leverages NVIDIA Dynamo for distributed inference. The goal is to provide low-latency AI security that scales in data centers and cloud environments.

What threats does it aim to block?

FortiAIGate aims to mitigate risks such as prompt injection, data leaks, unauthorized model use, toxic or forbidden responses, tool abuse, and policy violations by AI agents.

Why is it called sovereign AI?

Because FortiAIGate can be deployed on private infrastructure, cloud, hybrid, or edge systems, helping organizations keep prompts, responses, data, and logs under their own control and comply with legal and regulatory requirements.

via: fortinet
