F5 and Forcepoint have announced a partnership aimed at one of the most sensitive fronts of enterprise AI adoption: securing both data and AI systems from pre-production through day-to-day operation. The initiative, unveiled during RSA Conference 2026 in San Francisco, connects Forcepoint's ability to discover and classify sensitive information with F5's tools for monitoring, testing, and protecting applications and models once they are operational.
This move comes at a particularly critical moment for many organizations. After months of pilots, internal testing, and limited deployments, a growing segment of the market is starting to bring AI copilots, assistants, and automated workflows into real-world business environments. The challenge is that this transition often involves data scattered across multiple repositories, security policies fragmented across separate tools, and limited visibility into how an AI application actually behaves when interacting with users, APIs, and corporate systems.
A Response to an Increasingly Visible Problem
The partnership between these companies stems precisely from this diagnosis. According to the announcement, Forcepoint will contribute its Data Security Posture Management (DSPM) capabilities to locate, classify, and prioritize sensitive data across cloud, SaaS, endpoint, and enterprise systems. F5, for its part, will add its AI Red Team and AI Guardrails technologies within its Application Delivery and Security Platform (ADSP) to enforce runtime controls on applications, APIs, models, and AI agents.
In other words, Forcepoint focuses on understanding data and the associated risks, while F5 provides active protection once AI systems are in operation. This approach addresses a very specific reality: knowing which information is critical is not enough if there is no clear way to prevent leaks, prompt abuse, anomalous behavior, or unsafe interactions once the system is live.
This is arguably the most significant aspect of the announcement. In practice, the security industry has long treated data governance, application protection, and model oversight as separate domains. The messaging from F5 and Forcepoint suggests that, with generative AI and software agents, this separation becomes less useful. If a company does not know which data is accessible to an assistant, or cannot control how that assistant responds in real time, its security policies are only partially effective.
From Data Discovery to Real-Time Control
The commercial message is ambitious, but the technical logic makes sense. Since 2025, Forcepoint has been strengthening its “Self-Aware Data Security” strategy, focusing on classifying sensitive information and continuously adapting controls. Its DSPM aims to identify where data resides, who can access it, and its exposure level. Simultaneously, F5 has accelerated its investments in AI security with products like AI Guardrails and AI Red Team, officially launched in January and already integrated into its application delivery and security platform.
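To make the DSPM side of this concrete, the sketch below shows the kind of discovery-and-prioritization pass such tools automate: walk a repository, flag files containing sensitive patterns, and rank the findings for review. It is a minimal illustration, not Forcepoint's method; the file paths, regular expressions, and the Finding structure are all invented for the example, and real DSPM products rely on far richer detection than regex matching.

```python
import re
from dataclasses import dataclass
from pathlib import Path

# Hypothetical patterns for common sensitive-data types; a real DSPM
# product uses far richer detection (ML classifiers, exact-data matching).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

@dataclass
class Finding:
    path: str
    data_type: str
    matches: int

def scan_repository(root: str) -> list[Finding]:
    """Walk a directory tree and flag files containing sensitive patterns."""
    findings = []
    for file in Path(root).rglob("*.txt"):
        text = file.read_text(errors="ignore")
        for data_type, pattern in PATTERNS.items():
            hits = pattern.findall(text)
            if hits:
                findings.append(Finding(str(file), data_type, len(hits)))
    # Prioritize: more matches means higher review priority.
    return sorted(findings, key=lambda f: f.matches, reverse=True)

if __name__ == "__main__":
    for finding in scan_repository("./shared-drive"):
        print(f"{finding.path}: {finding.matches} {finding.data_type} match(es)")
```

A production system would also record who can access each flagged file, since exposure depends on permissions as much as on content.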
Building on this, the partnership promises a more comprehensive chain: first identifying which data should or shouldn’t feed AI systems; then assessing the riskiness of specific use cases; and finally, applying controls when these systems interact with people, applications, or automated agents. Threats mentioned in F5’s documentation include prompt injection, jailbreaks, data leaks, and malicious use of APIs and models.
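The runtime half of the chain can be sketched just as simply. The toy example below wraps a model call with an inbound check for injection-style prompts and an outbound check for output that looks sensitive. It is a hedged illustration only: the pattern lists are placeholders for the trained detectors a product like AI Guardrails would use, and model_call stands in for whatever inference client an application already has.

```python
import re

# Illustrative deny-list of prompt-injection markers; real guardrail
# products use trained classifiers, not simple patterns.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"you are now in developer mode",
    r"reveal your system prompt",
]

SENSITIVE_OUTPUT = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # looks like a US SSN
]

def check_prompt(prompt: str) -> bool:
    """Return False if the prompt matches a known injection pattern."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_MARKERS)

def check_response(response: str) -> bool:
    """Return False if the model output appears to leak sensitive data."""
    return not any(re.search(p, response) for p in SENSITIVE_OUTPUT)

def guarded_call(prompt: str, model_call) -> str:
    # Inbound control: block likely injection and jailbreak attempts.
    if not check_prompt(prompt):
        return "Request blocked by inbound policy."
    response = model_call(prompt)
    # Outbound control: withhold responses that appear to leak data.
    if not check_response(response):
        return "Response withheld by outbound policy."
    return response
```

The design point is that both directions are inspected: filtering hostile input addresses prompt injection and jailbreaks, while inspecting output catches leaks the input filter missed.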
It is important, however, to distinguish between what has been confirmed and what is suggested. The announcement refers to a technological collaboration, not a new all-in-one platform built from scratch. No detailed information has been provided about commercial integration, deployment timelines, pricing, or potential packaged offerings. In this sense, the announcement is more of a strategic message to the market than the launch of a fully defined, ready-to-use product.
Implications for Companies Bringing AI into Production
Even with this nuance, the partnership reflects a broader trend beyond F5 and Forcepoint. AI security is shifting from being solely about “models” to becoming an enterprise architecture issue. The risks are no longer limited to chatbots making mistakes or hallucinating; they now include what data they can access, what external tools they can invoke, what logs they retain, how their responses are validated, and what controls exist when systems behave unexpectedly.
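What that architectural shift can look like in practice: every tool an agent may invoke sits behind an explicit policy, and every decision leaves an audit trail. The sketch below is purely illustrative; the TOOL_POLICY registry, the classification levels, and the function names are assumptions made for the example, not any vendor's API.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-policy")

# Hypothetical registry: the tools an agent may invoke, plus the most
# sensitive data classification each one is allowed to receive.
TOOL_POLICY = {
    "search_kb":  {"max_classification": "internal"},
    "send_email": {"max_classification": "public"},
}

LEVELS = {"public": 0, "internal": 1, "confidential": 2}

def invoke_tool(name: str, payload_classification: str,
                tool: Callable[[], str]) -> str:
    """Gate an agent's tool call against the policy and log the decision."""
    policy = TOOL_POLICY.get(name)
    if policy is None:
        log.warning("blocked: tool %r is not on the allowlist", name)
        raise PermissionError(f"tool {name!r} not permitted")
    if LEVELS[payload_classification] > LEVELS[policy["max_classification"]]:
        log.warning("blocked: %r payload too sensitive for %r",
                    payload_classification, name)
        raise PermissionError("payload exceeds tool's data classification")
    log.info("allowed: %s (payload=%s)", name, payload_classification)
    return tool()
```

The combination of an allowlist and classification-aware gating is one way the data-discovery side (what is sensitive) and the runtime side (what an agent may do with it) connect.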
For companies, this translates into practical considerations. Many already have solutions for DLP, classification, WAF, API security, or monitoring, but these are often siloed. The F5-Forcepoint alliance aims to offer the opposite: a way to connect those existing investments rather than waiting for a single "magic" platform that solves everything. This can be particularly attractive for large organizations and highly regulated sectors, where AI deployment requires traceability, data control, and continuous oversight.
There is also a clear competitive dimension. F5 wants to strengthen its role as a security and application delivery provider in the AI era, while Forcepoint aims to establish itself as a relevant player in data security and GenAI risk mitigation. The partnership lets both speak to CISOs and platform managers in broader terms: not just perimeter protection or file classification, but support for the entire AI adoption cycle.
Although many of the promises remain vague, the announcement makes the industry's direction clear: toward continuous, connected protection models that better match how systems actually behave in production. That understanding may prove crucial for the next phase of corporate AI deployment.
Frequently Asked Questions
What exactly have F5 and Forcepoint announced?
They announced a partnership to combine Forcepoint’s data discovery and classification capabilities with F5’s tools for protecting AI applications and systems at runtime.
What is DSPM and why is it important for AI projects?
DSPM stands for Data Security Posture Management. It helps identify where sensitive data is stored, how exposed it is, and who can access it—key information before linking this data to AI models, assistants, or agents.
What does F5 contribute to this AI security collaboration?
F5 offers tools like AI Red Team and AI Guardrails within its ADSP platform to test models against attacks, detect misuse, and enforce security controls once AI is in production.
Is this a new closed platform or an integration of existing technologies?
Based on current information, it’s an alliance and integration of existing capabilities from both companies, not a brand new platform built from scratch.