F5 has expanded its partnership with Red Hat to bring new security capabilities to Kubernetes environments, AI-based applications, and modernized IT operations. The announcement focuses on two main areas: protecting applications and APIs on Red Hat OpenShift using NGINX Gateway Fabric, and creating reference architectures for deploying AI workloads with enhanced controls from the start.
This move comes at a time when many companies are deploying critical applications on Kubernetes, while experimenting with internal chatbots, agents, generative models, and AI workflows connected to sensitive data. This combination significantly broadens the attack surface. Protecting a monolithic application behind a traditional firewall is no longer enough. Organizations must defend APIs, microservices, gateways, models, prompts, integrations, pipelines, and traffic between rapidly changing components.
WAF for Kubernetes on NGINX Gateway Fabric
The first innovation is F5 WAF for NGINX on NGINX Gateway Fabric, available through a certified operator for Red Hat OpenShift. This solution aims to deliver Layer 7 protection to native Kubernetes workflows, with security controls for applications and APIs that can be managed declaratively, closer to the daily operations of DevOps and platform teams.
Practically, this means that companies deploying applications on OpenShift can embed Web Application Firewall rules within their Kubernetes architecture without necessarily disrupting their automation workflows. F5 cites coverage of the OWASP Top 10 and modern API security, two areas that are especially sensitive in environments where services are frequently exposed, versioned, and interconnected.
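As an illustration of what declarative, Kubernetes-native security can look like, the sketch below uses the standard Gateway API `HTTPRoute` resource that NGINX Gateway Fabric implements. The `WAFPolicy` resource, its API group, and its fields are invented for illustration; they are not F5's documented schema.

```yaml
# Standard Gateway API route, handled by NGINX Gateway Fabric.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: orders-api
  namespace: shop
spec:
  parentRefs:
    - name: main-gateway        # Gateway managed by NGINX Gateway Fabric
  hostnames:
    - "api.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
      backendRefs:
        - name: orders-svc
          port: 8080
---
# Hypothetical WAF policy attached to the route; the CRD name and
# fields below are assumptions, not F5's published schema.
apiVersion: example.f5.com/v1alpha1
kind: WAFPolicy
metadata:
  name: owasp-baseline
  namespace: shop
spec:
  targetRef:
    kind: HTTPRoute
    name: orders-api
  protections:
    - owasp-top-10
    - api-schema-validation
```

Because both resources are plain manifests, they can live in the same Git repository as the application and flow through the same review and CI/CD process, which is the point of managing security declaratively.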
Using a certified operator for OpenShift also has business implications. While Kubernetes has gained significant adoption, managing it remains complex for organizations requiring support, governance, and repeatability. Operators facilitate installing, updating, and managing components within the cluster through more automated patterns. For security teams, this can narrow the gap between corporate policy and what is actually deployed.
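On OpenShift, certified operators are typically installed through Operator Lifecycle Manager, so adding a component like this can be expressed as a short manifest. The `Subscription` API below is standard OLM; the package and operator names are placeholder assumptions, not necessarily the real catalog entries.

```yaml
# Standard OLM Subscription; the package name is a placeholder
# assumption, not necessarily the operator's real catalog name.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: f5-waf-operator          # assumed name for illustration
  namespace: openshift-operators
spec:
  channel: stable                # update channel published by the vendor
  name: f5-waf-operator          # package name in the catalog (assumed)
  source: certified-operators    # Red Hat certified operator catalog
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic # let OLM apply updates automatically
```

Once the Subscription exists, OLM resolves, installs, and upgrades the operator, which is what makes the pattern repeatable across clusters.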
This announcement also underscores NGINX’s role within F5’s overall strategy. Since acquiring NGINX, F5 has sought to unify traditional application delivery and security with cloud-native architectures. NGINX Gateway Fabric fits into this transition by aligning with Kubernetes standards and supporting a model where configurations increasingly live in manifests, repositories, and pipelines.
The key will be ensuring these capabilities are not just additional layers but are integrated into development processes. Many organizations face security issues not due to lack of tools but because controls arrive late, require manual configuration, or create bottlenecks for deployment teams. Security as code aims to address this by enabling repeatable, reviewable control definitions.
AI Security Comes to OpenShift
The second part of the announcement targets artificial intelligence directly. F5 and Red Hat have collaborated on an AI quickstart for Red Hat OpenShift AI, featuring predefined blueprints that allow teams to rapidly set up and explore AI architectures without starting from scratch. F5 mentions solutions like F5 AI Guardrails and F5 AI Red Team, designed to add controls and risk assessment layers to AI-based applications.
This is important because enterprise AI doesn’t operate in isolation. A corporate chatbot might connect to databases, internal documents, customer systems, CRMs, or support tools. An agent can perform actions. An AI application might access documents, interpret images, interact with APIs, or provide sensitive information. Each of these capabilities introduces specific risks: data leaks, prompt injections, misuse of tools, unauthorized responses, or internal data exposure.
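The kinds of controls a guardrail layer applies can be shown with a deliberately simple sketch: screening user input for obvious prompt-injection patterns and restricting which tools an agent may invoke. This is a generic illustration, not F5 AI Guardrails; every name and pattern here is invented, and real products use far richer detection (classifiers, output scanning, policy engines).

```python
import re

# Naive prompt-injection patterns; purely illustrative.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

# Tools this hypothetical agent is allowed to call.
ALLOWED_TOOLS = {"search_docs", "summarize"}

def screen_input(user_text: str) -> bool:
    """Return True if the input passes the guardrail."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def authorize_tool(tool_name: str) -> bool:
    """Only allow tools on the explicit allowlist."""
    return tool_name in ALLOWED_TOOLS
```

In practice these checks sit between the user, the model, and the agent's tools, so that a suspicious prompt is rejected before it reaches the model and an unexpected tool call is blocked before it touches internal systems.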
Red Hat OpenShift AI provides a platform to build, train, deploy, and operate AI and machine learning applications on OpenShift. The collaboration with F5 seeks to add a security and validation layer tailored to the needs of regulated sectors or companies handling sensitive data. The goal isn’t just “using AI,” but deploying it with more repeatable architectures and built-in controls from the outset.
Quickstarts can be especially helpful for organizations in the early adoption phase. Many companies have teams interested in AI but lack clear patterns for moving from proofs of concept to controlled production deployments. Having validated architectures can accelerate learning and reduce initial errors, provided these templates are adapted to each company's specific requirements.
Kubernetes, APIs, and AI: A Shared Risk Surface
F5 and Red Hat’s announcement reflects a key convergence: Kubernetes, APIs, and AI are no longer separate entities. Modern applications are deployed in containers, exposed via gateways, communicate through APIs, and increasingly incorporate AI components. Managing security across all these layers in isolation can lead to governance challenges.
For CISOs, this raises a practical question: how to apply consistent policies across hybrid and multi-cloud environments where traditional apps, microservices, public APIs, AI models, and internal services coexist. Relying solely on perimeter controls isn't enough. Visibility, declarative policies, CI/CD integration, runtime protection, and platform tools that let teams operate without hindrance are essential.
F5 positions its Application Delivery and Security Platform (ADSP) as a solution to this complexity. Red Hat offers OpenShift as the enterprise Kubernetes platform and OpenShift AI as the foundation for AI workloads. Combining these approaches makes sense for companies aiming to standardize deployments while preventing security from being an afterthought.
Open standards also play a role. Both F5 and Red Hat emphasize scalable, open architectures, a priority for customers wary of vendor lock-in. In Kubernetes environments, this concern is common: businesses want to leverage the ecosystem without sacrificing support or governance.
Adoption won’t be automatic, however. Integrating WAFs, gateways, API security, and AI controls requires process changes. Development teams must understand what is validated, how deployments are made, and what errors block promotion to production. Security teams need to shift from late-stage reviews to defining reusable policies. Platform teams should provide templates that make it easy to do the right thing.
The true value of this announcement lies not in offering yet another isolated tool, but in bringing together security, application delivery, and AI workloads within a unified operational platform. As intelligent agents and applications begin touching critical processes, this integration can be crucial to enabling controlled adoption and avoiding a proliferation of unmanageable pilots.
AI is accelerating application modernization but also exposing vulnerabilities in APIs, permissions, data, and cloud-native operations. The partnership between F5 and Red Hat points to an emerging trend: enterprises don't just need to deploy AI faster; they need to deploy it within architectures that can be defended, audited, and maintained.
Frequently Asked Questions
What did F5 and Red Hat announce?
F5 expanded its solutions for Red Hat OpenShift with F5 WAF for NGINX on NGINX Gateway Fabric and AI quickstarts for Red Hat OpenShift AI.
What does F5 WAF for NGINX in OpenShift provide?
It enables Layer 7 protection for applications and APIs in Kubernetes environments with declarative management tailored for DevOps workflows.
What role does NGINX Gateway Fabric play?
It serves as the foundation for managing traffic and gateways in Kubernetes, facilitating the integration of security and application delivery capabilities in cloud-native architectures.
Why is this important for AI applications?
Because AI applications often connect to data, APIs, and internal tools, requiring controls to prevent data leaks, API abuse, prompt injections, and unauthorized use.
via: f5.com