Radware Launches a “Firewall for LLMs” and Takes AI Security to the Prompt Level

Radware has announced LLM Firewall, a new layer of protection designed for companies already integrating large language models (LLMs) into products, assistants, and internal workflows. The concept is straightforward: block attacks and abuse "at the prompt," before the request ever reaches the model, at a time when generative AI adoption is accelerating and, with it, the attack surface.

According to the company, LLM Firewall is offered as an add-on to all tiers of its Cloud Application Protection Services suite and is conceived as the first phase of a broader protection approach aimed at "agentic AI" scenarios (agents acting semi-autonomously). Radware also emphasizes that the approach is model-agnostic and designed to integrate without hindering deployment.

From “Traditional” WAF to “Prompt WAF”

In its communication, Radware frames the product as a conceptual evolution: if for years the WAF protected web applications from HTTP-level attacks, the leap now is to defend against natural-language attacks that exploit how LLMs behave and integrate into applications. The company mentions threats like prompt injection, jailbreaks, and resource abuse, with real-time detection and blocking.
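To make the idea concrete, here is a minimal, purely illustrative sketch of what "blocking at the prompt" can mean at its simplest: screening incoming text against known injection phrasings before it is forwarded to the model. The patterns and function below are hypothetical examples, not Radware's implementation, which the company describes as far more sophisticated real-time detection.

```python
import re

# Hypothetical heuristics for illustration only; a production prompt
# firewall combines many signals, not a short pattern list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"you are now (in )?(developer|dan) mode",
    r"reveal (your )?(system prompt|instructions)",
    r"disregard (your|the) (guidelines|rules|policies)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks like an injection attempt."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A gateway sitting in front of the LLM would reject or quarantine any request for which `screen_prompt` returns `True`, so the model never sees the hostile text.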

At the same time, this movement aligns with the ongoing conversation in infrastructure and operations teams: Gartner has identified agentic AI as one of the top trends impacting 2026, alongside AI governance platforms and disinformation security, noting that innovation is outpacing traditional controls.

What Data Is Being Protected (And Why It Matters)

One of the most sensitive points in enterprise environments is the leakage of personal or confidential information through prompts, responses, or connected tools interacting with the model. Radware claims that its LLM Firewall is designed to detect and block data exfiltration attempts before the request reaches the client’s LLM, linking it to regulatory and compliance frameworks such as GDPR and HIPAA.
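As a rough illustration of the "block before the request reaches the LLM" idea applied to data leakage, the sketch below redacts a few common PII patterns (emails, US SSN-style numbers, card-like sequences) from an outgoing prompt. These patterns and the `redact_pii` helper are assumptions for the example; real GDPR/HIPAA tooling is considerably more thorough.

```python
import re

# Illustrative PII patterns only; real compliance tooling covers many
# more identifier types and uses context-aware detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact_pii(prompt: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders; report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, found
```

A pre-flight step like this can run in the same gateway that screens for injections, so sensitive values never leave the perimeter even when the prompt itself is otherwise allowed.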

This focus also aligns with the risk frameworks being solidified in the industry. For example, OWASP includes risks like Prompt Injection and issues related to exposure of sensitive data in its Top 10 for applications with LLMs, reflecting that the threat vector is no longer just “traditional software,” but also linguistic interactions and AI integration into real-world processes.

A Market Moving Rapidly

Radware is not alone in this direction. In recent weeks, major security and cloud providers have intensified similar messaging: Palo Alto Networks and Google Cloud announced an expanded partnership to secure the development and deployment of AI solutions, focusing on "code-to-cloud" protection, AI posture, runtime security, and agent-specific security.

The pattern is clear: companies want to harness AI… but not at the cost of creating new entry points for data leaks, impersonations, malicious automation, or uncontrolled decisions.

What Companies Should Do Beyond the Headlines

The promise of a "prompt firewall" sounds good, but in practice it works best as part of a layered strategy. Operationally, the minimum for security and platform teams typically includes:

  • Comprehensive Inventory: identify where LLMs are deployed (apps, internal copilots, support chatbots), what data they handle, and what tools they run.
  • Data Policies: define what information can be included in prompts, what must be anonymized, and what should never leave the perimeter.
  • Identity and Permission Controls: specify who can invoke each agent, with what scopes, and under which contexts.
  • Specific Monitoring: telemetry and alerts for anomalous patterns (prompt spikes, extraction-oriented prompts, automated behaviors).
  • Offensive Testing: red teaming and testing batteries focused on prompt injection/jailbreaks, not just traditional CVEs.

Within this framework, products like Radware’s new offering aim to fill a gap many companies realize too late: AI security isn’t solved solely with network firewalls or “best practices” in development. Language governance is also crucial.

via: radware
