NVIDIA Strengthens Security in Generative AI with NIM Microservices and NeMo Guardrails

Artificial intelligence (AI) is transforming the productivity of knowledge workers worldwide through autonomous agents capable of performing complex tasks. However, deploying these agents safely and reliably poses significant challenges around trust, security, and regulatory compliance. To address these challenges, NVIDIA has launched a set of NIM microservices for its NeMo Guardrails collection, designed to safeguard generative AI applications.

NIM Microservices: A Solution for Safer AI Agents

NIM microservices are optimized, portable inference services that enable companies to build more trustworthy and secure AI agents. They help prevent inappropriate or off-topic responses and protect applications from malicious manipulation attempts, also known as “jailbreaks.” Such agents are already in use in sectors like finance, healthcare, automotive, manufacturing, and retail, improving customer satisfaction and trust.

Among the new microservices launched by NVIDIA are the following; a configuration sketch showing how they plug into an application appears after the list:

  1. Content Moderation: Designed to avoid biased or harmful responses, ensuring alignment with ethical standards.
  2. Topic Control: Keeps conversations focused on approved topics, preventing inappropriate deviations.
  3. Jailbreak Attempt Detection: Reinforces the integrity of AI models by identifying and mitigating manipulation attempts.
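
In practice, checks like these are wired into an application as “rails” in a NeMo Guardrails configuration. The sketch below is illustrative only: it assumes the open-source `nemoguardrails` Python package, and the model names and flow identifiers are patterned on NVIDIA's published examples rather than taken from this announcement.

```python
# Illustrative NeMo Guardrails configuration that attaches content-safety and
# topic-control NIM checks as input rails. Model names and flow identifiers
# are assumptions modeled on NVIDIA's docs; verify them before use.
from nemoguardrails import RailsConfig

config = RailsConfig.from_content(yaml_content="""
models:
  - type: main
    engine: nim
    model: meta/llama-3.1-8b-instruct            # example application LLM
  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety
  - type: topic_control
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-topic-control

rails:
  input:
    flows:
      - content safety check input $model=content_safety
      - topic safety check input $model=topic_control
""")
```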

How NeMo Guardrails Works

NeMo Guardrails is a platform within NVIDIA’s NeMo ecosystem that lets developers define and enforce safety policies in applications built on large language models (LLMs). These controls are essential for AI agents to operate at scale while keeping their behavior controlled and safe.
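
As a minimal sketch of the developer workflow, assuming the open-source `nemoguardrails` Python package and a local `./config` directory holding a configuration like the one above:

```python
# Minimal sketch: wrap an LLM application with NeMo Guardrails so every
# request passes through the configured input/output checks.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # load the guardrail policies
rails = LLMRails(config)                    # wrap the LLM with those rails

response = rails.generate(messages=[
    {"role": "user", "content": "Can you help me update my account settings?"}
])
print(response["content"])
```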

A key aspect of NeMo Guardrails is its focus on smaller, specialized models, which keeps latency low and operation efficient even in resource-constrained environments, such as hospitals or warehouses. The tool also integrates with third-party AI safety and observability offerings, such as ActiveFence, Hive, Fiddler AI, and Weights & Biases, creating a comprehensive framework for developing and overseeing AI applications.

Use Cases and Industry Adoption

Leading companies like Amdocs, Cerence AI, and Lowe’s are already using NeMo Guardrails to ensure the safety of their AI applications.

  • Amdocs is employing NeMo Guardrails in its amAIz platform, enhancing its “trusted AI” capability to provide safer and more accurate customer interactions.
  • Cerence AI, which specializes in intelligent in-vehicle assistants, uses these tools to keep interactions contextually appropriate and avoid unsuitable responses across its family of language models.
  • Lowe’s is applying the technology to enhance the experience of its in-store associates, facilitating safe and reliable responses to customer inquiries.

These implementations highlight the versatility of NeMo Guardrails, which has also been integrated into solutions from consultancies such as TaskUs, Tech Mahindra, and Wipro to create more controlled and trustworthy generative AI applications.

Open-Source Tools to Test AI Security

NVIDIA has also released Garak, an open-source tool for assessing the robustness of LLMs and AI systems. Garak lets developers identify vulnerabilities such as data leaks, prompt injections, and code hallucination, so they can strengthen the security and performance of their applications before deployment.
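
As a hedged illustration, a scan can be launched through garak's command-line interface; the snippet below drives it from Python. The flags follow garak's documented CLI, while the target model and probe selection are placeholders.

```python
# Illustrative garak vulnerability scan launched via its module entry point.
# The target model and probe choice are placeholders for this example.
import subprocess

subprocess.run([
    "python", "-m", "garak",
    "--model_type", "openai",         # adapter for the LLM under test
    "--model_name", "gpt-3.5-turbo",  # placeholder target model
    "--probes", "promptinject",       # probes for prompt-injection attacks
])
```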

Availability

NVIDIA’s NIM microservices, along with NeMo Guardrails and the Garak tool, are already available for developers and companies. With this technology, NVIDIA provides a robust framework for AI agents to be not only more effective but also safer and more trustworthy.

NVIDIA reaffirms its commitment to AI innovation while addressing the critical security and compliance challenges that businesses face on their path to adopting autonomous agents.

via: NVIDIA
