How CISOs can control employees’ use of AI

ChatGPT reached 100 million users within two months of its launch, less than 18 months ago. Enterprise adoption of generative artificial intelligence (GenAI), however, has been slower than expected. According to a recent survey by Telstra and MIT Technology Review, while 75% of companies tested GenAI last year, only 9% deployed it widely. The main barriers: data privacy and regulatory compliance.

Chief Information Security Officers (CISOs) have three primary concerns: gaining visibility into employees’ use of AI, enforcing corporate policies on acceptable AI use, and preventing the loss of confidential data and intellectual property. Protecting user activity around AI models is crucial for both privacy and compliance.

To gain complete visibility and control over internal and external user activity, organizations need to capture and analyze all outgoing access: which sites employees are reaching, where data is being stored, and whether its use is secure. During the initial ChatGPT boom, many employees uploaded sensitive information while experimenting with the technology, and few CISOs are confident they fully understand what data has been sent, or is still being sent.
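As a minimal sketch of what this capture-and-analyze step can look like, the Python snippet below scans a forward-proxy log for requests to known GenAI domains and totals the bytes uploaded per user. The log format, field names, and domain list are illustrative assumptions, not any specific product’s schema.

```python
# Sketch: flag outgoing requests to known GenAI services in a proxy log.
# The CSV log format (user, dest_host, bytes_sent) and the domain list
# are illustrative assumptions, not any specific product's schema.
import csv
from collections import Counter

GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def scan_proxy_log(path: str) -> Counter:
    """Total bytes sent to known GenAI hosts, per user and host."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["user"], host)] += int(row.get("bytes_sent", 0) or 0)
    return hits

if __name__ == "__main__":
    for (user, host), sent in scan_proxy_log("proxy.log").most_common(10):
        print(f"{user} -> {host}: {sent} bytes uploaded")
```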

Controlling employee activity through policies is a challenge. Compliance mechanisms can be complex to implement and must cover multiple points of access, such as endpoint agents, proxies, and gateways. AI data-access policies must be specific and tailored to the organization’s needs, for example preventing one customer’s information from being used to generate responses for another customer.
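To make that kind of rule concrete, here is one way a prompt-level policy check might look. The rule names, regular expressions, and `PolicyViolation` type are all hypothetical; real rules would mirror the organization’s own data classifications.

```python
# Sketch of a prompt-level AI data-access policy check. Rule names and
# patterns are hypothetical; real rules would reflect the organization's
# data classification (customer IDs, account numbers, source code, etc.).
import re
from dataclasses import dataclass

@dataclass
class PolicyViolation:
    rule: str
    excerpt: str

RULES = {
    "customer_email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[PolicyViolation]:
    """Return every policy rule the outgoing prompt would violate."""
    violations = []
    for rule, pattern in RULES.items():
        match = pattern.search(prompt)
        if match:
            violations.append(PolicyViolation(rule, match.group(0)))
    return violations

# Example: block the request if any rule fires.
if check_prompt("Summarize the complaint from jane.doe@example.com"):
    print("Blocked: prompt contains classified data")
```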

With the experience gained from using generative AI, companies have begun to identify what visibility and control over user activity actually require: building and maintaining a database of GenAI destinations, capturing the relevant activity, and continuously mapping that activity against the database to catalog and analyze risks.
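A destination database of this kind can start very simply. The sketch below, with invented entries and risk tiers, shows the shape of such a catalog: each GenAI endpoint is recorded with an assigned risk level so captured traffic can be mapped against it, and unknown endpoints default to blocked pending review.

```python
# Sketch of a GenAI destination catalog. Entries and risk tiers are
# illustrative; a real catalog would be curated and continuously updated.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    SANCTIONED = "sanctioned"  # approved and contractually covered
    TOLERATED = "tolerated"    # allowed, but with no sensitive data
    BLOCKED = "blocked"        # disallowed outright

@dataclass(frozen=True)
class Destination:
    host: str
    service: str
    risk: Risk

CATALOG = [
    Destination("api.openai.com", "OpenAI API", Risk.TOLERATED),
    Destination("claude.ai", "Anthropic Claude (web)", Risk.BLOCKED),
    Destination("genai.internal.example", "Self-hosted LLM", Risk.SANCTIONED),
]

def classify(host: str) -> Risk:
    """Map an observed destination onto a catalog entry, defaulting to
    BLOCKED so unknown GenAI endpoints are reviewed, not silently allowed."""
    for d in CATALOG:
        if host == d.host or host.endswith("." + d.host):
            return d.risk
    return Risk.BLOCKED
```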

Many companies are also considering deploying language models, their own ChatGPT-style assistants, hosted on-premises or on dedicated private cloud or bare-metal infrastructure, notes David Carrero, co-founder of Stackscale, a specialist in infrastructure, private cloud, and bare-metal solutions.
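From the application side, pointing workloads at an internally hosted model is often just a change of endpoint. The sketch below posts a chat request to a hypothetical in-house server exposing an OpenAI-compatible API, as self-hosting stacks such as vLLM or Ollama can provide; the URL and model name are placeholders.

```python
# Sketch: query a self-hosted LLM over an OpenAI-compatible HTTP API.
# The endpoint URL and model name are placeholders for whatever the
# internal platform (e.g., vLLM or Ollama) actually exposes.
import json
import urllib.request

ENDPOINT = "https://llm.internal.example/v1/chat/completions"  # hypothetical

def ask_internal_model(prompt: str) -> str:
    body = json.dumps({
        "model": "internal-llm",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

print(ask_internal_model("Draft a privacy notice for our HR portal."))
```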

Finally, the ability to enforce acceptable-use policies in real time is essential. This means intercepting requests and applying policy before data loss or unsafe AI use can occur. The enforcement mechanism must cover all internal and external access to language models, regardless of the platform or cloud used.
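One common way to intercept and apply policy in real time is a forward proxy on the egress path. The sketch below is a mitmproxy addon that refuses requests carrying a classified-data marker to GenAI hosts; the host list and the simplification that the prompt is the raw request body are assumptions standing in for a full policy engine like the one sketched earlier.

```python
# Sketch of real-time policy enforcement as a mitmproxy addon: requests
# to GenAI hosts are inspected and blocked before leaving the network.
# The host list and the naive "prompt is the request body" assumption
# are simplifications of what a production gateway would do.
from mitmproxy import http

GENAI_HOSTS = {"api.openai.com", "chat.openai.com"}  # illustrative

def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host not in GENAI_HOSTS:
        return
    body = flow.request.get_text() or ""
    if "CONFIDENTIAL" in body:  # stand-in for a real policy check
        flow.response = http.Response.make(
            403,
            b"Blocked by AI acceptable-use policy",
            {"Content-Type": "text/plain"},
        )
```

Such an addon runs with `mitmdump -s addon.py` in the egress path; in production, the inline keyword test would be replaced by the organization’s full policy engine.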

Despite the complexity, some companies have made significant progress in this area. Larger, more advanced organizations have already developed controls for AI visibility and control: some built solutions from scratch, while others combined existing tools such as EDR, SIEM, CASB, proxies, and firewalls. As the field evolves rapidly, new startups are bringing innovative solutions to market, marking the next big shift in IT security driven by GenAI adoption.
