
What Is AI TRiSM and Why Is It Critical to Your Generative AI Program?

It is almost certain that you didn’t wake up this morning thinking, “If only there were yet another tech acronym for me to learn and understand.”

And while that may be the case, it is more likely you were thinking about how to bring Generative AI into your organization to improve productivity in a secure and responsible way. You would not be alone. According to a recent Accenture Pulse of Change survey of CEOs, 99% plan to grow their investments in this transformative technology. (We suppose the other 1% are looking for a new job.) And corroborating data from organizations from Gartner to McKinsey confirms that, despite the potential risks and new threat surfaces, Generative AI will be essential for organizations across the spectrum of industries to remain competitive.

Which brings us back to the newest tech acronym you need to know: AI TRiSM, or Artificial Intelligence Trust, Risk and Security Management. Commit it to memory, because you’ll be seeing it a lot as enterprises tackle implementing mission-critical, companywide Generative AI programs securely and responsibly against an entirely new set of threats.

Unpacking Gen AI Risks

Gen AI’s 3 Categories of Risk

So, let’s dive in and break down AI TRiSM and the three core categories of risk it covers as defined by our friends at Gartner:

1. Anomaly Detection (For inputs and outputs) 

Essentially, this is getting a grip on the prompts your employees are using as inputs and on the quality of the outputs. Is there malicious intent? Is there bias? Is confidential data involved? Is the output accurate and legal, or does it infringe on copyrights? Will the output aid or compromise decision-making?
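To make this concrete, here is a minimal sketch of prompt and output screening in Python. The regex patterns and jailbreak phrases are illustrative placeholders only; a production anomaly detector would rely on trained classifiers and a maintained threat corpus rather than a short hard-coded list.

```python
import re

# Illustrative patterns only: real anomaly detection would use trained
# classifiers and a maintained threat corpus, not a hard-coded list.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
JAILBREAK_MARKERS = ("ignore previous instructions", "pretend you have no rules")

def screen_text(text: str) -> list[str]:
    """Return anomaly flags raised by a prompt (input) or completion (output)."""
    flags = [f"pii:{name}" for name, pat in PII_PATTERNS.items() if pat.search(text)]
    lowered = text.lower()
    flags += [f"jailbreak:{marker}" for marker in JAILBREAK_MARKERS if marker in lowered]
    return flags

print(screen_text("Summarize this memo; my SSN is 123-45-6789"))
# -> ['pii:ssn']
```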

2. Secure AI Applications

This concept focuses on how well your Generative AI applications are protected and how susceptible you are to adversarial prompting attacks, vector database attacks, and hacker access to model states and parameters.
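As one illustration of hardening the application layer, the sketch below filters retrieval results by the caller’s role before they ever reach the model’s context, so an adversarial prompt cannot surface restricted chunks. The Doc record and role names are hypothetical; real vector stores attach metadata to each embedding, which is what makes this kind of filtering possible.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Doc:
    text: str
    allowed_roles: frozenset[str]  # clearance metadata attached at index time

def filter_retrieved(docs: list[Doc], user_roles: set[str]) -> list[Doc]:
    """Drop retrieved chunks the caller is not cleared to see, so an
    adversarial prompt cannot pull restricted data into the model context."""
    return [d for d in docs if d.allowed_roles & user_roles]

docs = [
    Doc("Q3 revenue forecast", frozenset({"finance"})),
    Doc("Public product FAQ", frozenset({"finance", "support"})),
]
print([d.text for d in filter_retrieved(docs, {"support"})])
# -> ['Public product FAQ']
```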

3. Data Protection

The nature of Generative AI can make the enterprise more susceptible to data loss and leakage. Additionally, there are potential data privacy issues, and assessing compliance with regulations becomes harder when using an external third-party model.
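A common mitigation when an external third-party model is involved is to redact sensitive values before a prompt leaves the enterprise boundary. The sketch below is a minimal illustration; the patterns are placeholders, and real DLP engines use far richer detectors.

```python
import re

# Placeholder redaction rules; production DLP uses far richer detectors.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(prompt: str) -> str:
    """Mask sensitive values before the prompt is sent to an external model."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

print(redact("Email jane.doe@example.com, SSN 123-45-6789, about the audit"))
# -> 'Email [EMAIL], SSN [SSN], about the audit'
```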

One might ask, “Aren’t these threats covered by my current security providers?” The short and long answer is no. They are not.

LLM Security: Exploring The Risks Introduced By Large Language Models (LLMs)

As stated earlier in this article, Generative AI creates new threat surfaces for both the creators and the users of these programs. For example:

1. Data Training

In building your LLM (large language model), you not only have to ensure the training data is accurate; you also have to ensure the data has not been “poisoned” with incorrect records, whether inserted maliciously or inadvertently.
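One simple defense is to fingerprint each vetted training record at curation time and verify the dataset against that manifest before training, so injected or altered records are dropped. The sketch below assumes JSON-serializable records; it illustrates the idea and is not a complete data-provenance pipeline.

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Stable content hash for a JSON-serializable training record."""
    blob = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

# Manifest built when the data was vetted by a human or pipeline check.
vetted = [{"text": "approved example", "label": "safe"}]
manifest = {fingerprint(r) for r in vetted}

# Later, before training: drop anything not in the vetted manifest.
incoming = vetted + [{"text": "poisoned example", "label": "safe"}]
clean = [r for r in incoming if fingerprint(r) in manifest]
print(f"{len(incoming)} records in, {len(clean)} verified")  # 2 records in, 1 verified
```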

2. Data Usage/Prompt Engineering

On the user side, employees will most likely use prompts and proprietary data in ways they shouldn’t, causing significant business, privacy, or security risks. How do we know this? Let’s just say that our collective experience in cyber security shows that no matter how successfully you block outside threats, it is often a lack of compliance and diligence by employees that enables those with malicious intent to penetrate your defenses. Generative AI will be no different, and in many ways it is potentially more dangerous.

3. Data Extraction/Storage

With Generative AI, you are not only using existing data for prompts and training; by the nature of the application, you may also be creating new intellectual property or proprietary data. Current security measures are insufficient to protect this type of data use and creation.

Responsible GenAI & The Vital Need For Risk Mitigation 

At this point, you may be asking, “So what are the components of AI TRiSM that I need to put in place?” To some degree, this part will feel more familiar, given your experience with cyber security.

Here’s what we’ve learned from CISOs, CTOs, and CIOs about what it takes to effectively manage a responsible Generative AI program.

1. Visibility

Critical to creating and managing a program is the ability to see and understand usage throughout the organization by geography, department, and individual. You can’t have anomaly detection without it. And GenAI visibility is not just about isolating malicious or inadvertent bad behavior; it can also highlight where innovation and adoption are taking place. Finally, this function allows you to monitor progress and build useful analytics, for example on how your Generative AI use is evolving. As in traditional cyber security, you can’t manage what you can’t see.
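As a toy illustration of what visibility data enables, the sketch below aggregates hypothetical prompt events by department and tool. In practice a TRiSM platform would collect such events from browser extensions, network proxies, or API gateways.

```python
from collections import Counter

# Hypothetical prompt events; the field names are invented for illustration.
events = [
    {"user": "a@corp.com", "dept": "engineering", "tool": "ChatGPT"},
    {"user": "b@corp.com", "dept": "marketing",   "tool": "Gemini"},
    {"user": "c@corp.com", "dept": "engineering", "tool": "ChatGPT"},
]

usage_by_dept = Counter(e["dept"] for e in events)
usage_by_tool = Counter(e["tool"] for e in events)
print(usage_by_dept)  # Counter({'engineering': 2, 'marketing': 1})
print(usage_by_tool)  # Counter({'ChatGPT': 2, 'Gemini': 1})
```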

2. Policy

The ability to quickly deploy and enforce policy around Generative AI is another critical initial step. It is more than just pushing policy and measuring acceptance of that policy; it’s the ability to flag out-of-policy behavior in real time and educate your employees so they comply with all your data, privacy, and security protocols. Policy can be broad, granular, or even specific to a department’s function. At these early stages of Generative AI, applying guardrails and governance for employees is the first and best line of defense.
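A minimal sketch of real-time policy enforcement might look like the following; the rule schema, department names, and coaching messages are invented for illustration.

```python
# Invented rule schema: per-department blocklists plus a default rule.
POLICY = {
    "finance": {
        "blocked_tools": {"UnvettedBot"},
        "message": "Finance may only use sanctioned GenAI tools.",
    },
    "*": {"blocked_tools": set(), "message": "Review the company GenAI policy."},
}

def check_policy(dept: str, tool: str) -> str | None:
    """Return a coaching message if the action is out of policy, else None."""
    rule = POLICY.get(dept, POLICY["*"])
    if tool in rule["blocked_tools"]:
        return rule["message"]  # surfaced to the employee in real time
    return None

print(check_policy("finance", "UnvettedBot"))
# -> Finance may only use sanctioned GenAI tools.
```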

3. Security

As outlined earlier, Generative AI presents its own unique security challenges. It is critical that you upgrade your cyber security to minimize the risk to proprietary data, privacy, and IP created by these new threat surfaces. As you evaluate options, your Generative AI security should be able to answer and address key questions like: What is inside prompts? Are prompts being used for data exfiltration? Do prompts violate DLP policy? Are prompts creating compliance or IP risk?
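Some of these questions can be approximated with simple baselines. For example, a prompt vastly larger than a user’s historical norm may indicate bulk data being pasted into an external model. The z-score threshold below is an arbitrary illustration, not a tuned detector.

```python
import statistics

def looks_like_exfiltration(prompt_len: int, history: list[int], z: float = 3.0) -> bool:
    """Flag prompts far larger than the user's historical baseline."""
    if len(history) < 5:
        return False  # not enough baseline to judge
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (prompt_len - mean) / stdev > z

history = [120, 90, 150, 110, 130, 100]  # typical prompt sizes (chars)
print(looks_like_exfiltration(48_000, history))  # True: 48k chars is anomalous
```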

4. Risk

According to the latest Salesforce State of IT survey, risk is one of the key factors holding organizations back from fully embracing AI. For example: Is there risk from using specific AIs? From employees not reading the policies? IP, privacy, or compliance risk? When evaluating potential tools, having risk metrics, automated triggers for risky activities, and real-time notifications is critical.
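To illustrate the idea of risk metrics with automated triggers, here is a toy scoring function. The signal names, weights, and alert threshold are placeholders that a real program would tune to its own compliance and IP-exposure model.

```python
# Placeholder signals and weights; a real program tunes these to its own
# compliance and IP-exposure model.
WEIGHTS = {"unsanctioned_tool": 40, "pii_in_prompt": 35, "no_policy_ack": 25}
ALERT_THRESHOLD = 50

def risk_score(signals: set[str]) -> int:
    return sum(WEIGHTS.get(s, 0) for s in signals)

def maybe_alert(user: str, signals: set[str]) -> None:
    score = risk_score(signals)
    if score >= ALERT_THRESHOLD:
        # Stand-in for a real-time notification to the security team.
        print(f"ALERT user={user} score={score} signals={sorted(signals)}")

maybe_alert("a@corp.com", {"unsanctioned_tool", "pii_in_prompt"})
# -> ALERT user=a@corp.com score=75 signals=['pii_in_prompt', 'unsanctioned_tool']
```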

5. Education

While policy adoption is important, employee education is an essential component of enhancing security and minimizing risk. To be intentionally redundant: Generative AI is new to everyone, so mistakes will be made. The ability to educate, especially in real time, within your AI TRiSM instance can not only minimize risk, it can also increase productivity and innovation within your organization. Department-based training, AI tool ratings, usage- and risk-based training, even a listing of “sanctioned” tools are all important parts of a holistic approach to bringing Generative AI to your organization.

Securing The Future Of Enterprise Generative AI Adoption

We hope this primer on AI TRiSM, the latest acronym in a deep, rich list of technology abbreviations, is useful as you embark on building your Generative AI program.

If you would like to implement AI TRiSM in your organization, Portal26 provides a rich full-featured TRiSM platform that can become the foundation for your enterprise GenAI program. 

Portal26’s AI TRiSM platform provides the Visibility, Governance, Security, and Education features you need to secure your enterprise’s Gen AI program today. To see it in action, please book a demo.

Schedule A Demo & Explore Portal26’s AI TRiSM Solution >
