BYOAI & Shadow AI: Threats, Innovation & Risk Management

The unsanctioned use of AI technology, known as shadow AI, is an ongoing challenge for enterprises. Employees don’t always engage in shadow AI knowing that it breaches internal governance standards; often they see this usage simply as an extension of personal curiosity and a way to boost their own productivity.

In these instances, employees may be carrying out meaningful research to find the tools that would most benefit their work. We describe this scenario as ‘BYOAI’ – bring your own AI – where unsanctioned usage stems from employees acting on their own initiative or curiosity. It still leaves enterprises in a difficult position.

The challenge of shadow AI

BYOAI follows a similar principle to shadow AI, in that it involves employees using unauthorized GenAI tools outside of IT governance. As we’ve outlined, BYOAI is often driven by individuals looking to extend their capabilities with modern, cutting-edge technology – while the threats of shadow AI are a direct result of that unsanctioned usage.

Shadow AI is essentially an organizational ‘blind spot’. A report from Cyberhaven found a 156% increase in sensitive data being input into untrusted AI tools by employees, citing customer support data, legal documents, source code, and research and development data among the top ‘offending’ inputs.

One of the most pressing shadow AI threats, then, is the exposure gaps it creates; and while BYOAI may demonstrate an employee’s initiative, it still risks falling into these data security black holes. The main BYOAI risks that enterprises must manage include:

Security breaches due to unvetted tools

Every technology tool is built with different security protocols, and to the average user, the strength of those protocols is almost impossible to gauge. Even tools with the most legitimate-looking user interfaces can cause serious security breaches. If a tool is going to be used by an enterprise, it needs to be properly vetted from a security perspective, with that vetting accounted for as part of the wider GenAI budget strategy. This kind of rigorous testing is what safeguards both the individual user and the data they input, as well as the quality of the output.

Data privacy concerns

As we’ve touched on already, a user can easily be taken in by a meticulously designed visual interface. The AI tool they’ve discovered might have all the surface-level hallmarks of being secure, but there’s no way to know for sure. This makes it easy for sensitive information to flow into unauthorized shadow AI tools, where that data is jeopardized.

Compliance

The AI tools we most need to be wary of pose serious compliance issues, yet without a clear understanding of the standards those tools should meet, users can be completely oblivious. An employee might stumble across a tool they’d like to leverage without ever weighing its features or capabilities against internal governance standards. This can only really be tackled through targeted GenAI education that gives employees full clarity, so they develop a sharper eye for potential compliance ‘red flags’.

Compatibility problems

An enterprise’s decision to invest in an AI tool is often long, drawn-out, and in-depth, and at some stage involves consulting various C-suite figures; an employee, by contrast, can settle on a tool in a matter of moments. This superficial decision-making process opens the door to compatibility issues, as the individual is unlikely to account for anything beyond their own initial usage.

The tool’s perceived potential is then undone by integration challenges, sending the employee back to the drawing board.

Lack of oversight

As described, BYOAI tends to involve limited oversight, and this plays out in a few different ways. The risk of biased or inaccurate AI outputs increases, because the standard and scope of the input haven’t been weighed against the capabilities of the tool. A lack of oversight is one of the biggest GenAI observability mistakes an organization can make.

When an enterprise invests in an AI tool, it gains a deep understanding of exactly what the tool can and can’t do, as well as the quality of input it requires. By taking this kind of oversight into account, organizations know exactly what to expect from their AI usage.

Types of bring your own AI

BYOAI manifests in a few different forms, and ordinary users adopt it for a range of purposes.

Personal AI assistants

When used for work purposes, personal AI-powered assistants help users manage daily tasks by organizing schedules, setting reminders, and providing real-time information alerts. They can also assist with communication by answering questions, sending messages, and making calls on the user’s behalf.

These systems can learn user preferences over time to offer more personalized recommendations and to streamline routine activities.

Cloud-based GenAI tools

Cloud-based GenAI tools let users generate text, images, code, and other content on demand, accessible from anywhere through the cloud. Because the processing occurs remotely, users can collaborate and create without needing powerful local hardware. These tools are often used to automate creative tasks, enhance productivity, and develop AI-driven applications.

Many cloud-based AI tools offer freemium plans, making them a viable option for BYOAI usage.
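
To illustrate how low the barrier to entry is, here is a minimal sketch of how little code an employee needs in order to send text to a cloud GenAI service. It assumes the OpenAI Python SDK and a personally created API key; the provider, model name, and prompt are illustrative only, not an endorsement or a Portal26 integration.

```python
# Minimal sketch: calling a cloud GenAI API from a laptop.
# Assumes `pip install openai` and an OPENAI_API_KEY the user set up
# themselves -- exactly the kind of self-serve setup BYOAI involves.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # Pasting real customer data here is precisely the shadow AI
        # exposure risk discussed above.
        {"role": "user", "content": "Summarize this support ticket: ..."}
    ],
)
print(response.choices[0].message.content)
```

A handful of lines like these are all it takes for sensitive content to leave the organization’s boundary – which is exactly why visibility into this usage matters.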

Open-source AI frameworks

Open-source AI frameworks provide tools and libraries for developers to build, train, and deploy machine learning models. They are freely available, allowing for community collaboration and innovation, while offering flexibility for customization. These frameworks are widely used for research, experimentation, and creating AI-powered applications across various industries.
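
As a hedged illustration of how accessible these frameworks are, the sketch below loads a small open-source text-generation model locally with the Hugging Face transformers library; the model choice and prompt are illustrative assumptions.

```python
# Minimal sketch: running an open-source model entirely locally.
# Assumes `pip install transformers torch`; the model name is illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Draft a short status update about the quarterly report:",
    max_new_tokens=40,  # keep the generation short for the demo
)
print(result[0]["generated_text"])
```

Because everything here runs on the employee’s own machine, this kind of usage never crosses a network gateway, making it one of the hardest forms of BYOAI to observe.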

Why BYOAI is a sign of progress

All of the tools we’ve described give employees opportunities to harness AI for tangible business purposes; but BYOAI still needs to be governed.

AI has entered the mainstream for many corporations, but one of the main narratives to emerge among employees is the threat it may pose to their own roles and responsibilities. To counter this, employee experimentation should be championed, and BYOAI serves as a prime opportunity to do so. Finding ways to build with AI innovation will only enhance the capabilities of the individual.

BYOAI also offers the potential to identify useful new AI tools before official adoption, and this kind of exploration can be carried out without compromising security or GenAI governance protocols. This is exactly where GenAI visibility comes into play: fused with sufficient GenAI employee training, it lets enterprises ensure that BYOAI experimentation happens in accordance with their own regulations.

How enterprises can encourage BYOAI in light of shadow AI threats

By taking proactive steps to manage BYOAI and curb shadow AI in the interests of security and safety, enterprises can channel employee curiosity in their favor. This also creates an environment where AI experimentation is explored openly, and where opportunities to learn and grow can be seized effectively.

One way to approach this is to develop and implement a concise BYOAI policy that explains acceptable practices and is backed by risk management measures. This leaves employees with the ideal balance: the autonomy they need to explore tools, paired with a critical outlook they can use to distinguish viable options from interfaces to avoid. It may also help for organizations to provide their own list of vetted, secure, approved GenAI tools, giving employees a clear benchmark of the standard needed to meet governance requirements – a minimal sketch of what such an allowlist check could look like follows below.
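
As a purely hypothetical sketch of how a vetted-tool list could be checked in practice, the snippet below matches a requested tool against an approved list by data classification. The tool names, data classes, and function are illustrative assumptions, not part of any real policy engine or Portal26 API.

```python
# Hypothetical sketch: checking a GenAI tool request against a vetted
# allowlist. All tool names and data classes below are illustrative.
APPROVED_TOOLS = {
    "chatgpt-enterprise": {"public", "internal"},
    "github-copilot": {"public", "internal", "source-code"},
}

def is_usage_allowed(tool: str, data_class: str) -> bool:
    """Allow usage only if the tool is vetted for this data classification."""
    allowed_classes = APPROVED_TOOLS.get(tool)
    return allowed_classes is not None and data_class in allowed_classes

print(is_usage_allowed("chatgpt-enterprise", "source-code"))  # False -> blocked
print(is_usage_allowed("github-copilot", "source-code"))      # True  -> allowed
```

Keeping the approved list in one versioned place makes it straightforward to extend as new tools clear the vetting process.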

Manage shadow AI threats & embrace BYOAI with Portal26

Here at Portal26, we act as a partner to enterprises, helping them facilitate safe, secure GenAI adoption. Our GenAI TRiSM platform is an all-in-one plugin designed to govern AI usage, including BYOAI, while giving you unrivaled oversight for managing, monitoring, and measuring activity.

With our solution, organizations benefit from rich visibility into usage while mitigating the risks associated with shadow AI threats. To experience our innovative system, arrange a demo online now.
