The Explosive Growth of GenAI Tools: What Organizations Need to Know

Generative AI (GenAI) is here to stay. While we’re still in the hype stage, adoption rates are skyrocketing (research says 80% of organizations already have a GenAI strategy in place) – so much so that the GenAI market exceeded $128 billion in 2024. Our customer data and inquiries at Portal26 reveal that public GPT usage is doubling every three months, a staggering pace that highlights the unprecedented velocity of adoption.

More than that, according to McKinsey, organizations are ‘beginning to create structures and processes that lead to meaningful value from gen AI’. That is the official picture: organizations are assessing risk, upskilling and capturing real benefit from the growth of GenAI. In keeping up with GenAI, however, there is a largely ignored side to its adoption.

Statistics don’t quite capture the unofficial adoption (shadow GenAI) taking place at the department and individual level within an organization. Employees are using a variety of GenAI tools to make both their personal and professional lives easier – everything from Google Notebook and Fireflies.AI to ChatGPT and Claude. These tools aren’t vetted by IT or security teams and can create a host of issues for management. More importantly, they aren’t always visible to the organization.

In this article, we’ll look at the explosion of GenAI, the types of tools being used, the danger of shadow AI, why traditional security measures aren’t working and how organizations can build a better foundation for GenAI visibility.

Understanding the GenAI Tool Explosion

Adoption rates of GenAI change on a daily basis. As previously mentioned, 80% of companies are said to have implemented a GenAI strategy, and from our own data we’re seeing public GPT usage double every three months – not linear growth, but a compounding trajectory that organizations must prepare for immediately.

A significant indicator of this shift comes from a recent technological investment study conducted by Amazon Web Services. Their analysis found that generative AI technologies have now surpassed cybersecurity as the top budget priority for global IT leaders heading into 2025. Based on feedback from nearly 4,000 senior technology decision-makers across nine countries, approximately 45% of organizations now intend to allocate more resources to generative AI than to conventional technology investments such as security infrastructure (30%). This represents a fundamental realignment of corporate technology strategies as companies accelerate efforts to leverage AI’s business-transforming capabilities.

The regional variation in adoption is particularly noteworthy, with Asian markets showing stronger implementation rates. According to the same research, India currently stands as the global frontrunner with 64% AI adoption, while South Korea follows at 54% – both significantly outperforming Western economies in their embrace of these technologies.

GenAI tools are also developing at an explosive rate. Iterations of existing tools are skyrocketing (just think about the evolution and functionality of ChatGPT from its introduction in 2022 to its latest release, GPT-4o), and the launch of new tools is incessant. In fact, there are more than 2,000 GenAI tools already on the market, with numbers increasing daily. This is spurred by open source development, which lowers barriers to entry, offers diverse perspectives from a global community, and allows for quicker innovation thanks to a collaborative environment.

The categories of emerging GenAI tools include:

  • Content generation tools (text, images, audio, video)
  • Domain-specific enterprise tools (legal, financial, healthcare)
  • Development and coding assistants
  • Data analysis and insights tools
  • Customer service and support tools
  • Cross-platform integration tools and extensions

Add to that the growth in funding for GenAI startups – investment hit more than $56 billion in 2024, almost double the previous year’s total. This growth is also reflected in the demand for data center services and capacity, which, according to McKinsey, will likely more than triple by 2030.

Accessibility and adoption factors

The escalating accessibility of GenAI tools is a significant driver of their rapid adoption across diverse user segments. Browser extensions are playing a crucial role in making AI capabilities ubiquitous, seamlessly integrating functionalities like content generation, summarization, and grammar checking directly into users’ daily online activities. This ease of access lowers the cognitive barrier to using these powerful tools. Furthermore, the prevalence of free tiers and freemium models significantly reduces the financial barriers to entry, allowing individuals to experiment with and experience the benefits of GenAI without spending any money.

Beyond desktop environments, mobile applications are extending the reach of GenAI to smartphones, enabling users to leverage AI-powered creativity and productivity on the go. This mobile-first approach caters to a broad audience, making sophisticated AI features readily available – for use in personal lives and in a professional environment. Complementing this is the rise of no-code interfaces, which are democratizing AI by empowering non-technical users to interact with and build upon GenAI models without requiring programming skills. Finally, API-driven tools are proving instrumental in facilitating adoption by enabling seamless integration of GenAI capabilities into existing workflows and software ecosystems, enhancing productivity and unlocking new possibilities within familiar operational contexts.

The Enterprise Security Blind Spot

While more organizations are putting plans and programs in place to capitalize on the growth of GenAI tools, staff are taking matters into their own hands, using the same tools for work that they use in their personal lives – whether that’s Google’s Gemini to help with travel planning or ChatGPT to practice another language. In much the same way as shadow IT evolved when employees began using their own unsanctioned devices (smartphones, tablets, etc.) and applications – increasing risk and impacting compliance – the GenAI explosion is having a similar effect on organizations and IT departments. The implications, however, are far greater and more dangerous than those of shadow IT.

How shadow AI is infiltrating organizations

According to IBM, shadow AI is ‘the unsanctioned use of any artificial intelligence (AI) tool or application by employees or end users without formal approval or oversight of the information technology (IT) department’. While this sounds the same as shadow IT, why is it so much more dangerous?

The main threat is to cyber security. These tools and applications create gaps in security and provide additional attack vectors for cyber criminals. The result: data breaches, malware attacks, ransomware. To put this risk into context, in 2023 Samsung banned the use of ChatGPT after an engineer uploaded sensitive source code to the service and it was leaked.

The growth of GenAI, and the shadow AI that comes with it, also has massive implications for data privacy and compliance. There are ethical and accuracy considerations too – not least of which is plagiarism when creating content.

The tools themselves are innocuous and, as mentioned, help employees be more efficient and productive. They include everything from personal AI assistants – meeting transcription services, email filtering, calendar management – to marketing automation tools and data visualization applications.

While there is little doubt that these tools deliver significant benefits to employees (and ultimately to the organization, in terms of efficiency and productivity gains), the risk needs to be carefully balanced. That can only be done if the organization and IT teams know which tools and applications are in use. But as it stands, the very nature of shadow AI means these tools are invisible and not picked up by traditional security measures.

Why Traditional Security Methods are Failing

Keeping up with GenAI and the constant stream of new tools is nothing short of complex, and traditional security strategies and programs are limited in their ability to detect GenAI usage or new tools on the market. Conventional cyber security tools are most often used to monitor network traffic, endpoint activity and applications. Many GenAI tools, however, are cloud-based or use browser extensions, so they appear as normal web traffic. Add to that, it is near impossible to distinguish sanctioned tools from their unsanctioned counterparts.

The technical challenge of GenAI detection 

Traditional URL filtering primarily operates by categorizing and blocking access to known malicious or inappropriate websites. However, many GenAI tools are accessed through legitimate and widely used domains. For instance, a user might interact with an LLM through a well-known cloud platform or a seemingly innocuous browser extension. The underlying AI processing happens dynamically, often through secure HTTPS connections to the same trusted domains used for legitimate business purposes. Therefore, simply blocking access to broad categories of websites or specific popular platforms won’t effectively prevent the use of GenAI tools without severely hindering legitimate workflows. The AI functionality is often embedded within these trusted environments, making URL-based blocking a blunt and ineffective instrument.
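To make the point concrete, here is a minimal sketch (not any vendor’s actual filter, and using made-up domains) of hostname-based blocking, showing how an AI feature served from a trusted SaaS domain slips past a blocklist:

```python
# A toy hostname blocklist; domains and URLs below are illustrative only.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"chat.openai.com", "claude.ai"}  # hypothetical blocklist

def allowed(url: str) -> bool:
    """Return True if the URL's hostname is not on the blocklist."""
    return urlparse(url).hostname not in BLOCKED_DOMAINS

# A standalone chatbot domain is caught...
print(allowed("https://chat.openai.com/c/abc"))               # False
# ...but an AI feature embedded in a trusted SaaS domain passes,
# because the hostname is the same one used for legitimate work.
print(allowed("https://docs.example-saas.com/ai/summarize"))  # True
```

Blocking `docs.example-saas.com` outright would also cut off the legitimate application hosted there – exactly the blunt-instrument problem described above.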

Many GenAI tools are designed to integrate seamlessly into existing digital environments and, as mentioned, often present themselves as standard web applications or browser extensions. They use common web protocols and communication methods, making their network traffic appear indistinguishable from that of everyday browsing or legitimate SaaS applications. For example, a GenAI-powered writing assistant browser extension communicates with its backend servers to process text, but this communication might look very similar to a standard request made by any other web service. This “camouflage” allows GenAI tools to evade detection by traditional network monitoring tools that primarily look for anomalous traffic patterns or connections to suspicious or unknown hosts.

The pace of change

There is also the ‘moving target’ problem when it comes to keeping up with GenAI; the rate at which new tools are launched, and existing ones updated, is rising exponentially, making it exceedingly difficult for IT and security departments to keep pace. The fact that many of these tools evolve to bypass detection methods only adds to the complexity.

White labeling is also a problem: with many companies integrating AI tools into their own processes, products and services, distinguishing between these and the unsanctioned use of other GenAI tools is challenging. In addition, traditional security often relies on maintaining lists of allowed or blocked applications, websites, or network traffic. However, the sheer number of GenAI tools, their rapid evolution, and the fact that they often operate within legitimate domains make maintaining accurate and effective allow/block lists all but impossible.
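A quick illustration of that list decay, with invented tool domains: compare a blocklist frozen at its last review against the domains actually observed a short time later, and coverage drops as new tools launch.

```python
# Illustrative sketch of blocklist decay; all tool domains are invented.
blocklist = {"toolA.ai", "toolB.ai"}             # list as of the last review
observed = {"toolA.ai", "toolC.ai", "toolD.ai"}  # domains seen in traffic this week

uncovered = observed - blocklist                 # tools the list has never heard of
coverage = 1 - len(uncovered) / len(observed)
print(sorted(uncovered))    # ['toolC.ai', 'toolD.ai']
print(f"{coverage:.0%}")    # 33%
```

With thousands of tools on the market and new ones appearing daily, the gap between the list and reality only widens between reviews.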

Resource limitations

In addition to the changing nature of GenAI tools and the associated technical challenges, there are in-house limitations – time, skills and money. Manual tracking is impossible at the pace of the GenAI explosion, and IT and security teams are already time poor, with a competing list of priorities. There is also a lack of skills, expertise and investment within security teams: organizations don’t have room in their budgets for specialized detection solutions.

Building a Foundation for GenAI Visibility

The growth of GenAI will only continue. Managing both the value of GenAI and the risk requires one key ingredient: visibility. Knowing which GenAI tools are being used by whom, and for what purpose. This includes all GenAI tools and applications – not just those already integrated into an organization’s workflows, products and operations. The question is how. Using traditional security tools is not the answer, as detailed above. Instead, organizations should use monitoring tools specifically designed and developed to cope with the GenAI explosion. 

That is where Portal26 comes in; our AI TRiSM platform has been created specifically to cope with the GenAI explosion, the risk associated with adoption and help organizations gain better control over the new technology. Within the platform, our GenAI Visibility software delivers not only full visibility of GenAI usage across the organization, but also provides the governance, security and training essential for getting the most value and least risk from using GenAI.

Next steps

There is little doubt GenAI is being used in your organization today. The question is: to what extent? Assessing your current GenAI usage is the first step in ensuring you’re optimizing this new technology while managing the risk. The second step is understanding your visibility capabilities.

If the outcome of either of these steps is surprising to you, get in touch with us today to discuss your challenges and set up a demo.

Schedule A Demo >