Shadow AI is the unsanctioned installation, integration, and use of AI tools by staff who often seek nothing more than to improve their efficiency. Many free AI tools can easily be obtained, installed, and used without any technical understanding of, or training in, AI. But that lack of technical understanding also means users neither recognize nor appreciate the hidden dangers. And that same ease means it often happens without oversight or governance from IT and the security team.
The problem itself is not new. It is a combination of Shadow IT and the open source software (OSS) supply chain (Shadow AI is effectively a new subset of Shadow IT). But both sets of problems are now writ large: Shadow AI is even less understood than the wider Shadow IT, while its supply chain problems are harder to detect than those found in traditional OSS code libraries.
Shadow AI introduces new risks: data leakage and compliance violations; malicious code and new vulnerabilities caused by ungoverned AI integrating with traditional infrastructure; dangerous and false responses caused by unknown bias in the algorithms; and a lack of transparency and visibility. And if you cannot see it, you cannot secure it. The danger with Shadow AI is that the threats are more complex and the visibility even scarcer.
The security industry is already responding to these issues, with new companies and new services attempting to solve the open source code side of the AI problem. But this does little to solve the problem of installed and misused Shadow AI, which is now being addressed separately. In the last few days, two companies, Valence Security and Endor Labs, have introduced extensions to their existing platforms specifically to tackle the invisibility and misuse of Shadow AI.
Valence Security has expanded its SaaS risk platform to include the discovery of both Shadow IT and Shadow AI, which it describes as ‘the long tail of SaaS applications’.
For AI, the Valence tool promises to discover unseen gen-AI within the SaaS ecosystem and provide visibility into the access permissions granted to it. This allows customers to align AI tool usage with organizational policies and industry regulations, to proactively identify and address potential security threats, and ultimately, if necessary, to remove any integrated AI tools that contravene company policies.
The Valence approach is to find, evaluate, and remediate unsanctioned and potentially harmful Shadow IT, specifically including Shadow AI, within SaaS.
Endor Labs is approaching the discovery and remediation of Shadow AI threats from an OSS lifecycle management direction: that is, the use of open source AI components in the development of proprietary in-house AI applications.
The extent of the problem is described in a blog post: “Hugging Face hosts over 1 million AI models and more than 220,000 datasets… As developers increasingly adopt these open source models, application security teams face new challenges managing the associated risks. AI models present unique risk patterns because they combine code, weights, and training data that may come from multiple sources.”
An extension to the Endor platform allows organizations to discover the AI models already in use across their applications, and to set and enforce security policies over which models are permitted. Its goal is similar to Valence's: to discover, evaluate, and enforce (for example, by setting guardrails for use, warning developers about policy violations, and blocking the use of high-risk AI models).
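Neither vendor publishes a policy schema in this context, but the general shape of such enforcement is simple to picture. The sketch below is purely illustrative and is not Endor's or Valence's actual API: a hypothetical evaluate_model gate that allows, warns on, or blocks a discovered model based on an assumed allowlist and risk-score threshold.

```python
# Illustrative only: a hypothetical policy gate of the kind such platforms enforce.
# None of these names come from Endor Labs or Valence Security documentation.
from dataclasses import dataclass

ALLOWED_MODELS = {"bert-base-uncased", "distilbert-base-uncased"}  # example allowlist
MIN_SECURITY_SCORE = 7  # assumed threshold on a 0-10 style risk score

@dataclass
class ModelFinding:
    name: str            # e.g. a Hugging Face model identifier found in code
    security_score: int  # hypothetical score attached to that model

def evaluate_model(finding: ModelFinding) -> str:
    """Return 'allow', 'warn', or 'block' for a discovered model."""
    if finding.name not in ALLOWED_MODELS:
        return "block"                          # unapproved model: block and notify
    if finding.security_score <= MIN_SECURITY_SCORE:
        return "warn"                           # approved but low-scoring: flag for review
    return "allow"                              # approved and scoring well: permit use

if __name__ == "__main__":
    print(evaluate_model(ModelFinding("bert-base-uncased", 6)))   # warn
    print(evaluate_model(ModelFinding("unknown/model", 9)))       # block
```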
Varun Badhwar, co-founder and CEO of Endor Labs, explains the new threat posed by Shadow AI introduced as OSS. Most software composition analysis (SCA) tools are designed to track open source packages, meaning they cannot identify the risks coming from local AI integrated into applications. “Meanwhile,” he added, “product and engineering teams are increasingly turning to open source AI models to deliver new capabilities for customers.”
Endor primarily detects the presence of Hugging Face models by examining in-house application code. A range of code patterns can indicate that a model has been downloaded and integrated. When any of these patterns are found, they can flag the need for closer examination, aided by correlating the model concerned with the Hugging Face security scoring results introduced in October 2024. Endor suggests that any model with a low score, perhaps 7 or less, could warrant more detailed scrutiny.
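Endor does not enumerate its full set of detection patterns here, but typical Hugging Face usage in Python code is easy to recognize. The snippet below is a simplified illustration, not Endor's scanner: a small sample of common loading patterns (calls such as transformers' from_pretrained and pipeline, or huggingface_hub's hf_hub_download) and a naive regex scan that flags them in source files.

```python
# Simplified illustration of pattern-based discovery; not Endor Labs' implementation.
import re
from pathlib import Path

# Common ways a Hugging Face model ends up in Python code (a deliberately small sample):
#   from transformers import AutoModel;  AutoModel.from_pretrained("org/model")
#   from transformers import pipeline;   pipeline("text-classification", model="org/model")
#   from huggingface_hub import hf_hub_download;  hf_hub_download(repo_id="org/model", ...)
MODEL_PATTERNS = [
    re.compile(r"\bfrom_pretrained\s*\("),
    re.compile(r"\bpipeline\s*\(\s*['\"]"),
    re.compile(r"\bhf_hub_download\s*\("),
]

def scan_source(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matching line) for every line matching a known pattern."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in MODEL_PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for file, lineno, line in scan_source("."):
        print(f"{file}:{lineno}: {line}")  # candidates for review against security scores
```

In practice, a real tool would go further, resolving the model identifier and correlating it with its security score so that anything scoring low, such as the 7-or-less example above, is escalated for review.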
The list of code patterns is a work in progress – Endor does not claim that it is complete. “This list is continually evolving and expanding over time and currently does not encompass all possible methods for using or loading models from Hugging Face in code. Since most of these functionalities are derived from the transformers library, which is specifically tailored for Python, our discovery capabilities are presently limited to Python source code.”
Nevertheless, acknowledging that the solution is not finalized is a good sign in such a fast-moving arena: it means the product is continually evolving and improving. As with all security controls, knowing that something can and will be improved is no argument against examining what is currently available.
Related: Cyber Insights 2025: Open Source and Software Supply Chain Security
Related: How to Eliminate “Shadow AI” in Software Development
Related: Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother
Related: Shadow AI – Should I be Worried?