Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother

Shadow IT is a fairly well-known problem in the cybersecurity industry. It’s where employees use unsanctioned systems and software as a workaround to bypass official IT processes and restrictions. Similarly, with AI tools popping up for virtually every business use case or function, employees are increasingly using unsanctioned or unauthorized AI tools and applications without the knowledge or approval of IT or security teams – a new phenomenon known as Shadow AI.
 
Research shows that between 50% and 75% of employees are using AI tools not issued by their company, and the number of these apps is growing substantially. This creates a visibility problem: do companies know what is happening on their own networks? According to our research, beyond the popular general-purpose AI tools like ChatGPT, Copilot and Gemini, a set of more niche AI applications being used at organizations includes:

  • Bodygram (a body measurement app)
  • Craiyon (an image generation tool)
  • Otter.ai (a voice transcription and note taking tool)
  • Writesonic (a writing assistant)
  • Poe (a chatbot platform by Quora)
  • HIX.AI (a writing tool) 
  • Fireflies.ai (a note taker and meeting assistant)
  • PeekYou (a people search engine)
  • Character.AI (a virtual character creation platform)
  • Luma AI (a 3D capture and reconstruction tool)

Why Shadow AI Is A Major Cybersecurity Risk

Even though AI brings great productivity gains, Shadow AI introduces a range of risks:

Data leakage: Studies show employees frequently share legal documents, HR data, source code, financial statements and other sensitive information with public AI applications. AI tools can inadvertently expose this sensitive data to the public, leading to data breaches, reputational damage and privacy concerns (as in the widely reported incident in which Samsung employees pasted confidential source code into ChatGPT).

Compliance risks: Feeding data into public platforms means that organizations have very little control over how their data is managed, stored or shared, with little knowledge of who has access to this data and how it will be used in the future. This can result in non-compliance with industry and privacy regulations, potentially leading to fines and legal complications.

Vulnerabilities to cyberattacks: Third-party AI tools could have built-in vulnerabilities that a threat actor could exploit to gain access to the network. These tools may not meet the security standards of an organization’s internal systems. Shadow AI can also introduce new attack vectors, making it easier for malicious actors to exploit weaknesses.

Lack of oversight: Without proper governance or oversight, AI models can produce biased, incomplete or flawed outputs, which can harm organizations. An employee using an unsanctioned tool might produce results that contradict those produced by official company systems. This can cause errors, inefficiencies, confusion and delays, proving costly for the business.

Legal risks: Unsanctioned AI might draw on intellectual property from other businesses, making the organization liable for any resulting copyright infringement. It could generate biased outcomes that violate anti-discrimination laws and policies, or produce erroneous results that are shared with customers and clients. In all of these cases, organizations could face penalties and be held liable for any resulting violations and damage.

How Can Organizations Mitigate The Risks Of Shadow AI?

Listed below are some recommendations and best practices that can help organizations mitigate the risks of Shadow AI:

Robust AI governance policies: Establish a comprehensive AI policy that outlines which AI tools and platforms are approved for use within the organization, explains how employees can request access to new tools, specifies dos and don’ts, and includes guidance on data privacy, security and other ethical considerations. Data from an ISACA poll suggests that only 15% of organizations have a formal AI policy in place.
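To make the "approved tools" idea concrete, here is a minimal sketch of how such a policy could be expressed as machine-readable data and checked programmatically. The domains, teams and fields are hypothetical illustrations, not a recommended policy or any specific product's format:

```python
# Hypothetical allowlist policy: which AI tools are approved, for whom,
# and what class of data may be shared with them. All entries are
# illustrative examples only.
APPROVED_AI_TOOLS = {
    "chatgpt.com": {"teams": {"all"}, "data_allowed": "public-only"},
    "gemini.google.com": {"teams": {"all"}, "data_allowed": "public-only"},
    "copilot.microsoft.com": {"teams": {"engineering"}, "data_allowed": "internal"},
}

def is_tool_approved(domain: str, team: str) -> bool:
    """Check whether an AI tool is approved for a given team under the policy."""
    entry = APPROVED_AI_TOOLS.get(domain)
    if entry is None:
        return False  # anything unlisted is Shadow AI by default
    return "all" in entry["teams"] or team in entry["teams"]

print(is_tool_approved("chatgpt.com", "finance"))    # True
print(is_tool_approved("craiyon.com", "marketing"))  # False: request access first
```

Treating unlisted tools as denied by default is the key design choice here: it forces new AI apps through the access-request process rather than letting them slip in as Shadow AI.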

Employee training on safe and responsible use of AI: Build awareness among employees about AI tools and the risks of using unauthorized platforms. Make them aware of issues pertaining to bias, fairness and accuracy. Promote safe and responsible use: only platforms approved by the organization should be used, and sensitive data such as PII, financial details, source code or other proprietary information should never be entered into these platforms.

Granular access controls and policy enforcement: Monitor and track usage of AI applications, deploy granular access control, and block access to unnecessary AI applications. Ensure security policies are strictly enforced across devices, platforms and networks. Disparate point tools will not help; organizations need a unified security system, such as single-vendor SASE, that has visibility into applications and network flows, can detect unauthorized sharing of sensitive data, and can prevent employees from deploying potentially risky Shadow AI applications.
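As a rough illustration of the detection side of such a control, the sketch below scans outbound text bound for an AI application for obviously sensitive patterns before allowing it through. The domain list and regular expressions are hypothetical stand-ins; a real SASE or DLP product uses far richer classifiers:

```python
import re

# Hypothetical list of AI application domains subject to inspection.
AI_APP_DOMAINS = {"chatgpt.com", "writesonic.com", "poe.com"}

# Illustrative patterns for obviously sensitive content; real DLP engines
# use far richer classifiers (ML models, document fingerprints, etc.).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_upload(domain: str, text: str) -> str:
    """Return 'allow', or 'block (...)' naming the sensitive patterns found."""
    if domain not in AI_APP_DOMAINS:
        return "allow"  # not an AI app; outside this policy's scope
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    return f"block ({', '.join(hits)})" if hits else "allow"

print(inspect_upload("poe.com", "Summarize: my SSN is 123-45-6789"))    # block (ssn)
print(inspect_upload("poe.com", "Draft a polite out-of-office reply"))  # allow
```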

Frequent security audits: Proactively identify and address risks by conducting regular security audits. Assess the usage of AI tools within the organization and ensure that only authorized, secure, compliant and ethical platforms are in use. Ensure that data access, storage and processing meet the necessary security and compliance standards. Audits will help verify whether AI models are functioning properly, whether they are free from bias, and whether data protection laws are being adhered to.

The OODA Loop: Security teams can leverage the OODA Loop (Observe, Orient, Decide and Act), a mental model borrowed from the U.S. military, to deploy comprehensive Shadow AI governance. Observe – obtain visibility into all Shadow AI across the organization (IT teams will need a platform with visibility into all network flows). Orient – understand the context (i.e., who the user is, their location, their device, and the type of application being accessed). Decide – implement a policy for Shadow AI (e.g., block shadow apps regardless of user, location or device). Act – enforce granular control over Shadow AI.
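The four stages map naturally onto an event-processing loop. Below is a minimal sketch, assuming a hypothetical flow-event feed from a network-visibility platform; the app catalogs and the block-by-default policy mirror the article's example rather than any specific product:

```python
from dataclasses import dataclass

@dataclass
class FlowEvent:
    """Observe: one network flow as surfaced by a (hypothetical) visibility platform."""
    user: str
    device: str
    location: str
    domain: str

# Illustrative catalogs: known Shadow AI apps vs. sanctioned AI apps.
KNOWN_SHADOW_AI = {"craiyon.com", "poe.com", "hix.ai", "fireflies.ai"}
SANCTIONED_AI = {"copilot.microsoft.com"}

def ooda_step(event: FlowEvent) -> str:
    # Orient: establish context — is this an AI app at all, and is it sanctioned?
    if event.domain not in KNOWN_SHADOW_AI | SANCTIONED_AI:
        return "allow"  # not an AI application; nothing for this policy to do
    # Decide: apply the article's example policy — block shadow apps
    # regardless of user, location or device.
    decision = "allow" if event.domain in SANCTIONED_AI else "block"
    # Act: enforce the granular decision and keep an audit trail.
    print(f"{decision}: {event.user}@{event.device} ({event.location}) -> {event.domain}")
    return decision

ooda_step(FlowEvent("alice", "laptop-042", "remote", "poe.com"))           # block
ooda_step(FlowEvent("bob", "desktop-007", "hq", "copilot.microsoft.com"))  # allow
```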

The rise of Shadow AI applications poses a unique challenge for organizations. While these tools can enable employees to be innovative and productive, significant data privacy risks can stem from their usage. Organizations can approach this dilemma by establishing strong AI policies and governance, deploying a unified security system, conducting frequent security audits, and leveraging the OODA Loop. By doing so, they can unlock the benefits of AI while minimizing the risks associated with unauthorized use.

Related: Shadow AI – Should I be Worried?
