How to Eliminate “Shadow AI” in Software Development

In a recent column, I wrote about the nearly ubiquitous state of artificial intelligence (AI) in software development, with a GitHub survey showing 92 percent of U.S.-based developers using AI coding tools both in and outside of work. Having seen a surge in their productivity, many developers are now taking part in what’s called “shadow AI,” leveraging the technology without the knowledge or approval of their organization’s IT department and/or chief information security officer (CISO).

This should come as no surprise, as motivated employees will inevitably seek out technologies that maximize their value while reducing the repetitive tasks that get in the way of more challenging, creative pursuits. After all, this is what AI is doing not only for developers but for professionals across the board. The unapproved usage of these tools isn’t exactly new either; we’ve seen similar scenarios play out with shadow IT and shadow software as a service (SaaS).

However, even if they circumvent company policies and procedures with good intentions in a “don’t ask/don’t tell” manner, developers are (often unknowingly) introducing potential risks and adverse outcomes through AI. These risks include:

  • Blind spots in security planning and oversight, as CISOs and their teams are not aware of the shadow AI tools and, therefore, cannot assess or help manage them
  • AI’s introduction of vulnerable code that leads to the exposure/leakage of data
  • Compliance shortcomings caused by the failure of AI usage to align with regulatory requirements
  • Decreased long-term productivity, as the initial boost from AI is frequently offset when teams must work backward to fix vulnerabilities that were not addressed from the start

What’s clear is that AI on its own is not inherently dangerous. It’s the lack of oversight into how it is implemented that reinforces poor coding habits and lax security measures. Under pressure to produce better software faster than ever, developers may take shortcuts in – or abandon entirely – the review of code for vulnerabilities from the beginning. And, again, CISOs and their teams are kept in the dark, unable to assess or secure tools they don’t even know exist.

So how do CISOs bring AI-assisted coding out of the shadows, capturing the productivity benefits while avoiding the vulnerabilities? By embracing it – as opposed to blanket suppression – and pursuing the following three-point plan to establish reasonable guardrails and raise security awareness and capabilities among development team members:

Identify AI implementations. CISOs and their teams should map out where – and how – AI is deployed throughout the software development lifecycle (SDLC), asking: Who is introducing these tools? What is their security skill set? What steps are they taking to avoid unnecessary risks? How are we implementing impactful training to raise the skills and awareness of developers whose AI-assisted code often contains vulnerabilities?

By mapping out the SDLC, security teams can pinpoint which phases – such as design, testing or deployment – are most susceptible to unauthorized AI usage.
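
Part of that mapping can be automated. The sketch below is a minimal Python example that walks a directory of Git repositories and reports which ones contain configuration files commonly associated with AI coding assistants. The marker file names and the /srv/git root path are illustrative assumptions, not an authoritative list; substitute the tools and paths that actually apply in your environment.

```python
"""Illustrative sketch: inventory repositories for traces of AI coding assistants.

The marker file names below are assumptions based on common tools; adjust them
to the assistants actually in scope for your organization.
"""
from pathlib import Path

# Hypothetical marker files/directories that often indicate an AI assistant is configured.
AI_MARKERS = {
    ".github/copilot-instructions.md": "GitHub Copilot",
    ".cursorrules": "Cursor",
    ".aider.conf.yml": "Aider",
    ".continue": "Continue",
}


def scan_repo(repo_path: Path) -> list[str]:
    """Return the names of AI tools whose marker files are present in a repository."""
    return [tool for marker, tool in AI_MARKERS.items() if (repo_path / marker).exists()]


def inventory(root: Path) -> dict[str, list[str]]:
    """Scan every child directory of `root` that looks like a Git repository."""
    report = {}
    for repo in sorted(p for p in root.iterdir() if (p / ".git").exists()):
        tools = scan_repo(repo)
        if tools:
            report[repo.name] = tools
    return report


if __name__ == "__main__":
    # Assumed location of cloned repositories; point this at your own checkout root.
    for repo, tools in inventory(Path("/srv/git")).items():
        print(f"{repo}: {', '.join(tools)}")
```

A simple inventory like this won’t catch assistants used only in a developer’s editor, but it gives security teams a concrete starting list of projects and owners to talk to.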

Cultivate a “security-first” culture. It’s essential to drive home the message that a “proactive protection” mindset from the very beginning will actually save developers time in the long run rather than adding to their workloads, because it eliminates “work backwards” fixes down the road. To get to this state of optimal – and safe – coding, team members must commit to a security-first culture that does not blindly trust AI output.
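
One concrete way to express “do not blindly trust AI output” is to gate every change behind an automated check, regardless of whether a human or an assistant wrote it. The following sketch is a hypothetical pre-commit script that scans staged changes for a few issue classes often seen in unreviewed generated code; the regex patterns are illustrative placeholders, and a real gate would invoke your organization’s approved SAST and secret-scanning tooling.

```python
"""Illustrative pre-commit sketch: treat AI-assisted code as untrusted until checked.

The patterns below are examples only; a production gate should call your
organization's approved security scanners rather than ad-hoc regexes.
"""
import re
import subprocess
import sys

# Example patterns for issues commonly introduced by unreviewed, generated code.
SUSPECT_PATTERNS = {
    r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]": "possible hard-coded credential",
    r"\beval\(": "use of eval() on dynamic input",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
}


def staged_diff() -> str:
    """Return the staged changes about to be committed."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def main() -> int:
    findings = []
    for line in staged_diff().splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only inspect lines being added
        for pattern, reason in SUSPECT_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"{reason}: {line[1:].strip()}")
    for finding in findings:
        print(f"[security-first check] {finding}", file=sys.stderr)
    return 1 if findings else 0  # non-zero exit blocks the commit


if __name__ == "__main__":
    sys.exit(main())
```

Wired into the team’s standard pre-commit hooks or CI pipeline, a check like this makes the security-first habit the path of least resistance rather than an extra chore.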

With this culture fully taking hold – strengthened by regular training – these professionals will acknowledge that it really is best to ask for permission rather than forgiveness. They’ll understand that they need to tell CISOs what they want to use and why, building a case for acquiring a tool. CISOs must then clearly explain the risks the tool poses and how those considerations factor into approving or rejecting its deployment. If security-minded developers conclude that adopting an inadequately vetted AI product would cause more trouble than it’s worth, they’re more likely to respect their CISO’s decision.

Incentivize for success. When developers agree to “take AI out of the shadows,” they are adding value to their organization. That value should be rewarded with promotions and more creative, challenging projects. By establishing benchmarks to measure team members’ security skills and practices, CISOs can identify those who have proven themselves as candidates for greater responsibility and career advancement, and recommend them to fellow managers and executives.

Indeed, with a security-first culture fully in play, developers will view the secure deployment of AI as a marketable skill, and respond accordingly. CISOs and their teams, in turn, will be able to stay ahead of risks instead of being blindsided by shadow AI. As a result, organizations will see their coding and security teams working together to ensure software production is better, faster – and safer.
