AI – Implementing the Right Technology for the Right Use Case

If 2023 and 2024 were the years of exploration, hype, and excitement around AI, 2025 (and 2026) will be the years when organizations focus on the specific use cases where AI is most productive and, more importantly, learn how to implement guardrails and governance so that it is viewed less as a risk by security teams and more as a benefit to the organization.

Don’t get me wrong, organizations are already starting to adopt AI across a broad range of business divisions:

  • Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
  • Employees are using third-party GenAI tools for research and productivity purposes
  • Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
  • Companies are building their own LLMs for internal use cases and commercial purposes.

AI is still maturing

However, just like technologies that have gone before it, such as cloud and cybersecurity automation, AI currently lacks maturity. One of the best-known models for measuring technology maturity is the Gartner hype cycle, which tracks tools from the initial “innovation trigger”, through the “peak of inflated expectations” and the “trough of disillusionment”, up the “slope of enlightenment”, and finally onto the “plateau of productivity”.

Taking this model, I liken AI to the hype we witnessed around cloud a decade ago, when everyone was rushing to migrate to “the cloud”, at the time a universal term that meant different things to different people. “The cloud” went through every stage of the hype cycle, and we continue to find more specific use cases to focus on for the greatest productivity. Today, many organizations are thinking about how to “right-size” their cloud to their environment; in some cases, they are moving part of their infrastructure back to on-premises or hybrid/multi-cloud models.

Right now, we very much see AI in the “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization. The same theme emerged as cybersecurity automation matured: the need to identify the right use case for the technology, rather than trying to apply it across the board.

AI is a scale function

That said, AI is and will continue to be a useful tool. In today’s economic climate, as businesses adapt to a new normal of continuous change, AI, alongside automation, can be a scale function for cybersecurity teams, enabling them to pivot and scale to defend against ever more diverse attacks. In fact, our recent survey of 750 cybersecurity professionals found that 58% of organizations are already using AI in cybersecurity to some extent. However, we anticipate that AI in cybersecurity will pass through the same adoption cycle and challenges experienced by “the cloud” and automation, including trust and technical deployment issues, before it becomes truly productive.

The fear, uncertainty, and doubt around AI is well founded: it could have a significant detrimental effect if used incorrectly or if the AI model doesn’t do what it should. This fear mirrors how cybersecurity professionals have viewed cybersecurity automation over time. Earlier research studies we carried out into automation adoption revealed a lack of trust in automation outcomes, but the latest research, which we have just published, shows greater confidence as automation has matured.

This is why many organizations are creating steering committees to better understand the use of AI across their different business divisions. Regulation is also coming into force, such as the EU AI Act, a comprehensive legal framework that sets out rules for the development and use of AI.

Understanding what data is being shared

Identifying who is using AI tools, and what they are using them for, is a fundamental issue for security leaders. What company data are employees sharing with external tools, are those tools secure, and are they as innocent as they seem? For example, are the GenAI code assistants that developers rely on returning bad code and introducing security risk? Then there are threats such as dark AI (the malicious use of AI technologies to facilitate cyberattacks), hallucinations, and data poisoning, where malicious data is injected to manipulate a model’s output and could ultimately drive bad decisions.
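To illustrate the data-sharing concern in practice, here is a minimal sketch, in Python, of a pre-filter that redacts obvious sensitive data from a prompt before it is sent to an external GenAI tool. The regex patterns and the `send_to_llm` stub are hypothetical placeholders for whatever DLP rules and model API an organization actually uses.

```python
import re

# Hypothetical patterns for obvious sensitive data. A real deployment
# would rely on a proper DLP engine, not a handful of regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    # Stand-in for a call to an external GenAI API (hypothetical).
    return f"<model response to: {prompt!r}>"

def safe_query(prompt: str) -> str:
    """Redact first, then query, so raw company data never leaves."""
    return send_to_llm(redact(prompt))

print(safe_query("Summarize this email from jane@corp.example, key sk-ABCDEF0123456789XYZ"))
```

Which fields to block, and whether to reject a prompt outright rather than redact it, is exactly the kind of policy decision the steering committees mentioned above exist to make.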

To this point, a survey (PDF) of Chief Information Security Officers (CISOs) by Splunk found that 70% believe generative AI could give cyber adversaries more opportunities to commit attacks. Certainly, the prevailing opinion is that AI is benefiting attackers more than defenders. 

Finding the right balance

Therefore, our approach to AI is focused on taking a balanced view. AI certainly won’t solve every problem, and, like automation, it should be used as part of a collaborative mix of people, process, and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight must be maintained. It is about finding the right balance: using the technology in the right scenarios, for the right use cases, and getting the outcomes that you need.
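As a sketch of what maintaining human oversight can look like in code, the Python example below gates any low-confidence or destructive AI recommendation behind an analyst’s approval rather than executing it automatically. The action names and the confidence threshold are illustrative assumptions, not any specific product’s API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "block_ip", "isolate_host" (illustrative names)
    target: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

# Assumed policy: destructive actions or low-confidence calls always go
# to a human; only high-confidence, low-impact actions auto-execute.
AUTO_THRESHOLD = 0.95
DESTRUCTIVE = {"isolate_host", "wipe_account"}

def execute(rec: Recommendation) -> str:
    if rec.action in DESTRUCTIVE or rec.confidence < AUTO_THRESHOLD:
        answer = input(f"Approve '{rec.action}' on {rec.target}? [y/N] ")
        if answer.strip().lower() != "y":
            return f"'{rec.action}' rejected by analyst"
    return f"executed '{rec.action}' on {rec.target}"

print(execute(Recommendation("block_ip", "203.0.113.7", confidence=0.97)))
```

The design point is that the human stays in the loop by default, and automation earns autonomy only where both confidence and blast radius justify it.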

Looking to the future, as companies better understand the use cases for AI, it will evolve beyond today’s GenAI to incorporate additional technologies as well. To date, generative AI applications have overwhelmingly focused on the divergence of information: they create new content based on a set of instructions. As AI evolves, we believe we will see more applications of AI that converge information. In other words, they will show us less content by synthesizing the information available, an approach industry pundits are aptly calling “SynthAI”. This will bring a step-function change to the value that AI can deliver; I’ll discuss this in a future article.
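As a rough, non-LLM analogue of that divergence/convergence distinction, the toy Python sketch below converges a verbose alert stream into fewer, synthesized lines, the opposite of generating more content. The alert strings are invented for illustration.

```python
from collections import Counter

# Invented alert stream; in practice this would come from a SIEM feed.
alerts = [
    "failed login for admin from 198.51.100.4",
    "failed login for admin from 198.51.100.4",
    "failed login for admin from 198.51.100.9",
    "malware signature X detected on host-12",
]

def converge(events: list[str]) -> list[str]:
    """Collapse repeated events into a shorter, synthesized view."""
    counts = Counter(events)
    return [f"{n}x {event}" if n > 1 else event for event, n in counts.items()]

for line in converge(alerts):
    print(line)
```

A SynthAI-style system would do this with far richer judgment, of course, but the direction is the same: less output, more signal.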
