There has been a lot of focus on AI since the start of the year, with the creation of a new company, Stargate, to grow artificial intelligence infrastructure in the United States. Stargate claims to be the largest AI infrastructure project to date and has the backing of tech companies such as OpenAI, SoftBank and Oracle. The vision is to build “the physical and virtual infrastructure to power the next generation of AI,” including data centers around the country.
However, this news was quickly eclipsed by DeepSeek, the new AI chatbot from China, which sent the US stock market tumbling; its apparent performance on a small budget has shaken up the tech landscape.
AI still needs a safety net
And it is only February; I’m sure we will see more AI news hitting the headlines as the year goes on. While it’s crucial to recognize the expanding attack surface that AI may bring, it also has the ability to supercharge human productivity, optimize processes and save costs. However, I caution that any AI technology still needs a safety net: a human-in-the-loop.
I say this because, despite all its advantages (and there are many), there are still technology issues, such as hallucinations, and human issues, such as trusting AI outcomes. Hallucinations are a phenomenon in which AI perceives patterns or objects that are nonexistent or imperceptible to humans, creating outputs that are nonsensical or altogether inaccurate.
Even giants such as Amazon and Apple are not immune. Amazon had been planning to roll out a new Alexa powered by generative AI in October 2024, but that didn’t happen: the company delayed releasing its new voice assistant until sometime this year because it still needed to overcome AI hallucinations. Amazon felt that hallucinations had to be close to zero because, with people tending to use Alexa throughout the day, a false answer could at best undermine trust in the technology and at worst be dangerous. A similar issue has affected Apple’s generative AI-powered news assistant, which was withdrawn after disseminating untrue stories that appeared to come from genuine news outlets such as the BBC.
Mirroring our trust in automation
Trust in newer technologies (or the lack of it) is a recurring narrative that we have seen over the years. Like AI now, automation has been front and center for the past four years. Remember when everyone was talking about the risks and benefits of the autonomous SOC? It still hasn’t happened. We do need automation as a scale function, enabling fewer people to do ‘more with less’. But if you want to truly trust the outcomes generated through automation, you need a human-in-the-loop. What I mean by this is that you need the context of security, with human expertise and judgment actively involved in decision-making during a security process. This mitigates potential risks and ensures that critical security functions are not solely reliant on automated systems. Essentially, humans must be present to review, validate, and intervene when necessary, providing a crucial layer of oversight and accountability. The same holds true for AI today. Yes, it is a powerful technology, but we still need a human to make sure the output is accurate and will not cause operational problems.
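To make that concrete, here is a minimal, hypothetical sketch of what a human-in-the-loop gate can look like in an automated alert-triage flow. The names and thresholds (Alert, triage, the 0.9 confidence cutoff) are illustrative assumptions, not any specific product’s API: clear-cut, low-risk findings are handled automatically, while high-severity or low-confidence calls are always routed to an analyst.

```python
# Hypothetical sketch of a human-in-the-loop gate in automated alert triage.
# All names and thresholds are illustrative, not any vendor's API.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    AUTO_CLOSE = "auto_close"        # automation closes the alert on its own
    AUTO_CONTAIN = "auto_contain"    # automation takes a reversible action, logged for review
    NEEDS_HUMAN = "needs_human"      # an analyst must review before anything happens


@dataclass
class Alert:
    source: str
    severity: int            # 1 (low) to 5 (critical)
    model_confidence: float  # classifier confidence, 0.0 to 1.0


def triage(alert: Alert, confidence_threshold: float = 0.9) -> Verdict:
    """Let automation handle clear-cut cases; route the rest to a person."""
    # Low-severity, high-confidence findings can be closed automatically.
    if alert.severity <= 2 and alert.model_confidence >= confidence_threshold:
        return Verdict.AUTO_CLOSE
    # High-severity alerts and low-confidence calls always get human review,
    # so critical actions are never taken on an unvalidated machine decision.
    if alert.severity >= 4 or alert.model_confidence < confidence_threshold:
        return Verdict.NEEDS_HUMAN
    # Everything else: take a reversible containment step and log it for review.
    return Verdict.AUTO_CONTAIN


if __name__ == "__main__":
    for a in [
        Alert("edr", severity=1, model_confidence=0.97),
        Alert("siem", severity=5, model_confidence=0.99),
        Alert("email_gateway", severity=3, model_confidence=0.6),
    ]:
        print(a.source, "->", triage(a).value)
```

The design choice is the point: automation does the volume work, but the riskiest and least certain decisions are deliberately reserved for a human.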
When I joined ThreatQuotient in 2016, we promoted the need to “empower the human element of cybersecurity.” Several years later, the RSA Conference theme was “The Human Element.” The whole point is that we can’t (yet) treat any form of automation as a complete replacement; instead, we should view it as a tool that can be used to deliver more efficiency, effectiveness, productivity and scale, and to handle more of the workload. However, while I say “not yet,” who knows how this will continue to evolve and what we might be able to do in 5-10 years’ time?
Today, AI systems can sometimes struggle with complex or nuanced situations, so human intervention can help identify and address potential issues that algorithms might miss. Without a doubt, the role of the expert human-in-the-loop will change over time. For example, research we conducted recently highlighted that individuals trust automation in some use cases but not in others. All respondents believe cybersecurity automation is central to security, and while earlier surveys revealed a significant lack of trust in automation outcomes, our most recent edition showed more confidence: 20% of respondents now report a lack of trust as a key challenge to implementation, down from 31% the previous year. In 2023 there was significant concern around trust, bad decisions, slow user adoption and lack of skills, but these concerns had abated by 2024.
What, when and how
As we move forward, the key questions will be what to automate and when, when to use AI, and how to get humans involved. One example is our recently announced partnership with Ask Sage, which will enable government organizations to securely train several supported AI models using curated threat intelligence, generate reports, and quickly and easily build threat insights based on an organization’s specific requirements.
This means threat analysts using the platform for threat intelligence and investigations can select data for AI training and run reports on specific threats. Additionally, Ask Sage will continuously train on new data as it becomes available, enabling customers to generate reports on any given threat targeting their organization at any time. Security and analyst teams no longer have to dig through volumes of data or undertake tedious manual tasks; they can cut through the noise and build a threat profile quickly and easily. All the data can be consolidated into an easy-to-read summary, and analysts can verify and validate outputs to help avoid issues like AI hallucinations, making their job much easier and more productive, as the sketch below illustrates.
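As a sketch of that verification step (and only a sketch: this is not Ask Sage’s or ThreatQuotient’s API, and the indicator list and matching pattern below are made-up assumptions), the idea is that any indicator an AI-generated summary cites gets checked against the curated intelligence it was supposed to be grounded in, and anything unsupported is flagged for the analyst rather than accepted at face value.

```python
# Hypothetical sketch of validating an AI-generated threat summary against
# curated, analyst-vetted intelligence. Indicators and patterns are illustrative.

import re

# Curated indicators for a given threat (illustrative example values).
CURATED_INDICATORS = {"badcdn.example.net", "203.0.113.77", "filedrop.example.org"}

# Matches domains and IPv4 addresses in generated text (deliberately simplified).
INDICATOR_PATTERN = re.compile(
    r"\b(?:\d{1,3}(?:\.\d{1,3}){3}|[a-z0-9.-]+\.[a-z]{2,})\b", re.IGNORECASE
)


def review_summary(generated_summary: str) -> dict:
    """Split indicators cited in an AI summary into supported vs. unverified."""
    found = set(INDICATOR_PATTERN.findall(generated_summary.lower()))
    supported = found & CURATED_INDICATORS
    unverified = found - CURATED_INDICATORS  # candidates for hallucinated detail
    return {"supported": sorted(supported), "needs_analyst_review": sorted(unverified)}


if __name__ == "__main__":
    summary = (
        "The campaign staged payloads on badcdn.example.net and exfiltrated "
        "data to 198.51.100.9 before pivoting to filedrop.example.org."
    )
    result = review_summary(summary)
    print("Supported by curated intel:", result["supported"])
    print("Flag for analyst review:", result["needs_analyst_review"])
```

In this toy run, the unvetted IP address is surfaced for the analyst to confirm or reject, which is exactly the kind of lightweight human check that keeps hallucinated details out of a finished threat report.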
Striking the right balance
This is just one example, and I am sure there are many more: according to Gartner, at least 15% of day-to-day decisions will be made autonomously through agentic AI by 2028, up from zero percent in 2024. The decision of whether an AI system needs a human-in-the-loop depends on the specific application, context and ethical considerations of the task it is undertaking (and we’re seeing this reflected in regulation such as the EU AI Act). While human oversight can enhance accuracy, mitigate biases and handle unforeseen scenarios, it also presents challenges such as increased cost, dependency on human availability, and the potential for human error. Striking the right balance between machine intelligence and human expertise is essential for creating AI systems that are not only efficient but also ethical and aligned with human values. As AI technology continues to evolve, the role of the human-in-the-loop remains a dynamic and evolving aspect of responsible AI use in operations.