Cyber Insights 2025: Artificial Intelligence

1 day ago 2
News Banner

Looking for an Interim or Fractional CTO to support your business?

Read more

Cyber Insights 2025 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their expert opinions. Here we discuss what to expect with Artificial Intelligence.

Artificial intelligence burst into public consciousness in November 2022 when OpenAI made ChatGPT available over the internet. ChatGPT is a specialized form of machine learning (ML) known as a generative pre-trained transformer (GPT) working with a large language model (LLM). The bottom line is that a user can interact with the AI using natural language and receive output delivered in natural language.

The accuracy of the output depends upon the quality of the AI algorithms and the depth and accuracy of the training data used by the LLM. Most of the primary LLMs available today have been trained on vast amounts of data scraped from the internet. Put simply, you can ask an LLM any question and receive a response based on the wisdom of the internet within seconds; and you can ask it to perform any task that its algorithms understand, again based on the wisdom of the internet.

It was always understood that cyber adversaries would engage these abilities to enhance their own activities as soon as they understood how to best use the new capabilities. 2023 was the year we all waited to see what would happen. 2024 demonstrated signs of the malicious use of AI. 2025 is the year we expect to witness a huge increase in malicious AI.

It is now fair to say that artificial intelligence is upending cybersecurity. It is used by adversaries in their attacks, and by defenders (primarily through enhanced ML front ended with LLM natural language abilities) in their defense. The offensive and defensive application of AI within cybersecurity will appear throughout the topic discussions in this series of Cyber Insights 2025.

In this article, we will discuss AI itself and how it is likely to evolve in the coming year

Trust

“Trusting AI in 2025 won’t be easy,” comments Dan Clarke, president of Truyo. “AI is probabilistic, not deterministic, so it’s still inherently prone to bias and mistakes.” Bias has always been a problem for AI. The algorithms are developed by humans, and humans have always been subject to their own unconscious biases and prejudices. Developers attempt to solve this by making the algorithms visible and iterative – but even the judges have their own hidden biases.

Mistakes are most visible in the phenomenon that became known as hallucinations. If gen-AI is asked a question, it must answer. But depending on the quality of the data it is trained on, it may not have any answer or may not have the correct answer. In the former case, it will ‘make up’ a response. In the latter case, it will present a wrong answer but with great authority.

Advertisement. Scroll to continue reading.

Kai Roer, CEO and founder at Praxis Security LabsKai Roer, CEO and founder at Praxis Security Labs

This will remain a primary problem for gen-AI through 2025: “Its inaccuracies, its inability to know the difference between a fantasy and reality, and its tone of voice that is built to convince instead of built to expand and learn,” comments Kai Roer, CEO and founder at Praxis Security Labs. “This means we will see many more examples of professionals trying to be more efficient by generating reports, legal documents and so forth, without taking the time (or worse, without having the competence) to review and edit all the errors generated by AI. This is a classic example of technology optimism… We will see lawsuits on this topic for sure.”

Ethics

Ethical AI is now demanded and claimed. But technology in this world is a single global village – and a multicultural village. Western cultures focus on individualism, allowing individuals to accumulate vast amounts of wealth through the system known as capitalism. Many eastern cultures focus on the team rather than the individual. Tech-savvy individuals from deprived areas and poor families see dubious businesspeople making money through dubious practices and see little wrong in doing similar through hacking. Different peoples and different places have different moralities.

It is not possible to develop a single set of ethical values that will satisfy everyone in this global village. “This is where it gets complex,” warns Augusto Barros, VP of product marketing at Securonix. “Ethics are defined based on multiple factors, and cultural, societal, and individual values all play a role.  Whose ethics should AI follow? This is a significant ongoing debate. We can see a strong example in how OpenAI develops its models versus how Elon Musk is conducting the research for Grok.”

That doesn’t mean we shouldn’t try to improve the moral principles underlying the development and use of AI; but we need to recognize that our principles might be different to other principles. “Chasing a ‘perfectly ethical’ AI is not achievable,” says Muj Choudhury, CEO at RocketPhone. “Instead, a more productive approach is to make sure AI is safe and accessible without burying it in excessive regulation… It’s like building a car: focus on making it safe and reliable instead of promising it will never fail.”

Against this background we should be aware that claims of ethical AI could be little more than the latest form of greenwashing – but we can equally be certain that many companies will claim that epithet.

Criminal access to gen-AI

“By the end of 2025, it’s reasonable to assume that criminal organizations and adversarial nation-states will have developed their own generative AI systems similar to ChatGPT but devoid of ethical safeguards,” warns Kevin Robertson, co-founder and COO at Acumen Cyber

“These ungated AI models could be exploited to scrape vast amounts of data from platforms like LinkedIn, as well as compile credentials from dark web listings. The convergence of such technology with malicious intent may enable the production of finely targeted spear-phishing campaigns executed at unprecedented speed and scale.”

Melissa Ruzzi, director of AI at AppOmni, believes that criminals will use available AI. “It takes tremendous effort and skill to develop original AI models. Instead, I expect criminals will continue to use available models, particularly those with the least security guardrails.”

Adversarial nation states, however, have both the resources and skills to develop their own models. The global geopolitical environment is already dire and may even worsen through 2025 with the potential for new tariff-driven trade wars. We should assume that these nation states already have, or will have, advanced AI systems – and their respective APTs will have access to them.

Malicious use of multi-modal gen-AI

What started as basic text-based natural language artificial intelligence has rapidly developed into multi-modal AI. It can now also process and generate voice and music, images and videos. While these capabilities have many beneficial applications, they most concern cybersecurity through the ability to generate deepfakes.

Mike Britton, CIO at Abnormal Security.Mike Britton, CIO at Abnormal Security.

Deepfakes are not new – they predate ChatGPT. Now, however, multi-modal gen-AI can produce better deepfakes, faster, cheaper and at scale. Deepfakes are widely considered to be one of the most pressing concerns for 2025. But artificial intelligence is not the creator of badness, it is a facilitator of better badness made available to unskilled operators.

“Some of the most immediate and concerning use cases we could see may involve the use of deepfakes in legal proceedings and forensics, as CCTV footage and other evidence become much more easily manipulated,” suggests Mike Britton, CIO at Abnormal Security.

Ann Irvine, chief data & analytics officer at ResilienceAnn Irvine, chief data & analytics officer at Resilience

Ann Irvine, chief data & analytics officer at Resilience, fears more successful attacks against business. “I think 2025 will be the first year in which we see a successful deepfake attack on a Fortune 500 company, and that will be just the start of many more high-profile AI-powered attacks. Every organization – big and small – is at risk, and will need to prepare for more frequent, personalized attacks and the inevitable financial damages that come with them.”

Shadow AI

“Shadow AI will prove to be more common – and risky – than we thought,” says Akiba Saeedi, VP of IBM security product management. Shadow AI is the deployment and use of unsanctioned AI models by staff in a manner lacking proper company oversight and governance. 

The underlying problem is not unique to AI. Employees have always sought to improve their value by adopting new technologies, but often without waiting for formal approval. Shadow AI started in 2024. “Employees selected tools that fit their needs faster than the enterprise could react. They went to great lengths to get a productivity boost and bypass traditional security measures, because companies couldn’t move fast enough,” explains Randy Birdsall, CPO at SurePath AI.

“The impact to their work is so significant they were willing to flagrantly bypass compliance policy and engage in risky behaviors even at the highest and most secure levels of a company.” The huge number of different specialized models available from HuggingFace makes this an easy and tempting option, and the adoption of Shadow AI will only increase through 2025.

“Shadow AI presents a major risk to data security, and businesses that successfully confront this issue in 2025 will use a mix of clear governance policies, comprehensive workforce training, and diligent detection and response,” suggests Saeedi.

Attacks against AI

The accuracy of AI response is entirely reliant on the training data used. If malicious actors can corrupt the training data, they could affect the validity of the AI model, or even manipulate its response to suit their own purposes. The process is generally known as data poisoning.

“In 2025,” warns Daniel Rapp, chief AI and data officer at Proofpoint, “we will start to see initial attempts by threat actors to manipulate private data sources. For example, we may see threat actors purposely trick AI by contaminating private data used by LLMs – such as deliberately manipulating emails or documents with false or misleading information – to confuse AI or make it do something harmful.”

Paul Schmeltzer from the Clark Hill Law firm expects data poisoning to rise. “Attackers may exploit the reliance on vast, open datasets to subtly skew models, introduce backdoors where the LLM behaves maliciously when triggered by specific inputs, degrade utility, or spread harmful content. This is especially the case in the finance and healthcare sectors where an attacker’s poisoning of a financial or healthcare dataset could lead to erroneous predictions or fail to recognize critical symptoms due to corrupted training data, leading to harmful misdiagnoses.”

The threat of data poisoning is allied to the practice of malicious prompt injection, or jailbreaking, used to avoid guardrails placed around user access to the LLM. At one level, prompt injection can be used to gain access to data that should not be accessed; but coupled with poisoning, an adversary’s pre-defined prompt could be the trigger to make the AI perform a malicious task.

Agentic AI

Hao Yang, VP of artificial intelligence at SplunkHao Yang, VP of artificial intelligence at Splunk

 2025 is likely to be the year of agentic AI. “Previously, we were focused on AI assistants that could respond to prompts or inputs from a user,” explains Hao Yang, VP of artificial intelligence at Cisco-owned Splunk. “Now we’re looking at agentic AI tools (or AI agents) that can make decisions and carry out a series of complicated tasks on behalf of the user. Next year, I expect that we’ll see new frameworks introduced that will help developers build new agentic AI applications.” Agentic AI can transform gen-AI from a fun toy into an automated application.

Such agents are already being used for different purposes, automating gen-AI outputs into actions, and even improving the gen-AI itself. “The next frontier in AI is agency,” adds Eleanor Watson, IEEE member and a member of the AI faculty at Singularity University: “systems that can independently assess situations and determine action plans. These systems can function as a concierge, fixing problems for us like scheduling, logistics, planning and research.”

But as always, with every new solution comes new threats. If the underlying LLM has been poisoned, agents could automate malicious activity without the owning company realizing it. 

They could even simply run amok. “They can take unexpected initiative, seek undesirable shortcuts, and even recognize when they’re being tested while concealing that awareness,” warns Watson, making today’s science fact sound like yesterday’s dystopian science fiction. “They may decide to work to rule, making an uncharitable interpretation of instructions, or deciding that lying to others or railroading them is most expedient, even to their own users themselves.” 

If we thought gen-AI was complex, this will be a new world of complexity. “For this to be made possible,” explains Raul Pradhan, VP of product & strategy at Couchbase, “agentic systems require a compound AI system using multiple models that are moved closer to data sources, within security parameters. The systems also need to handle both structured and unstructured data at low latency – all in real-time – to make meaningful, context-aware decisions on the fly.” 

This, he adds, “requires seamless integrations across unstructured data processing, vector databases and transactional systems for efficient storage and retrieval of diverse data types. The companies that will excel in providing these robust integrations and infrastructures will be uniquely positioned to drive the next wave of innovation and value in the AI sector.”

Nicole Carignan, VP of strategic cyber AI at Darktrace, believes 2025 will see the emergence of ‘agent swarms’ “where teams of autonomous AI agents work together to tackle more complex tasks than a single AI agent could alone.” But she adds, “Attacks such as data poisoning, prompt injection, or social engineering could all be an issue for AI agents and multi-agent systems. And these are not issues that traditional application testing alone can address.”

Nicole Carignan, VP of strategic cyber AI at DarktraceNicole Carignan, VP of strategic cyber AI at Darktrace

“The challenge,” adds Neil Thacker, EMEA CISO at Netskope, “is that agentic AI will require fully automated protections to support business autonomy, but for many organizations, automated protection is still an aspiration.”

Apple, Google and Samsung are all promoting AI agents on our mobile phones that promise to connect different parts of our lives and become perfect personal assistants. This is a completely new level of complexity in IT, and as all security people understand, complexity breeds risk. “And users are unfazed,” says Ilse Funkhouser, CPO & head of AI engineering at Careerspan.

Content credentials

The idea that current big tech LLMs have not broken privacy and copyright laws in their making (by scraping the internet and social media) is a stretch. But it’s the perfect illustration of the regulators’ dilemma: do you protect the people or protect innovation (and by extension, the economy)?

Where AI is concerned, the result has been a fudge – basically, the regulators appear to be saying, ‘we’re not going to look too deeply into whether you have broken the law, but don’t break it any more.’ Going forward, the focus is now on copyright, driven by the threat of deepfakes and AI-generated misinformation. ‘Watermarking’ is the solution.

“In Europe, the EU AI Act encourages watermark labeling to be part of the AI vendor output to address concerns like misinformation and deepfakes,” explains Sharon Klein, a partner at Blank Rome law firm. “California also recently passed the California AI Transparency Act requiring developers of widely used AI systems to provide certain AI-detection tools and watermarking capabilities to help identify AI-generated content.” The Act was signed into law by Governor Newsom on September 19, 2024; and will come into effect on January 1, 2026.

The technical term is content credentials, and it will be a major research area in 2025. “We’ll see an unprecedented push for transparency in digital content as the private and public sectors recognize the critical need for an industry-wide content provenance standard,” explains Andy Parsons, senior director of the Content Authenticity Initiative at Adobe.

“Content Credentials will play a pivotal role in this shift, acting as a ‘nutrition label’ for digital content that allows brands and creators to gain attribution and protect their work, while providing consumers a renewed sense of clarity and safety online… This shift represents more than just technological progress – it’s a reimagining of how we establish and sustain trust in the digital ecosystem.”

But it’s not an easy cure. Stuart McClure, CEO at Qwiet AI, sees increased adoption, partial success, but with limitations. “Watermarking can help in some scenarios, particularly in identifying the origin of content generated by legitimate tools. It could aid in forensic analysis and attribution.”

But he adds, “We need to remember that bad actors are resourceful. They might find ways to remove or alter watermarks, or simply build their own tools without such safeguards…It’ll be an ongoing cat-and-mouse game.”

AGI

Gen-AI only works with the data we give it. In this sense it is more artificial than intelligent. It does not create anything new – it can only rearrange, correlate, and draw new insights into what we already know.

The next, perhaps impossible, step for artificial intelligence is from gen-AI to artificial general intelligence, or AGI. AGI will be capable of independent reasoning – able to perform any intellectual task that a human mind can perform. Whether this is truly possible is still under debate; but we know two things: research into AGI will continue throughout 2025 and it will not be achieved in 2025

“I think it’s possible, but AGI isn’t just around the corner like some people think,” comments Joseph Ours, AI Strategy Director at Centric Consulting. “True intelligence isn’t just about processing data – it’s about understanding context, learning across different domains, and making creative connections that aren’t explicitly programmed.”

Augusto Barros, VP of product marketing at SecuronixAugusto Barros, VP of product marketing at Securonix

We may, however, witness some early and perhaps imaginative claims. “AGI implies human-level cognitive abilities, which remains a distant goal,” says Augusto Barros. “However, the evolution of AI technologies should push the technology to points where it will be hard to discern between real AGI and ‘looks like AGI’. Is AGI possible? It’s still hard to say if it is or not.”

 Claims will precede reality. “I think next year you will see more and more agentic based systems pretending to be advanced LLMs with AGI,” suggests Ryan Ries, chief AI & data scientist at Mission Cloud. “None of this will be true AGI, but it will be useful.”

We may even get a step closer and call something AGI. “But then we’ll have to come up with a new acronym for when the real AGI arrives,” says Melissa Ruzzi, director of AI at AppOmni. “This is a moving target and we’re not there yet, but with each passing year we get closer.”

Sharon Klein, a partner at Blank Rome LLP, believes, “AGI is definitely a possibility, but not in 2025.” Eli Vovsha, manager of data science at Fortra, believes gen-AI will continue its incremental improvements over the next few years. “Scaling alone yields low hanging fruit,” he says, “but we will need several more years after that to convert some major leaps that have yet to materialize before we can see a clear path to AGI.”

If AGI is genuinely possible – and it is not clear that it is – we must hope that the scientists can give us ample warning on what to expect. If gen-AI has already rocked the boat, being unprepared for AGI could capsize it.

The game of leapfrog

Cybersecurity has always been a game of leapfrog advantage, with the attackers being proactive and defenders being reactive. It is the same with AI, but the scale and pace is increasing dramatically. Attackers will find or develop an innovative attack methodology, and defenders will react defensively. But it will all happen faster and possibly invisibly because of agentic AI.

Christian Borst, EMEA CTO at Vectra AI, likens this to the Red Queen theory (taken from Lewis Carroll’s ‘Through the Looking-Glass’). “We’re in a new world, taking part in a constantly accelerating race. It’s not enough to simply keep pace anymore, and those who do will face extinction,” he warns “Organizations must be laser focused on optimizing their security stack, ensuring they are focusing on solutions that can cut through all of the noise and help them to identify and respond to threats more quickly going forward.”

Christian Borst, EMEA CTO at Vectra AIChristian Borst, EMEA CTO at Vectra AI

History suggests that this is a never ending game of leapfrog. The advent of AI doesn’t change the underlying structure of cybersecurity other than raising the stakes; attackers will attack more, and defenders will need to deploy (and pay for) more sophisticated defensive tools.

We’ll see this play out in the process of content credentials watermarking. These are a good guys’ response to the bad guys’ deepfakes and misinformation. “The watermarks should work for now,” comments Barros, “but we should also expect innovation from threat actors about how to manipulate them. It’s the old game of cat and mouse, which we should see in this area too.”

Absent AGI, gen-AI creates nothing. It sorts through our existing knowledge, it sees connections we may have missed, it repeats what we already do but much faster and with fewer mistakes. Deepfakes are not new; phishing and spear-phishing is stock malicious activity; generating malware from clues within vulnerability disclosures is standard criminal behavior.

What gen-AI provides is high-speed automation of existing human knowledge and behavior. This is fundamentally how cybercriminals will use AI. It is the speed and scale of attacks that will change – gen-AI is automating cybercrime. The only way in which defenders will keep up with these attacks is by automating cyberdefense – again with the use of AI.

The benefit of AI to business will not be in cybersecurity (which will remain the same old same old at speed and with higher stakes); it will be in its ability to automate internal business processes. Companies that fail to use AI as an automation tool will fall by the wayside as also rans.

More artificial than intelligent

Artificial intelligence remains more artificial than intelligent. One of the biggest dangers is that we become seduced into assuming that anything touched by AI is, in fact, gospel. It isn’t; but it is what we’ve got, and it isn’t going away.

Attitudes toward AI vary from ‘what threat?’ to Doomsday. “I’m over the machine learning (ML) and artificial intelligence (AI) hype – it was overblown in 2024,” says Paul Laudanski, director of security research at Onapsis. “While there are real concerns… it will not impact business-critical applications. As long as companies are able to rapidly implement patches, there isn’t an increased risk to SAP security due to AI advancements.”

AI is not creating new threats – it is scaling and strengthening existing known threats. The challenge is to match the scale and speed that will come from the malicious use of AI – and the only way to do that will be with our own AI-driven defenses.

We must learn to live with AI. To accept its benefits and reject its warts. That will require a deeper understanding of its strengths and weaknesses than we currently exhibit. 

As Julian Brownlow Davies, VP of advanced services at Bugcrowd says, “AI literacy isn’t optional anymore; it’s essential. This includes gaining proficiency in machine learning, deep learning, and natural language processing. These skills are crucial for understanding how AI tools function, and how they can be effectively applied in cybersecurity.”

Related: WhiteRabbitNeo: High-Powered Potential of Uncensored AI Pentesting

Related: Google SynthID Adding Invisible Watermarks to AI-Generated Content

Related: IBM Boosts Guardium Platform to Address Shadow AI, Quantum Cryptography

Related: AI Weights: Securing the Heart and Soft Underbelly of Artificial Intelligence

Read Entire Article