Cyber Insights 2025: Social Engineering Gets AI Wings

Cyber Insights 2025 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their insights. Here we discuss what to expect in Social Engineering.

Social engineering underpins the greater part of criminal cyber activity. We have yet to find a solution, because social engineering is hard-wired into everyone’s psyche.

The internet introduced the citizen journalist. Artificial intelligence introduces the citizen social engineer. Anyone can be a social engineer – in fact, everyone is a social engineer. The problem is that regardless of personal skill levels, AI gives the social engineer wings. What has always been bad will inevitably get worse.

“Social engineering is a human trait, a feature of being a species that lives in groups,” explains Kai Roer, CEO and founder at Praxis Security Labs. “It is not new, and it is not going away. It is also not something you can patch, nor can you ‘fix the problem’ – because it is not a problem.”

Kai Roer, CEO and founder at Praxis Security Labs

It is, he continued, an aspect of humanity. “Social engineering is a critical skill that helps individuals and groups build and enhance relationships and communicate and signal intent. It is a two-way street where signals are being sent and received so that the signal can be interpreted and acted upon.” 

You can test this everywhere. Every time you visit a supermarket, you are being socially engineered to be drawn deeper into the store and to buy more product. Every advert you see, on billboards and television, is an exercise in social engineering; when you meet someone, you employ your own subtle social engineering to improve the relationship – it’s an important part of daily life.

“This skill is so ingrained in human nature that researchers have shown that babies are able to identify power structures in groups of humans even before they know how to speak,” adds Roer.

But when security people talk about social engineering, they talk about criminals using natural human interaction embroidered with lies, falsehoods, fake promises and more to obtain illicit benefit from an unsuspecting victim. 

“Because of these falsehoods, combined with how most humans are wired, the criminals succeed in their goals,” concludes Roer. “This is not the fault of the employee – as some discourses in our industry are claiming – instead it is the fault of the criminal. Ask yourself: who is at fault of a rape? The rapist or the rapee? Most people will agree that the attacker is to blame, not the victim.”

The underlying problem with preventing malicious cyber social engineering is that we can only treat the victim and not the aggressor. And in ‘treating’ the victim through solutions like ‘awareness training’, we are really trying to get victims to deny a fundamental aspect of their humanity while having no effect on the aggressor.

The result is that malicious social engineering persists and will continue to persist (and worsen) simply because it is part of human nature to engineer and be engineered. It’s what makes us social animals.

The social engineering threat in 2025 is succinctly summarized by Kevin Tian, CEO and co-founder of Doppel. “In 2025, social engineering will cement itself as the top security threat – supercharged by generative AI. Criminals won’t just rely on phishing emails anymore. They’ll unleash dynamic, real-time campaigns across SMS, deepfake voice calls, and even social media personas, adapting on the fly. It’s multichannel, multimodal, and a whole new level of danger.”

Michael Adjei, director of systems engineering at Illumio

AI will take social engineering to new levels. “Ordinary users will, in effect, become unwitting participants in mass attacks in 2025,” warns Michael Adjei, director of systems engineering at Illumio. “Social engineers will exploit popular applications, social media features, and even AI tools to deceive people into inadvertently running exploits for web-based or script-based vulnerabilities.”

He is not alone in foreseeing these new levels of AI-assisted social engineering attacks. Brian Fox, CTO at Sonatype, warns that although we dodged an XZ Utils bullet in 2024, 2025 could be catastrophic. 

“The attempted XZ Utils attack was uncovered in 2024, but it was a sophisticated social engineering campaign that was initiated years in advance,” he says. “While its shocking discovery signified the start of a new trend, I know similar campaigns are already well underway. XZ Utils wasn’t an isolated event, and while it may take time for the more sophisticated campaigns to be discovered, less sophisticated copycats will be prevalent.”

Meanwhile, since the arrival of ChatGPT and the subsequent deluge of gen-AI models, we have been expecting cybercriminals to use AI to improve and scale their existing attacks. At first the expectation was for more sophisticated and larger scale email phishing. It didn’t happen immediately, but it has grown steadily through 2024.

But with the arrival of multi-modal gen-AI, the threat now includes the additional potential for voice and video supported deepfake phishing. A phishing argument supported by a known voice and face is far more compelling than simple text. AI will boost malicious social engineering in 2025 by tapping more directly into our natural acceptance of and participation in human-to-human social engineering.

The overall current effect of advances in AI-enhanced social engineering is to place the attacker in the sweet spot of the ‘uncanny valley’ – the area where belief is easy, before too much perfection starts to rebuild distrust – suggests Avishai Avivi, CISO at SafeBreach.

“We expect malicious actors to build on their current capabilities and increase the number and sophistication of attacks leveraging AI technologies for social engineering to affect business email compromise (BEC) and account takeovers. Specifically,” he continues, “we expect malicious actors to leverage newer alternative channels like voice, SMS, and targeted videos to attack their designated targets. These AI-assisted attacks can also take massive amounts of publicly available data on their targets and synthesize it in a way not seen before.”

Most of the attributes of social engineering have been in use for years – including deepfakes – but the return on effort for pre-AI deepfakes has been relatively low (one-to-one rather than the one-to-thousands of an email text phishing campaign). It is this return on effort that is being upended by AI. “The use of deepfakes is a worrying trend that will likely grow exponentially in 2025,” warns Boris Bohrer-Bilowitzki, CEO at Concordium. “Deepfakes will become even more sophisticated and one of the main attack vectors for cybercriminals… making it easier for threat actors to deceive people and causing huge implications and costs for modern society.”

David Neeson, senior SOC analyst at Barrier Networks, agrees with this assessment. “Tied with AI and deepfakes, threat actors could soon start spoofing the identities of individuals and doing video interviews with prospective employers. These attacks will be hard to detect and could result in a steep rise in espionage campaigns, where organizations accidentally employ state-sponsored actors through being fooled by AI and deepfakes.”

David Neeson, senior SOC analyst at Barrier Networks

The employment of foreign actors with foreign priorities is already an issue. The use of AI-assisted social engineering to embed foreign agents for espionage or disruption could increase during 2025. The return on effort would not be as great as a typical AI-assisted criminal bulk phishing campaign but could certainly be attractive to nation state adversaries (with more time available now that election-based misinformation efforts are less important).

However, the democratization of AI means that its use will not be limited to APTs. Its ability to increase the sophistication and scale of bulk phishing campaigns means that hitherto non-technical wannabe criminals will run rampant. At the same time, newly sophisticated, high-value spear phishing is also likely to grow.

On the dark web, “There is already a thriving ‘marketplace’ for malicious actors that specialize in this area. Convincing and effective audio or video ‘fakes’ can be generated for low-cost… and even free from some services,” notes Jim Walter, senior threat researcher at SentinelLabs.

This is part of the continuing growth of professionalism in the criminal underground – in this case offering deepfake production as part of a malware-as-a-service (MaaS) operation. See Criminal Gangs for more information on underground crime services.

“This technology will impersonate critical individuals such as CEOs, government officials, or even loved ones, making it nearly impossible to distinguish between genuine and fabricated communications,” warns Irfan Shakeel, VP of training and certification services at OPSWAT. “The implications are vast, from financial fraud – where scammers use fake video calls to request funds or sensitive information – to a general erosion of trust in digital interactions.”

James Imanian, US federal technology office senior director at CyberArk, expands: “This approach will bleed into other areas also – next-gen money transfer attacks, IP theft, and espionage will target busy execs and top-tier employees. Enterprises and individuals are already getting duped by criminals who spend months on social engineering scams dubbed ‘pig butchering’. Get ready for a storm of personalized, automated AI agent attacks.”

In short, the arrival of AI will transform social engineering – until now largely considered just a vehicle for phishing – into a foundational element of diverse attacks that are more sophisticated, more compelling, better disguised, and delivered at greater scale than we have ever seen.

A new industry has been born over the last year: AI-defense to counter AI-aggression. This industry is, of course, promoting itself aggressively as the best way to counter AI-assisted attacks; but there is no simple proof of these claims. Technology can be, and is being, used to detect deepfake voice (relatively easily) and deepfake video (more difficult, but improving); but there is no technology able to modify human nature. We want to believe what we are told and see – and if the context is reasonable, we are likely to believe it.

Eric Avigdor, chief of product at Votiro, suggests AI defense should concentrate less on detecting an AI social engineering attack, and more on preventing bad outcomes from a successful attack. “A much better option would be to assume that individuals will receive many such messages and to help prevent that person from making the mistake of sharing sensitive information,” he says. “For example, by detecting that a person is sharing their credentials or sensitive data in a situation and context that does not seem right and then blocking or alerting the individual to potential fraud.”
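As a rough illustration of the outcome-focused control Avigdor describes, the sketch below scans an outbound message for sensitive markers and weighs them against a simple context check before blocking or alerting. The patterns, trusted-domain list, and function names are illustrative assumptions, not any vendor’s implementation.

```python
import re

# Illustrative only: flag sensitive data being shared in a context that
# "does not seem right" (here, an untrusted recipient domain).
SENSITIVE_PATTERNS = {
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "api_key": re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*[A-Za-z0-9_\-]{16,}"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough card-number shape
}

TRUSTED_DOMAINS = {"corp.example.com"}  # assumption: known-good internal domain


def assess_outbound_message(body: str, recipient: str) -> dict:
    """Return a verdict for an outbound message: allow, alert, or block."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(body)]
    recipient_domain = recipient.rsplit("@", 1)[-1].lower()
    external = recipient_domain not in TRUSTED_DOMAINS

    if hits and external:
        return {"action": "block", "reason": f"sensitive data {hits} to external recipient"}
    if hits:
        return {"action": "alert", "reason": f"sensitive data {hits} in internal message"}
    return {"action": "allow", "reason": "no sensitive markers found"}


if __name__ == "__main__":
    print(assess_outbound_message(
        "As requested, the VPN password: Hunter2!2025", "ceo-lookalike@gmail.com"))
```

In this toy version the verdict turns on where the data is going rather than on whether the request itself was AI-generated, which is the essence of the outcome-focused approach.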

Roer has similar concerns. “In 2025, we will see more evidence that phishing assessments and security awareness training programs are not the right way to mitigate social engineering attacks.” His concern is that such practices do not effectively change user behavior to mitigate the risk. At the same time, awareness training is often required by regulations.

“I predict that a growing number of organizations will recognize that security awareness training and phishing assessments are a mere compliance exercise,” he continues. “As such, they are a necessary evil, but not something that is going to change behavior. Organizations are likely to reduce the volume of both training and assessments to the minimum, checking that box, and freeing up budget and resources to implement security measures that actually change behaviors of the employees without creating resentment and negative outcomes.”

He does not specify what security measures could change behaviors when awareness training cannot – but his firm is involved with the emerging field of human risk management. He would not be alone in promoting this approach to fight social engineering. Chris Madeksho, lead cybersecurity analyst at the University of Tennessee Health Science Center, wrote about the concept for Educause in September 2024 – and Mika Aalto, co-founder and CEO at Hoxhunt, is also an adherent.

Mika Aalto, co-founder and CEO at Hoxhunt

“The old-school security awareness training model was officially disrupted this year with the recognition of human risk management as its own category by multiple analysts,” he explains. “These HRM platforms are powered by AI and are designed to change behavior and augment the SOC with automatic threat data orchestration.”

An additional approach to the social engineering threat may be found in the science of psycholinguistic analysis. The idea is not entirely new to security, but it has so far primarily focused on analyzing the psychological state of employees. Here the theory is that growing employee dissent will show up through the monitoring of internal communications (and even social media, though that borders on the creepy). The belief is that ‘unhappiness’ can be detected and addressed before the employee becomes a malicious insider.

The question for social engineering is whether malicious intent can be detected in a single communication rather than unveiled over a series of communications. Matt Cooke, cybersecurity strategist at Proofpoint, believes there is some potential. “Psycholinguistics can play a critical role, especially when combined with advanced machine learning (ML) techniques,” he says. 

“Tools like transformer-based models (for example, BERT and GPT), which are adept at understanding complex relationships between words, are already being used to analyze email content for threats like phishing. However, it’s important to note that detecting threats purely from content isn’t foolproof. Contextual factors like sender authentication (such as DMARC) and behavioral patterns also enhance detection.”
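For illustration only, the sketch below shows how the two signals Cooke mentions might be combined: a transformer-based text classifier (via the Hugging Face transformers pipeline) scores the message body, and that score is weighed against a DMARC verdict supplied by the mail gateway. The model identifier and its label scheme are placeholders, not a reference to any real model or to Proofpoint’s technology.

```python
from transformers import pipeline  # assumes the Hugging Face transformers package is installed

# Hypothetical model id: substitute any text-classification model trained for phishing.
classifier = pipeline("text-classification", model="some-org/phishing-detector")


def score_email(body: str, dmarc_pass: bool) -> str:
    """Combine a content score with a contextual signal (sender authentication)."""
    result = classifier(body[:512])[0]  # truncate long bodies before scoring
    # Label names and threshold are assumptions; they depend on the chosen model.
    suspicious = result["label"].lower() == "phishing" and result["score"] > 0.8

    if suspicious and not dmarc_pass:
        return "quarantine"        # content and sender authentication both look bad
    if suspicious or not dmarc_pass:
        return "flag-for-review"   # a single weak signal is not decisive on its own
    return "deliver"
```

The point of the sketch is the combination step: as Cooke notes, content analysis alone is not foolproof, so the classifier’s verdict is never acted on without the contextual check.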

Paige Schaffer, CEO of Iris Powered by Generali

In short, psycholinguistics supported by additional context could help in detecting malicious content. Aalto agrees. “These technologies can analyze the linguistic and emotional cues within messages to identify signs of deception or manipulation. For example, they can detect unusual urgency, fear appeals, or inconsistencies that are common in phishing and scam communications. Tools already exist that detect aberrant metadata in deepfakes to distinguish those messages.”
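A minimal sketch of the kind of cue analysis Aalto describes might simply count urgency, fear, and pressure phrases in a message. The phrase lists and scoring below are purely illustrative assumptions, not how any particular human risk management platform works.

```python
# Illustrative cue lists; real systems would use far richer linguistic models.
CUES = {
    "urgency": ["immediately", "right now", "within the hour", "urgent", "asap"],
    "fear": ["account will be closed", "legal action", "suspended", "final warning"],
    "pressure": ["do not tell anyone", "keep this confidential", "only you can"],
}


def cue_score(message: str) -> dict:
    """Count how many cue phrases from each category appear in the message."""
    text = message.lower()
    counts = {category: sum(phrase in text for phrase in phrases)
              for category, phrases in CUES.items()}
    counts["total"] = sum(counts.values())
    return counts


print(cue_score("URGENT: wire the funds immediately and keep this confidential."))
```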

But he also agrees that the process benefits from additional help. “While AI-driven content analysis will play a crucial role in detection, it will be most effective when combined with other security measures like user education, behavioral analytics, and robust authentication protocols.”

Always remember, however, that what is sauce for the goose is also sauce for the gander. Criminals can use AI to scrape the digital world to create highly personalized – and believable – messaging, says Paige Schaffer, CEO of Iris Powered by Generali. “We’ll also likely see more effective social engineering attacks as criminals grasp a better understanding of our decision-making process and exploit biases and other psychological factors,” she warns.

“Similarly, by analyzing large datasets, AI systems can help criminals identify psychological vulnerabilities – or even certain individuals – who are more susceptible to social engineering attacks.”

Danny Jenkins, CEO and co-founder at ThreatLocker

Any idea that defenders can stay ahead of the attackers who use AI by using their own AI in defense is a fool’s paradise. We couldn’t do it pre-AI, and we won’t do it post-AI. At best, our own use of AI will match current criminal use of AI. When that happens, criminals and nation states will amend their behavior, and we’ll start the catch-up process again.

Danny Jenkins, CEO and co-founder at ThreatLocker, points to Goodhart’s Law: ‘When a measure becomes a target, it ceases to be a good measure.’ Applied here, he says “it means that once certain markers are identified as bad, malicious actors can change their techniques to avoid using them and therefore avoid detection. Yes, it may improve detection, but it won’t solve it, like how malware detection does not detect all malware.”

The implication is that whenever we successfully react to a new attack methodology, the attackers will amend their methodology – and we will need to amend our defense strategies. Our use of AI for defense is important but not game changing: AI will give social engineering wings. We can never win the battle for security; so, we need to survive the insecurity. Business resilience must be the ultimate purpose of all the security controls and processes we employ, because we will never conclusively defeat or protect ourselves from social engineering.

Related: The AI Threat: Deepfake or Deep Fake?

Related: Phishing: The Silent Precursor to Data Breaches

Related: 50 Servers Linked to Cybercrime Marketplace Seized

Related: Microsoft Disrupts ONNX Phishing Service, Names Its Operator
