How Agentic AI will be Weaponized for Social Engineering Attacks

Social engineering is the most common initial access vector cybercriminals exploit to breach organizations. With each passing year, social engineering attacks are becoming bigger and bolder thanks to rapid advancements in artificial intelligence.

How is AI Advancing Social Engineering Attacks?

AI is helping cybercriminals advance their social engineering campaigns in multiple ways:

  • Personalized Phishing: AI algorithms can analyze data from social media (background, interests, employment, connections, associations, location, and more) and various OSINT sources to create more personalized and convincing spear phishing attacks.
  • Localized and Contextual Content: Tools like ChatGPT, Copilot, and Gemini can help draft phishing emails that are grammatically correct, contextually appropriate, and translated into any local language. AI can be prompted to mimic a specific writing style or tone, and phishing emails can be tailored to a recipient’s response or behavior.
  • Realistic Deepfakes: Threat actors use deepfake tools to create fake virtual personas and audio clones of senior executives and trusted business partners. Deepfakes are used to trick employees into sharing sensitive information, transferring money, or granting access to an organization’s network.

AI’s Latest Evolution Amplifies Social Engineering Risks Even Further

November 2022 saw the public release of ChatGPT, which put Large Language Models (LLMs) in everyone’s hands for free. In 2023, the world began using generative AI tools and developers rolled out a range of features and functionalities built on top of these LLMs. By the second half of 2024, a new iteration rapidly emerged: AI-powered agents (“agentic AI”) that can act autonomously and execute complex tasks.

Since AI is available to everyone, we can expect cybercriminals to exploit agentic AI technology for malicious purposes. Below are a few ways bad actors are likely to weaponize agentic AI to launch social engineering attacks:

Self-improving, Adaptive, and Relentless Threats: One of the key advantages of agentic AI is that it has memory and can therefore learn and improve. As the AI interacts with more victims over time, it gathers data on which messages and approaches work best for particular demographics. It then adapts, refining future phishing campaigns and making each subsequent attack more convincing and effective.

Automated Spear Phishing: Non-agentic AI is essentially prompt-based; cybercriminals have to provide specific inputs for the AI to create a phishing email. With agentic AI, malicious agents will autonomously harvest data from social media profiles, craft phishing messages, tailor them to specific individuals or organizations, and disseminate them until they achieve the desired result.

Dynamic Targeting: AI agents might dynamically update or alter their phishing pitch based on a recipient’s response or location, or things like holidays, events, or the target’s interests, marking a significant shift from static phishing attacks to highly adaptive and real-time social engineering threats. For example, if a phishing message is ignored, the AI might send a follow-up message with a more urgent tone.

Multi-stage Campaigns: Agentic AI can potentially be orchestrated to deliver complex and multi-stage social engineering attacks. In simpler terms, AI can be told to leverage the data from one interaction to drive the next one. For example, a phishing attack can lure someone into disclosing a small bit of information in the first round of attacks. The AI can then use that information to chart its next course of action.

Multi-modal Social Engineering: An autonomous AI agent might go beyond email, using or combining other communication channels such as text messages, phone calls, or social media in its phishing attempts. For example, if a phishing email is ignored, the AI could make a follow-up call using an audio or video deepfake to improve the chances of the target responding.

Key Takeaways for Organizations

Below are some best practices and recommendations for organizations:

Fight Agentic AI with Agentic AI: To combat advanced social engineering attacks, consider building or acquiring an AI agent that can assess changes to the attack surface, detect irregular activities indicating malicious actions, analyze global feeds to detect threats early, monitor deviations in user behavior to spot insider threats, and prioritize patching based on vulnerability trends.
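As a rough illustration of the kind of automation this implies, the sketch below shows a minimal heuristic for one of the signals mentioned above: flagging sign-ins that deviate from a user’s historical behavior. The field names, weights, threshold, and alert() helper are all illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: flag sign-ins that deviate from a user's historical baseline.
# All thresholds, field names, and the alert() helper are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    usual_countries: set = field(default_factory=set)
    usual_hours: set = field(default_factory=set)    # hours of day (0-23) seen before
    known_devices: set = field(default_factory=set)

@dataclass
class SignInEvent:
    user: str
    country: str
    hour: int
    device_id: str

def risk_score(event: SignInEvent, baseline: UserBaseline) -> int:
    """Score a sign-in: each unfamiliar attribute adds weight."""
    score = 0
    if event.country not in baseline.usual_countries:
        score += 3   # new geography is the strongest single signal in this sketch
    if event.hour not in baseline.usual_hours:
        score += 1
    if event.device_id not in baseline.known_devices:
        score += 2
    return score

def alert(event: SignInEvent, score: int) -> None:
    # Placeholder: a real agent would open a ticket or trigger step-up authentication.
    print(f"[ALERT] {event.user}: anomalous sign-in (score={score})")

if __name__ == "__main__":
    baseline = UserBaseline(usual_countries={"US"}, usual_hours=set(range(8, 19)),
                            known_devices={"laptop-42"})
    event = SignInEvent(user="j.doe", country="RO", hour=3, device_id="unknown-phone")
    score = risk_score(event, baseline)
    if score >= 4:   # illustrative threshold
        alert(event, score)
```

A production-grade defensive agent would feed signals like this into broader correlation and response workflows rather than acting on a single rule.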

Leverage AI-based Security Awareness: Security awareness training is a non-negotiable component of bolstering human defenses. Organizations must go beyond traditional security training and leverage tools that can assign engaging content to users based on risk scores and failure rates, dynamically generate quizzes and social engineering scenarios based on the latest threats, and trigger bite-sized refreshers.
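As a simple illustration of risk-based assignment, the sketch below maps a user’s risk score and phishing-simulation failure rate to a set of training modules. The module names, thresholds, and scoring scale are made-up assumptions; a real awareness platform would drive this from its own telemetry.

```python
# Minimal sketch: assign awareness training based on risk score and phishing-simulation
# failure rate. Module names, thresholds, and tiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserRiskProfile:
    name: str
    risk_score: float     # 0.0 (low) to 1.0 (high), e.g. from an awareness platform
    failure_rate: float   # share of simulated phishing emails the user clicked

def assign_training(profile: UserRiskProfile) -> list[str]:
    """Return the training modules for this user, baseline first."""
    modules = ["annual-security-basics"]           # everyone gets the baseline course
    if profile.failure_rate > 0.25:
        modules.append("spear-phishing-deep-dive")
    if profile.risk_score > 0.7:
        modules.append("deepfake-and-voice-clone-awareness")
        modules.append("weekly-bite-sized-refresher")
    return modules

if __name__ == "__main__":
    user = UserRiskProfile(name="j.doe", risk_score=0.82, failure_rate=0.4)
    print(user.name, "->", assign_training(user))
```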

Prepare Employees for Agentic AI Social Engineering: Human intuition and vigilance are critical in combating social engineering threats. Organizations must double down on fostering a culture of cybersecurity, educating employees on the risks of social engineering and their impact on the organization, training them to identify and report such threats, and empowering them with tools that can improve security behavior.

Gartner predicts that by 2028, a third of our interactions with AI will shift from simply typing commands to fully engaging with autonomous agents that can act on their own goals and intentions. Cybercriminals won’t be far behind in exploiting these advancements for their misdeeds. Organizations must shore up their defenses to prepare for this eventuality by deploying their own AI-based cybersecurity agents, leveraging AI-based security training, and instilling a sense of security responsibility.

Related: Cyber Insights 2025: Artificial Intelligence

Related: Cyber Insights 2025: Social Engineering Gets AI Wings
