
Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns

June 16, 2023

🔬 Research Summary by Julian Hazell, a Research Assistant at the Centre for the Governance of AI and an MSc candidate in Social Science of the Internet at the University of Oxford.

[Original paper by Julian Hazell]


Overview: Large language models (LLMs) like ChatGPT can be used by cybercriminals to scale spear phishing campaigns. In this paper, the author demonstrates how LLMs can be integrated into various stages of cyberattacks to quickly and inexpensively generate large volumes of personalized phishing emails. The paper also examines the governance challenges created by these vulnerabilities and proposes potential solutions; for example, the author suggests using other AI systems to detect and filter out malicious phishing emails.


Introduction

Recent progress in AI, particularly in LLMs such as OpenAI’s GPT-4 and Anthropic’s Claude, has resulted in powerful systems capable of writing highly realistic text. While these systems offer a variety of beneficial use cases, they can also be used maliciously. One such example is spear phishing, a type of cyberattack in which the perpetrator leverages personalized information about the target to deceive them into revealing sensitive data or credentials.

In “Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns,” the University of Oxford’s Julian Hazell explores the usefulness of integrating LLMs into spear phishing campaigns. To do so, he uses OpenAI’s GPT-3.5 to generate personalized phishing emails for over 600 British Members of Parliament using background information scraped from Wikipedia, and concludes that such emails are highly realistic and can be generated cost-effectively.

Key Insights

A step-by-step look at how LLMs can help scale spear phishing attacks

Phase 1: The Collect Phase

LLMs can be used in a cyberattack’s “collect” phase, where the attacker gathers information about targets. Spear phishing attacks are often more effective than regular phishing attacks on a per-target basis precisely because they are personalized. However, spear phishing traditionally requires the hacker to research targets and spend extra effort customizing the messages, which is time-consuming and resource-intensive. LLMs can aid in this phase by generating target biographies from unstructured text data, thus making it much easier and more cost-effective for cybercriminals to create personalized phishing messages.

Phase 2: The Contact Phase

LLMs can also aid hackers during a spear phishing attack’s “contact” phase. LLMs can assist cybercriminals in writing spear phishing emails by suggesting qualitative features that define a successful attack, such as personalization, contextual relevance, psychology, and authority. By combining these principles with the target’s personal information, LLMs like GPT-4 can generate highly targeted phishing emails at scale. To test this, 600 emails targeting British Members of Parliament were generated, each costing a fraction of a cent and taking 14 seconds to generate on average.

Phase 3: The Compromise Phase

Even with safeguards put in place by AI labs, carefully crafting the right prompt can get an LLM to generate basic “malware,” a term that describes software capable of compromising a system once executed. Pretending to be a “cybersecurity researcher” conducting an “educational” experiment, the researcher was able to successfully prompt GPT-4 to generate a basic malware file.

LLMs alleviate three key difficulties faced by cybercriminals

LLMs can assist cybercriminals in scaling spear-phishing campaigns by reducing cognitive workload, financial costs, and skill requirements. These systems can generate human-like emails without fatigue and process significant volumes of background data on targets. They also significantly lower the cost per email and enable even low-skilled attackers to create convincing phishing emails and malware, allowing them to focus on strategic planning and target identification instead.

Possible solutions

The researcher explores two possible solutions to this problem. The first is implementing “structured access schemes” for language models, such as application programming interfaces (APIs). These schemes control how people interact with and use the systems and can help identify and prevent cases of misuse. This could allow tracking of malicious uses back to individuals so that they can be banned or otherwise sanctioned.
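As a rough illustration of the idea (not a design taken from the paper), the sketch below shows how an API-mediated access layer might require an API key on every request, log requests for later audit, and revoke keys whose requests trip a misuse check. The `looks_like_misuse` heuristic and `call_model` backend are hypothetical placeholders for a real abuse classifier and model endpoint.

```python
# Sketch of a structured access scheme: keyed access, request logging, and
# key revocation. All names and the misuse heuristic are illustrative
# placeholders, not components described in the paper.
from datetime import datetime, timezone

REVOKED_KEYS: set[str] = set()
AUDIT_LOG: list[dict] = []

def looks_like_misuse(prompt: str) -> bool:
    # Placeholder: a real deployment would use a trained abuse classifier.
    return "phishing" in prompt.lower()

def call_model(prompt: str) -> str:
    # Placeholder for the hosted LLM backend.
    return f"[model output for: {prompt[:40]}]"

def handle_request(api_key: str, prompt: str) -> str:
    if api_key in REVOKED_KEYS:
        raise PermissionError("API key has been revoked.")
    # Log every request so misuse can later be traced back to an individual key.
    AUDIT_LOG.append({"key": api_key, "prompt": prompt,
                      "time": datetime.now(timezone.utc).isoformat()})
    if looks_like_misuse(prompt):
        REVOKED_KEYS.add(api_key)  # sanction the offending key
        raise PermissionError("Request flagged as potential misuse.")
    return call_model(prompt)
```

Even this skeleton shows the property the paper points to: because every interaction passes through the provider’s API, misuse can be detected, attributed to a specific account, and sanctioned.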

The second solution is to develop LLMs specifically focused on cyber defense that could detect spear phishing emails or other forms of malicious content. For example, specialized LLMs can be trained to analyze incoming emails for suspicious features like deceptive URLs (“Gooogle.com” versus “Google.com”). By training the model on previous examples of cyberattacks, these defensive systems can potentially identify sophisticated phishing attacks and help overcome human attention limitations.
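As a minimal sketch of the kind of lookalike-domain check described above: the snippet flags domains that nearly, but not exactly, match a trusted domain. The trusted-domain list, the similarity threshold, and the function name are illustrative assumptions rather than details from the paper.

```python
# Sketch: flag sender domains that closely resemble, but do not exactly match,
# a list of trusted domains (e.g. "gooogle.com" vs. "google.com").
# TRUSTED_DOMAINS and the 0.85 threshold are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["google.com", "parliament.uk", "microsoft.com"]

def is_lookalike(domain: str, threshold: float = 0.85) -> bool:
    """Return True if `domain` nearly matches a trusted domain without being identical."""
    domain = domain.lower().strip()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: treat as legitimate
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True   # near miss: likely typosquatting
    return False

if __name__ == "__main__":
    for candidate in ["google.com", "gooogle.com", "example.org"]:
        print(candidate, "->", "suspicious" if is_lookalike(candidate) else "ok")
```

String similarity is only a stand-in here; defensive filters of the kind the paper envisions would typically combine such heuristics with signals like sender authentication and a model trained on prior phishing examples.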

As cybercriminals gain access to increasingly advanced AI, cybersecurity experts and policymakers must find ways to balance promoting the benefits of language models with restricting opportunities for misuse. “As these systems continually improve,” the researcher argues, “it is crucial that AI developers work to proactively ensure their technologies are not exploitable for malicious ends.”

Between the lines

Recent advancements in AI capabilities, particularly in the domain of natural language, have marked the beginning of a new era in cybersecurity. As AI systems become proficient enough to meaningfully enhance the effectiveness of cyberattacks like spear phishing, we must adapt to a rapidly evolving threat landscape. The findings highlight the unsettling possibility that cybercriminals can use AI to convincingly impersonate individuals and automate hacking campaigns.

More concerningly, AI systems could soon advance to the point of automating cybercrime with even less human involvement. Experimental systems like Auto-GPT provide a glimpse into AI systems that can pursue goals autonomously. Such systems could be tasked with pursuing open-ended goals, like “send a spear phishing email to every US member of Congress,” further increasing the scalability of cyberattacks. Through natural conversation, AI agents might be able to gain trust before attacking. Without defensive measures, agentic systems could become formidable adversaries.

Future research could also explore other communication channels and attack vectors that cybercriminals could exploit with AI. For instance, how might AI manipulate visual or audio media to deceive targets?

Finally, this paper raises questions about the balance between AI’s positive and negative impacts. How can developers ensure that AI advancements do not inadvertently enable harmful activities? Can AI systems be designed to detect and counteract such misuse? These questions emphasize the need for further exploration into the ethical and practical implications surrounding AI’s development and deployment in the context of cybersecurity.

